Horizontal translations are movements of the graph of a function along the x-axis, produced by changing the x-values. If y = f(x), then y = f(x − h) results in a horizontal shift. If h > 0, the graph shifts h units to the right, while if h < 0, it shifts h units to the left. Compared to $y=f(x)$: $y=f(x-8)$: shift 8 units to the right; $y=f(x+3)$: shift 3 units to the left.
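These rules can be checked numerically; here is a minimal Python sketch (the parent function x² and the helper names are assumptions chosen for illustration):

```python
def f(x):
    return x ** 2  # assumed parent function, chosen only for illustration

def shifted(f, h):
    # y = f(x - h): substituting x - h for x moves the graph h units right
    return lambda x: f(x - h)

g = shifted(f, 8)    # y = f(x - 8): shift 8 units to the right
k = shifted(f, -3)   # y = f(x + 3): shift 3 units to the left

# The point (0, 0) on y = x^2 moves to (8, 0) and (-3, 0) respectively:
print(g(8), k(-3))  # 0 0
```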
{"url":"https://www.studypug.com/algebra-help/transformations-of-functions-horizontal-translations","timestamp":"2024-11-03T22:21:11Z","content_type":"text/html","content_length":"371468","record_id":"<urn:uuid:02caeeeb-049c-497e-95c4-e57eb6477fa4>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00339.warc.gz"}
more games of life | R-bloggers

[This article was first published on R – Xi'an's Og, and kindly contributed to R-bloggers.] Another puzzle in memoriam of John Conway in The Guardian: Find the ten-digit number abcdefghij. Each of the digits is different, and
• a is divisible by 1
• ab is divisible by 2
• abc is divisible by 3
• abcd is divisible by 4
• abcde is divisible by 5
• abcdef is divisible by 6
• abcdefg is divisible by 7
• abcdefgh is divisible by 8
• abcdefghi is divisible by 9
• abcdefghij is divisible by 10
Which brute-force R coding, checking random permutations of (1,2,…,9) [since j=0], solves within seconds:

repeat
  if (prod(!(x <- sum(10^{0:8} * sample(1:9))) %/% 10^{7:0} %% 2:9)) break

into x=3816547290. And slightly less brute force R coding even faster:
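For comparison, a deterministic sweep over the same permutations can be sketched in Python (not from the original post; the function name is illustrative):

```python
from itertools import permutations

def conway_number():
    # j must be 0 for divisibility by 10, so search the 9-digit
    # permutations of 1..9 whose length-k prefixes are divisible by k.
    for digits in permutations(range(1, 10)):
        n, ok = 0, True
        for k, d in enumerate(digits, start=1):
            n = n * 10 + d   # extend the prefix by one digit
            if n % k:        # prefix of length k must be divisible by k
                ok = False
                break
        if ok:
            return n * 10    # append the trailing 0

print(conway_number())  # 3816547290
```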
{"url":"https://www.r-bloggers.com/2020/05/more-games-of-life/","timestamp":"2024-11-09T09:50:17Z","content_type":"text/html","content_length":"84801","record_id":"<urn:uuid:5fd02dba-472a-4d46-88a8-c7223f979253>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00809.warc.gz"}
Speeding the Industrial Design Process with Modern Calculation Management

Challenge
In the pursuit of developing high-performance hauling trucks, Hitachi Construction Truck Manufacturing Ltd (HTM) needed new design techniques that would help them manage all of the calculations involved during development. Their methods for deriving new designs often relied on simulation software and a collection of disparate mathematics tools. Some calculation tools were developed with spreadsheets, which have the disadvantages of hard-to-diagnose errors and a lack of unified, transparent calculation auditing.

Solution
HTM has selected Maple as a key piece of calculation management software to streamline the way that design improvements are created and used. By using Maple, HTM is now able to perform fully traceable design analyses, make use of powerful optimization techniques, and seamlessly integrate calculations into auditable reports.

Result
HTM has already begun to remove a variety of slowdowns and design errors that stemmed from their old calculation tools. They have used Maple's optimization features to push more performance out of their designs, and are creating a stable foundation of design calculations that are fully auditable and easy to understand - an invaluable resource for both current and future projects.

In the current climate of industrial trucking products, truck manufacturers are striving to improve their products more efficiently than ever. These improvements can come in many forms, but in an industry of expensive prototypes and stiff competition, it is critical to find innovations that fit within tight margins and fast development cycles, and that deliver guaranteed reliability in the field. Regardless of the particular innovation, it will require careful calculations to ensure that the concept is feasible.
The engineering department at Hitachi Construction Truck Manufacturing (HTM) is taking steps to improve its rigid-frame hauling truck designs by improving its analysis tools. The company has a history of using spreadsheet programs and older coding languages to solve some engineering problems. While sometimes sufficient, these legacy approaches to managing intellectual company property can introduce redundancies and slowdowns in a variety of ways during a design process. The engineering department at HTM decided to gradually implement Maple in several key areas to help speed development and reduce the risk of errors from manually handling calculations across many tools. During typical analysis tasks at HTM, engineers use specific mathematics to work through design concepts, model their kinematic behavior, and analyze their structural integrity over time. In the truck's initial design stage, HTM historically used spreadsheet programs developed in-house, but has recently moved to Maple to reduce the chance of errors and improve calculation efficiency. In addition, Maple has been adopted to optimize design parameters throughout the truck design process. "Using Maple makes calculations more efficient than using spreadsheets," noted Dr. Shen, a senior manager of the technical analysis group, when explaining his past experience using spreadsheet tools for analysis work. "Using Maple, HTM engineers can set up their analysis with intuitive math input, and use built-in functions to automatically solve and simplify work, reducing many of the errors associated with traditional, manual effort." The rigid frames developed at HTM must support massive payloads ranging up to almost 300 tonnes. To make sure these frames are suitable for the job, HTM uses finite element analysis (FEA) to investigate the life of welded joints, helping to determine the required size of each weld.
While FEA is a powerful tool, it can be very resource-intensive, slowing down critical aspects of design analysis. Using Maple, HTM engineers perform initial stress estimations that give them a much better sense of a design before spending large amounts of time on FEA iterations. With a better starting point, the FEA work begins from sound approximations, so it arrives at accurate solutions much more quickly. By adopting Maple as a tool for calculation management, HTM is joining the growing number of organizations that treat their engineering calculations as an essential company asset. Responding to an increasingly competitive market, HTM is finding success by reducing the many sources of inefficiency caused by legacy calculation tools. The success that HTM is finding with Maple is the result of treating calculations as a structured asset, ensuring they are created, reused, and distributed with care and attention. The migration from old techniques continues, but HTM engineers are already seeing the benefits of adopting Maple for a variety of tasks that were once performed in older, general-purpose tools. With proper calculation management tools in place, HTM is creating powerful, efficient workflows that reduce development risk and get products to market faster. Contact Maplesoft to learn how Maple can help with your projects.
{"url":"https://cn.maplesoft.com/company/casestudies/stories/speedingtheindustrialdesignprocess.aspx?L=C","timestamp":"2024-11-13T05:54:14Z","content_type":"application/xhtml+xml","content_length":"79473","record_id":"<urn:uuid:f9c936a5-22b6-4bf7-b97e-75681b5b467a>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00411.warc.gz"}
Modern Atomic and Optical Physics 3
Undergraduate Programme and Module Handbook 2024-2025
Module PHYS3721: Modern Atomic and Optical Physics 3
Department: Physics
Type: Open  Level: 3  Credits: 20  Availability: Available in 2024/2025  Module Cap: None.  Location: Durham
Prerequisites
• Foundations of Physics 2A (PHYS2581) AND Foundations of Physics 2B (PHYS2591) AND (Mathematical Methods in Physics (PHYS2611) OR Analysis in Many Variables (MATH2031))
Corequisites
• Foundations of Physics 3A (PHYS3621)
Excluded Combination of Modules
Aims
• This module is designed primarily for students studying Department of Physics or Natural Sciences degree programmes.
• It builds on Level 2 courses in geometric optics and quantum mechanics by providing courses on modern optics and atomic physics.
Content
• Fourier Optics: Fourier toolkit, angular spectrum, Gaussian beams, lasers and cavities, Fresnel and Fraunhofer, 2D diffraction – letters, circles, Babinet and apodization, lenses, imaging, spatial filtering.
• Atomic Clocks: History of precision measurement of time. Principle of atomic clocks, revision of atomic structure, electric and magnetic dipole interactions with electromagnetic fields, selection rules. Visualising electron distributions in atoms during transitions. Spontaneous emission, Einstein A coefficient and relationship with atomic clocks, lifetimes, line widths, line intensities and line shapes. Fine-structure and hyperfine splitting, using degenerate perturbation theory to calculate the ground-state hyperfine splitting of the H atom. Lifetimes of electric dipole forbidden transitions, selection rules and relationship with atomic clocks. Zeeman effect, using degenerate perturbation theory to calculate Zeeman shifts of the hyperfine states of the ground-state of the H atom, relationship with atomic clocks. Derivation of Rabi equation for two-level system, transit-time broadening, relationship with atomic clocks. Light forces, the scattering force.
Laser cooling of atoms, optical molasses, Doppler limit. Zeeman slowing and Sisyphus cooling of atoms. Magneto-optical trapping of atoms. Moving molasses, caesium fountain clock, Ramsey interferometry. Optical frequency standards, laser locking. Optical frequency combs, ion trapping, Lamb-Dicke regime. Aluminium quantum logic clock, Ytterbium ion clock. Strontium optical lattice clock, AC Stark effect, dipole force, optical dipole traps and optical lattices, magic wavelength optical lattice. Systematic effects in optical frequency standards, comparisons between clocks. Applications of atomic clocks, time-variation of fundamental constants, electric-dipole moment of the electron and relativistic geodesy.
Learning Outcomes
Subject-specific Knowledge:
• Having studied this module, students will be able to use Fourier methods to describe interference and diffraction and their applications in modern optics.
• They will be familiar with some of the applications of quantum mechanics to atomic physics and the interaction of atoms with light.
Subject-specific Skills:
• In addition to the acquisition of subject knowledge, students will be able to apply the principles of physics to the solution of complex problems.
• They will know how to produce a well-structured solution, with clearly-explained reasoning and appropriate presentation.
Key Skills:
Modes of Teaching, Learning and Assessment and how these contribute to the learning outcomes of the module
• Teaching will be by lectures and workshops.
• The lectures provide the means to give a concise, focused presentation of the subject matter of the module. The lecture material will be defined by, and explicitly linked to, the contents of the recommended textbooks for the module, thus making clear where students can begin private study. When appropriate, the lectures will also be supported by the distribution of written material, or by information and relevant links online.
• Regular problem exercises and workshops will give students the chance to develop their theoretical understanding and problem solving skills.
• Students will be able to obtain further help in their studies by approaching their lecturers, either after lectures or at other mutually convenient times.
• Student performance will be summatively assessed through an open-book examination and formatively assessed through problem exercises and a progress test. The open-book examination will provide the means for students to demonstrate the acquisition of subject knowledge and the development of their problem-solving skills.
• The problem exercises and progress test provide opportunities for feedback, for students to gauge their progress and for staff to monitor progress throughout the duration of the module.
Teaching Methods and Learning Hours
Activity                  Number  Frequency   Duration  Total Hours
Lectures                  38      2 per week  1 hour    38
Workshops                 17      Weekly      1 hour    17
Preparation and Reading                                 145
Total                                                   200
Summative Assessment
Component: Open-book examination  Component Weighting: 100%
Element                  Length / duration  Element Weighting  Resit Opportunity
Open-book examination                       100%               None
Formative Assessment: Problem exercises and self-assessment; one progress test, workshops and problems solved therein.
■ Attendance at all activities marked with this symbol will be monitored. Students who fail to attend these activities, or to complete the summative or formative assessment specified above, will be subject to the procedures defined in the University's General Regulation V, and may be required to leave the University.
{"url":"https://apps.dur.ac.uk/faculty.handbook/2024/UG/module/PHYS3721","timestamp":"2024-11-10T18:34:27Z","content_type":"text/html","content_length":"10210","record_id":"<urn:uuid:55eaab9d-f5f7-419e-9fe7-8596fe2a4e7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00211.warc.gz"}
Linear regression

Error calculation
When training a linear regression model, error calculation measures the disparity between the expected result and the predicted score. It plays a crucial role in adjusting the model's coefficients and intercept during the learning process. The error is computed by subtracting the predicted score from the expected result.

Updating coefficients and intercept
Updating the coefficients and intercept is a fundamental step in training linear regression models. It involves adjusting these parameters based on the calculated error, the learning rate, and the input values. By iteratively updating the coefficients and intercept, the model aims to minimize prediction errors and improve its accuracy over time.

Learning rate
The learning rate is a hyperparameter that controls how much to change the model in response to the estimated error each time the model coefficients are updated. A higher learning rate means the model coefficients will be updated more significantly. It is a crucial factor that affects both the speed and the quality of learning: too high a learning rate can cause the model to converge too quickly to a suboptimal solution, whereas too low a learning rate can make the training process excessively slow.
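The three steps above fit together in a few lines; here is a minimal Python sketch (function name and sample data are illustrative assumptions, not taken from the page):

```python
def train(data, lr=0.01, epochs=1000):
    """Train y = w*x + b on (x, y) pairs with per-sample updates.

    error = expected result - predicted score; each update moves the
    coefficient and intercept in proportion to the error, the learning
    rate, and (for the coefficient) the input value.
    """
    w = b = 0.0
    for _ in range(epochs):
        for x, y in data:
            error = y - (w * x + b)   # error calculation
            w += lr * error * x       # update coefficient
            b += lr * error           # update intercept
    return w, b

# Points on y = 2x + 1 should recover w close to 2 and b close to 1:
w, b = train([(0, 1), (1, 3), (2, 5), (3, 7)])
```

Raising `lr` makes each update larger (faster but riskier); lowering it makes convergence slower but steadier, mirroring the trade-off described above.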
{"url":"http://fbc98132bdfd6.stack.run/linear_learn.html","timestamp":"2024-11-08T23:36:35Z","content_type":"text/html","content_length":"7338","record_id":"<urn:uuid:4b50f9d6-4daa-4bd9-ae6c-5adec32c081b>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00630.warc.gz"}
What is the stated annual interest rate on bonds?

The Effective Annual Rate (EAR) is the interest rate that is adjusted for compounding over a given period; simply put, the effective annual interest rate is the rate actually earned once compounding is taken into account. The stated interest rate is the interest rate listed on a bond coupon; this is the actual amount of interest paid by the bond issuer. Corporate yield spreads and bond liquidity, Chen, L., Lesmond, D. A., & Wei, J. (2007), The Journal of Finance, 62(1), 119-149. Compounding occurs as often as stated by the terms of the loan or investment. We can see that the effective yield for Bond B is higher, so that's a better investment. The nominal interest rate is also known as the Annual Percentage Rate (APR). The more often compounding occurs, the higher the effective interest rate. The relationship between the nominal annual rate r, compounded m times per year, and the effective annual rate is: ia = (1 + r/m)^m − 1. There are several different terms used to describe the interest rate or yield on a loan, including annual percentage yield, annual percentage rate, effective rate, and nominal rate. For example, if you deposit 100 dollars in a bank account with an annual interest rate of 6% compounded annually, you will receive 100∗(1+0.06) = 106 dollars at the end of the year. A stated annual interest rate is an interest rate in a given year that does not account for more frequent compounding. For example, if a loan of $100 has a stated annual interest rate of 5%, the amount owed at the end of the year is $105. However, if the interest compounds monthly, the actual amount is $105.12. See also: Effective annual interest rate.
The coupon rate is the stated rate of interest on fixed income securities such as bonds. In other words, it is the rate of interest that bond issuers pay to bondholders for their investment; it is the periodic rate of interest paid on the bond's face value to its purchasers. The stated annual interest rate and the effective interest rate can be significantly different, due to compounding. The effective interest rate is important in figuring out the best loan or determining which investment offers the highest rate of return. The difference between the interest calculated from the stated rate and the effective rate can be quite significant: using the example above, you would pay $2,500 in interest for a $10,000 one-year loan if you were only charged interest once for the year (thus, the effective interest rate would remain 25 percent). What is the stated annual rate of interest on the bonds? What is the interest expense on the bonds for the year ended December 31, 2022? Answer: $300,000 cash is the semiannual interest amount, so annual interest = $300,000 × 2 = $600,000. If you know how to calculate interest rates, you will better understand your loan contract with your bank, and you will be in a better position to negotiate your interest rate. Bank loans carry two interest rates: the stated or nominal interest rate, and the effective interest rate or annual percentage rate (APR). The effective interest rate is more than the stated rate. 57. How much cash interest does Auerbach pay on March 31, 2014? This is $300 million × 4% × 6/12. 58. Assuming that Auerbach issued the bonds for $255,369,000, what interest expense would it recognize in its 2013 income statement?
An Effective Interest Rate (EIR) reveals the real profit earned on an investment; it is also known as the Effective Annual Interest Rate (EAIR) or Annual Equivalent Rate (AER). Par value is the stated value or face value of a bond. The yield to maturity (YTM) is often given in terms of an Annual Percentage Rate (A.P.R.). To calculate the interest payment on a bond, look at the bond's face value and the coupon rate, or interest rate, at the time it was issued. The coupon rate may also be called the face, nominal, or contractual interest rate. Multiply the bond's face value by the coupon interest rate to get the annual interest. The stated interest rate of a bond payable is the annual interest rate that is printed on the face of the bond; the stated interest rate multiplied by the bond's face value gives the annual cash interest. The stated annual interest rate on the bonds is 3.5%.
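The stated-versus-effective arithmetic quoted above (a stated 5% compounded monthly turning $100 into $105.12) can be reproduced with a short sketch (the function name is an illustrative assumption):

```python
def effective_annual_rate(stated, m):
    # EAR = (1 + stated/m)^m - 1, for m compounding periods per year
    return (1 + stated / m) ** m - 1

# $100 at a stated 5% compounded monthly grows to about $105.12,
# while annual compounding (m = 1) leaves exactly $105:
balance = 100 * (1 + effective_annual_rate(0.05, 12))
print(round(balance, 2))  # 105.12
```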
{"url":"https://tradenefstl.netlify.app/bungert53308tel/what-is-stated-annual-interest-rate-on-bonds-wo.html","timestamp":"2024-11-11T13:09:31Z","content_type":"text/html","content_length":"32944","record_id":"<urn:uuid:e18483ca-165f-4549-a403-93a486fdeda3>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00493.warc.gz"}
Application-oriented research topic of WIAS
Model-based investigations of electrochemical double layers, porous catalysts in fuel cells, and battery materials

The behavior of electrochemical systems is widely investigated with continuum physics models. Applications range from single crystal electrochemistry to lithium batteries and fuel cells, from biological nano-pores to electrolysis and corrosion science, and further. The common basis for all applications is the theory of non-equilibrium thermo-electrodynamics [1,2]. At the Weierstraß-Institute, general models for electrochemical systems are systematically derived. Asymptotic analysis methods are further employed to derive reduced non-equilibrium models for various electrodes and electrolytes, and the corresponding boundary conditions.

Metal/electrolyte interface
A detailed model for a general electrolytic mixture was developed, which may adsorb and react on a metal surface. It is based on coupled volume and surface thermodynamics [3], where we account for adsorption and solvation of the ionic species. Figure 1: Computed structure and a resulting sketch of the Ag/0.1M NaF interface. For charged interfaces in equilibrium, this model leads to a coupled Poisson-momentum equation system [4], which can be used to compute the structure of the double layer (Fig. 1). It is further possible to derive the capacity of the electrode/electrolyte interface (Fig. 2), a property which is precisely measured for many materials. This allows for a rigorous validation of our model, which shows remarkable agreement with experimental data [3]. Figure 2: Computed capacity of the Ag(110) 0.1M NaF interface.

Electron transfer reactions
Electron transfer reactions at the interface between an electrolyte and an electrode are the pivotal phenomenon of all electrochemical systems for storage and conversion of energy.
The Butler-Volmer equation describes the dependence of the reaction rates on the potential difference across the interface and on the concentrations of the different species at the interface. At WIAS, new boundary conditions of generalized Butler-Volmer type were derived based on non-equilibrium thermodynamics [5]. The predictive capability of this theory is validated on various well-defined electrochemical cells, and the theory is applied to complex systems like batteries and fuel cells. Figure 3: Experimental setup for copper deposition. Figure 4: Current-voltage diagram for the electro-deposition of metal with different electrolyte concentrations. At high imposed currents, diffusional transport in the electrolyte causes a lack of reacting ions at the electrode, leading to a blow-up of the potential.

Thermodynamically consistent discretizations
The numerical solution of generalized Poisson-Nernst-Planck systems like the one derived in [5], in higher space dimensions and general geometries, requires the development of specifically tailored discretization approaches that preserve the thermodynamic properties of the continuous problem. For this purpose, a generalization of the Scharfetter-Gummel upwind finite volume scheme, successfully employed in the field of semiconductor device simulation, to Poisson-Nernst-Planck problems with ion volume constraints and solvent balancing has been proposed [6]. Figure 5: Simulated IV curves for an electrolytic diode; difference between standard and improved Nernst-Planck models.

Modeling of transport and reaction processes for magnesium-air batteries
As magnesium is highly abundant, comparably cheap and sufficiently reactive, rechargeable magnesium-air batteries are an interesting option for large-scale energy storage. The development of strategies for the realization of this battery type is the subject of a research network funded by the German Ministry of Education and Research.
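The classic Scharfetter-Gummel flux that such generalizations build on can be sketched as follows. This is an illustrative, assumption-laden sketch of the textbook scheme (dimensionless scaling, one edge, sign conventions vary), not the WIAS scheme of [6]:

```python
import math

def bernoulli(x):
    # B(x) = x / (exp(x) - 1), with the removable singularity at x = 0
    if abs(x) < 1e-12:
        return 1.0 - x / 2.0
    return x / math.expm1(x)  # expm1 is accurate for small x

def sg_flux(u_k, u_l, dphi, D=1.0, h=1.0):
    """Scharfetter-Gummel upwind flux between two adjacent control volumes.

    u_k, u_l: concentrations; dphi: potential drop phi_k - phi_l scaled by
    the thermal voltage; D: diffusivity; h: edge length. For dphi = 0 this
    reduces to the central difference D * (u_k - u_l) / h.
    """
    return (D / h) * (bernoulli(-dphi) * u_k - bernoulli(dphi) * u_l)
```

The Bernoulli-function weighting blends diffusion and drift in one expression, which is what preserves discrete thermodynamic consistency in the drift-dominated regime.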
The WIAS subproject is concerned with the model-based interpretation of flow cell experiments, supporting the acquisition of transport data in organic electrolytes and of information on reaction kinetics [7], and the modeling of transport and reaction processes in the electrodes of such a cell. Figure 6: Calculated streamlines of electrolyte flow in an experimental flow cell.

[1] I. Müller, Thermodynamics, Pitman, 1985.
[2] S. de Groot, P. Mazur, Non-Equilibrium Thermodynamics, Dover Publications, 1984.
[3] W. Dreyer, C. Guhlke and M. Landstorfer, Theory and structure of the metal/electrolyte interface incorporating adsorption and solvation effects, Preprint no. 2058, WIAS, Berlin, 2014.
[4] W. Dreyer, C. Guhlke and M. Landstorfer, A mixture theory of electrolytes containing solvation effects, Electrochemistry Communications, 43 (2014), pp. 75-78.
[5] W. Dreyer, C. Guhlke and R. Müller, Modeling of electrochemical double layers in thermodynamic non-equilibrium, Phys. Chem. Chem. Phys., 17 (2015), pp. 27176-27194.
[6] J. Fuhrmann, Comparison and numerical treatment of generalised Nernst-Planck models, Computer Physics Communications, 196 (2015), pp. 166-178.
[7] J. Fuhrmann, A. Linke, C. Merdon, F. Neumann, T. Streckenbach, H. Baltruschat, and M. Khodayari, Inverse modeling of thin layer flow cells for detection of solubility, transport and reaction coefficients from experimental data, Preprint no. 2161, WIAS, Berlin, 2015.

• R. Klöfkorn, E. Keilegavlen, F.A. Radu, J. Fuhrmann, eds., Finite Volumes for Complex Applications IX -- Methods, Theoretical Aspects, Examples -- FVCA 9, Bergen, June 2020, 323 of Springer Proceedings in Mathematics & Statistics, Springer International Publishing, Cham et al., 2020, 775 pages, (Collection Published), DOI 10.1007/978-3-030-43651-3.
• H.-Chr. Kaiser, D. Knees, A. Mielke, J. Rehberg, E. Rocca, M. Thomas, E.
Valdinoci, eds., PDE 2015: Theory and Applications of Partial Differential Equations, 10 of Discrete and Continuous Dynamical Systems -- Series S, American Institute of Mathematical Science, Springfield, 2017, iv+933 pages, (Collection Published).

Articles in Refereed Journals

• R. Müller, M. Landstorfer, Galilean bulk-surface electrothermodynamics and applications to electrochemistry, Entropy. An International and Interdisciplinary Journal of Entropy and Information Studies, 25 (2023), pp. 416/1--416/27, DOI 10.3390/e25030416.
In this work, the balance equations of non-equilibrium thermodynamics are coupled to Galilean limit systems of the Maxwell equations, i.e. either to (i) the quasi-electrostatic limit or (ii) the quasi-magnetostatic limit. We explicitly consider a volume $\Omega$ which is divided into $\Omega^+$ and $\Omega^-$ by a possibly moving singular surface $S$, where a charged reacting mixture of a viscous medium can be present on each geometrical entity ($\Omega^+$, $S$, $\Omega^-$). By the restriction to Galilean limits of the Maxwell equations, we achieve that only subsystems of equations for matter and electric field are coupled that share identical transformation properties with respect to observer transformations. Moreover, the application of an entropy principle becomes more straightforward, and finally it helps to estimate the limitations of the more general approach based on the full set of Maxwell equations. Constitutive relations are provided based on an entropy principle, and particular care is taken in the analysis of the stress tensor and the momentum balance in the general case of non-constant scalar susceptibility. Finally, we summarize the application of the derived model framework to an electrochemical system with surface reactions.
• G.L. Celora, R. Blossey, A. Münch, B. Wagner, Counterion-controlled phase equilibria in a charge-regulated polymer solution, Journal of Chemical Physics, 159 (2023), pp. 184902/1--184902/17, DOI 10.1063/5.0169610.
We study phase equilibria in a minimal model of charge-regulated polymer solutions. Our model consists of a single polymer species whose charge state arises from protonation-deprotonation processes in the presence of a dissolved acid, whose anions serve as screening counterions. We explicitly account for variability in the polymers' charge states. Homogeneous equilibria in this model system are characterised by the total concentration of polymers, the concentration of counter-ions and the charge distributions of polymers, which can be computed with the help of analytical approximations. We use these analytical results to characterise how parameter values and solution acidity influence equilibrium charge distributions and identify for which regimes unimodal and multi-modal charge distributions arise. We then study the interplay between charge regulation, solution acidity and phase separation. We find that charge regulation has a significant impact on polymer solubility and allows for non-linear responses to the solution acidity: re-entrant phase behaviour is possible in response to increasing solution acidity. Moreover, we show that phase separation can lead to the coexistence of local environments characterised by different charge distributions and mixture compositions.
• G.L. Celora, M.G. Hennessy, A. Münch, B. Wagner, S.L. Waters, The dynamics of a collapsing polyelectrolyte gel, SIAM Journal on Applied Mathematics, 83 (2023), pp. 1146--1171, DOI 10.1137/21M1419726.
• M.G. Hennessy, G.L. Celora, S.L. Waters, A. Münch, B. Wagner, Breakdown of electroneutrality in polyelectrolyte gels, European Journal of Applied Mathematics, published online on 06.09.2023, DOI 10.1017/S0956792523000244.
• E. Meca, A.W. Fritsch, J. Iglesias-Artola, S. Reber, B. Wagner, Predicting disordered regions driving phase separation of proteins under variable salt concentration, Frontiers in Physics, section Biophysics, 11 (2023), pp. 1213304/1--1213304/13, DOI 10.3389/fphy.2023.1213304.
We determine the intrinsically disordered regions (IDRs) of phase separating proteins and investigate their impact on liquid-liquid phase separation (LLPS) with a random-phase approximation (RPA) that accounts for variable salt concentration. We focus on two proteins, PGL-3 and FUS, known to undergo LLPS. For PGL-3 we predict that an IDR near the C-terminus promotes LLPS, which we validate through direct comparison with in vitro experimental results. For the structurally more complex protein FUS the role of the low complexity (LC) domain in LLPS is not as well understood. Apart from the LC domain we here identify two IDRs, one near the N-terminus and another near the C-terminus. Our RPA analysis of these domains predicts that, surprisingly, the IDR at the N-terminus (aa 1-285) and not the LC domain promotes LLPS of FUS by comparison to in vitro experiments under physiological temperature and salt conditions. • G.L. Celora, M.G. Hennessy, A. Münch, B. Wagner, S.L. Waters, A kinetic model of a polyelectrolyte gel undergoing phase separation, Journal of the Mechanics and Physics of Solids, 160 (2022), pp. 104771/1--104771/27, DOI 10.1016/j.jmps.2021.104771. In this study we use non-equilibrium thermodynamics to systematically derive a phase-field model of a polyelectrolyte gel coupled to a thermodynamically consistent model for the salt solution surrounding the gel. The governing equations for the gel account for the free energy of the internal interfaces which form upon phase separation, as well as finite elasticity and multi-component transport. The fully time-dependent model describes the evolution of small changes in the mobile ion concentrations and follows their impact on the large-scale solvent flux and the emergence of long-time pattern formation in the gel.
We observe a strong acceleration of the evolution of the free surface when the volume phase transition sets in, as well as the triggering of spinodal decomposition that leads to strong inhomogeneities in the lateral stresses, potentially leading to experimentally visible patterns. • D. Bothe, W. Dreyer, P.-É. Druet, Multicomponent incompressible fluids -- An asymptotic study, ZAMM. Zeitschrift für Angewandte Mathematik und Mechanik, published online on 14.01.2022, DOI 10.1002/zamm.202100174. This paper investigates the asymptotic behavior of the Helmholtz free energy of mixtures at small compressibility. We start from a general representation for the local free energy that is valid in stable subregions of the phase diagram. On the basis of this representation we classify the admissible data to construct a thermodynamically consistent constitutive model. We then analyze the incompressible limit, where the molar volume becomes independent of pressure. Here we are confronted with two problems: (i) Our study shows that the physical system at hand cannot remain incompressible for arbitrarily large deviations from a reference pressure unless its volume is linear in the composition. (ii) As a consequence of the 2nd law of thermodynamics, the incompressible limit implies that the molar volume becomes independent of temperature as well. Most applications, however, reveal the inappropriateness of this property. According to our mathematical treatment, the free energy as a function of temperature and partial masses tends to a limit in the sense of epi-- or Gamma--convergence. In the context of the first problem, we study the mixing of two fluids to compare the linearity with experimental observations. The second problem will be treated by considering the asymptotic behavior of both a general inequality relating thermal expansion and compressibility and a PDE-system relying on the equations of balance for partial masses, momentum and the internal energy. • J. Fischer, K.
Hopf, M. Kniely, A. Mielke, Global existence analysis of energy-reaction-diffusion systems, SIAM Journal on Mathematical Analysis, 54 (2022), pp. 220--267, DOI 10.1137/20M1387237. We establish global-in-time existence results for thermodynamically consistent reaction-(cross-)diffusion systems coupled to an equation describing heat transfer. Our main interest is to model species-dependent diffusivities, while at the same time ensuring thermodynamic consistency. A key difficulty of the non-isothermal case lies in the intrinsic presence of cross-diffusion type phenomena like the Soret and the Dufour effect: due to the temperature/energy dependence of the thermodynamic equilibria, a nonvanishing temperature gradient may drive a concentration flux even in a situation with constant concentrations; likewise, a nonvanishing concentration gradient may drive a heat flux even in a case of spatially constant temperature. We use time discretisation and regularisation techniques and derive a priori estimates based on a suitable entropy and the associated entropy production. Renormalised solutions are used in cases where non-integrable diffusion fluxes or reaction terms appear. • V. Miloš, P. Vágner, D. Budáč, M. Carda, M. Paidar, J. Fuhrmann, K. Bouzek, Generalized Poisson--Nernst--Planck-based physical model of the O$_2$ | LSM | YSZ electrode, Journal of The Electrochemical Society, 169 (2022), pp. 044505/1--044505/17, DOI 10.1149/1945-7111/ac4a51. The paper presents a generalized Poisson--Nernst--Planck model of an yttria-stabilized zirconia electrolyte developed from first principles of nonequilibrium thermodynamics which allows for spatial resolution of the space charge layer. It takes into account limitations in oxide ion concentrations due to the limited availability of oxygen vacancies. The electrolyte model is coupled with a reaction kinetic model describing the triple phase boundary with electron conducting lanthanum strontium manganite and gaseous phase oxygen.
By comparing the outcome of numerical simulations based on different formulations of the kinetic equations with results of EIS and CV measurements, we attempt to distinguish the existence of separate surface lattice sites for oxygen adatoms and O$^{2-}$ from the assumption of shared ones. Furthermore, we distinguish mass-action kinetics models from exponential kinetics models. • K. Hopf, Weak-strong uniqueness for energy-reaction-diffusion systems, Mathematical Models & Methods in Applied Sciences, 21 (2022), pp. 1015--1069, DOI 10.1142/S0218202522500233. We establish weak-strong uniqueness and stability properties of renormalised solutions to a class of energy-reaction-diffusion systems, which genuinely feature cross-diffusion effects. The systems considered are motivated by thermodynamically consistent models, and their formal entropy structure allows us to use as a key tool a suitably adjusted relative entropy method. Weak-strong uniqueness is obtained for general entropy-dissipating reactions without growth restrictions, and certain models with a non-integrable diffusive flux. The results also apply to a class of (isoenergetic) reaction-cross-diffusion systems. • P.-É. Druet, Maximal mixed parabolic-hyperbolic regularity for the full equations of multicomponent fluid dynamics, Nonlinearity, 35 (2022), pp. 3812--3882, DOI 10.1088/1361-6544/ac5679. We consider a Navier--Stokes--Fick--Onsager--Fourier system of PDEs describing mass, energy and momentum balance in a Newtonian fluid with composite molecular structure. For the resulting parabolic-hyperbolic system, we introduce the notion of optimal regularity of mixed type, and we prove the short-time existence of strong solutions for a typical initial boundary-value problem. By means of a partial maximum principle, we moreover show that such a solution cannot degenerate in finite time due to blow-up or vanishing of the temperature or the partial mass densities.
This second result is however only valid under certain growth conditions on the phenomenological coefficients. In order to obtain some illustration of the theory, we set up a special constitutive model for volume-additive mixtures. • M. Landstorfer, R. Müller, Thermodynamic models for a concentration and electric field dependent susceptibility in liquid electrolytes, Electrochimica Acta, 428 (2022), pp. 140368/1--140368/19, DOI 10.1016/j.electacta.2022.140368. The dielectric susceptibility $\chi$ is an elementary quantity of the electrochemical double layer and the associated Poisson equation. While most often $\chi$ is treated as a material constant, its dependency on the salt concentration in liquid electrolytes is demonstrated by various bulk electrolyte experiments. This is usually referred to as dielectric decrement. Further, it is theoretically well accepted that the susceptibility declines for large electric fields. This effect is frequently termed dielectric saturation. We analyze the impact of a variable susceptibility in terms of species concentrations and electric fields based on non-equilibrium thermodynamics. This reveals some non-obvious generalizations compared to the case of a constant susceptibility. In particular, the consistent coupling of the Poisson equation, the momentum balance and the chemical potential functions is of ultimate importance. In a numerical study, we systematically analyze the effects of a concentration and field dependent susceptibility on the double layer of a planar electrode-electrolyte interface. We compute the differential capacitance and the spatial structure of the electric potential, solvent concentration and ionic distribution for various non-constant models of $\chi$. • M. Landstorfer, M. Ohlberger, S. Rave, M. Tacke, A modelling framework for efficient reduced order simulations of parametrised lithium-ion battery cells, European Journal of Applied Mathematics, 34 (2023), pp.
554--591 (published online on 29.11.2022), DOI 10.1017/S0956792522000353. In this contribution we present a new modeling and simulation framework for parametrized lithium-ion battery cells. We first derive a new continuum model for a rather general intercalation battery cell on the basis of non-equilibrium thermodynamics. In order to efficiently evaluate the resulting parameterized non-linear system of partial differential equations, the reduced basis method is employed. The reduced basis method is a model order reduction technique based on an incremental hierarchical approximate proper orthogonal decomposition approach and empirical operator interpolation. The modeling framework is particularly well suited to investigate and quantify degradation effects of battery cells. Several numerical experiments are given to demonstrate the scope and efficiency of the modeling framework. • D. Abdel, P. Farrell, J. Fuhrmann, Assessing the quality of the excess chemical potential flux scheme for degenerate semiconductor device simulation, Optical and Quantum Electronics, 53 (2021), pp. 163/1--163/10, DOI 10.1007/s11082-021-02803-4. The van Roosbroeck system models current flows in (non-)degenerate semiconductor devices. Focusing on the stationary model, we compare the excess chemical potential discretization scheme, a flux approximation which is based on a modification of the drift term in the current densities, with another state-of-the-art Scharfetter-Gummel scheme, namely the diffusion-enhanced scheme. Physically, the diffusion-enhanced scheme can be interpreted as a flux approximation which modifies the thermal voltage. As a reference solution we consider an implicitly defined integral flux, using Blakemore statistics. The integral flux refers to the exact solution of a local two-point boundary value problem for the continuous current density and can be interpreted as a generalized Scharfetter-Gummel scheme.
All numerical discretization schemes can be used within a Voronoi finite volume method to simulate charge transport in (non-)degenerate semiconductor devices. The investigation includes the analysis of Taylor expansions, a derivation of error estimates and a visualization of errors in local flux approximations to extend previous discussions. Additionally, drift-diffusion simulations of a p-i-n device are performed. • P.-É. Druet, Global-in-time existence for liquid mixtures subject to a generalised incompressibility constraint, Journal of Mathematical Analysis and Applications, 499 (2021), pp. 125059/1--125059/56, DOI 10.1016/j.jmaa.2021.125059. We consider a system of partial differential equations describing diffusive and convective mass transport in a fluid mixture of N > 1 chemical species. A weighted sum of the partial mass densities of the chemical species is assumed to be constant, which expresses the incompressibility of the fluid, while accounting for different reference sizes of the involved molecules. This condition is different from the usual assumption of a constant total mass density, and it leads in particular to a non-solenoidal velocity field in the Navier-Stokes equations. In turn, the pressure gradient occurs in the diffusion fluxes, so that the PDE-system of mass transport equations and momentum balance is fully coupled. Another striking feature of such incompressible mixtures is the algebraic formula connecting the pressure and the densities, which can be exploited to prove a pressure bound in $L^1$. In this paper, we consider incompressible initial states with bounded energy and show the global existence of weak solutions with defect measure. • A.S. Shatla, M. Landstorfer, H. Baltruschat, On the differential capacitance and potential of zero charge of Au(111) in some aprotic solvents, ChemElectroChem, 8 (2021), pp. 1817--1835, DOI 10.1002/celc.202100316.
A combined experimental and theoretical investigation of various aprotic solvents and their electrochemical behaviors at gold surfaces is presented. The potential of zero charge was determined for all the solvents, and the differential capacity was measured and simulated for various salts. Conclusions about the adsorption behavior and solvent-specific solvation number could be drawn from this combined study. • M. Landstorfer, B. Prifling, V. Schmidt, Mesh generation for periodic 3D microstructure models and computation of effective properties, Journal of Computational Physics, 431 (2021), pp. 110071/1--110071/20 (published online on 23.12.2020), DOI 10.1016/j.jcp.2020.110071. Understanding and optimizing effective properties of porous functional materials, such as permeability or conductivity, is one of the main goals of materials science research with numerous applications. For this purpose, understanding the underlying 3D microstructure is crucial since it is well known that the materials' morphology has a significant impact on their effective properties. Because tomographic imaging is expensive in time and costs, stochastic microstructure modeling is a valuable tool for virtual materials testing, where a large number of realistic 3D microstructures can be generated and used as geometry input for spatially-resolved numerical simulations. Since the vast majority of numerical simulations is based on solving differential equations, it is essential to have fast and robust methods for generating high-quality volume meshes for the geometrically complex microstructure domains. The present paper introduces a novel method for generating volume meshes with periodic boundary conditions based on an analytical representation of the 3D microstructure using spherical harmonics. Due to its generality, the present method is applicable to many scientific areas.
In particular, we present some numerical examples with applications to battery research by making use of an already existing stochastic 3D microstructure model that has been calibrated to eight differently compacted cathodes. • P. Vágner, M. Pavelka, O. Esen, Multiscale thermodynamics of charged mixtures, Continuum Mechanics and Thermodynamics, published online on 25.07.2020, DOI 10.1007/s00161-020-00900-5. A multiscale theory of interacting continuum mechanics and thermodynamics of mixtures of fluids, electrodynamics, polarization and magnetization is proposed. The mechanical (reversible) part of the theory is constructed in a purely geometric way by means of semidirect products. This leads to a complex Hamiltonian system with a new Poisson bracket, which can be used in principle with any energy functional. The thermodynamic (irreversible) part is added as gradient dynamics, generated by derivatives of a dissipation potential, which makes the theory part of the GENERIC framework. Subsequently, Dynamic MaxEnt reductions are carried out, which lead to reduced GENERIC models for smaller sets of state variables. Eventually, standard engineering models are recovered as the low-level limits of the detailed theory. The theory is then compared to recent literature. • C. Cancès, C. Chainais-Hillairet, J. Fuhrmann, B. Gaudeul, A numerical analysis focused comparison of several finite volume schemes for an unipolar degenerated drift-diffusion model, IMA Journal of Numerical Analysis, 41 (2021), pp. 271--314 (published online on 17.07.2020), DOI 10.1093/imanum/draa002. In this paper, we consider a unipolar degenerated drift-diffusion system where the relation between the concentration of the charged species c and the chemical potential h is h(c) = log(c/(1-c)). We design four different finite volume schemes based on four different formulations of the fluxes. We provide a stability analysis and existence results for the four schemes.
The convergence proof with respect to the discretization parameters is established for two of them. Numerical experiments illustrate the behaviour of the different schemes. • D.H. Doan, A. Fischer, J. Fuhrmann, A. Glitzky, M. Liero, Drift-diffusion simulation of S-shaped current-voltage relations for organic semiconductor devices, Journal of Computational Electronics, 19 (2020), pp. 1164--1174, DOI 10.1007/s10825-020-01505-6. We present an electrothermal drift-diffusion model for organic semiconductor devices with Gauss-Fermi statistics and positive temperature feedback for the charge carrier mobilities. We apply temperature dependent Ohmic contact boundary conditions for the electrostatic potential and discretize the system by a finite volume based generalized Scharfetter-Gummel scheme. Using path-following techniques we demonstrate that the model exhibits S-shaped current-voltage curves with regions of negative differential resistance, which were only recently observed. • J. Fuhrmann, M. Landstorfer, R. Müller, Modeling polycrystalline electrode-electrolyte interfaces: The differential capacitance, Journal of The Electrochemical Society, 167 (2020), pp. 106512/1--106512/15, DOI 10.1149/1945-7111/ab9cca. We present and analyze a model for polycrystalline electrode surfaces based on an improved continuum model that takes finite ion size and solvation into account. The numerical simulation of finite size facet patterns allows the study of two limiting cases: While for facet size diameter $d^{facet} \to 0$ we obtain the typical capacitance of a spatially homogeneous but possibly amorphous or liquid surface, in the limit $L^{Debye} \ll d^{facet}$, an ensemble of non-interacting single crystal surfaces is approached. Already for moderate size of the facet diameters, the capacitance is remarkably well approximated by the classical approach of adding the single crystal capacities of the contributing facets weighted by their respective surface fraction.
As a consequence, the potential of zero charge is not necessarily attained at a local minimum of capacitance, but might be located at a local capacitance maximum instead. Moreover, the results show that surface roughness can be accurately taken into account by multiplication of the ideally flat polycrystalline surface capacitance with a single factor. In particular, we find that the influence of the actual geometry of the facet pattern is negligible, and our theory opens the way to a stochastic description of complex real polycrystal surfaces. • A. Mielke, A. Stephan, Coarse-graining via EDP-convergence for linear fast-slow reaction systems, Mathematical Models & Methods in Applied Sciences, 30 (2020), pp. 1765--1807, DOI 10.1142/S0218202520500360. We consider linear reaction systems with slow and fast reactions, which can be interpreted as master equations or Kolmogorov forward equations for Markov processes on a finite state space. We investigate their limit behavior if the fast reaction rates tend to infinity, which leads to a coarse-grained model where the fast reactions create microscopically equilibrated clusters, while the exchange of mass between the clusters occurs on the slow time scale. Assuming detailed balance, the reaction system can be written as a gradient flow with respect to the relative entropy. Focusing on the physically relevant cosh-type gradient structure, we show how an effective limit gradient structure can be rigorously derived and that the coarse-grained equation again has a cosh-type gradient structure. We obtain the strongest version of convergence in the sense of the Energy-Dissipation Principle (EDP), namely EDP-convergence with tilting. • M. Landstorfer, Mathematische Modellierung elektrokatalytischer Zellen, Mitteilungen der Deutschen Mathematiker-Vereinigung, 26 (2019), pp. 161--163. • P. Vágner, C. Guhlke, V. Miloš, R. Müller, J.
Fuhrmann, A continuum model for yttria-stabilised zirconia incorporating triple phase boundary, lattice structure and immobile oxide ions, Journal of Solid State Electrochemistry, 23 (2019), pp. 2907--2926, DOI 10.1007/s10008-019-04356-9. A continuum model for yttria-stabilised zirconia (YSZ) in the framework of non-equilibrium thermodynamics is developed. Particular attention is given to i) modeling of the YSZ-metal-gas triple phase boundary, ii) incorporation of the lattice structure and immobile oxide ions within the free energy model and iii) surface reactions. A finite volume discretization method based on modified Scharfetter-Gummel fluxes is derived in order to perform numerical simulations. The model is used to study the impact of yttria and immobile oxide ions on the structure of the charged boundary layer and the double layer capacitance. Cyclic voltammograms of an air-half cell are simulated to study the effect of parameter variations on surface reactions, adsorption and anion diffusion. • V. Klika, M. Pavelka, P. Vágner, M. Grmela, Dynamic maximum entropy reduction, Entropy. An International and Interdisciplinary Journal of Entropy and Information Studies, 21 (2019), pp. 1--27. • W. Dreyer, C. Guhlke, R. Müller, The impact of solvation and dissociation on the transport parameters of liquid electrolytes: Continuum modeling and numerical study, European Physical Journal Special Topics, 227 (2019), pp. 2515--2538, DOI 10.1140/epjst/e2019-800133-2. Electro-thermodynamics provides a consistent framework to derive continuum models for electrochemical systems. For the application to a specific experimental system, the general model must be equipped with two additional ingredients: a free energy model to calculate the chemical potentials and a kinetic model for the kinetic coefficients.
Suitable free energy models for liquid electrolytes incorporating ion-solvent interaction, finite ion sizes and solvation already exist and have been validated against experimental measurements. In this work, we focus on the modeling of the mobility coefficients based on the Maxwell--Stefan setting and incorporate them into the general electro-thermodynamic framework. Moreover, we discuss the impact of model parameters on conductivity, transference numbers and the salt diffusion coefficient. In particular, the focus is set on the solvation of ions and incomplete dissociation of a non-dilute electrolyte. • J. Fuhrmann, C. Guhlke, Ch. Merdon, A. Linke, R. Müller, Induced charge electroosmotic flow with finite ion size and solvation effects, Electrochimica Acta, 317 (2019), pp. 778--785, DOI 10.1016/j.electacta.2019.05.051. • W. Dreyer, P. Friz, P. Gajewski, C. Guhlke, M. Maurelli, Stochastic many-particle model for LFP electrodes, Continuum Mechanics and Thermodynamics, 30 (2018), pp. 593--628, DOI 10.1007/s00161-018-0629-7. In the framework of non-equilibrium thermodynamics we derive a new model for porous electrodes. The model is applied to LiFePO4 (LFP) electrodes consisting of many LFP particles of nanometer size. The phase transition from a lithium-poor to a lithium-rich phase within LFP electrodes is controlled by surface fluctuations leading to a system of stochastic differential equations. The model is capable of deriving an explicit relation between battery voltage and current that is controlled by thermodynamic state variables. This voltage-current relation reveals that in thin LFP electrodes lithium intercalation from the particle surfaces into the LFP particles is the principal rate limiting process. There are only two constant kinetic parameters in the model, describing the intercalation rate and the fluctuation strength, respectively. The model correctly predicts several features of LFP electrodes, viz.
the phase transition, the observed voltage plateaus, hysteresis and the rate limiting capacity. Moreover we study the impact of both the particle size distribution and the active surface area on the voltage-charge characteristics of the electrode. Finally we carefully discuss the phase transition for varying charging/discharging rates. • M. Landstorfer, On the dissociation degree of ionic solutions considering solvation effects, Electrochemistry Communications, 92 (2018), pp. 56--59, DOI 10.1016/j.elecom.2018.05.011. In this work the impact of solvation effects on the dissociation degree of strong electrolytes and salts is discussed. The investigation is based on a thermodynamic model which is capable of predicting, qualitatively and quantitatively, the double layer capacity of various electrolytes. A remarkable relationship between capacity maxima, partial molar volume of ions in solution, and solvation numbers provides an experimental access to determine the number of solvent molecules bound to a specific ion in solution. This shows that the Stern layer is actually a saturated solution of 1 mol L$^{-1}$ solvated ions, and we point out some fundamental similarities of this state to a saturated bulk solution. Our finding challenges the assumption of complete dissociation, even for moderate electrolyte concentrations, and we therefore introduce an undissociated ion pair in solution. We re-derive the equilibrium conditions for a two-step dissociation reaction, including solvation effects, which leads to a new relation to determine the dissociation degree. A comparison to Ostwald's dilution law clearly shows the shortcomings when solvation effects are neglected, and we emphasize that complete dissociation is questionable beyond 0.5 mol L$^{-1}$ for aqueous, mono-valent electrolytes. • L. Donati, M. Heida, M. Weber, B. Keller, Estimation of the infinitesimal generator by square-root approximation, Journal of Physics: Condensed Matter, 30 (2018), pp.
425201/1--425201/14, DOI 10.1088/1361-648X/aadfc8. For the analysis of molecular processes, the estimation of time-scales, i.e., transition rates, is very important. Estimating the transition rates between molecular conformations is -- from a mathematical point of view -- an invariant subspace projection problem. A certain infinitesimal generator acting on function space is projected to a low-dimensional rate matrix. This projection can be performed in two steps. First, the infinitesimal generator is discretized, then the invariant subspace is approximated and used for the subspace projection. In our approach, the discretization will be based on a Voronoi tessellation of the conformational space. We will show that the discretized infinitesimal generator can simply be approximated by the geometric average of the Boltzmann weights of the Voronoi cells. Thus, there is a direct correlation between the potential energy surface of molecular structures and the transition rates of conformational changes. We present results for a 2D diffusion process and alanine dipeptide. • W. Dreyer, C. Guhlke, R. Müller, Bulk-surface electro-thermodynamics and applications to electrochemistry, Entropy. An International and Interdisciplinary Journal of Entropy and Information Studies, 20 (2018), pp. 939/1--939/44, DOI 10.3390/e20120939. We propose a modeling framework for magnetizable, polarizable, elastic, viscous, heat conducting, reactive mixtures in contact with interfaces. To this end we first introduce bulk and surface balance equations that contain several constitutive quantities. For further modeling the constitutive quantities, we formulate constitutive principles. They are based on an axiomatic introduction of the entropy principle and the postulation of Galilean symmetry. We apply the proposed formalism to derive constitutive relations in a rather abstract setting.
For illustration of the developed procedure, we state an explicit isothermal material model for liquid electrolyte metal electrode interfaces in terms of free energy densities in the bulk and on the surface. Finally we give a survey of recent advancements in the understanding of electrochemical interfaces that were based on this model. • W. Dreyer, C. Guhlke, M. Landstorfer, R. Müller, New insights on the interfacial tension of electrochemical interfaces and the Lippmann equation, European Journal of Applied Mathematics, 29 (2018), pp. 708--753, DOI 10.1017/S0956792517000341. The Lippmann equation is considered as a universal relationship between interfacial tension, double layer charge, and cell potential. Based on the framework of continuum thermo-electrodynamics we provide some crucial new insights into this relation. In a previous work we have derived a general thermodynamically consistent model for electrochemical interfaces, which showed a remarkable agreement with single crystal experimental data. Here we apply the model to a curved liquid metal electrode. If the electrode radius is large compared to the Debye length, we apply asymptotic analysis methods and obtain the Lippmann equation. We give precise definitions of the involved quantities and show that the interfacial tension of the Lippmann equation is composed of the surface tension of our general model and contributions arising from the adjacent space charge layers. This finding is confirmed by a comparison of our model to experimental data of several mercury-electrolyte interfaces. We obtain qualitative and quantitative agreement in the 2 V potential range for various salt concentrations. We also discuss the validity of our asymptotic model when the electrode curvature radius is comparable to the Debye length. • M. Khodayari, P. Reinsberg, A.A. Abd-El-Latif, Ch. Merdon, J. Fuhrmann, H.
Baltruschat, Determining solubility and diffusivity by using a flow cell coupled to a mass spectrometer, ChemPhysChem, 17 (2016), pp. 1647--1655. • W. Dreyer, C. Guhlke, M. Landstorfer, Theory and structure of the metal/electrolyte interface incorporating adsorption and solvation effects, Electrochimica Acta, 201 (2016), pp. 187--219. In this work we present a continuum theory for the metal/electrolyte interface which explicitly takes into account adsorption and partial solvation on the metal surface. It is based on a general theory of coupled thermo-electrodynamics for volumes and surfaces, utilized here in equilibrium and a 1D approximation. We provide explicit free energy models for the volumetric metal and electrolyte phases and derive a surface free energy for the species present on the metal surface. This surface mixture theory explicitly takes into account the very different number of sites an adsorbate requires, originating from solvation effects on the surface. Additionally we account for electron transfer reactions on the surface and the associated stripping of the solvation shell. Based on our overall surface free energy we thus provide explicit expressions for the surface chemical potentials of all constituents. The equilibrium representations of the coverages and the overall charge are briefly summarized. Our model is then used to describe two examples: (i) a silver single crystal electrode with (100) face in contact to a (0.01M NaF + 0.01M KPF6) aqueous solution, and (ii) a general metal surface in contact to some electrolytic solution AC for which an electron transfer reaction occurs in the potential range of interest. We outline the actual modeling procedure for these examples and discuss the respective model parameters. Due to the representations of the coverages in terms of the applied potential we provide an adsorption map and introduce adsorption potentials.
Finally we investigate the structure of the space charge layer at the metal/surface/electrolyte interface by means of numerical solutions of the coupled Poisson-momentum equation system for various applied potentials. It turns out that various layers self-consistently form within the overall space charge region, which are compared to historic and recent pictures of the double layer. Based on this we present new interpretations of what is known as inner and outer Helmholtz-planes and finally provide a thermodynamic consistent picture of the metal/electrolyte interface structure. • W. Dreyer, C. Guhlke, R. Müller, A new perspective on the electron transfer: Recovering the Butler--Volmer equation in non-equilibrium thermodynamics, Physical Chemistry Chemical Physics, 18 (2016), pp. 24966--24983, DOI 10.1039/C6CP04142F . Understanding and correct mathematical description of electron transfer reaction is a central question in electrochemistry. Typically the electron transfer reactions are described by the Butler-Volmer equation which has its origin in kinetic theories. The Butler-Volmer equation relates interfacial reaction rates to bulk quantities like the electrostatic potential and electrolyte concentrations. Since in the classical form, the validity of the Butler-Volmer equation is limited to some simple electrochemical systems, many attempts have been made to generalize the Butler-Volmer equation. Based on non-equilibrium thermodynamics we have recently derived a reduced model for the electrode-electrolyte interface. This reduced model includes surface reactions but does not resolve the charge layer at the interface. Instead it is locally electroneutral and consistently incorporates all features of the double layer into a set of interface conditions. In the context of this reduced model we are able to derive a general Butler-Volmer equation. 
We discuss the application of the new Butler-Volmer equations to different scenarios like electron transfer reactions at metal electrodes, the intercalation process in lithium-iron-phosphate electrodes and adsorption processes. We illustrate the theory by an example of electroplating. • J. Fuhrmann, A numerical strategy for Nernst--Planck systems with solvation effect, Fuel Cells, 16 (2016), pp. 704--714. • J. Fuhrmann, A. Linke, Ch. Merdon, F. Neumann, T. Streckenbach, H. Baltruschat, M. Khodayari, Inverse modeling of thin layer flow cells for detection of solubility, transport and reaction coefficients from experimental data, Electrochimica Acta, 211 (2016), pp. 1--10. Thin layer flow cells are used in electrochemical research as experimental devices which allow to perform investigations of electrocatalytic surface reactions under controlled conditions using reasonably small electrolyte volumes. The paper introduces a general approach to simulate the complete cell using accurate numerical simulation of the coupled flow, transport and reaction processes in a flow cell. The approach is based on a mass conservative coupling of a divergence-free finite element method for fluid flow and a stable finite volume method for mass transport. It allows to perform stable and efficient forward simulations that comply with the physical bounds namely mass conservation and maximum principles for the involved species. In this context, several recent approaches to obtain divergence-free velocities from finite element simulations are discussed. In order to perform parameter identification, the forward simulation method is coupled to standard optimization tools. After an assessment of the inverse modeling approach using known realistic data, first results of the identification of solubility and transport data for O2 dissolved in organic electrolytes are presented. 
A plausibility study for a more complex situation with surface reactions concludes the paper and shows possible extensions of the scope of the presented numerical tools. • W. Dreyer, C. Guhlke, R. Müller, Modeling of electrochemical double layers in thermodynamic non-equilibrium, Physical Chemistry Chemical Physics, 17 (2015), pp. 27176--27194, DOI 10.1039/C5CP03836G . We consider the contact between an electrolyte and a solid electrode. At first we formulate a thermodynamic consistent model that resolves boundary layers at interfaces. The model includes charge transport, diffusion, chemical reactions, viscosity, elasticity and polarization under isothermal conditions. There is a coupling between these phenomena that particularly involves the local pressure in the electrolyte. Therefore the momentum balance is of major importance for the correct description of the layers. The width of the boundary layers is typically very small compared to the macroscopic dimensions of the system. In a second step we thus apply the method of asymptotic analysis to derive a simpler reduced model that does not resolve the boundary layers but instead incorporates the electrochemical properties of the layers into a set of new boundary conditions. For a metal-electrolyte interface, we derive a qualitative description of the double layer capacitance without the need to resolve space charge layers. • J. Fuhrmann, Comparison and numerical treatment of generalized Nernst--Planck models, Computer Physics Communications. An International Journal and Program Library for Computational Physics and Physical Chemistry, 196 (2015), pp. 166--178. In its most widespread, classical formulation, the Nernst-Planck-Poisson system for ion transport in electrolytes fails to take into account finite ion sizes. As a consequence, it predicts unphysically high ion concentrations near electrode surfaces. Historical and recent approaches to an appropriate modification of the model are able to fix this problem.
Several appropriate formulations are compared in this paper. The resulting equations are reformulated using absolute activities as basic variables describing the species amounts. This reformulation allows to introduce a straightforward generalisation of the Scharfetter-Gummel finite volume discretization scheme for drift-diffusion equations. It is shown that it is thermodynamically consistent in the sense that the solution of the corresponding discretized generalized Poisson-Boltzmann system describing the thermodynamic equilibrium is a stationary state of the discretized time-dependent generalized Nernst-Planck system. Numerical examples demonstrate the improved physical correctness of the generalised models and the feasibility of the numerical approach. • A. Mielke, J. Haskovec, P.A. Markowich, On uniform decay of the entropy for reaction-diffusion systems, Journal of Dynamics and Differential Equations, 27 (2015), pp. 897--928. In this work we derive entropy decay estimates for a class of nonlinear reaction-diffusion systems modeling reversible chemical reactions under the assumption of detailed balance. In particular, we provide explicit bounds for the exponential decay of the relative logarithmic entropy, being based essentially on the application of the log-Sobolev inequality and a convexification argument only, making it quite robust to model variations. An important feature of our analysis is the interaction of the two different dissipative mechanisms: pure diffusion, forcing the system asymptotically to the homogeneous state, and pure reaction, forcing the solution to the (possibly inhomogeneous) chemical equilibrium. Only the interaction of both mechanisms provides the convergence to the homogeneous equilibrium. Moreover, we introduce two generalizations of the main result: we allow for vanishing diffusion constants in some chemical components, and we consider different entropy functionals. 
We provide a few examples to highlight the usability of our approach and briefly discuss possible further applications and open questions. • M. Liero, A. Mielke, Gradient structures and geodesic convexity for reaction-diffusion systems, Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering Sciences, 371 (2013), pp. 20120346/1--20120346/28. We consider systems of reaction-diffusion equations as gradient systems with respect to an entropy functional and a dissipation metric given in terms of a so-called Onsager operator, which is a sum of a diffusion part of Wasserstein type and a reaction part. We provide methods for establishing geodesic lambda-convexity of the entropy functional by purely differential methods, thus circumventing arguments from mass transportation. Finally, several examples, including a drift-diffusion system, provide a survey on the applicability of the theory. • M. Liero, Passing from bulk to bulk/surface evolution in the Allen--Cahn equation, NoDEA. Nonlinear Differential Equations and Applications, 20 (2013), pp. 919--942. In this paper we formulate a boundary layer approximation for an Allen-Cahn-type equation involving a small parameter $\varepsilon$. Here, $\varepsilon$ is related to the thickness of the boundary layer and we are interested in the limit when $\varepsilon$ tends to 0 in order to derive nontrivial boundary conditions.
The evolution of the system is written as an energy balance formulation of the L^2-gradient flow with the corresponding Allen-Cahn energy functional. By transforming the boundary layer to a fixed domain we show the convergence of the solutions to a solution of a limit system. This is done by using concepts related to Gamma- and Mosco convergence. By considering different scalings in the boundary layer we obtain different boundary conditions. • A. Glitzky, A. Mielke, A gradient structure for systems coupling reaction-diffusion effects in bulk and interfaces, ZAMP. Zeitschrift für Angewandte Mathematik und Physik. Journal of Applied Mathematics and Physics. Journal de Mathématiques et de Physique Appliquées, 64 (2013), pp. 29--52. We derive gradient-flow formulations for systems describing drift-diffusion processes of a finite number of species which undergo mass-action type reversible reactions. Our investigations cover heterostructures, where material parameters may depend in a nonsmooth way on the space variable. The main results concern a gradient flow formulation for electro-reaction-diffusion systems with active interfaces permitting drift-diffusion processes and reactions of species living on the interface and transfer mechanisms allowing bulk species to jump into an interface or to pass through interfaces. The gradient flows are formulated in terms of two functionals: the free energy and the dissipation potential. Both functionals consist of a bulk and an interface integral. The interface integrals determine the interface dynamics as well as the self-consistent coupling to the model in the bulk. The advantage of the gradient structure is that it automatically generates thermodynamically consistent models. • W. Dreyer, C. Guhlke, R. Müller, Overcoming the shortcomings of the Nernst--Planck model, Physical Chemistry Chemical Physics, 15 (2013), pp. 7075--7086, DOI 10.1039/C3CP44390F .
This is a study on electrolytes that takes a thermodynamically consistent coupling between mechanics and diffusion into account. It removes some inherent deficiencies of the popular Nernst-Planck model. A boundary problem for equilibrium processes is used to illustrate the new features of our model. • A. Mielke, Thermomechanical modeling of energy-reaction-diffusion systems, including bulk-interface interactions, Discrete and Continuous Dynamical Systems -- Series S, 6 (2013), pp. 479--499. We show that many couplings between parabolic systems for processes in solids can be formulated as a gradient system with respect to the total free energy or the total entropy. This includes Allen-Cahn, Cahn-Hilliard, and reaction-diffusion systems and the heat equation. For this, we write the coupled system as an Onsager system (X,Φ,K) defining the evolution dU/dt = -K(U) DΦ(U). Here Φ is the driving functional, while the Onsager operator K(U) is symmetric and positive semidefinite. If the inverse G = K^-1 exists, the triple (X,Φ,G) defines a gradient system. Onsager systems are well suited to model bulk-interface interactions by using the dual dissipation potential Ψ^*(U, Ξ) = ½ 〈Ξ, K(U) Ξ〉. Then, the two functionals Φ and Ψ^* can be written as a sum of a volume integral and a surface integral, respectively. The latter may contain interactions of the driving forces in the interface as well as the traces of the driving forces from the bulk. Thus, capture and escape mechanisms like thermionic emission appear naturally in Onsager systems, namely simply through integration by parts. • M. Liero, Th. Roche, Rigorous derivation of a plate theory in linear elastoplasticity via Γ-convergence, NoDEA. Nonlinear Differential Equations and Applications, 19 (2012), pp. 437--457. This paper deals with dimension reduction in linearized elastoplasticity in the rate-independent case.
The reference configuration of the elastoplastic body is given by a two-dimensional middle surface and a small but positive thickness. We derive a limiting model for the case in which the thickness of the plate tends to 0. This model contains membrane and plate deformations which are coupled via plastic strains. The convergence analysis is based on an abstract Γ-convergence theory for rate-independent evolution formulated in the framework of energetic solutions. This concept is based on an energy-storage functional and a dissipation functional, such that the notion of solution is phrased in terms of a stability condition and an energy balance. • Ch. Bataillon, F. Bouchon, C. Chainais-Hillairet, J. Fuhrmann, E. Hoarau, R. Touzani, Numerical methods for the simulation of a corrosion model in a nuclear waste deep repository, Journal of Computational Physics, 231 (2012), pp. 6213--6231. In this paper, we design numerical methods for a PDE system arising in corrosion modelling. This system describes the evolution of a dense oxide layer. It is based on a drift-diffusion system and includes moving boundary equations. The choice of the numerical methods is justified by a stability analysis and by the study of their numerical performance. Finally, numerical experiments with real-life data show the efficiency of the developed methods. • A. Glitzky, An electronic model for solar cells including active interfaces and energy resolved defect densities, SIAM Journal on Mathematical Analysis, 44 (2012), pp. 3874--3900. We introduce an electronic model for solar cells taking into account heterostructures with active interfaces and energy resolved volume and interface trap densities.
The model consists of continuity equations for electrons and holes with thermionic emission transfer conditions at the interface and of ODEs for the trap densities with energy level and spatial position as parameters, where the right hand sides contain generation-recombination as well as ionization reactions. This system is coupled with a Poisson equation for the electrostatic potential. We show the thermodynamic correctness of the model and prove a priori estimates for the solutions to the evolution system. Moreover, existence and uniqueness of weak solutions of the problem are proven. For this purpose we solve a regularized problem and verify bounds of the corresponding solution not depending on the regularization level. • J. Fuhrmann, H. Zhao, H. Langmach, Y.E. Seidel, Z. Jusys, R.J. Behm, The role of reactive reaction intermediates in two-step heterogeneous electro-catalytic reactions: A model study, Fuel Cells, 11 (2011), pp. 501--510. Experimental investigations of heterogeneous electrocatalytic reactions have been performed in flow cells which provide an environment with controlled parameters. Measurements of the oxygen reduction reaction in a flow cell with an electrode consisting of an array of Pt nanodisks on a glassy carbon substrate exhibited a decreasing fraction of the intermediate $H_2O_2$ in the overall reaction products with increasing density of the nanodiscs. A similar result is true for the dependence on the catalyst loading in the case of a supported Pt/C catalyst thin-film electrode, where the fraction of the intermediate decreases with increasing catalyst loading. Similar effects have been detected for the methanol oxidation. We present a model of multistep heterogeneous electrocatalytic oxidation and reduction reactions based on an adsorption-reaction-desorption scheme using the Langmuir assumption and macroscopic transport equations. 
A continuum based model problem in a vertical cross section of a rectangular flow cell is proposed in order to explain basic principles of the experimental situation. It includes three model species A, B, C, which undergo adsorption and desorption at a catalyst surface, as well as adsorbate reactions from A to B to C. These surface reactions are coupled with diffusion and advection in the Hagen-Poiseuille flow in the flow chamber of the cell. Both high velocity asymptotic theory and a finite volume numerical scheme are used to obtain approximate solutions to the model. Both approaches show a behaviour similar to the experimentally observed one. Working in more general situations, the finite volume scheme was applied to a catalyst layer consisting of a number of small catalytically active areas corresponding to nanodisks. Good qualitative agreement with the experimental findings was established for this case as well. • A. Mielke, A gradient structure for reaction-diffusion systems and for energy-drift-diffusion systems, Nonlinearity, 24 (2011), pp. 1329--1346. In recent years the theory of Wasserstein distances has opened up a new treatment of the diffusion equations as gradient systems, where the entropy takes the role of the driving functional and where the space is equipped with the Wasserstein metric. We show that this structure can be generalized to closed reaction-diffusion systems, where the free energy (or the entropy) is the driving functional and further conserved quantities may exist, like the total number of chemical species. The metric is constructed by using the dual dissipation potential, which is a convex function of the chemical potentials. In particular, it is possible to treat diffusion and reaction terms simultaneously. The same ideas extend to semiconductor equations involving the electron and hole densities, the electrostatic potential, and the temperature. • A.
Glitzky, Uniform exponential decay of the free energy for Voronoi finite volume discretized reaction-diffusion systems, Mathematische Nachrichten, 284 (2011), pp. 2159--2174. Our focus is on energy estimates for discretized reaction-diffusion systems for a finite number of species. We introduce a discretization scheme (Voronoi finite volume in space and fully implicit in time) which has the special property that it preserves the main features of the continuous systems, namely positivity, dissipativity and flux conservation. For a class of Voronoi finite volume meshes we investigate thermodynamic equilibria and prove for solutions to the evolution system the monotone and exponential decay of the discrete free energy to its equilibrium value with a unified rate of decay for this class of discretizations. The essential idea is an estimate of the free energy by the dissipation rate which is proved indirectly by taking into account sequences of Voronoi finite volume meshes. An essential ingredient in that proof is a discrete Sobolev-Poincaré inequality. • A. Glitzky, J.A. Griepentrog, Discrete Sobolev--Poincaré inequalities for Voronoi finite volume approximations, SIAM Journal on Numerical Analysis, 48 (2010), pp. 372--391. We prove a discrete Sobolev-Poincaré inequality for functions with arbitrary boundary values on Voronoi finite volume meshes. We use Sobolev's integral representation and estimate weakly singular integrals in the context of finite volumes. We establish the result for star shaped polyhedral domains and generalize it to the finite union of overlapping star shaped domains. In the appendix we prove a discrete Poincaré inequality for space dimensions greater than or equal to two. • R. Haller-Dintelmann, Ch. Meyer, J. Rehberg, A. Schiela, Hölder continuity and optimal control for nonsmooth elliptic problems, Applied Mathematics and Optimization. An International Journal with Applications to Stochastics, 60 (2009), pp. 397--428.
The well-known De Giorgi result on Hölder continuity for solutions of the Dirichlet problem is re-established for mixed boundary value problems, provided that the underlying domain is a Lipschitz domain and the border between the Dirichlet and the Neumann boundary part satisfies a very general geometric condition. Implications of this result for optimal control theory are presented. • R. Haller-Dintelmann, J. Rehberg, Maximal parabolic regularity for divergence operators including mixed boundary conditions, Journal of Differential Equations, 247 (2009), pp. 1354--1396. We show that elliptic second order operators $A$ of divergence type fulfill maximal parabolic regularity on distribution spaces, even if the underlying domain is highly non-smooth and $A$ is complemented with mixed boundary conditions. Applications to quasilinear parabolic equations with non-smooth data are presented. • J. Fuhrmann, A. Linke, H. Langmach, H. Baltruschat, Numerical calculation of the limiting current for a cylindrical thin layer flow cell, Electrochimica Acta, 55 (2009), pp. 430--438. • A. Glitzky, Energy estimates for electro-reaction-diffusion systems with partly fast kinetics, Discrete and Continuous Dynamical Systems, 25 (2009), pp. 159--174. We start from a basic model for the transport of charged species in heterostructures containing the mechanisms diffusion, drift and reactions in the domain and at its boundary. Considering limit cases of partly fast kinetics we derive reduced models. This reduction can be interpreted as some kind of projection scheme for the weak formulation of the basic electro-reaction-diffusion system. We verify assertions concerning invariants and steady states and prove the monotone and exponential decay of the free energy along solutions to the reduced problem and to its fully implicit discrete-time version by means of the results of the basic problem. Moreover we make a comparison of prolongated quantities with the solutions to the basic model. • A.
Glitzky, K. Gärtner, Energy estimates for continuous and discretized electro-reaction-diffusion systems, Nonlinear Analysis. Theory, Methods & Applications. An International Multidisciplinary Journal. Series A: Theory and Methods, 70 (2009), pp. 788--805. We consider electro-reaction-diffusion systems consisting of continuity equations for a finite number of species coupled with a Poisson equation. We take into account heterostructures, anisotropic materials and rather general statistical relations. We investigate thermodynamic equilibria and prove for solutions to the evolution system the monotone and exponential decay of the free energy to its equilibrium value. Here the essential idea is an estimate of the free energy by the dissipation rate which is proved indirectly. The same properties are shown for an implicit time discretized version of the problem. Moreover, we provide a space discretized scheme for the electro-reaction-diffusion system which is dissipative (the free energy decays monotonously). On a fixed grid we use for each species different Voronoi boxes which are defined with respect to the anisotropy matrix occurring in the flux term of this species. • R. Haller-Dintelmann, H.-Chr. Kaiser, J. Rehberg, Elliptic model problems including mixed boundary conditions and material heterogeneities, Journal de Mathématiques Pures et Appliquées, 89 (2008), pp. 25--48. • M. Hieber, J. Rehberg, Quasilinear parabolic systems with mixed boundary conditions on nonsmooth domains, SIAM Journal on Mathematical Analysis, 40 (2008), pp. 292--305. In this paper we investigate quasilinear systems of reaction-diffusion equations with mixed Dirichlet-Neumann boundary conditions on nonsmooth domains. Using techniques from maximal regularity and heat-kernel estimates we prove existence of a unique solution to systems of this type. • J. Fuhrmann, H. Zhao, E. Holzbecher, H.
Langmach, Flow, transport, and reactions in a thin layer flow cell, Journal of Fuel Cell Science and Technology, 5 (2008), pp. 021008/1--021008/10. • A. Glitzky, Exponential decay of the free energy for discretized electro-reaction-diffusion systems, Nonlinearity, 21 (2008), pp. 1989--2009. Our focus is on electro-reaction-diffusion systems consisting of continuity equations for a finite number of species coupled with a Poisson equation. We take into account heterostructures, anisotropic materials and rather general statistical relations. We introduce a discretization scheme (in space and fully implicit in time) using a fixed grid but for each species different Voronoi boxes which are defined with respect to the anisotropy matrix occurring in the flux term of this species. This scheme has the special property that it preserves the main features of the continuous systems, namely positivity, dissipativity and flux conservation. For the discretized electro-reaction-diffusion system we investigate thermodynamic equilibria and prove for solutions to the evolution system the monotone and exponential decay of the free energy to its equilibrium value. The essential idea is an estimate of the free energy by the dissipation rate which is proved indirectly. • J.A. Griepentrog, Maximal regularity for nonsmooth parabolic problems in Sobolev--Morrey spaces, Advances in Differential Equations, 12 (2007), pp. 1031--1078. This text is devoted to maximal regularity results for second order parabolic systems on Lipschitz domains of space dimension greater than or equal to three with diagonal principal part, nonsmooth coefficients, and nonhomogeneous mixed boundary conditions. We show that the corresponding class of initial boundary value problems generates isomorphisms between two scales of Sobolev-Morrey spaces for solutions and right hand sides introduced in the first part of our presentation. The solutions depend smoothly on the data of the problem.
Moreover, they are Hölder continuous in time and space up to the boundary for a certain range of Morrey exponents. Due to the complete continuity of embedding and trace maps these results remain true for a broad class of unbounded lower order coefficients. • J.A. Griepentrog, Sobolev--Morrey spaces associated with evolution equations, Advances in Differential Equations, 12 (2007), pp. 781--840. In this text we introduce new classes of Sobolev-Morrey spaces being adequate for the regularity theory of second order parabolic boundary value problems on Lipschitz domains of space dimension greater than or equal to three with nonsmooth coefficients and mixed boundary conditions. We prove embedding and trace theorems as well as invariance properties of these spaces with respect to localization, Lipschitz transformation, and reflection. In the second part of our presentation we show that the class of second order parabolic systems with diagonal principal part generates isomorphisms between the above-mentioned Sobolev-Morrey spaces of solutions and right hand sides. • O. Minet, H. Gajewski, J.A. Griepentrog, J. Beuthan, The analysis of laser light scattering during rheumatoid arthritis by image segmentation, Laser Physics Letters, 4 (2007), pp. 604--610. • J. Elschner, H.-Chr. Kaiser, J. Rehberg, G. Schmidt, $W^{1,q}$ regularity results for elliptic transmission problems on heterogeneous polyhedra, Mathematical Models & Methods in Applied Sciences, 17 (2007), pp. 593--615. • J. Elschner, J. Rehberg, G. Schmidt, Optimal regularity for elliptic transmission problems including $C^1$ interfaces, Interfaces and Free Boundaries. Mathematical Modelling, Analysis and Computation, 9 (2007), pp. 233--252. We prove an optimal regularity result for elliptic operators $-\nabla \cdot \mu \nabla : W^{1,q}_0 \rightarrow W^{-1,q}$ for a $q>3$ in the case when the coefficient function $\mu$ has a jump across a $C^1$ interface and is continuous elsewhere.
A counterexample shows that the $C^1$ condition cannot be relaxed in general. Finally, we draw some conclusions for corresponding parabolic operators. • A. Glitzky, R. Hünlich, Resolvent estimates in $W^{-1,p}$ related to strongly coupled linear parabolic systems with coupled nonsmooth capacities, Mathematical Methods in the Applied Sciences, 30 (2007), pp. 2215--2232. We investigate linear parabolic systems with coupled nonsmooth capacities and mixed boundary conditions. We prove generalized resolvent estimates in $W^{-1,p}$ spaces. The method is an appropriate modification of a technique introduced by Agmon to obtain $L^p$ estimates for resolvents of elliptic differential operators in the case of smooth boundary conditions. Moreover, we establish an existence and uniqueness result. • H. Gajewski, J.A. Griepentrog, A descent method for the free energy of multicomponent systems, Discrete and Continuous Dynamical Systems, 15 (2006), pp. 505--528. • H.-Chr. Kaiser, H. Neidhardt, J. Rehberg, Classical solutions of quasilinear parabolic systems on two dimensional domains, NoDEA. Nonlinear Differential Equations and Applications, 13 (2006), pp. • M. Baro, H. Neidhardt, J. Rehberg, Current coupling of drift-diffusion models and dissipative Schrödinger--Poisson systems: Dissipative hybrid models, SIAM Journal on Mathematical Analysis, 37 (2005), pp. 941--981. • A. Glitzky, R. Hünlich, Global existence result for pair diffusion models, SIAM Journal on Mathematical Analysis, 36 (2005), pp. 1200--1225. • A. Glitzky, R. Hünlich, Stationary energy models for semiconductor devices with incompletely ionized impurities, ZAMM. Zeitschrift für Angewandte Mathematik und Mechanik, 85 (2005), pp. 778--792. • J. Rehberg, Quasilinear parabolic equations in $L^p$, Progress in Nonlinear Differential Equations and their Applications, 64 (2005), pp. 413--419. • V. Maz'ya, J. Elschner, J. Rehberg, G.
Schmidt, Solutions for quasilinear nonsmooth evolution systems in $L^p$, Archive for Rational Mechanics and Analysis, 171 (2004), pp. 219--262. • H. Gajewski, I.V. Skrypnik, On unique solvability of nonlocal drift-diffusion-type problems, Nonlinear Analysis. Theory, Methods & Applications. An International Multidisciplinary Journal. Series A: Theory and Methods, 56 (2004), pp. 803--830. • H. Gajewski, I.V. Skrypnik, To the uniqueness problem for nonlinear parabolic equations, Discrete and Continuous Dynamical Systems, 10 (2004), pp. 315--336. • A. Glitzky, W. Merz, Single dopant diffusion in semiconductor technology, Mathematical Methods in the Applied Sciences, 27 (2004), pp. 133--154. • A. Glitzky, R. Hünlich, Stationary solutions of two-dimensional heterogeneous energy models with multiple species, Banach Center Publications, 66 (2004), pp. 135--151. • A. Glitzky, Electro-reaction-diffusion systems with nonlocal constraints, Mathematische Nachrichten, 277 (2004), pp. 14--46. • H. Gajewski, K. Zacharias, On a nonlocal phase separation model, Journal of Mathematical Analysis and Applications, 286 (2003), pp. 11--31. • G. Albinus, H. Gajewski, R. Hünlich, Thermodynamic design of energy models of semiconductor devices, Nonlinearity, 15 (2002), pp. 367--383. • H. Gajewski, On a nonlocal model of non-isothermal phase separation, Advances in Mathematical Sciences and Applications, 12 (2002), pp. 569--586. • J.A. Griepentrog, K. Gröger, H.-Chr. Kaiser, J. Rehberg, Interpolation for function spaces related to mixed boundary value problems, Mathematische Nachrichten, 241 (2002), pp. 110--120. • J.A. Griepentrog, Linear elliptic boundary value problems with non-smooth data: Campanato spaces of functionals, Mathematische Nachrichten, 243 (2002), pp. 19--42. • A. Glitzky, R. Hünlich, Global properties of pair diffusion models, Advances in Mathematical Sciences and Applications, 11 (2001), pp. 293--321. • J.A. Griepentrog, H.-Chr. Kaiser, J.
Rehberg, Heat kernel and resolvent properties for second order elliptic differential operators with general boundary conditions on $L^p$, Advances in Mathematical Sciences and Applications, 11 (2001), pp. 87--112. • W. Merz, A. Glitzky, R. Hünlich, K. Pulverer, Strong solutions for pair diffusion models in homogeneous semiconductors, Nonlinear Analysis. Real World Applications. An International Multidisciplinary Journal, 2 (2001), pp. 541--567. • J.A. Griepentrog, L. Recke, Linear elliptic boundary value problems with non-smooth data: Normal solvability on Sobolev-Campanato spaces, Mathematische Nachrichten, 225 (2001), pp. 39--74. • A. Glitzky, R. Hünlich, Electro-reaction-diffusion systems including cluster reactions of higher order, Mathematische Nachrichten, 216 (2000), pp. 95--118. Contributions to Collected Editions • A. Linke, Ch. Merdon, On the significance of pressure-robustness for the space discretization of incompressible high Reynolds number flows, in: Finite Volumes for Complex Applications IX -- Methods, Theoretical Aspects, Examples -- FVCA 9, Bergen, June 2020, R. Klöfkorn, E. Keilegavlen, A.F. Radu, J. Fuhrmann, eds., 323 of Springer Proceedings in Mathematics & Statistics, Springer International Publishing, Cham et al., 2020, pp. 103--112. • A. Linke, Ch. Merdon, Well-balanced discretisation for the compressible Stokes problem by gradient-robustness, in: Finite Volumes for Complex Applications IX -- Methods, Theoretical Aspects, Examples -- FVCA 9, Bergen, June 2020, R. Klöfkorn, E. Keilegavlen, A.F. Radu, J. Fuhrmann, eds., 323 of Springer Proceedings in Mathematics & Statistics, Springer International Publishing, Cham et al., 2020, pp. 113--121. • C. Cancès, C. Chainais-Hillairet, J. Fuhrmann, B. Gaudeul, On four numerical schemes for a unipolar degenerate drift-diffusion model, in: Finite Volumes for Complex Applications IX -- Methods, Theoretical Aspects, Examples -- FVCA 9, Bergen, June 2020, R. Klöfkorn, F. Radu, E. Keilegavlen, J.
Fuhrmann, eds., Springer Proceedings in Mathematics & Statistics, Springer International Publishing, Cham et al., 2020, pp. 163--171, DOI 10.1007/978-3-030-43651-3_13. • J. Fuhrmann, C. Guhlke, A. Linke, Ch. Merdon, R. Müller, Models and numerical methods for electrolyte flows, in: Topics in Applied Analysis and Optimisation, M. Hintermüller, J.F. Rodrigues, eds., CIM Series in Mathematical Sciences, Springer Nature Switzerland AG, Cham, 2019, pp. 183--209. • J. Fuhrmann, C. Guhlke, A. Linke, Ch. Merdon, R. Müller, Voronoi finite volumes and pressure robust finite elements for electrolyte models with finite ion sizes, in: Numerical Geometry, Grid Generation and Scientific Computing. Proceedings of the 9th International Conference, NUMGRID 2018 / Voronoi 150, V.A. Garanzha, L. Kamenski, H. Si, eds., 131 of Lecture Notes in Computational Science and Engineering, Springer Nature Switzerland AG, Cham, 2019, pp. 73--83, DOI 10.1007/978-3-030-23436-2. • A. Fiebach, A. Glitzky, Uniform estimate of the relative free energy by the dissipation rate for finite volume discretized reaction-diffusion systems, in: Finite Volumes for Complex Applications VII -- Methods and Theoretical Aspects -- FVCA 7, Berlin, June 2014, J. Fuhrmann, M. Ohlberger, Ch. Rohde, eds., 77 of Springer Proceedings in Mathematics & Statistics, Springer International Publishing, Cham et al., 2014, pp. 275--283. We prove a uniform Poincaré-like estimate of the relative free energy by the dissipation rate for implicit Euler, finite volume discretized reaction-diffusion systems. This result is proven indirectly and ensures the exponential decay of the relative free energy with a unified decay rate for admissible finite volume meshes. • A. Glitzky, A. Mielke, L. Recke, M. Wolfrum, S. Yanchuk, D2 -- Mathematics for optoelectronic devices, in: MATHEON -- Mathematics for Key Technologies, M. Grötschel, D. Hömberg, J. Sprekels, V.
Mehrmann et al., eds., 1 of EMS Series in Industrial and Applied Mathematics, European Mathematical Society Publishing House, Zurich, 2014, pp. 243--256. • J. Fuhrmann, A. Linke, Ch. Merdon, Coupling of fluid flow and solute transport using a divergence-free reconstruction of the Crouzeix--Raviart element, in: Finite Volumes for Complex Applications VII -- Elliptic, Parabolic and Hyperbolic Problems -- FVCA 7, Berlin, June 2014, J. Fuhrmann, M. Ohlberger, Ch. Rohde, eds., 78 of Springer Proceedings in Mathematics & Statistics, Springer International Publishing, Cham et al., 2014, pp. 587--595. • J. Fuhrmann, K. Gärtner, Modeling of two-phase flow and catalytic reaction kinetics for DMFCs, in: Device and Materials Modeling in PEM Fuel Cells, S. Paddison, K. Promislow, eds., 113 of Topics in Applied Physics, Springer, Berlin/Heidelberg, 2009, pp. 297--316. • H. Gajewski, J.A. Griepentrog, A. Mielke, J. Beuthan, U. Zabarylo, O. Minet, Image segmentation for the investigation of scattered-light images when laser-optically diagnosing rheumatoid arthritis, in: Mathematics -- Key Technology for the Future, W. Jäger, H.-J. Krebs, eds., Springer, Heidelberg, 2008, pp. 149--161. • M. Ehrhardt, J. Fuhrmann, A. Linke, E. Holzbecher, Mathematical modeling of channel-porous layer interfaces in PEM fuel cells, in: Proceedings of FDFC2008 --- Fundamentals and Developments of Fuel Cell Conference 2008, Nancy, France, December 10--12 (CD), 2008, 8 pages. In proton exchange membrane (PEM) fuel cells, the transport of the fuel to the active zones, and the removal of the reaction products are realized using a combination of channels and porous diffusion layers. In order to improve existing mathematical and numerical models of PEM fuel cells, a deeper understanding of the coupling of the flow processes in the channels and diffusion layers is necessary.
After discussing different mathematical models for PEM fuel cells, the work will focus on the description of the coupling of the free flow in the channel region with the filtration velocity in the porous diffusion layer as well as interface conditions between them. The difficulty in finding effective coupling conditions at the interface between the channel flow and the membrane lies in the fact that often the orders of the corresponding differential operators are different, e.g., when using stationary (Navier-)Stokes and Darcy's equation. Alternatively, using the Brinkman model for the porous media this difficulty does not occur. We will review different interface conditions, including the well-known Beavers-Joseph-Saffman boundary condition and its recent improvement by Le Bars and Worster. • U. Bandelow, H. Gajewski, R. Hünlich, Thermodynamic designed energy model, in: Proceedings of the IEEE/LEOS 3rd International Conference on Numerical Simulation of Semiconductor Optoelectronic Devices (NUSOD'03), J. Piprek, ed., 2003, pp. 35--37. • H. Gajewski, H.-Chr. Kaiser, H. Langmach, R. Nürnberg, R.H. Richter, Mathematical modelling and numerical simulation of semiconductor detectors, in: Mathematics --- Key Technology for the Future. Joint Projects Between Universities and Industry, W. Jäger, H.-J. Krebs, eds., Springer, Berlin [u.a.], 2003, pp. 355--364. • I.V. Skrypnik, H. Gajewski, On the uniqueness of solutions to nonlinear elliptic and parabolic problems (in Russian), in: Differ. Uravn. i Din. Sist., dedicated to the 80th anniversary of the Academician Evgenii Frolovich Mishchenko, Suzdal, 2000, 236 of Tr. Mat. Inst. Steklova, Moscow, Russia, 2002, pp. 318--327. • U. Bandelow, H. Gajewski, H.-Chr. Kaiser, Modeling combined effects of carrier injection, photon dynamics and heating in Strained Multi-Quantum-Well Laser, in: Physics and Simulation of Optoelectronic Devices VIII, R.H. Binder, P. Blood, M. 
Osinski, eds., 3944 of Proceedings of SPIE, SPIE, Bellingham, WA, 2000, pp. 301--310. • G. Schwarz, E. Schöll, R. Nürnberg, H. Gajewski, Simulation of current filamentation in an extended drift-diffusion model, in: EQUADIFF 99: International Conference on Differential Equations, Berlin 1999, B. Fiedler, K. Gröger, J. Sprekels, eds., 2, World Scientific, Singapore [u. a.], 2000, pp. 1334--1336. • H. Gajewski, K. Zacharias, On a reaction-diffusion system modelling chemotaxis, in: EQUADIFF 99: International Conference on Differential Equations, Berlin 1999, B. Fiedler, K. Gröger, J. Sprekels, eds., 2, World Scientific, Singapore [u. a.], 2000, pp. 1098--1103. • H.-Chr. Kaiser, J. Rehberg, About some mathematical questions concerning the embedding of Schrödinger-Poisson systems into the drift-diffusion model of semiconductor devices, in: EQUADIFF 99: International Conference on Differential Equations, Berlin 1999, B. Fiedler, K. Gröger, J. Sprekels, eds., 2, World Scientific, Singapore [u. a.], 2000, pp. 1328--1333. Preprints, Reports, Technical Reports • Ch. Keller, J. Fuhrmann, M. Landstorfer, B. Wagner, A model framework for ion channels with selectivity filters based on continuum non-equilibrium thermodynamics, Preprint no. 3072, WIAS, Berlin, 2023, DOI 10.20347/WIAS.PREPRINT.3072. A mathematical model framework to describe ion transport in nanopores is presented. The model is based on non-equilibrium thermodynamics and considers finite size effects, solvation phenomena as well as the electrical charges of membrane surfaces and channel proteins. Particular emphasis is placed on the consistent modelling of the selectivity filter in the pore. It is treated as an embedded domain in which the constituents can change their chemical properties. The diffusion process through the filter is governed by an independent diffusion coefficient and at the interfaces, de- and resolvation reactions are introduced as Neumann interface conditions.
The evolution of the molar densities is described by drift-diffusion equations, where the fluxes depend on the gradient of the chemical potentials and the electric force. The chemical potentials depend on the molar fractions and on the pressure in the electrolyte and account for solvation effects. The framework allows the calculation of current-voltage relations for a variety of channel properties and ion concentrations. We compare our model framework to experimental results for calcium-selective ion channels and show the general validity of our approach. Our parameter studies show that calcium and sodium currents are proportional to the surface charge in the selectivity filter and to the diffusion coefficients of the ions. Moreover, they show that the negative charges inside the pore have a decisive influence on the selectivity of divalent over monovalent ions. • J. Fuhrmann, Modellierung, experimentelle Untersuchung und Simulation für Direkt-Methanol-Mikrobrennstoffzellen (MikroDMFC), Report no. 28, WIAS, Berlin, 2010, DOI 10.20347/WIAS.REPORT.28. The joint project "Modellierung, experimentelle Untersuchung und Simulation für Direkt-Methanol-Mikrobrennstoffzellen" (MikroDMFC) was funded by the Federal Ministry of Education and Research within the programme "Netzwerke Grundlagenforschung erneuerbare Energien und rationelle Energieanwendung" from 2005 to 2008. The aim of the project was to gain a deeper understanding of the behaviour of direct methanol fuel cells (DMFC) on the basis of experimental and numerical investigations, and on this basis to further develop materials and designs for micro fuel cells for portable use. The working groups involved ranged from mathematicians and experimental fundamental researchers to engineering-oriented groups.
This report summarizes the results of the joint project and describes the activities undertaken at the project level. Details of the work can be found in the individual reports of the working groups contained herein. • P.-É. Druet, Analysis of improved Nernst--Planck--Poisson models of isothermal compressible electrolytes subject to chemical reactions: The case of a degenerate mobility matrix, Preprint no. 2321, WIAS, Berlin, 2016, DOI 10.20347/WIAS.PREPRINT.2321. We continue our investigations of the improved Nernst-Planck-Poisson model introduced by Dreyer, Guhlke and Müller 2013. In the paper by Dreyer, Druet, Gajewski and Guhlke 2016, the analysis relies on the hypothesis that the mobility matrix has maximal rank under the constraint of mass conservation (rank N-1 for the mixture of N species). In this paper we allow for the case that the positive eigenvalues of the mobility matrix tend to zero along with the partial mass densities of certain species. In this approach the mobility matrix has a variable rank between zero and N-1 according to the number of locally available species. We set up a concept of weak solution able to deal with this scenario, showing in particular how to extend the fundamental notion of differences of chemical potentials that supports the modelling and the analysis in Dreyer, Druet, Gajewski and Guhlke 2016. We prove the global-in-time existence in this solution class. • W. Dreyer, P.-É. Druet, P. Gajewski, C. Guhlke, Existence of weak solutions for improved Nernst--Planck--Poisson models of compressible reacting electrolytes, Preprint no. 2291, WIAS, Berlin, 2016, DOI 10.20347/WIAS.PREPRINT.2291. We consider an improved Nernst-Planck-Poisson model for compressible electrolytes first proposed by Dreyer et al. in 2013. The model takes into account the elastic deformation of the medium.
In particular, large pressure contributions near electrochemical interfaces induce an inherent coupling of mass and momentum transport. The model consists of convection-diffusion-reaction equations for the constituents of the mixture, of the Navier-Stokes equation for the barycentric velocity and the Poisson equation for the electrical potential. Cross-diffusion phenomena occur due to the principle of mass conservation. Moreover, the diffusion matrix (mobility matrix) has a zero eigenvalue, meaning that the system is degenerate parabolic. In this paper we establish the existence of a global-in-time weak solution for the full model, allowing for cross-diffusion and an arbitrary number of chemical reactions in the bulk and on the active boundary. • P. Gajewski, On existence and uniqueness of the equilibrium state for an improved Nernst--Planck--Poisson system, Preprint no. 2059, WIAS, Berlin, 2014, DOI 10.20347/WIAS.PREPRINT.2059. This work deals with a model for a mixture of charged constituents introduced in [W. Dreyer et al. Overcoming the shortcomings of the Nernst-Planck model. Phys. Chem. Chem. Phys., 15:7075-7086, 2013]. The aim of this paper is to give a first existence and uniqueness result for the equilibrium situation. A main difference to earlier works is a momentum balance involving the gradient of pressure and the Lorentz force which persists in the stationary situation and gives rise to the dependence of the chemical potentials on the particle densities of every species. • W. Dreyer, C. Guhlke, R. Müller, Rational modeling of electrochemical double-layers and derivation of Butler--Volmer equations, Preprint no. 1860, WIAS, Berlin, 2013, DOI 10.20347/WIAS.PREPRINT.1860. We derive the boundary conditions for the contact between an electrolyte and a solid electrode.
At first we revisit the thermodynamically consistent complete model that resolves the actual electrode--electrolyte interface and its adjacent boundary layers. The width of these layers is controlled by the Debye length that is typically very small, leading to strongly different length scales in the system. We apply the method of asymptotic analysis to derive a simpler reduced model that does not resolve the boundary layers but instead incorporates the electrochemical properties of the layers into a set of new boundary conditions. This approach fully determines the relation of bulk quantities to the boundary conditions of the reduced model. In particular, the Butler-Volmer equations for electrochemical reactions, which are still under discussion in the literature, are rational consequences of our approach. For illustration and to compare with the literature, we consider a simple generic reaction. • J. Rehberg, A criterion for a two-dimensional domain to be Lipschitzian, Preprint no. 1695, WIAS, Berlin, 2012, DOI 10.20347/WIAS.PREPRINT.1695. We prove that a two-dimensional domain is already Lipschitzian if only its boundary admits locally a one-dimensional, bi-Lipschitzian parametrization. Talks, Posters • Ch. Keller, A model framework for calcium ion channels: Consistent modeling of selectivity filters, The European Conference on Mathematical and Theoretical Biology (ECMTB 2024), July 22 - 26, 2024, University of Castilla La Mancha, Toledo, Spain, July 25, 2024. • A. Mielke, On the stability of NESS in gradient systems with ports, Gradient Flows face-to-face 4, September 9 - 12, 2024, Technische Universität München, Raitenhaslach, September 10, 2024. • M. Kniely, A thermodynamically correct framework for electro-energy-reaction-diffusion systems, 22nd ECMI Conference on Industrial and Applied Mathematics, June 26 - 30, 2023, Wrocław University of Science and Technology, Poland, June 30, 2023. • M.
Kniely, On a thermodynamically consistent electro-energy-reaction-diffusion system, 93rd Annual Meeting of the International Association of Applied Mathematics and Mechanics (GAMM 2023), Session 14 ``Applied Analysis'', May 30 - June 2, 2023, Technische Universität Dresden, June 1, 2023. • J. Fuhrmann, Ch. Keller, M. Landstorfer, B. Wagner, Development of an ion-channel model-framework for in-vitro assisted interpretation of current voltage relations, MATH+ Day, Humboldt-Universität zu Berlin, October 20, 2023. • M. Landstorfer, Modeling and validation of material and transport models for electrolytes, Energetic Methods for Multi-Component Reactive Mixtures Modelling, Stability, and Asymptotic Analysis (EMRM 2023), September 13 - 15, 2023, WIAS, Berlin, September 15, 2023. • M. Landstorfer, Thermodynamic modeling of the electrode-electrolyte interface -- Double-layer capacitance, solvation number, validation, Van Marum Colloquia, Leiden University, Institute of Chemistry, Netherlands, November 14, 2023. • M. Landstorfer, Thermodynamic modelling of aqueous and aprotic electrode-electrolyte interfaces and their double layer capacitance, Bunsen-Tagung 2023 - Physical Chemistry of the Energy Transition, 122nd Annual Conference of the German Bunsen Society for Physical Chemistry, June 5 - 7, 2023, Berlin, June 7, 2023. • Ch. Keller, J. Fuhrmann, M. Landstorfer, B. Wagner, Development of an ion-channel model-framework for in-vitro assisted interpretation of current voltage relations, MATH+ Day 2022, Technische Universität Berlin, November 18, 2022. • R.
Müller, Non-equilibrium thermodynamics modeling of polycrystalline electrode liquid electrolyte interface, 31st Topical Meeting of the International Society of Electrochemistry, Meeting topic: ``Theory and Computation in Electrochemistry: Seeking Synergies in Methods, Materials and Systems'', Session 2: ``Theory and Computation of Interfacial and Nanoscale Phenomena'', May 15 - 19, 2022, Rheinisch-Westfälische Technische Hochschule Aachen, May 17, 2022. • P. Vágner, Capacitance of the blocking YSZ | Au electrode, 18th Symposium on Modeling and Experimental Validation of Electrochemical Energy Technologies, March 14 - 16, 2022, DLR Institut für Technische Thermodynamik, Hohenkammer, March 16, 2022. • M. Kniely, Global solutions to a class of energy-reaction-diffusion systems, Conference on Differential Equations and Their Applications (EQUADIFF 15), Minisymposium NAA-03 ``Evolution Differential Equations with Application to Physics and Biology'', July 11 - 15, 2022, Masaryk University, Brno, Czech Republic, July 12, 2022. • K. Hopf, Relative entropies and stability in strongly coupled parabolic systems (online talk), SIAM Conference on Analysis of Partial Differential Equations (PD22) (Online Event), Minisymposium ``Variational Evolution: Analysis and Multi-Scale Aspects'', March 14 - 18, 2022, March 16, 2022. • K. Hopf, The Cauchy problem for a cross-diffusion system with incomplete diffusion, Annual Workshop of the GAMM Activity Group ``Analysis of PDEs'' 2022, October 5 - 7, 2022, Institute of Science and Technology Austria (ISTA), Klosterneuburg, October 5, 2022. • Th. Eiter, On the resolvent problems associated with rotating viscous flow, DMV Annual Meeting 2022, Section 09 ``Applied Analysis and Partial Differential Equations'', September 12 - 16, 2022, Freie Universität Berlin, September 14, 2022. • Th.
Eiter, On uniform resolvent estimates associated with time-periodic rotating viscous flow, Mathematical Fluid Mechanics in 2022 (Hybrid Event), August 22 - 26, 2022, Czech Academy of Sciences, Prague, Czech Republic, August 24, 2022. • M. Landstorfer, Modeling electrochemical systems with continuum thermodynamics -- From fundamental electrochemistry to porous intercalation electrodes (online talk), Stochastic & Multiscale Modeling and Computation Seminar (Online Event), Illinois Institute of Technology, Chicago, USA, October 28, 2021. • M. Landstorfer, Modeling of concentration and electric field dependent susceptibilities in electrolytes (online talk), AA2 -- Materials, Light, Devices, Freie Universität Berlin, Humboldt-Universität zu Berlin, WIAS Berlin, February 26, 2021. • R. Müller, Modeling polycrystalline electrode-electrolyte interfaces: The differential capacitance (online talk), 14th Virtual Congress WCCM & ECCOMAS 2020, January 11 - 15, 2021, January 11, 2021. • A. Selahi, The double layer capacity of non-ideal electrolyte solutions - A numerical study (online talk available during the whole conference), 240th ECS Meeting (Online Event), October 10 - 14, 2021. • A. Selahi, M. Landstorfer, The double layer capacity of non-ideal electrolyte solutions -- A numerical study (online poster), 240th ECS Meeting (Online Event), October 10 - 14, 2021. • M. Landstorfer, M. Eigel, M. Heida, A. Selahi, Recovery of battery ageing dynamics with multiple timescales (online poster), MATH+ Day 2021 (Online Event), Technische Universität Berlin, November 5, 2021. • C. Cancès, C. Chainais-Hillairet, J. Fuhrmann, B. Gaudeul, On four numerical schemes for a unipolar degenerate drift-diffusion model, Finite Volumes for Complex Applications IX (Online Event), Bergen, Norway, June 15 - 19, 2020. • K.
Hopf, Global existence analysis of energy-reaction-diffusion systems, Workshop ``Variational Methods for Evolution'', September 13 - 19, 2020, Mathematisches Forschungsinstitut Oberwolfach, September 15, 2020. • J. Fuhrmann, C. Guhlke, M. Landstorfer, A. Linke, Ch. Merdon, R. Müller, Quality preserving numerical methods for electroosmotic flow, Einstein Semester on Energy-based Mathematical Methods for Reactive Multiphase Flows: Kick-off Conference (Online Event), October 26 - 30, 2020. • M. Landstorfer, Theory and validation of the electrochemical double layer, PC Seminar, AG Prof. Baltruschat, Universität Bonn, Abt. Elektrochemie, March 8, 2019. • R. Müller, Transport of solvated ions in nanopores: Modeling, asymptotics and simulation, Conference to celebrate the 80th jubilee of Miroslav Grmela, May 18 - 19, 2019, Czech Technical University, Faculty of Nuclear Sciences and Physical Engineering, Prague, Czech Republic, May 18, 2019. • R. Müller, Transport phenomena in electrolyte within a battery cell, Battery Colloquium, Technische Universität Berlin, April 18, 2019. • K. Hopf, On the singularity formation and relaxation to equilibrium in 1D Fokker--Planck model with superlinear drift, Gradient Flows and Variational Methods in PDEs, November 25 - 29, 2019, Universität Ulm, November 25, 2019. • M. Landstorfer, Continuum thermodynamic modelling of electrolytes, BMBF Kickoff Meeting LuCaMag, Bonn, November 7, 2018. • M. Landstorfer, Homogenization methods for electrochemical systems, Workshop ``Numerical Optimization of the PEM Fuel Cell Bipolar Plate'', Zentrum für Solarenergie- und Wasserstoff-Forschung (ZSW), Ulm, March 20, 2018. • M. Landstorfer, Thermodynamic modeling of electrolytes and their boundary conditions to electrodes, AMaSiS 2018: Applied Mathematics and Simulation for Semiconductors, October 8 - 10, 2018, WIAS, Berlin, October 9, 2018. • W. Dreyer, J. Fuhrmann, P. Gajewski, C. Guhlke, M. Landstorfer, M. Maurelli, R.
Müller, Stochastic model for LiFePO4-electrodes, ModVal14 -- 14th Symposium on Fuel Cell and Battery Modeling and Experimental Validation, Karlsruhe, March 2 - 3, 2017. • Ch. Merdon, A novel concept for the discretisation of the coupled Nernst--Planck--Poisson--Navier--Stokes system, 14th Symposium on Fuel Cell Modelling and Experimental Validation (MODVAL 14), March 2 - 3, 2017, Karlsruher Institut für Technologie, Institut für Angewandte Materialien, Karlsruhe, Germany, March 3, 2017. • P.-É. Druet, Analysis of recent Nernst--Planck--Poisson--Navier--Stokes systems of electrolytes, 88th Annual Meeting of the International Association of Applied Mathematics and Mechanics (GAMM 2017), Section S14 ``Applied Analysis'', March 6 - 10, 2017, Bauhaus Universität Weimar/Technische Universität Ilmenau, Weimar, March 7, 2017. • P.-É. Druet, Existence of weak solutions for improved Nernst--Planck--Poisson models of compressible electrolytes, Seminar EDE, Czech Academy of Sciences, Institute of Mathematics, Department of Evolution Differential Equations (EDE), Prague, Czech Republic, January 10, 2017. • Ch. Merdon, J. Fuhrmann, A. Linke, A.A. Abd-El-Latif, M. Khodayari, P. Reinsberg, H. Baltruschat, Inverse modelling of thin layer flow cells and RRDEs, The 67th Annual Meeting of the International Society of Electrochemistry, Den Haag, Netherlands, August 21 - 26, 2016. • R. Müller, W. Dreyer, J. Fuhrmann, C. Guhlke, New insights into Butler--Volmer kinetics from thermodynamic modeling, The 67th Annual Meeting of the International Society of Electrochemistry, Den Haag, Netherlands, August 21 - 26, 2016. • P.-É. Druet, Existence of global weak solutions for generalized Poisson--Nernst--Planck systems, 7th European Congress of Mathematics (ECM), minisymposium ``Analysis of Thermodynamically Consistent Models of Electrolytes in the Context of Battery Research'', July 18 - 22, 2016, Technische Universität Berlin, Berlin, July 20, 2016. • J. Fuhrmann, Ch.
Merdon, A thermodynamically consistent numerical approach to Nernst--Planck--Poisson systems with volume constraints, The 67th Annual Meeting of the International Society of Electrochemistry, Den Haag, Netherlands, August 21 - 26, 2016. • J. Fuhrmann, W. Dreyer, C. Guhlke, M. Landstorfer, R. Müller, A. Linke, Ch. Merdon, Modeling and numerics for electrochemical systems, Micro Battery and Capacitive Energy Harvesting Materials -- Results of the MatFlexEnd Project, Universität Wien, Austria, September 19, 2016. • J. Fuhrmann, A. Linke, Ch. Merdon, M. Khodayari, H. Baltruschat, Detection of solubility, transport and reaction coefficients from experimental data by inverse modelling of thin layer flow cells, 1st Leibniz MMS Mini Workshop on CFD & GFD, WIAS Berlin, September 8 - 9, 2016. • J. Fuhrmann, A. Linke, Ch. Merdon, W. Dreyer, C. Guhlke, M. Landstorfer, R. Müller, Numerical methods for electrochemical systems, 2nd Graz Battery Days, Graz, Austria, September 27 - 28, 2016. • C. Guhlke, J. Fuhrmann, W. Dreyer, R. Müller, M. Landstorfer, Modeling of batteries, Batterieforum Deutschland 2016, Berlin, April 6 - 8, 2016. • Ch. Merdon, Inverse modeling of thin layer flow cells for detection of solubility transport and reaction coefficients from experimental data, 17th Topical Meeting of the International Society of Electrochemistry Multiscale Analysis of Electrochemical Systems, May 31 - June 3, 2015, Saint Malo Congress Center, France, June 1, 2015. • A. Mielke, Chemical Master Equation: Coarse graining via gradient structures, Kolloquium des SFB 1114 ``Scaling Cascades in Complex Systems'', Freie Universität Berlin, Fachbereich Mathematik, Berlin, June 4, 2015. • A. Mielke, Evolutionary $\Gamma$-convergence for generalized gradient systems, Workshop ``Gradient Flows'', June 22 - 23, 2015, Université Pierre et Marie Curie, Laboratoire Jacques-Louis Lions, Paris, France, June 22, 2015. • M.
Liero, On dissipation distances for reaction-diffusion equations --- The Hellinger--Kantorovich distance, Workshop ``Entropy Methods, PDEs, Functional Inequalities, and Applications'', June 30 - July 4, 2014, Banff International Research Station for Mathematical Innovation and Discovery (BIRS), Canada, July 1, 2014. • M. Liero, On dissipation distances for reaction-diffusion equations --- The Hellinger--Kantorovich distance, RIPE60 -- Rate Independent Processes and Evolution Workshop, June 24 - 26, 2014, Prague, Czech Republic, June 24, 2014. • A. Linke, Ch. Merdon, Optimal and pressure-independent $L^2$ velocity error estimates for a modified Crouzeix--Raviart element with BDM reconstructions, The International Symposium of Finite Volumes for Complex Applications VII (FVCA 7), Berlin-Brandenburgische Akademie der Wissenschaften, June 15 - 20, 2014. • A. Fiebach, A. Glitzky, Uniform estimate of the relative free energy by the dissipation rate for finite volume discretized reaction-diffusion systems, The International Symposium of Finite Volumes for Complex Applications VII (FVCA 7), Berlin, June 15 - 20, 2014. • A. Glitzky, Drift-diffusion models for heterostructures in photovoltaics, 8th European Conference on Elliptic and Parabolic Problems, Minisymposium ``Qualitative Properties of Nonlinear Elliptic and Parabolic Equations'', May 26 - 30, 2014, Universität Zürich, Institut für Mathematik, organized in Gaeta, Italy, May 27, 2014. • M. Thomas, Thermomechanical modeling of dissipative processes in elastic media via energy and entropy, The 10th AIMS Conference on Dynamical Systems, Differential Equations and Applications, Special Session 8: Emergence and Dynamics of Patterns in Nonlinear Partial Differential Equations from Mathematical Science, July 7 - 11, 2014, Madrid, Spain, July 8, 2014. • J. Fuhrmann, A. Linke, Ch. Merdon, M. Khodayari, H. 
Baltruschat, Detection of solubility, transport and reaction coefficients from experimental data by inverse modeling of thin layer flow cells, 65th Annual Meeting of the International Society of Electrochemistry, Lausanne, Switzerland, August 31 - September 5, 2014. • J. Fuhrmann, A. Linke, Ch. Merdon, Coupling of fluid flow and solute transport using a divergence-free reconstruction of the Crouzeix--Raviart element, The International Symposium of Finite Volumes for Complex Applications VII (FVCA 7), Berlin-Brandenburgische Akademie der Wissenschaften, June 15 - 20, 2014. • J. Fuhrmann, Activity based finite volume methods for generalised Nernst--Planck--Poisson systems, The International Symposium of Finite Volumes for Complex Applications VII (FVCA 7), Berlin-Brandenburgische Akademie der Wissenschaften, June 15 - 20, 2014. • A. Mielke, On a metric and geometric approach to reaction-diffusion systems as gradient systems, Mathematics Colloquium, Jacobs University Bremen, School of Engineering and Science, December 1, • A. Mielke, A reaction-diffusion equation as a Hellinger--Kantorovich gradient flow, ERC Workshop on Optimal Transportation and Applications, October 27 - 31, 2014, Centro di Ricerca Matematica ``Ennio De Giorgi'', Pisa, Italy, October 29, 2014. • J. Rehberg, On non-smooth parabolic equations, Workshop ``Maxwell--Stefan meets Navier--Stokes/Modeling and Analysis of Reactive Multi-Component Flows'', March 31 - April 2, 2014, Universität Halle, April 1, 2014. • J. Rehberg, Optimal Sobolev regularity for second order divergence operators, 85th Annual Meeting of the International Association of Applied Mathematics and Mechanics (GAMM 2014), Session on Applied Operator Theory, March 10 - 14, 2014, Friedrich-Alexander Universität Erlangen-Nürnberg, March 13, 2014. • S. Reichelt, Homogenization of degenerated reaction-diffusion equations, Doktorandenforum der Leibniz-Gemeinschaft, Sektion D, Berlin, June 6 - 7, 2013. • M. 
Liero, On gradient structures for drift-reaction-diffusion systems and Markov chains, Analysis Seminar, University of Bath, Mathematical Sciences, UK, November 21, 2013. • M. Liero, Gradient structures and geodesic convexity for reaction-diffusion system, SIAM Conference on Mathematical Aspects of Materials Science (MS13), Minisymposium ``Material Modelling and Gradient Flows'' (MS100), June 9 - 12, 2013, Philadelphia, USA, June 12, 2013. • M. Liero, On gradient structures and geodesic convexity for reaction-diffusion systems, Research Seminar, Westfälische Wilhelms-Universität Münster, Institut für Numerische und Angewandte Mathematik, April 17, 2013. • A. Glitzky, Continuous and finite volume discretized reaction-diffusion systems in heterostructures, Asymptotic Behaviour of Systems of PDE Arising in Physics and Biology: Theoretical and Numerical Points of View, November 6 - 8, 2013, Lille 1 University -- Science and Technology, France, November 6, 2013. • A. Mielke, Gradient structures and uniform global decay for reaction-diffusion systems, Mathematisches Kolloquium, Universität Bielefeld, Fakultät für Mathematik, April 25, 2013. • A. Mielke, On the geometry of reaction-diffusion systems: Optimal transport versus reaction, Recent Trends in Differential Equations: Analysis and Discretisation Methods, November 7 - 9, 2013, Technische Universität Berlin, Institut für Mathematik, November 9, 2013. • A. Mielke, Using gradient structures for modeling semiconductors, Eindhoven University of Technology, Institute for Complex Molecular Systems, Netherlands, February 21, 2013. • M. Liero, Interfaces in reaction-diffusion systems, Seminar ``Dünne Schichten'', Technische Universität Berlin, Institut für Mathematik, February 9, 2012. • A. 
Glitzky, An electronic model for solar cells taking into account active interfaces, International Workshop ``Mathematics for Semiconductor Heterostructures: Modeling, Analysis, and Numerics'', September 24 - 28, 2012, WIAS Berlin, September 27, 2012. • M. Thomas, Thermomechanical modeling via energy and entropy, Seminar on Applied Mathematics, University of Pavia, Department of Mathematics, Italy, February 14, 2012. • M. Thomas, Thermomechanical modeling via energy and entropy using GENERIC, Workshop ``Mechanics of Materials'', March 19 - 23, 2012, Mathematisches Forschungsinstitut Oberwolfach, March 22, 2012. • A. Mielke, Entropy gradient flows for Markov chains and reaction-diffusion systems, Berlin-Leipzig-Seminar ``Analysis/Probability Theory'', WIAS Berlin, April 13, 2012. • A. Mielke, Gradienten-Strukturen und geodätische Konvexität für Markov-Ketten und Reaktions-Diffusions-Systeme, Augsburger Kolloquium, Universität Augsburg, Institut für Mathematik, May 8, 2012. • A. Mielke, Multidimensional modeling and simulation of optoelectronic devices, Challenge Workshop ``Modeling, Simulation and Optimisation Tools'', September 24 - 26, 2012, Technische Universität Berlin, September 24, 2012. • A. Mielke, On geodesic convexity for reaction-diffusion systems, Seminar on Applied Mathematics, Università di Pavia, Dipartimento di Matematica, Italy, March 6, 2012. • A. Mielke, On gradient flows and reaction-diffusion systems, Institutskolloquium, Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig, December 3, 2012. • A. Mielke, On gradient structures and geodesic convexity for energy-reaction-diffusion systems and Markov chains, ERC Workshop on Optimal Transportation and Applications, November 5 - 9, 2012, Centro di Ricerca Matematica ``Ennio De Giorgi'', Pisa, Italy, November 8, 2012. • A. 
Mielke, On gradient structures for Markov chains and reaction-diffusion systems, Applied & Computational Analysis (ACA) Seminar, University of Cambridge, Department of Applied Mathematics and Theoretical Physics (DAMTP), UK, June 14, 2012. • A. Mielke, Using gradient structures for modeling semiconductors, International Workshop ``Mathematics for Semiconductor Heterostructures: Modeling, Analysis, and Numerics'', September 24 - 28, 2012, WIAS Berlin, September 24, 2012. • A. Mielke, Thermodynamical modeling of bulk-interface interaction in reaction-diffusion systems, Interfaces and Discontinuities in Solids, Liquids and Crystals (INDI2011), June 20 - 23, 2011, Gargnano (Brescia), Italy, June 20, 2011. • A. Mielke, Mathematical approaches to thermodynamic modeling, Autumn School on Mathematical Principles for and Advances in Continuum Mechanics, November 7 - 12, 2011, Centro di Ricerca Matematica ``Ennio De Giorgi'', Pisa, Italy. • A. Glitzky, Uniform exponential decay of the free energy for Voronoi finite volume discretized reaction-diffusion systems, 8th AIMS International Conference on Dynamical Systems, Differential Equations and Applications, Special Session on Reaction Diffusion Systems, May 25 - 28, 2010, Technische Universität Dresden, May 26, 2010. • A. Mielke, Gradient structures for reaction-diffusion systems and semiconductor equations, 81st Annual Meeting of the International Association of Applied Mathematics and Mechanics (GAMM 2010), Session on Applied Analysis, March 22 - 26, 2010, Universität Karlsruhe, March 24, 2010. • A. Linke, Divergence-free mixed finite elements for the incompressible Navier--Stokes equations, Universität Stuttgart, Institut für Wasserbau, December 8, 2009. • M. Ehrhardt, J. Fuhrmann, A. Linke, Finite volume methods for the simulation of flow cell experiments, Workshop ``New Trends in Model Coupling --- Theory, Numerics & Applications'' (NTMC'09), Paris, France, September 2 - 4, 2009. • M. 
Ehrhardt, The fluid-porous interface problem: Analytic and numerical solutions to flow cell problems, 6th Symposium on Fuel Cell Modelling and Experimental Validation (MODVAL 6), March 25 - 26, 2009, Evangelische Akademie Baden, Bad Herrenalb, March 26, 2009. • M. Ehrhardt, The fluid-porous interface problem: Analytic and numerical solutions to flow cell problems, Mathematical Models in Medicine, Business, Engineering (XI JORNADAS IMM), September 8 - 11, 2009, Technical University of Valencia, Institute of Multidisciplinary Mathematics, Spain, September 10, 2009. • J. Fuhrmann, Mathematical and numerical models of electrochemical processes related to porous media, International Conference on Non-linearities and Upscaling in Porous Media (NUPUS), October 5 - 7, 2009, Universität Stuttgart, October 6, 2009. • J. Fuhrmann, Model based numerical impedance calculation in electrochemical systems, 6th Symposium on Fuel Cell Modelling and Experimental Validation (MODVAL 6), March 25 - 26, 2009, Evangelische Akademie Baden, Bad Herrenalb, March 25, 2009. • J. Fuhrmann, Numerical modeling in electrochemistry, Conference on Scientific Computing (ALGORITMY 2009), March 15 - 20, 2009, Slovak University of Technology, Department of Mathematics and Descriptive Geometry, Podbanské, March 17, 2009. • A. Linke, Mass conservative coupling of fluid flow and species transport in electrochemical flow cells, 13th Conference on Mathematics of Finite Elements and Applications (MAFELAP 2009), June 9 - 12, 2009, Brunel University, London, UK, June 10, 2009. • A. Linke, The discretization of coupled flows and the problem of mass conservation, Workshop on Discretization Methods for Viscous Flows, Part II: Compressible and Incompressible Flows, June 24 - 26, 2009, Porquerolles, Toulon, France, June 25, 2009. • A. 
Linke, The discretization of coupled flows and the problem of mass conservation, Seventh Negev Applied Mathematical Workshop, July 6 - 8, 2009, Ben Gurion University of the Negev, Jacob Blaustein Institute for Desert Research, Sede Boqer Campus, Israel, July 7, 2009. • J. Rehberg, Quasilinear parabolic equations in distribution spaces, International Conference on Nonlinear Parabolic Problems in Honor of Herbert Amann, May 10 - 16, 2009, Stefan Banach International Mathematical Center, Bedlewo, Poland, May 12, 2009. • E. Holzbecher, H. Zhao, J. Fuhrmann, A. Linke, H. Langmach, Numerical investigation of thin layer flow cells, 4th Gerischer Symposium, Berlin, June 25 - 27, 2008. • E. Bänsch, H. Berninger, U. Böhm, A. Bronstert, M. Ehrhardt, R. Forster, J. Fuhrmann, R. Klein, R. Kornhuber, A. Linke, A. Owinoh, J. Volkholz, Pakt für Forschung und Innovation: Das Forschungsnetzwerk ``Gekoppelte Strömungsprozesse in Energie- und Umweltforschung'', Show of the Leibniz Association ``Exzellenz durch Vernetzung. Kooperationsprojekte der deutschen Wissenschaftsorganisationen mit Hochschulen im Pakt für Forschung und Innovation'', Berlin, November 12, 2008. • M. Ehrhardt, O. Gloger, Th. Dietrich, O. Hellwich, K. Graf, E. Nagel, Level Set Methoden zur Segmentierung von kardiologischen MR-Bildern, 22. Treffpunkt Medizintechnik: Fortschritte in der medizinischen Bildgebung, Charité, Campus Virchow Klinikum Berlin, May 22, 2008. • A. Glitzky, Energy estimates for continuous and discretized reaction-diffusion systems in heterostructures, Annual Meeting of the Deutsche Mathematiker-Vereinigung 2008, minisymposium ``Analysis of Reaction-Diffusion Systems with Internal Interfaces'', September 15 - 19, 2008, Friedrich-Alexander-Universität Erlangen-Nürnberg, September 15, 2008. • A. 
Glitzky, Energy estimates for space and time discretized electro-reaction-diffusion systems, Conference on Differential Equations and Applications to Mathematical Biology, June 23 - 27, 2008, Université Le Havre, France, June 26, 2008. • A. Linke, Mass conservative coupling of fluid flow and species transport in electrochemical flow cells, Annual Meeting of the Deutsche Mathematiker-Vereinigung 2008, September 15 - 19, 2008, Friedrich-Alexander-Universität Erlangen-Nürnberg, September 16, 2008. • A. Linke, Mass conservative coupling of fluid flow and species transport in electrochemical flow cells, Georg-August-Universität Göttingen, November 11, 2008. • J. Rehberg, Hölder continuity for elliptic and parabolic problems, Analysis-Tag, Technische Universität Darmstadt, Fachbereich Mathematik, November 27, 2008. • A. Glitzky, Energy estimates for reaction-diffusion processes of charged species, 6th International Congress on Industrial and Applied Mathematics (ICIAM), July 16 - 20, 2007, ETH Zürich, Switzerland, July 16, 2007. • J. Rehberg, An elliptic model problem including mixed boundary conditions and material heterogeneities, Fifth Singular Days, April 23 - 27, 2007, International Center for Mathematical Meetings, Luminy, France, April 26, 2007. • J. Rehberg, Maximal parabolic regularity on Sobolev spaces, The Eighteenth Crimean Autumn Mathematical School-Symposium (KROMSH-2007), September 17 - 29, 2007, Laspi-Batiliman, Ukraine, September 18, 2007. • F. Schmid, An evolution model in contact mechanics with dry friction, 6th International Congress on Industrial and Applied Mathematics (ICIAM), July 16 - 20, 2007, ETH Zürich, Switzerland, July 19, 2007. • A. Glitzky, Energy estimates for electro-reaction-diffusion systems with partly fast kinetics, 6th AIMS International Conference on Dynamical Systems, Differential Equations & Applications, June 25 - 28, 2006, Université de Poitiers, France, June 27, 2006. • J. 
Rehberg, Existence and uniqueness for van Roosbroeck's system in Lebesgue spaces, Conference ``Recent Advances in Nonlinear Partial Differential Equations and Applications'', Toledo, Spain, June 7 - 10, 2006. • J. Rehberg, Regularity for nonsmooth elliptic problems, Crimean Autumn Mathematical School, September 20 - 25, 2006, Vernadskiy Tavricheskiy National University, Laspi, Ukraine, September 21, • J. Rehberg, Elliptische und parabolische Probleme aus Anwendungen, Kolloquium im Fachbereich Mathematik, Universität Darmstadt, May 18, 2005. • J. Rehberg, Existence, uniqueness and regularity for quasilinear parabolic systems, International Conference ``Nonlinear Partial Differential Equations'', September 17 - 24, 2005, Institute of Applied Mathematics and Mechanics Donetsk, Alushta, Ukraine, September 18, 2005. • J. Rehberg, $H^{1,q}$-regularity for linear, elliptic boundary value problems, Regularity for nonlinear and linear PDEs in nonsmooth domains - Analysis, simulation and application, September 5 - 7, 2005, Universität Stuttgart, Deutsche Forschungsgemeinschaft (SFB 404), Hirschegg, Austria, September 6, 2005. • J. Rehberg, Regularität für elliptische Probleme mit unglatten Daten, Oberseminar Prof. Escher/Prof. Schrohe, Technische Universität Hannover, December 13, 2005. • J. Rehberg, Analysis of macroscopic and quantum mechanical semiconductor models, International Visitor Program ``Nonlinear Parabolic Problems'', August 8 - November 18, 2005, Finnish Mathematical Society (FMS), University of Helsinki, and Helsinki University of Technology, Finland, November 1, 2005. • J. Rehberg, Existence, uniqueness and regularity for quasilinear parabolic systems, Conference ``Nonlinear Parabolic Problems'', October 17 - 21, 2005, Finnish Mathematical Society (FMS), University of Helsinki, and Helsinki University of Technology, Finland, October 20, 2005. • J. 
Rehberg, Elliptische und parabolische Probleme mit unglatten Daten, Technische Universität Darmstadt, Fachbereich Mathematik, December 14, 2004. • J. Rehberg, Quasilinear parabolic equations in $L^p$, Nonlinear Elliptic and Parabolic Problems: A Special Tribute to the Work of Herbert Amann, June 28 - 30, 2004, Universität Zürich, Institut für Mathematik, Switzerland, June 29, 2004. • J. Rehberg, The two-dimensional van Roosbroeck system has solutions in $L^p$, Workshop ``Advances in Mathematical Semiconductor Modelling: Devices and Circuits'', March 2 - 6, 2004, Chinese-German Centre for Science Promotion, Beijing, China, March 5, 2004. • A. Glitzky, R. Hünlich, Stationary solutions of two-dimensional heterogeneous energy models with multiple species, Nonlocal Elliptic and Parabolic Problems, September 9 - 11, 2003, Bedlewo, Poland, September 10, 2003. • H.-Chr. Kaiser, On space discretization of reaction-diffusion systems with discontinuous coefficients and mixed boundary conditions, 2nd GAMM Seminar on Microstructures, January 10 - 11, 2003, Ruhr-Universität Bochum, Institut für Mechanik, January 10, 2003. • J. Rehberg, Solvability and regularity for parabolic equations with nonsmooth data, International Conference ``Nonlinear Partial Differential Equations'', September 15 - 21, 2003, National Academy of Sciences of Ukraine, Institute of Applied Mathematics and Mechanics, Alushta, September 17, 2003. External Preprints • J. Fuhrmann, C. Guhlke, Ch. Merdon, A. Linke, R. Müller, Induced charge electroosmotic flow with finite ion size and solvation effects, Preprint no. arXiv:1901.06941, Cornell University Library, 2019, DOI 10.1016/j.electacta.2019.05.051 .
Devang Thakkar

Day 14: Parabolic Reflector Dish

This is Day 14 of Advent of Code 2023! If you would like to solve the problems before looking at the solution, you can find them here.

In part 1 of Day 14, we have one of my favorite puzzles this edition, simply because it takes me back to my childhood, when I used to have a toy that functioned in a way very similar to this. We have a matrix with some fixed pegs denoted by hashes and movable circles denoted by Os. For the first part, we have to see how far north each movable circle can go. This is pretty easy - I rotate the grid because I prefer working with rows over columns, and then I move each circle as far left as possible by keeping track of the empty spaces available and resetting the count whenever I hit a peg. Hitting a circle does not affect the empty-space counter, because the circle moves to the leftmost space available and leaves its own space empty. Pretty easy one today!

For part 2 of Day 14, things get interesting: we have to move things in the four cardinal directions over and over, kind of as if you were tilting the board in a circle. All four directions, a BILLION times. One of the key things I figured out early on was that I would have to check for loops in the movement - we can do this by keeping a list of which positions have been seen before and stopping when we find a repeat. This makes the bit about a billion repetitions irrelevant, as I found a loop around a hundred cycles in. Not sure if that was lucky or intended, but I did see someone come up with a grid that would not loop even in a billion iterations!

That's all folks! If you made it this far, enjoy some AI art generated by u/encse using the prompt for this puzzle. Cheers, Devang
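The two ingredients above - rolling the rocks within a row and skipping ahead once a state repeats - can be sketched in Python. To be clear, this is a hedged illustration rather than my actual solution; the helper names and the string-based row representation are my own for this sketch:

```python
def tilt_row_left(row):
    """Roll every 'O' as far left as it can go within one row.
    '#' pegs stay fixed; '.' marks empty space."""
    parts = []
    for segment in row.split("#"):          # pegs bound independent runs
        rocks = segment.count("O")
        parts.append("O" * rocks + "." * (len(segment) - rocks))
    return "#".join(parts)


def iterate_with_cycle_skip(step, state, total):
    """Apply `step` up to `total` times, but once a state repeats,
    jump straight to the state equivalent to iteration `total`.
    States must be hashable (e.g. the grid serialized to a string)."""
    seen, history = {}, []
    for i in range(total):
        if state in seen:
            start = seen[state]
            period = i - start
            return history[start + (total - start) % period]
        seen[state] = i
        history.append(state)
        state = step(state)
    return state


# A row tilts in one pass:
assert tilt_row_left("..O#.O.O") == "O..#OO.."
```

For the full puzzle, `step` would be one whole spin cycle (tilt north, west, south, east) and the state the serialized grid; with a period of around a hundred, the billion-step simulation collapses to a single lookup into the recorded history.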
Math Department

What is Mathematics?

Mathematics: the deductive study of numbers, geometry, and various abstract constructs, or structures. The latter often arise from analytical models in the empirical sciences, but may emerge from purely mathematical considerations.

Some definitions of mathematics heard from others:

• That which mathematicians do.
• The study of well-defined things.
• The study of statements of the form “P implies Q”.
• The branch of science which you could continue to do if you woke up and the universe were gone.

Contrary to many a layman's perception, mathematics does not consist only of crunching numbers or solving equations. There are also parts of mathematics which have nothing at all to do with numbers or equations, though at Snow College it seems that we do a lot of number-crunching before we can get to the more interesting stuff. For a taste of a mostly non-number-crunching math experience, check out MATH 1030.
Fast Fourier Transforms over Prime Fields of Large Characteristic and their Implementation on Graphics Processing Units

Mohajerani, Davood. The University of Western Ontario, 2017.

title={Fast Fourier Transforms over Prime Fields of Large Characteristic and their Implementation on Graphics Processing Units},
author={Mohajerani, Davood},
school={The University of Western Ontario}

Prime field arithmetic plays a central role in computer algebra and supports computation in Galois fields, which are essential to coding theory and cryptography algorithms. The prime fields that are used in computer algebra systems, in particular in the implementation of modular methods, are often of small characteristic, that is, based on prime numbers that fit on a machine word. Increasing precision beyond the machine word size can be done via the Chinese Remainder Theorem or Hensel's Lemma. In this thesis, we consider prime fields of large characteristic, typically fitting on n machine words, where n is a power of 2. When the characteristic of these fields is restricted to a subclass of the generalized Fermat numbers, we show that arithmetic operations in such fields offer attractive performance both in terms of algebraic complexity and parallelism. In particular, these operations can be vectorized, leading to efficient implementation of fast Fourier transforms on graphics processing units.
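For intuition, the kernel of an FFT over a prime field (often called a number-theoretic transform) can be sketched as follows. This is an illustrative radix-2 version over a small single-word prime, not the large-characteristic GPU implementation the thesis develops; the prime p = 17 and the root of unity in the example are chosen only to keep the numbers small:

```python
def ntt(a, p, w):
    """Radix-2 Cooley-Tukey transform of `a` over GF(p).
    len(a) must be a power of two and `w` a primitive
    len(a)-th root of unity modulo p."""
    n = len(a)
    if n == 1:
        return a[:]
    w2 = w * w % p
    even = ntt(a[0::2], p, w2)   # transform of even-indexed terms
    odd = ntt(a[1::2], p, w2)    # transform of odd-indexed terms
    out = [0] * n
    wk = 1                       # running power of w
    for k in range(n // 2):
        t = wk * odd[k] % p
        out[k] = (even[k] + t) % p          # butterfly: E[k] + w^k O[k]
        out[k + n // 2] = (even[k] - t) % p  # and E[k] - w^k O[k]
        wk = wk * w % p
    return out


# Example over GF(17): 4 is a primitive 4th root of unity (4**4 % 17 == 1).
assert ntt([1, 2, 3, 4], 17, 4) == [10, 7, 15, 6]
```

The inverse transform is the same routine run with the inverse root w⁻¹ mod p, followed by multiplication by n⁻¹ mod p. For the large-characteristic fields the thesis considers, the interesting work lies in making the multi-word modular arithmetic itself cheap and vectorizable, which is where the generalized Fermat primes come in.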
The Students of R.L. Moore

There has been in the last decades of the twentieth century an increased interest in student-oriented teaching. To many, R. L. Moore served as an exemplar of such, and he is now regarded as a prophet of modern methods in pedagogy. Moore began his teaching career as a Teaching Fellow at The University of Texas in 1901-02. He taught in high school in 1902-03 and in universities from 1905 until 1969. (See the chronology.) His teaching has been the subject of a number of articles (Jones, Whyburn, Wilder, Mahavier, Eyles), interviews, and the MAA film Challenge in the Classroom (1967). (See the bibliography.) A number of mathematicians influenced by his methods have adopted their own versions in their classes. The Third Annual Legacy of R. L. Moore Conference was held in April of 2000, under the auspices of the Educational Advancement Foundation. Its theme was "Modified Moore Methods as Used Today." In this paper we list the Ph.D. students of R.L. Moore and some other students influenced by him, and we give some information about them, in the belief that a teacher is best known by means of the activities and accomplishments of his students.

1. John R. Kline (1891-1955), Ph.D., University of Pennsylvania, 1916. A.B. (1912) Muhlenberg; A.M. (1914) Pennsylvania. Thesis title: "Double elliptic geometry in terms of point and order", 19 pages in journal article. Kline was Professor of Mathematics at Penn from 1920 (the year Moore returned to Texas) until his death in 1955. He was Chair of the Department from 1933 to 1954. He was a Guggenheim Fellow (Göttingen) in 1926-27. He was Associate Editor of the Transactions AMS, Bulletin AMS, American Journal of Mathematics at various times and served on the editorial board of the AMS Colloquium Publications. He was Associate Secretary of the AMS from 1933 to 1936 and Secretary (AMS) from 1941 to 1950. He directed the Ph.D. theses of thirteen students, including National Research Fellows W. L. Ayres, H. M. 
Gehman, N.E. Rutt, and Leo Zippin. Several of his students went to Austin in their Fellowship years. He also directed the Ph.D. theses of W.W. Dudley and W.W.S. Claytor, the second and third African American mathematicians to earn their Ph.D.'s. (In this instance, the sins of the father were not visited on the son). On his recommendation, Claytor went to the University of Michigan on a post-doctoral fellowship, where he worked with R. L. Wilder. After this, he was offered the opportunity for further study at Princeton but declined it in order to accept a position at Howard University, where he had a distinguished career. Kline was the thesis director of record for Lida K. Barrett, but she began her work with R.L. Moore at Texas; the thesis problems were suggested by Kline after her move to the University of Pennsylvania and essentially solved under his direction. After Kline suffered a heart attack the work was completed under the direction of R.D. Anderson, who had joined the faculty at Pennsylvania in 1948. Perhaps Kline's best known student was Leo Zippin, who wrote with Deane Montgomery the classic monograph Topological Transformation Groups. The work described in this treatise, together with work done by Andrew Gleason, provides a solution to Hilbert's Fifth Problem. Throughout his career, Kline maintained regular contact with RLM; for example, he was instrumental in getting Mary-Elizabeth Hamstrom to go to Austin for her graduate studies. Kline wrote only one joint paper, and it was with RLM, who also wrote no other joint work. Title: "On the most general closed and bounded plane point set through which it is possible to pass an arc." 2. George H. Hallett (1895-1985), Ph.D., University of Pennsylvania, 1918. A.B. (1915) Haverford College; A.M. (1916) Harvard. Thesis title: "Linear order in three dimensional Euclidean and double elliptical spaces". 
Hallett did not enter academia; instead, he had a long career in public service, beginning with a post as Secretary of the Proportional Representation League. In 1937 he wrote Proportional Representation: The Key to Democracy. He taught courses in government at several colleges in New York City: Brooklyn, Hunter, NYU, and CCNY. He was given special awards from LaGuardia Memorial Association (1963), New York City Club (1964), and Citizens Union (1947, 1969). During the Second World War he was active in the Committee for Civilian Defense. In his book Creative Teaching: The Heritage of R.L. Moore, D. R. Traylor quotes Hallett as saying, "... I think such success as I've had in the field of government probably has a good deal to do with that [ Moore's teaching] - because they don't catch me up very often in theories of logic in bills, or different parts of bills, that don't hang together." 3. Anna M. Mullikin (1893-1975), Ph.D., University of Pennsylvania, 1922. A.B. (1915) Goucher College; A.M. (1919) U. Pennsylvania. Thesis title: "Certain theorems relating to plane connected point sets". When Moore went to Texas in 1920, Mullikin went there, as well, to continue her work with him. Her thesis, "Certain theorems relating to plane connected point sets," received considerable attention. It overlapped somewhat with some of Janiszewski's work (which had been published in Polish). Mullikin went on to a long career as a high school teacher in Pennsylvania; Mary-Elizabeth Hamstrom was one of her students. During the Second World War she was active in Civilian Defense. In 1952, Goucher College, where she received a B.A. in 1915, honored her with an alumnae achievement citation. [See also Zitarelli, David, and Bartlow, Thomas L.,"Who was Miss Mullikin?" American Mathematical Monthly 116(2009): 99-114.] 4. Raymond L. Wilder (1896-1982 ), Ph.D., University of Texas, 1923. B.Phil. (1920) Brown; M.Sc. (1921) Brown. Thesis title: "Concerning continuous curves", (62 pages). 
Signers: R.L. Moore, H.J. Ettlinger, A.A. Bennett, H.Y. Benedict, M.B. Porter. Wilder was a member of the National Academy of Sciences. He served as President of the American Mathematical Society in 1955-1956 and as President of the Mathematical Association of America in 1965-1966. The bulk of his academic career was at the University of Michigan, where he directed twenty-five Ph.D. students. In his honor the University of Michigan established the Raymond L. Wilder Professorship of Mathematics. Before going to Michigan he taught at Ohio State for two years, and after retiring from Michigan he taught at UC Santa Barbara. He was the AMS Colloquium Lecturer in 1943 (The Topology of Manifolds was published in the Colloquium series), and the Gibbs Lecturer in 1969. He wrote Introduction to the Foundations of Mathematics, The Evolution of Mathematical Concepts; an Elementary Study, and Mathematics as a Cultural System. He was a recipient of the Distinguished Service Award of the MAA and the Lester R. Ford Award. He has observed that he went from Brown to Texas to study actuarial mathematics. He recalled that Moore did not at first want him as a student ( "I was a Yankee") and did not really accept him as a member of the class until he had proved the arcwise connectedness theorem. Wilder was one of the developers of algebraic topology. He maintained close contacts with Moore and also with Lefschetz at Princeton. Wilder was largely responsible for recognition and development of the abilities of Norman Steenrod (See the extensive correspondence between them at the Center for American History at the University of Texas). He also had influence on E.G. Begle and mathematics education reform. 5. Renke G. Lubben (1898-1980), Ph.D., University of Texas, 1925. B.A. (1921) Texas. Thesis title: "The double-elliptic case of the Lie-Riemann-Helmholtz-Hilbert problem of the foundations of geometry", (102 pages). Signers: R.L. Moore, M.B. Porter, H.J. Ettlinger, G. 
Watts Cunningham, Edward L. Dodd, A. P. Brogan, Albert A. Bennett. Lubben's Ph.D. thesis gave the solution to the then last remaining problem in the foundations of geometry. He was an independent discoverer of maximal compactifications of completely regular spaces, although priority in publication is assigned to Stone and Cech. He was a National Research Fellow (Göttingen) in 1926-27. He served on the Texas mathematics faculty until his retirement. 6. Gordon T. Whyburn (1904-1969), Ph.D., University of Texas, 1927. A.B. (1925) Texas; M.A. (Chemistry) (1926), Texas. Thesis title: "Concerning continua in the plane", (57 pages). Signers: R.L. Moore, M.B. Porter, H.J. Ettlinger, Edward L. Dodd, J.R. Bailey, H.L. Lochte, Arnold Romberg. Whyburn was a member of the National Academy of Sciences. He was President of the American Mathematical Society in 1953-54. After leaving Texas, he taught at Johns Hopkins, but his principal academic appointment was at the University of Virginia, where he and E. J. McShane were brought in to develop a graduate program in mathematics. He served as Chair of the department from 1934 until 1966. He was the AMS Colloquium Lecturer in 1940; his Colloquium book Analytic Topology resulted. He also published Topological Analysis. He received the Chauvenet Prize of the MAA. He directed thirty-two Ph.D. students, many of whom had distinguished academic careers. Whyburn studied chemistry as an undergraduate and in graduate school, obtaining an M.A. in Chemistry before deciding to concentrate on mathematics. 7. John H. Roberts (1906-1997), Ph.D., University of Texas, 1929. A.B. (1927) Texas. Thesis title: "Concerning non-dense plane continua", (47 pages). Signers: R.L. Moore, M.B. Porter, H.S. Vandiver, H.J. Ettlinger, Edward L. Dodd, J.W. Calhoun, A.E. Cooper. On leaving Texas, Roberts went to the University of Pennsylvania for a year, where he worked with J.R. Kline. He went to Duke University in 1931, and he remained there until his retirement. 
During the Second World War he served in the United States Navy. He directed twenty-four Ph.D. students, a number of whom are prominent mathematicians. He was Director of Graduate Studies in the Department from 1948 until 1960 and managing editor of the Duke Mathematical Journal from 1951 until 1960. He also served as Chair of the Department. His research centered at first on continua, then on dimension theory. 8. Clark M. Cleveland (1892-1969), Ph.D., University of Texas, 1930. B.S. in C.E. (1917) Mississippi. Thesis title: "On the existence of acyclic curves satisfying certain conditions with respect to a given continuous curve", Signers: R.L. Moore, M.B. Porter, Edward L. Dodd, H.J. Ettlinger, J.M. Kuehner, Arnold Romberg. Cleveland joined the Department of Applied Mathematics and Astronomy at Texas and remained there throughout his academic career. He became Chair of the Department of Mathematics in 1953 when the Department of Applied Mathematics and Astronomy and the Department of Pure Mathematics were merged. 9. Joe L. Dorroh (1904-1989), Ph.D., University of Texas, 1930. B.A. (1926) Texas; M.A. (1927) Texas. Thesis title: "Some metric properties of descriptive planes", (38 pages). Signers: R.L. Moore, H.S. Vandiver, M.B. Porter, H.J. Ettlinger, J.M. Kuehner, S. Leroy Broun. Dorroh was a National Research Fellow at Cal Tech in 1930-31 and at Princeton in 1931-32. He taught at LSU from 1942 to 1946 and at Illinois Tech in 1946-47. He then went to Texas A&I until his retirement in 1966. He was Chair there from 1952 until his retirement. 10. Charles W. Vickery (1906-1982), Ph.D., University of Texas, 1932. B.A. (1928) Texas. Thesis title: "Spaces in which there exist uncountable convergent sequences of points", (37 pages). Signers: R.L. Moore, M.B. Porter, H.S. Vandiver, Edward L. Dodd, E.T. Mitchell, A.P. Brogan. He worked as a statistician and economist for the State of Texas and the U.S. Government. 
Following the Second World War he taught at LSU but returned to work for the government and in the aircraft industry. He published in Econometrica, Bull. AMS, and the Amer. Math. Monthly. He was a Fellow of the Royal Statistical Society. 11. Edmund C. Klipple (1906-1992), Ph.D., University of Texas, 1932. B.A. (1926) Texas. Thesis title: "Spaces in which there exist contiguous points", (42 pages). Signers: R.L. Moore, H.S. Vandiver, M.B. Porter, Edward L. Dodd, H.J. Ettlinger, Homer V. Craig. Klipple joined the Texas A&M mathematics faculty in 1935, and he stayed there until his retirement in 1971. He was Chair of the department for many years; he resigned as Chair in 1966 when asked by the Dean to rank his faculty in order, 1-40. In 1968 he was given a Faculty Distinguished Achievement Award. He nurtured a good many students who later became well known mathematicians, including Peter Lax, Efraim Armendariz, and William T. Guy. 12. Robert E. Basye (1908-2000), Ph.D., University of Texas, 1933. B.A. (1929) Missouri; M.A. (1931) Princeton. Thesis title: "Simply connected sets", (26 pages). Signers: R.L. Moore, M.B. Porter, H.S. Vandiver, P.M. Batchelder, H.J. Ettlinger, R.V. Haskell. Basye's principal academic appointment was at Texas A&M University, from 1940 until his retirement in 1968. After retirement, he devoted his full time to rose research, becoming a renowned genetic hybridizer and grower. Among his many achievements was Basye's Purple Rose. He served on active duty in the U.S. Naval Reserve during the Second World War. 13. F. Burton Jones (1910-1999), Ph.D., University of Texas, 1935. B.A. (Chemistry) (1932) Texas. Thesis title: "Concerning R.L. Moore's Axiom 5-1", (80 pages). Signers: R.L. Moore, M.B. Porter, H.J. Ettlinger, Edward L. Dodd, E.P. Tchoch, W.A. Felsing. As was the case with Whyburn before him, Jones was a chemistry student who changed to work with Moore. He stayed on the Texas faculty until 1950, serving as chair. 
He later served as Chair at the University of North Carolina (Chapel Hill) and at the University of California, Riverside. During the war years he worked for the U.S. Navy in Cambridge, on methods for locating and identifying submarines. He directed fifteen Ph.D. students, many of whom became well known mathematicians. At Texas he helped develop a number of students; in fact, Mary Ellen Rudin has described him as the mathematician who had the greatest influence on her development. After leaving Texas he continued to provide counsel, support, and guidance to younger mathematicians trained in the Texas tradition. He worked in continua and in abstract spaces. He originated the famous normal Moore space problem. In 1975 he received a Fulbright-Hays Fellowship to visit Canterbury University in Christchurch, New Zealand. He was twice a fellow at the Institute for Advanced Study and spent two summers in Europe on the National Academy of Sciences Exchange Program.

14. Robert L. Swain (1913-1962), Ph.D., University of Texas, 1941. B.A. (1934) Reed College. Thesis title: "I. Proper and reductive transformations. II. Continua obtained from sequences of simple chains of point sets. III. Distance axioms in Moore spaces. IV. Linear metric space. V. A space in which there may exist uncountable convergent sequences of points", (101 pages). Signers: R.L. Moore, H.S. Vandiver, M.B. Porter, P.M. Batchelder, Edward L. Dodd, H.J. Ettlinger. Swain's major academic appointments were at the University of Wisconsin (Madison), Teacher's College at New Paltz, and Rutgers University. In 1955-56 he held a Ford Foundation Faculty Fellowship.

15. Robert H. Sorgenfrey (1915-1996), Ph.D., University of Texas, 1941. B.A. (1937) UCLA. Thesis title: "Concerning triodic continua", (56 pages). Signers: R.L. Moore, F. Burton Jones, H.S. Vandiver, P.M. Batchelder, Edward L. Dodd, H.J. Ettlinger. W.M.
Whyburn, who was Department Head at UCLA at the time of Sorgenfrey's undergraduate work, arranged for him to study with Moore. Sorgenfrey was on the faculty at UCLA from 1942 until his retirement in January of 1979. In 1963 he received the UCLA Distinguished Teaching Award, the first mathematician to do so. He is known for his work in general topology, especially in the production of counter-examples. He directed four Ph.D. students. After retirement he wrote several successful high school mathematics textbooks.

16. Harlan C. Miller (1896-1981), Ph.D., University of Texas, 1941. B.A. (1916) Wellesley; M.A. (1930) Columbia. Thesis title: "On compact unicoherent continua", (72 pages). Signers: R.L. Moore, R.G. Lubben, H.S. Vandiver, P.M. Batchelder, Edward L. Dodd, H.J. Ettlinger. Prior to her graduate program, Miller taught for a number of years at the Hockaday School in Dallas. After her Ph.D., she taught at Winthrop College and North Texas for one year each before joining the faculty at Texas Woman's University, where she spent the rest of her academic career. She was active in University administration there, serving as Director of Mathematics. She helped direct Lida K. Barrett toward studying with RLM. She advised J.R. Boyd in his graduate program. He later developed the Moore-style mathematics program at Guilford College. Texas Woman's University has an annual Harlan Miller Lecture Series.

17. Gail S. Young (1915-1999), Ph.D., University of Texas, 1942. B.A. (1939) Texas. Thesis title: "Concerning the outer boundaries of certain connected domains", (70 pages). Signers: R.L. Moore, F. Burton Jones, R.G. Lubben, Edward L. Dodd, H.S. Vandiver, N. Coburn, H.J. Ettlinger. Young held appointments at Purdue, Michigan, Tulane, Rochester, Case-Western Reserve, Wyoming, and Columbia. He was Chair at at least two of these. He served as President of the Mathematical Association of America in 1970-71.
He won the Distinguished Service Award of the MAA in 1987. He worked with the School Mathematics Study Group and with the Committee on the Undergraduate Program in Mathematics. He directed fourteen Ph.D. students, with one of whom (John Hocking) he wrote the successful textbook Topology. Another was Beauregard Stubblefield, who has engagingly described in an interview conducted by Albert C. Lewis for the Center for American History how he became aware of Moore's principles of teaching through R.L. Wilder, Gail Young, and E.E. Moise, three of Moore's students at Michigan at the time of Stubblefield's graduate program.

18. R.H. Bing (1914-1986), Ph.D., University of Texas, 1945. B.S. (1935) Southwest Texas; M.Ed. (1938) Texas. Thesis title: "Concerning simple plane webs", (34 pages). Signers: R.L. Moore, Edwin Ford Beckenbach, H.J. Ettlinger, H.S. Vandiver, P.M. Batchelder, C.T. Gray, J.G. Umstattd, Hob Gray. Bing's principal academic appointments were at Wisconsin (1943-1973) and Texas (1973-1978). He was a member of the National Academy of Sciences. He served as President of the American Mathematical Society in 1977-78 and President of the Mathematical Association of America in 1963-64. His work was centered at first on continuum theory, then on 3-manifolds. He is also well known for the Bing metrization theorem. He was the Colloquium Lecturer in 1970, resulting in his Colloquium book Topology of 3-manifolds. The American Mathematical Society published, in two volumes, the Collected Papers of R.H. Bing. He directed thirty-eight Ph.D. students, many of whom developed substantial reputations. His background was that of a high school teacher and football coach. When F.B. Jones was asked whether RLM at first did not recognize Bing's talent, Jones replied, with a twinkle in his eye, that "In later years Moore didn't remember it that way." Bing served as Chair of the Wisconsin and of the Texas Mathematics Departments.
He was responsible for the MAA's film Challenge in the Classroom, which was about Moore's teaching methods.

19. Edwin E. Moise (1919-1998), Ph.D., University of Texas, 1947. B.A. (1940) Tulane. Thesis title: "An indecomposable continuum which is homeomorphic to each of its nondegenerate subcontinua", (24 pages). Signers: R.L. Moore, H.S. Wall, F. Burton Jones, H.J. Ettlinger. Moise's dissertation involved the pseudo-arc, a term he coined. It was used to solve an old problem of Knaster. He held academic appointments at Michigan, Harvard, and Queens College, CUNY. It was at Michigan that he began his most important work on 3-manifolds, culminating in his proof, completed at the Institute for Advanced Study, that every 3-manifold can be triangulated. He went to Harvard as James B. Conant Professor of Mathematics and Education. From 1955 to 1958 he was a member of the Executive Committee of the International Commission on Mathematical Instruction. He served as Vice-President of the American Mathematical Society in 1973-74 and as President of the Mathematical Association of America in 1967-1968. He wrote a number of successful textbooks, and a treatise on Geometric Topology in Dimensions 2 and 3. He directed three Ph.D. students. In his last years he devoted his attention to 19th century English poetry. During the Second World War he served in the U.S. Navy as a Japanese translator.

20. Richard D. Anderson (1922-2008), Ph.D., University of Texas, 1948. B.A. (1941) Minnesota. Thesis title: "Concerning upper semi-continuous collections of continua", (22 pages). Signers: R.L. Moore, F. Burton Jones, H.S. Wall, H.J. Ettlinger, H.S. Vandiver. Anderson was recruited by Moore to do graduate work in the Fall of 1941. His graduate program was interrupted by a tour of duty in the U.S. Navy, where he served at sea. His principal academic appointments were at Pennsylvania and LSU.
Like a number of Moore's other students, including Bing, Jones, Moise, and Burgess, he held appointments at the Institute for Advanced Study in Princeton. His work at first centered around the geometric topology of continua. He subsequently was largely responsible, along with his students, for developing infinite-dimensional topology. He directed ten Ph.D. students at LSU and, as noted earlier, contributed to the direction of Lida Barrett at Pennsylvania. A number of his students have had distinguished careers. He served as Vice-President of the American Mathematical Society in 1972-73 and as President of the Mathematical Association of America in 1981-82. He received the Distinguished Service Award of the MAA. He has in more recent years devoted his major efforts to reform in mathematics education and, more generally, in science education. He is currently Senior Consultant to the NSF sponsored Louisiana Systems Initiatives Program.

21. Mary Ellen (Estill) Rudin (1924-2013), Ph.D., University of Texas, 1949. B.A. (1944) Texas. Thesis title: "Concerning abstract spaces", (27 pages). Signers: R.L. Moore, H.J. Ettlinger, H.S. Wall, F. Burton Jones, E.F. Mitchell, David L. Miller. Rudin's principal academic appointments were at Duke, where she met Walter Rudin, Rochester, and Wisconsin, from which she retired as Grace Chisholm Young Professor of Mathematics. She served as Vice-President of the American Mathematical Society in 1980-81 and has been very active in AMS affairs and committee work. Her research has been in set-theoretic topology, especially using axiomatic set theory. She has, as C.E. Aull has noted, ushered in the Rudin Era in general topology. She has directed sixteen Ph.D. students and is largely responsible for directing the Ph.D. research of several others, including Judy Roitman and William Fleissner at UC Berkeley.
Note: In their article "By their fruits shall ye know them: some remarks on the interaction of general topology with other areas of mathematics", appearing in History of Topology, edited by I.M. James, Elsevier, 1999, Teun Koetsier and Jan van Mill write: "In that period general topology rather unexpectedly succeeded in solving several difficult problems outside its own area of research, in functional analysis and in geometric and algebraic topology. ...There were in that period at least two major developments in general topology that revolutionized the field: the creations of infinite-dimensional topology and set theoretic topology. It was mainly due to the efforts of Dick Anderson and Mary Ellen Rudin that these fields have played such a dominant role in general topology ever since."

It is interesting to note that Anderson and Rudin comprised a two-person class under Moore in the immediate post-war years. Note that of the preceding five students, four became President of the MAA and four became Vice-President of the AMS. One might wonder whether this is duplicated by any other successive group of five students by any one thesis advisor.

22. Cecil E. Burgess (1920-2004), Ph.D., University of Texas, 1951. B.S. (1941) West Texas. Thesis title: "Concerning continua and their complementary domains in the plane", (38 pages). Signers: R.L. Moore, H.S. Wall, H.S. Vandiver, H.J. Ettlinger. Burgess's graduate program was also interrupted by service in the U.S. Navy. After leaving Texas, he went to the University of Utah, where he remained throughout his career, except for leaves, which he usually spent working with R.H. Bing. Most of his work, and that of his students, has been centered on Bing-style topology. He directed ten Ph.D. students, and some of them are quite prominent. He served for a number of years as Chair of the Department.

23. B.J. Ball (1925-1996), Ph.D., University of Texas, 1952. B.A. (1948) Texas.
Thesis title: "Concerning continuous and equicontinuous collections of arcs", (36 pages). Signers: R.L. Moore, H.S. Wall, R.G. Lubben, H.J. Ettlinger, Homer V. Craig. Ball entered the Navy before getting his B.A. On his return in 1946 he moved into Moore's graduate program. His major academic appointments were at Virginia and Georgia. He served as Chair at Georgia for a number of years. His work was in continuum theory, general topology, and, in later years, shape theory. He directed eight Ph.D. students and contributed to the direction of many others.

24. Eldon Dyer (1929-1993), Ph.D., University of Texas, 1952. B.A. (1947) Texas; B.S. (1947) Texas. Thesis title: "Certain conditions under which the sum of the elements of a continuous collection of continua is an arc", (14 pages). Signers: R.L. Moore, H.S. Wall, R.G. Lubben, H.J. Ettlinger, Homer V. Craig. Dyer's academic appointments included Georgia, Johns Hopkins, Chicago, Rice, and CUNY, from which he retired as Distinguished Professor in 1991. He chaired the Department of Mathematics at the CUNY Center for Graduate Studies, 1967-1970. He is best known for his work in algebraic topology and for his six Ph.D. students, among whom is Robion Kirby. Like Wilder and Bing, he was a consulting editor for the Encyclopedia Britannica. He held a Sloan Fellowship in 1960-62 and an NSF Post-Doctoral Fellowship in 1955-56. On two occasions he was a visiting member of the Institute for Advanced Study in Princeton. He served as Editor of the Proceedings of the AMS, 1960-65, and as Associate Editor of the Transactions of the AMS.

25. Mary-Elizabeth Hamstrom (1927-2009), Ph.D., University of Texas, 1952. B.A. (1948) Pennsylvania. Thesis title: "Concerning webs in the plane", (35 pages). Signers: R.L. Moore, H.S. Wall, R.G. Lubben, H.J. Ettlinger, Homer V. Craig. Hamstrom's principal academic appointment was at the University of Illinois. Her research was mostly in geometric topology. She directed nine Ph.D. students.
Before going to Austin, Hamstrom had been taught by two of Moore's earlier students: Anna Mullikin in high school and J.R. Kline at Penn. As were many of the students in Austin in the 1940s and early 1950s, she was strongly influenced by F.B. Jones.

26. John M. Slye (1923-2014), Ph.D., University of Texas, 1953. B.S. (1945) California Institute of Technology. Thesis title: "Flat spaces for which the Jordan Curve Theorem holds true", (19 pages). Signers: R.L. Moore, H.S. Wall, H.J. Ettlinger, D.S. Hughes, R.N. Little, Jr. Slye's academic appointments were at the Universities of Minnesota and Houston. His work was in geometric topology. He directed two Ph.D. students. During the Second World War he served in the U.S. Navy.

27. John T. Mohat (1924-1993), Ph.D., University of Texas, 1955. B.A. (1950) Texas Western. Thesis title: "Concerning spirals in the plane", (76 pages). Signers: R.L. Moore, H.S. Wall, H.J. Ettlinger, Homer V. Craig. Mohat spent his academic career at the University of North Texas. He served in the U.S. Army during the Second World War.

28. Bennie J. Pearson (1929--), Ph.D., University of Texas, 1955. B.A. (1950) Texas. Thesis title: "A connected point set in the plane that spirals down on each of its points", (18 pages). Signers: R.L. Moore, H.S. Wall, H.J. Ettlinger, Homer V. Craig. Pearson's major academic appointment was at the University of Missouri-Kansas City. He served as Chair of the Department for six years. He directed three Ph.D. students.

29. Steve Armentrout (1930-2020), Ph.D., University of Texas, 1956. B.A. (1951) Texas. Thesis title: "On spirals in the plane", (34 pages). Signers: R.L. Moore, H.J. Ettlinger, D.S. Hughes, C.W. Horton, H.S. Wall. Armentrout taught at the University of Iowa for a number of years and then at Penn State. He has been active in AMS committee work and served as Treasurer of the AMS. He has worked in geometric topology and differential topology. He has directed twelve Ph.D.
students, many of whom are very active.

30. William S. Mahavier (1930-2010), Ph.D., University of Texas, 1957. B.S. (1951) Texas. Thesis title: "A theorem on spirals in the plane", (26 pages). Signers: R.L. Moore, H.S. Wall, H.J. Ettlinger, C.W. Horton. Mahavier was a physics major; in fact, his only degree in mathematics was the Ph.D. Mahavier's academic appointments included Illinois Institute of Technology, University of Tennessee, and Emory University. His work has centered on continuum theory. He has directed eight Ph.D. students.

31. L. Bruce Treybig (1931-2008), Ph.D., University of Texas, 1958. B.S. (1953) Texas. Thesis title: "Concerning locally peripherally separable spaces", (21 pages). Signers: R.L. Moore, H.J. Ettlinger, F.A. Matsen?, Homer V. Craig. Treybig has taught at Tulane University and at Texas A&M University. His work has been primarily in general topology and continuum theory. He has directed seven Ph.D. students.

32. James N. Younglove (1927--), Ph.D., University of Texas, 1958. B.A. (1951) Texas. Thesis title: "Concerning dense metric subspaces of certain non-metric spaces", (22 pages). Signers: R.L. Moore, R.G. Lubben, R.N. Little, H.S. Wall. After graduating from high school, Younglove served two years in the U.S. Navy before entering Texas as a freshman. Younglove's academic appointments were at the University of Missouri (Columbia) and at the University of Houston, where he served as Chair for a number of years. His work was primarily in general topology, especially metrization theory. He directed one Ph.D. student.

33. George W. Henderson (1936--), Ph.D., University of Texas, 1959. B.A. (1958) Texas. Thesis title: "Proof that every compact continuum which is topologically equivalent to each of its nondegenerate subcontinua is an arc", (19 pages). Signers: R.L. Moore, H.P. Hanson, R.E. Lane, H.S. Wall. Henderson's thesis was a proof that every decomposable hereditarily equivalent continuum is an arc.
(The word "decomposable" was inadvertently omitted from title pages of final copies of his thesis.) He taught at the University of North Carolina, University of Virginia, Rutgers, and the University of Wisconsin, Milwaukee, where he directed one Ph.D. student.

34. John M. Worrell (1933--), Ph.D., University of Texas, 1961. B.A. (1954) Texas; M.D. (1957) Texas. Thesis title: "Concerning scattered point sets", (41 pages). Signers: R.L. Moore, H.S. Wall, H.J. Ettlinger. Before entering graduate school, Worrell obtained an M.D. degree. After he received his Ph.D. in mathematics, he worked at Sandia for a number of years, on his own research and on problems of interest to the space program. His mathematical work has centered on general topology, often in collaboration with Howard Wicke. He later taught at Ohio University and is now in private (medical) practice. While at Ohio University he created and developed, with the assistance of George M. Reed, the Institute for Medicine and Mathematics. He held an NSF post-doctoral fellowship in 1961-62.

35. Howard Cook (1933-2010), Ph.D., University of Texas, 1962. B.S. (1956) Clemson. Thesis title: "On the most general closed and bounded plane point set through which it is possible to pass a pseudo-arc", (34 pages). Signers: R.L. Moore, H.J. Ettlinger, R.E. Lane, H.S. Wall. Cook's academic appointments have been at Auburn, North Carolina, Georgia, Tasmania, and Houston, mostly at Houston. In his thesis he characterized those compact sets in the plane which can be embedded in pseudo-arcs, an analogue of the Moore-Kline characterization of those which are subsets of arcs. His work has been in continuum theory and in general topology (Moore spaces). He has directed five Ph.D. students.

36. James L. Cornette (1935--), Ph.D., University of Texas, 1962. B.S. (1956) West Texas; M.A. (1959) Texas. Thesis title: "Continuumwise accessibility", (52 pages). Signers: R.L. Moore, H.J. Ettlinger, R.E. Lane, H.S. Wall.
Cornette's principal position has been at Iowa State University. His earlier work was in continuum theory; he has turned to biomathematics in more recent years. He has directed seven Ph.D. students. He is currently University Professor of Mathematics and Director of the Center for Bioinformatics and Biological Statistics at Iowa State. In 1985 he began a collaborative program of research with three other scientists which has resulted in twenty-one journal articles, ten review and expository articles, and three patents.

37. Dennis K. Reed (1933-1986), Ph.D., University of Texas, 1965. B.S. (1959) Texas. Thesis title: "Concerning upper semi-continuous collections of finite point sets", (41 pages). Signers: R.L. Moore, H.S. Wall, Homer V. Craig, Patrick L. Odell. Reed's academic career was at the University of Utah. He won the University's Distinguished Teaching Award in 1973.

38. Harvy L. Baker (1938--), Ph.D., University of Texas, 1965. B.A. (1960) Texas. Thesis title: "Complete amonotonic collections", (35 pages). Signers: R.L. Moore, Homer V. Craig, R.G. Lubben, H.S. Wall. Baker has taught at the University of Nebraska, where he directed two Ph.D. students, and at The University of Texas at Arlington.

39. Blanche Joanne (Monger) Baker (1934--), Ph.D., University of Texas, 1965. B.A. (1956) Lamar State; M.A. (1958) Texas. Thesis title: "Concerning uncountable collections of triods", (45 pages). Signers: R.L. Moore, R.G. Lubben, Homer V. Craig, H.S. Wall. Baker has taught at the University of Nebraska and at Lamar University.

40. Roy D. Davis (1938--), Ph.D., University of Texas, 1966. B.A. (1961) Texas; M.A. (1964) Texas. Thesis title: "Concerning the sides from which certain sequences of arcs converge to a compact irreducible continuum", (35 pages). Signers: R.L. Moore, H.J. Ettlinger, R.G. Lubben, H.S. Wall. After leaving Texas, Davis worked in the aerospace industry in Southern California.

41. Jack W. Rogers (1943--), Ph.D., University of Texas, 1966. B.A.
(1963) Texas; M.A. (1965) Texas. Thesis title: "A space whose regions are the simple domains of another space", (34 pages). Signers: R.L. Moore, H.J. Ettlinger, R.G. Lubben, W.E. Millett, H.S. Wall. Rogers has taught at Emory University and at Auburn University, where he is currently Professor of Mathematics and Director of the Auburn University Honors College. His early work was in continuum theory; he later changed to applied mathematics and computational linear algebra. He has directed three Ph.D. students. Rogers first encountered Moore as a 10th grade high school student. He took the summer geometry course that year.

42. Martin D. Secker (1927-2018), Ph.D., University of Texas, 1966. B.A. (1949) North Texas; M.A. (1950) North Texas; M.A. (1964) Texas. Thesis title: "Reversibly continuous bisensed transformations of an annulus into itself", (28 pages). Signers: R.L. Moore, H.J. Ettlinger, R.G. Lubben, H.S. Wall. After leaving Austin, Secker taught first at Iowa State University, then at Branson School, a private college preparatory school in California, and at the College of Marin.

43. David E. Cook (1935--), Ph.D., University of Texas, 1966. B.A. (1958) Texas; M.A. (1960) Texas. Thesis title: "Concerning compact point sets with noncompact closures", (39 pages). Signers: R.L. Moore, Homer V. Craig, W.T. Guy, Jr., Harold P. Hanson, R.G. Lubben. Cook's main academic appointment was at the University of Mississippi, where he directed three Ph.D. students.

44. John W. Hinrichsen (1940--), Ph.D., University of Texas, 1967. B.A. (1961) Texas; M.A. (1964) Texas. Thesis title: "Certain web-like continua," (26 pages). Signers: R.L. Moore, W.T. Guy, Jr., R.G. Lubben, H.S. Wall, W.E. Millett. Hinrichsen's academic career was spent at Auburn University. His research has been in continuum theory.

45. Joel L. O'Connor (1942--), Ph.D., University of Texas, 1967. B.A. (1962) Texas. Thesis title: "Holes in two-dimensional space", (51 pages). Signers: R.L. Moore, Homer V.
Craig, Robert E. Greenwood, W.E. Millett, H.S. Wall. O'Connor taught at the University of Florida and then went into industry as an applied mathematician. He has consulted with the NSA and with the Vanderbilt University College of Medicine and, jointly with a medical physicist, founded Clinical Database Systems. He has consulted with other private firms and with state and local governments. O'Connor first encountered Moore as a high school student in Moore's summer geometry course.

46. John W. Green (1943--), Ph.D., University of Texas, 1968. B.A. (1965) Texas; M.A. (1966) Texas. Thesis title: "Concerning the separation of certain plane-like spaces by compact dendrons", (85 pages). Signers: R.L. Moore, Homer V. Craig, R.G. Lubben, H.S. Wall. Green was on the University of Oklahoma faculty for fifteen years. He then obtained a Ph.D. in mathematical statistics from Texas A&M University, then taught at the University of Delaware for five years. He has since been employed by E.I. DuPont as senior research biostatistician. He has said that his success in his present position is largely due to the training he obtained in Moore's classes. He has directed four Ph.D. students, two at Oklahoma in topology and two at Delaware in statistics.

47. Michael H. Proffitt (1942-2016), Ph.D., University of Texas, 1968. B.A. (1964) Texas; M.A. (1966) Texas. Thesis title: "Concerning uncountable collections of mutually exclusive compact continua", (32 pages). Signers: R.L. Moore, Homer V. Craig, R.E. Greenwood, H.S. Wall. After a post at SUNY New Paltz, Proffitt returned in 1972 to the University of Texas, where he was a Robert A. Welch Fellow in chemistry and later in physics. In 1980 he moved to the University of Colorado, working in atmospheric research, more specifically, measurements of ozone. His measurements identified the cause of the ozone hole and its spread into other latitudes (published as articles in Nature and Science). He has written over 100 journal publications.
He retired in 2004 as Senior Scientific Officer of the World Meteorological Organization and continued his scientific work in Buenos Aires, Argentina.

48. Jesse A. Purifoy (1938--), Ph.D., University of Texas, 1969. B.A. (1963) Texas; M.A. (1965) Texas. Thesis title: "Some separation theorems", (50 pages). Signers: R.L. Moore, P.R. Meyer, D.S. Hymann, H.S. Wall, J.R. Whiteman. After leaving Texas, Purifoy joined the faculty at Memphis State University, where he helped start a Ph.D. program in mathematics. While there he began consulting with a manufacturer of programmable calculators and with municipal bond companies, municipalities, and other governmental agencies. He then moved to Houston and remained busy with computer hardware and software. He currently owns a software company, Purifoy Systems Analysis, Inc.

49. Robert E. Jackson (1943--), Ph.D., University of Texas, 1969. B.A. (1964) Texas; M.A. (1966) Texas. Thesis title: "Concerning certain plane-like domains", (57 pages). Signers: R.L. Moore, Homer V. Craig, Robert E. Greenwood, W.T. Guy, Jr., H.S. Wall. On leaving Texas, Jackson taught at Dickinson College, Carlisle, PA for several years and then went into industry, first with NCNB in North Carolina as a systems analyst, then with Diagnostic Laboratories, and then with BMC Software, where he is now a senior computer scientist.

50. Nell Elizabeth (Stevenson) Kroeger (1944--), Ph.D., University of Texas, 1969. B.A. (1965, in microbiology) Texas; M.A. (1968) Texas. Thesis title: "Concerning indecomposable continua and upper semi-continuous collections of nondegenerate continua", (38 pages). Signers: R.L. Moore, Robert E. Greenwood, Homer V. Craig, W.T. Guy, Jr., H.S. Wall. Stevenson held an academic position at SUNY Binghamton before going into the private sector. She currently is involved with computer software and has an intense interest in classical music.

Moore directed very few M.A. students; two were Lucille S. Whyburn, wife of Gordon T.
Whyburn, and Martin Ettlinger, son of H.J. Ettlinger. Martin Ettlinger had an illustrious career in chemistry, being a professor at Rice and then at the University of Copenhagen. In an interview, Ettlinger has remarked that the intellectual atmosphere in Moore's classes was never duplicated in his experience except when he was a Junior Fellow at Harvard.

Mathematicians who studied with Moore but who wrote Ph.D. theses under others include W.L. Ayres (Kline), Lida Barrett (Kline-Anderson), Robert Williams (G.T. Whyburn), Steven Jones and Gary Richter (R.H. Bing), D.R. Stocks and E. Hensley (Greenwood), D.R. Traylor (Fitzpatrick), W.T. Reid, W.M. Whyburn, O.H. Hamilton, J.H. Barrett, D.H. Tucker, B. Fitzpatrick, and E.I. Deaton (H.J. Ettlinger). Of the last group, Hamilton and Fitzpatrick subsequently worked mainly in the directions in which Moore had started them. In the 1966 film Challenge in the Classroom, Hamilton was the only student Moore mentioned by name.

Many of H.S. Wall's Ph.D. students, including John S. Mac Nerney, Pasquale Porcelli, John Neuberger, Saul Drobnies, Sam Young, Coke S. Reed, Jack B. Brown, Robert Dorroh, and R. Daniel Mauldin, were profoundly influenced by Moore.

A special class of students are those who left Austin in the 1960s to study elsewhere; these include Raymond Houston and George Golightly (Houston), Michel Smith and Tom Jacob (Emory), Kenneth Van Doren, Kermit Smith, Douglas Moreman, John Bales, and Nick Williams (Auburn), and Don Fox (Riverside).

A number of women who were or later became wives of students of Moore took courses from him. These include Jean Mahavier, Katherine Cook, June Treybig, Janet Rogers, and Gayle Ball.

Finally, there are those persons who did not become mathematicians at all, but were very successful in other endeavors and who attribute their success, to greater or lesser extent, to the training they got from Moore. These include James Wm.
McClendon, Distinguished Scholar-in-Residence at the Fuller Theological Seminary in California, and Patricia Pound, Secretary of the Governor's Committee (for the State of Texas) for Persons with Handicaps. McClendon has written a number of books on philosophy and theology; he is currently completing a three-volume systematic theology. He was long (1971-1990) a professor at the Graduate Theological Union, Berkeley, California. Also, there are Robert Boyer, Professor of Computer Science and Philosophy at The University of Texas, Harry Lucas, Jr., a successful businessman, Joel Finegold, a free-lance detergent chemist, and Margaret Ball, a successful writer. Lorene Rogers, a distinguished chemist and a President Emeritus of the University of Texas at Austin, has said that she had a severe case of mathematics anxiety, especially at the prospect of having to take calculus, until she took solid geometry and then calculus under Moore. That she was successful in calculus is evinced by Moore's having tried to recruit her into a career in mathematics.

Reference has been made above to several of Moore's students who did not become mathematicians and who commented very favorably on their training under him. For the sake of completeness, and for balance, it should be noted that some of his students who did become mathematicians with successful academic careers viewed their training as a mixed blessing, and expressed their wish that they had had a wider mathematical education, specifically including the learning of algebraic methods in topology. Most of his students who directed Ph.D. students made sure that their students did learn some algebraic topology.

At least two other mathematicians, H.S. Wall and L.E. Dickson, directed more Ph.D. students than did Moore. It is very doubtful that anyone else directed students over a longer period of time, from 1916 to 1969. This is the more remarkable in that his first student, J.R. Kline, did not graduate until eleven years after Moore's own Ph.D.
was awarded in 1905.

Of the first half of Moore's students, more than half first encountered him in their graduate program. Of the remainder, a substantial majority took courses from him as undergraduates, and at least two studied with him while still in high school. More precisely, of his first twenty-seven students, seventeen already had Bachelor's degrees before taking a course from him; Dyer took two courses from him the summer he received his B.A. and B.S., and Young came as a senior and started directly in Moore's graduate course. Of the other twenty-three, all save four took lower level courses either from him or from his colleagues.

In looking over the Ph.D. students' dissertation titles, we note that four of them, some of the early ones, are in geometry. Many of the later ones deal with continuum theory; not as many are in abstract spaces, and several are on spirals in the plane, a subject that has not stirred much interest outside Moore's school. The same is true of webs. Bing once remarked that if anyone wanted a reprint of the journal article based on his dissertation, he still had forty-eight of the fifty copies provided him. The list of signers of dissertations helps to provide an overview of the mathematics program at Texas from 1920 until 1969.

Moore's calculus classes are the subject of one completed study (Eyles) and one ongoing study (H. Cook). There are many essays and articles available on-line at the above URL. Moreover, over the last several years a number of interviews of persons with knowledge of Moore's methods have been conducted. Transcripts of these interviews are available through the Center for American History at The University of Texas at Austin. I particularly mention those with R.D. Anderson, Lida K. Barrett, John W. Neuberger (this has a full discussion of Moore's calculus course), John M. Worrell, Mrs. B.J. Ball, Mrs. R.H. Bing, Coke S. Reed, John W. Green.
For a complete inventory of transcripts of such interviews, one may contact info@edu-adv-foundation.org. Acknowledgements. In addition to the sources cited, a number of sources were used in the formulation of this listing, including various publications of the American Mathematical Society and the Mathematical Association of America. The Archives of the University of Pennsylvania and of the University of Texas also provided much helpful information. The Center for American History at The University of Texas and its staff, specifically including Ralph Elder and Christopher Bourell, have been very helpful. The Educational Advancement Foundation and Harry Lucas, Jr. have also provided a great deal of assistance. I especially thank Sherry White, Connie Lang, Laurie Schmid, and Kate Walden of the EAF for their help. A number of individuals, including some of Moore's students, kindly provided information that could not have been obtained by other means. The author, Ben Fitzpatrick, Jr. (1932-2000), was a professor of mathematics at Auburn University through most of his career. [This article has been revised to bring some biographical details up to date. -- January 2021.]
{"url":"http://legacyrlmoore.org/reference/fitzpatrick.html","timestamp":"2024-11-08T10:47:41Z","content_type":"text/html","content_length":"54892","record_id":"<urn:uuid:99ee82d9-a1bf-4008-833c-524a3d80093c>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00201.warc.gz"}
How do you find the limit of x^(sin(x)) as x approaches 0? | HIX Tutor

How do you find the limit of #x^(sin(x))# as x approaches 0?

Answer 1

#x^sin x = (1+x-1)^sinx = 1 + sin x/(1!)(x-1) + (sinx(sinx-1))/(2!)(x-1)^2 + cdots#

#lim_(x->0)x^sinx = 1#

Answer 2

let #L = lim_(x to 0) x^(sin x)#
#implies ln L = ln lim_(x to 0) x^(sin x)#
#= lim_(x to 0) ln x^(sin x)#
#= lim_(x to 0) sin x ln x#
#= lim_(x to 0) (ln x)/(1/(sin x))#
#= lim_(x to 0) (ln x)/(csc x)#

this is in indeterminate #oo/oo# form so we can use L'Hôpital's Rule

#= lim_(x to 0) (1/x)/(-csc x cot x)#
#= -lim_(x to 0) (sin x tan x)/(x)#

Next bit is unnecessary, see ratnaker-m's note below...

this is now in indeterminate #0/0# form so we can go again

#ln L = -lim_(x to 0) (cos x tan x + sin x sec^2 x)/(1)#
#= -0#

So: #L = e^(-0) = 1#

Answer 3

To find the limit of x^(sin(x)) as x approaches 0, we can use the concept of exponential and trigonometric limits. By applying the limit properties, we can rewrite the expression as e^(ln(x^(sin(x)))). Then, using the properties of logarithms, we can simplify it further to e^(sin(x) * ln(x)). Next, we can evaluate the limit of sin(x) * ln(x) as x approaches 0. By using the limit properties and the fact that sin(x) approaches 0 as x approaches 0, we can conclude that the limit of sin(x) * ln(x) as x approaches 0 is 0. Finally, we substitute this result back into the original expression, giving us e^0, which equals 1. Therefore, the limit of x^(sin(x)) as x approaches 0 is 1.

Answer from HIX Tutor

When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from.
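All three answers reduce the problem to showing that #lim_(x to 0^+) sin x ln x = 0#, so that the limit is #e^0 = 1#. That claim can be checked numerically (a quick Python sketch, approaching 0 from the right since #ln x# needs x > 0):

```python
import math

# ln L = lim_{x->0+} sin(x)*ln(x): since sin(x) ~ x and x*ln(x) -> 0,
# the exponent goes to 0 and the original limit is e^0 = 1.
for x in (1e-2, 1e-4, 1e-6, 1e-8):
    print(f"x={x:g}  sin(x)*ln(x)={math.sin(x) * math.log(x):+.6f}  "
          f"x^sin(x)={x ** math.sin(x):.8f}")
```

The first column tends to 0 and the second to 1, matching the L'Hôpital result.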
{"url":"https://tutor.hix.ai/question/how-do-you-find-the-limit-of-x-sin-x-as-x-approaches-0-1-8f9af9ca6d","timestamp":"2024-11-10T12:59:11Z","content_type":"text/html","content_length":"585320","record_id":"<urn:uuid:0782c68b-b3ca-407d-a7ea-c038e8190f82>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00262.warc.gz"}
PSY 520 Graduate Statistics Assignments GCU Module 5 Exercises
Grand Canyon University: PSY-520: Graduate Statistics 2014

Chapter 10:
• Explain in your own words why it is important to know the possible errors we might make when rejecting or failing to reject the null hypothesis.
• A primatologist believes that rhesus monkeys possess curiosity. She reasons that, if this is true, then they should prefer novel stimulation to repetitive stimulation. An experiment was conducted in which 12 rhesus monkeys were randomly selected from the university colony and taught to press two bars. Pressing bar 1 always produces the same sound, whereas bar 2 produces a novel sound each time it is pressed. After learning to press the bars, the monkeys are tested for 15 minutes, during which they have free access to both bars. The number of presses on each bar during the 15 minutes is recorded. The resulting data are as follows: Subject Bar 1 Bar 2
a. What is the alternative hypothesis? In this case, assuming a non-directional hypothesis is appropriate because there is insufficient empirical basis to warrant a directional hypothesis.
b. What is the null hypothesis?
c. Using α = 0.05, 2-tailed, what is your conclusion?
d. What error might you be making by your conclusion in part c?
e. To what population does your conclusion apply?
• A researcher is interested in determining whether acupuncture affects pain tolerance. An experiment is performed in which 15 students are randomly chosen from a large pool of university undergraduates. Each subject serves in two conditions. In both conditions, each subject receives a short-duration electric shock to the pulp of a tooth. The shock intensity is set to produce a moderate level of pain to the unanesthetized subject. After the shock is terminated, each subject rates the perceived level of pain on a scale of 0–10, with 10 being the highest level.
In the experimental condition, each subject receives the appropriate acupuncture treatment prior to receiving the shock. The control condition is made as similar to the experimental condition as possible, except a placebo treatment is given instead of acupuncture. The two conditions are run on separate days at the same time of day. The pain ratings in the accompanying table are obtained.
a. What is the alternative hypothesis? Assume a non-directional hypothesis is appropriate.
b. What is the null hypothesis?
c. Using α = 0.05, 2-tailed, what is your conclusion?
d. What error might you be making by your conclusion in part c?
e. To what population does your conclusion apply?
Subject Acupuncture Placebo

Chapter 5:
3. How are sampling distributions generated using the empirical sampling approach?
5. What are the assumptions underlying the use of the z test?
8. How do each of the following differ? a. s and σ b. s² and σ² c. µ and µ_X̄ d. σ and σ_X̄
16. How does increasing the N of an experiment affect the following? a. Power b. Beta c. Alpha d. Size of real effect
20. A set of sample scores from an experiment has an N = 30 and an X̄_obt = 19.
a. Can we reject the null hypothesis that the sample is a random sample from a normal population with µ = 22 and σ = 8? Use α = 0.01, 1-tailed. Assume the sample mean is in the correct direction.
b. What is the power of the experiment to detect a real effect such that µ_real = 20?
c. What is the power to detect a µ_real = 20 if N is increased to 100?
d. What value does N have to equal to achieve a power of 0.8000 to detect a µ_real = 20? Use the nearest table value for z_obt.
21. On the basis of her newly developed technique, a student believes she can reduce the amount of time schizophrenics spend in an institution. As director of training at a nearby institution, you agree to let her try her method on 20 schizophrenics, randomly sampled from your institution.
The mean duration that schizophrenics stay at your institution is 85 weeks, with a standard deviation of 15 weeks. The scores are normally distributed. The results of the experiment show that the patients treated by the student stay a mean duration of 78 weeks, with a standard deviation of 20 weeks.
1. What is the alternative hypothesis? In this case, assuming a non-directional hypothesis is appropriate because there are insufficient theoretical and empirical bases to warrant a directional hypothesis.
2. What is the null hypothesis?
3. What do you conclude about the student's technique? Use α = 0.05, 2-tailed.

A physical education professor believes that exercise can slow the aging process. For the past 10 years, he has been conducting an exercise class for 14 individuals who are currently 50 years old. Normally, as one ages, maximum oxygen consumption decreases. The national norm for maximum oxygen consumption in 50-year-old individuals is 30 milliliters per kilogram per minute, with a standard deviation of 8.6. The mean of the 14 individuals is 40 milliliters per kilogram per minute. What do you conclude? Use α = 0.05, 1-tailed.

PSY 520 Week 1 Discussion 1 Latest-GCU
Review the levels of measurement terms in the Statistics Visual Learner media piece. Compare and contrast Stevens's four scales of measurement, and explain when each type of scale should be used.

PSY 520 Week 1 Discussion 2 Latest-GCU
The professor teaching a large introductory class gives a final exam that has alternate forms, A, B, and C. A student taking the exam using Form B is upset because she claims that Form B is much harder than Forms A and C. Discuss how percentile point data might be useful to determine if the student is correct.

PSY 520 Week 2 Discussion 1 Latest-GCU
The process of random sampling guarantees that the sample selected will be representative of the population. Is this statement true? Discuss.
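As an illustration of the one-sample z test that exercises like the schizophrenia problem above call for, here is a sketch using that problem's numbers (µ = 85, σ = 15, N = 20, sample mean 78). The 1.96 cutoff is the standard two-tailed critical value at the 0.05 level; this is an illustrative sketch, not an answer key:

```python
import math

# One-sample z test (population sigma known), two-tailed, alpha = 0.05.
mu, sigma, n, xbar = 85.0, 15.0, 20, 78.0

z_obt = (xbar - mu) / (sigma / math.sqrt(n))  # standard error = sigma / sqrt(N)
z_crit = 1.96                                  # two-tailed critical value, alpha = 0.05

print(f"z_obt = {z_obt:.3f}")
print("reject H0" if abs(z_obt) > z_crit else "retain H0")
```

Here |z_obt| exceeds 1.96, so the null hypothesis would be rejected at the 0.05 level.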
PSY 520 Week 2 Discussion 2 Latest-GCU
Review the video on Normal Distribution in the Calculations section of the Statistics Visual Learner media piece. Bob compares his SAT Verbal score of 400 to Marge's ACT Verbal score of 20. "I beat you," he exclaims. "My score is 20 times your score!" Although his multiplication is good, his logic is faulty. Explain why.

PSY 520 Week 3 Discussion 1 Latest-GCU
Review the video Linear Correlation in the Calculations section of the Statistics Visual Learner media piece. It is sometimes said that the higher the correlation between two variables, the more likely the relationship is causal. Do you think this is correct? Discuss.

PSY 520 Week 3 Discussion 2 Latest-GCU
Discuss the strengths and weaknesses of correlational and regression studies; discuss concepts such as positive and negative correlations, correlation coefficients, confounding, and causality.

PSY 520 Week 4 Discussion 1 Latest-GCU
Explain why there is an inverse relationship between committing a Type I error and committing a Type II error. What is the best way to reduce both kinds of error?

PSY 520 Week 4 Discussion 2 Latest-GCU
Review the term Significance Test in the Statistics Visual Learner media piece. When a newspaper or magazine article reports the results of a study and draws a conclusion without also reporting whether the results are statistically significant, what are the possible reasons for doing so? How seriously should you take the conclusion offered in such a study?

PSY 520 Week 5 Discussion 1 Latest-GCU
Explain why using the t statistic may be an appropriate alternative to using a z-score (use the concept of estimated standard error to justify your answer).

PSY 520 Week 5 Discussion 2 Latest-GCU
Discuss how the t-test for correlated groups and the t-test for single samples are alike and different.

PSY 520 Week 6 Discussion 1 Latest-GCU
Review the four ANOVA videos in the Calculations section of the Statistics Visual Learner media piece.
Explain the major differences between analyzing a one-way ANOVA versus a two-factor ANOVA, and explain why factorial designs with two or more independent variables (or factors) can become very difficult to interpret.

PSY 520 Week 6 Discussion 2 Latest-GCU
Explain how the ANOVA technique avoids the problem of the inflated probability of making Type I error that would arise using the alternative method of comparing groups two at a time using the t-test for independent groups.

PSY 520 Week 7 Discussion 1 Latest-GCU
Review the three Non-parametric test videos in the Calculations section of the Statistics Visual Learner media piece. A researcher is examining preferences among four new flavors of ice cream. A sample of n = 80 people is obtained. Each person tastes all four flavors and then picks a favorite. The distribution of preferences is as follows. Do these data indicate any significant preferences among the four flavors? Test at the .05 level of significance. Ice Cream Flavor A B C D

PSY 520 Week 7 Discussion 2 Latest-GCU
Is it true that parametric tests are generally more powerful than nonparametric tests? If so, give two reasons why you might choose to use a nonparametric test instead of a parametric test.

PSY 520 Week 8 Discussion 1 Latest-GCU
To determine whether a new sleeping pill has an effect that varies with dosage, a researcher randomly assigns adult insomniacs, in equal numbers, to receive either 4 or 8 grams of the sleeping pill. The amount of sleeping time is measured for each subject during an 8-hour period after the administration of the dosage. What type of design is this, and what type of statistic is needed to analyze the data?

PSY 520 Week 8 Discussion 2 Latest-GCU
Dr. Bill Board designs a 2 X 2 between-subjects factorial design, where Factor A is word frequency (low or high) and Factor B is category cues (no cues or cues). Assume that the data are interval. What type of statistic is needed to analyze the data?
PSY 520 Week 2 Exercise Latest-GCU
Complete the following exercises from Review Questions located at the end of each chapter and put them into a Word document to be submitted as directed by the instructor.
• Chapter 1, numbers 1.8 and 1.9
• Chapter 2, numbers 2.14, 2.17, and 2.18
• Chapter 3, numbers 3.13, 3.14, 3.18, and 3.19
• Chapter 4, numbers 4.9, 4.14, 4.17, and 4.19
Show all relevant work; use the equation editor in Microsoft Word when necessary.

PSY 520 Week 3 Probability Project Latest-GCU
Use the document “Probability Project” to complete the assignment. While APA format is not required for the body of this assignment, solid academic writing is expected, and documentation of sources should be presented using APA formatting guidelines, which can be found in the APA Style Guide, located in the Student Success Center.

Topic 3 – Probability Project
Use the following information to complete the assignment. While APA format is not required for the body of this assignment, solid academic writing is expected, and documentation of sources should be presented using APA formatting guidelines, which can be found in the APA Style Guide, located in the Student Success Center.

PSY 520 Graduate Statistics Essay Assignments
There are many misconceptions about probability, which may include the following.
• All events are equally likely
• Later events may be affected by or compensate for earlier ones
• When determining probability from statistical data, sample size is irrelevant
• Results of games of skill are unaffected by the nature of the participants
• “Lucky”/“Unlucky” numbers can influence random events
• In random events involving selection, results are dependent on numbers rather than ratios
• If events are random then the results of a series of independent events are equally likely
The following statements are all incorrect. Explain the statements and the errors fully using the probability rules discussed in Topic Two.
1.
I have flipped an unbiased coin three times and got heads; it is more likely to get tails the next time I flip it.
2. The Rovers play the Mustangs. The Rovers can win, lose, or draw, so the probability that they win is 1/3.
3. I roll two dice and add the results. The probability of getting a total of 6 is 1/12 because there are 12 different possibilities and 6 is one of them.
4. Mr. Purple has to have a major operation. 90% of the people who have this operation make a complete recovery. There is a 90% chance that Mr. Purple will make a complete recovery if he has this operation.
5. I flip two coins. The probability of getting heads and tails is 1/3 because I can get Heads and Heads, Heads and Tails, or Tails and Tails.
6. 13 is an unlucky number, so you are less likely to win raffles with ticket number 13 than with a different number.

PSY 520 Week 3 Exercise Latest-GCU
Complete the following exercises located at the end of each chapter and put them into a Word document to be submitted as directed by the instructor. Show all relevant work; use the equation editor in Microsoft Word when necessary.
• Chapter 6, numbers 6.7, 6.10, and 6.11
• Chapter 7, numbers 7.8, 7.10, and 7.13

PSY 520 Week 4 Exercise Latest-GCU
Complete the following exercises located at the end of each chapter and put them into a Word document to be submitted as directed by the instructor. Show all relevant work; use the equation editor in Microsoft Word when necessary.
1. Chapter 9, numbers 9.7, 9.8, 9.9, 9.13, and 9.14
2. Chapter 10, numbers 10.9, 10.10, 10.11, and 10.12
3. Chapter 11, numbers 11.11, 11.19, and 11.20
4. Chapter 12, numbers 12.7, 12.8, and 12.10
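The first statement in the probability project above is the gambler's fallacy: because coin flips are independent, the flip after a run of three heads is still heads with probability 1/2. A quick simulation sketch makes the point:

```python
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads

# Collect the outcome that follows every run of three heads.
after_hhh = []
for i in range(3, len(flips)):
    if flips[i - 3] and flips[i - 2] and flips[i - 1]:
        after_hhh.append(flips[i])

# The fraction of heads after HHH stays close to 0.5 — no compensation.
print(len(after_hhh), sum(after_hhh) / len(after_hhh))
```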
{"url":"https://nursingassignmentacers.com/psy-520-graduate-statistics-gcu/","timestamp":"2024-11-07T19:07:37Z","content_type":"text/html","content_length":"118551","record_id":"<urn:uuid:f4f7efd6-772d-48a9-b64f-884a4ea149e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00310.warc.gz"}
How to use LARGE & SMALL formulas in Excel - Excel Avon

Excel Formulas
May 29, 2022

Welcome to Excel Avon

How to use LARGE & SMALL formulas in Excel

In this post you can learn the LARGE formula in Excel. The LARGE function is a built-in statistical function that returns the k-th largest value from the selected numerical array. If k is larger than the number of values in the array, or if k is left blank, it returns a #NUM! error.

=LARGE(array, k)

array – The range or array from which you want the function to return the k-th largest value.
k – An integer that specifies the position, counted from the largest value, i.e., the k-th largest.

□ If array contains no numeric values, LARGE returns a #NUM! error.
□ For example, the LARGE function can be used to find the first-, second-, or third-highest score on a test.

How to use LARGE formula in Excel

Now we will apply the LARGE formula: select the entire range and then the position (k). To apply it to all the cells below, double-click the cell to see the formula, select the range, and press F4 to lock it as an absolute reference. Then press Enter and fill the same formula down to all the cells below (see the attached image).

How to use SMALL formula in Excel

The Excel SMALL formula returns the k-th smallest value from the supplied set of values.

=SMALL(array, k)

array – The range or array from which you want the function to return the k-th smallest value.
k – An integer that specifies the position, counted from the smallest value, i.e., the k-th smallest.

Now we will apply the SMALL formula in the same way: select the entire range and then the position (k). Double-click the cell to see the formula, select the range, and press F4 to lock it as an absolute reference. Then press Enter and fill the same formula down to all the cells below (see the attached image).

So I hope you understand the formulas. You can also see a well-explained video here.
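Outside Excel, the same k-th largest / k-th smallest lookup can be sketched in Python; the sample data and k values below are made up for illustration:

```python
import heapq

data = [36, 72, 15, 90, 48, 61, 27]  # hypothetical cell range

def large(array, k):
    """Like Excel's LARGE(array, k): the k-th largest value."""
    if not 1 <= k <= len(array):
        raise ValueError("#NUM!")  # Excel returns #NUM! for an out-of-range k
    return heapq.nlargest(k, array)[-1]

def small(array, k):
    """Like Excel's SMALL(array, k): the k-th smallest value."""
    if not 1 <= k <= len(array):
        raise ValueError("#NUM!")
    return heapq.nsmallest(k, array)[-1]

print(large(data, 1), large(data, 2))  # 90 72
print(small(data, 1), small(data, 2))  # 15 27
```

`heapq.nlargest(k, array)` returns the k largest values in descending order, so its last element is exactly the k-th largest, mirroring LARGE's behavior.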
{"url":"https://www.excelavon.com/how-to-use-large-small-formulas-in-excel/","timestamp":"2024-11-02T15:23:09Z","content_type":"text/html","content_length":"62107","record_id":"<urn:uuid:6fe9b7a1-5ad2-442b-8def-172bc511bb61>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00051.warc.gz"}
Mathematics MCQs for PPSC FPSC KKPSC BPSC SPSC NTS Test 7

Online Free Taleem is a free online MCQs test site for Lecturer Mathematics. All individuals who are going to appear in the PPSC, FPSC, KKPSC, SPSC, BPSC, AJ&KPSC, or NTS Lecturer Mathematics written test can attempt these tests in order to prepare for it in the best possible way. Our Lecturer Mathematics tests include all the important questions and past papers of Lecturer Mathematics, which have a very high chance of being included in the actual exam, making our tests undoubtedly the best source of preparation. There will be 25 multiple-choice questions in the test. The order of the answers will change randomly each time you start this test. Practice this test at least 5 times if you want to secure high marks. At the end of the test you can see your test score and rating. If you find any incorrect answer in the quiz, simply click on the quiz title and comment below that MCQ so that I can update the incorrect answer in time. Please click the START button below to take this Lecturer Mathematics test online.

Test Instructions:-
Test Name: Lecturer Mathematics
Subject: Math Test 7
Test Type: MCQs Mathematics
Total Questions: 25
Total Time: 20 Minutes
Total Marks: 100

You have 20 minutes to pass the quiz.

Lecturer Mathematics Online Test No. 7

1 / 25 Which of the following is a subgroup of the group G = {0, 1, 2, 3} w.r.t. addition modulo 4?
2 / 25 Let G be a group which has no proper subgroup. Then the order of G is:
3 / 25 The number of distinct left cosets of a subgroup H of a group G is called the ________ of H in G.
4 / 25 Let X be a non-empty set. A permutation on X is a:
5 / 25 Let G be a cyclic group of order 24. Then the order of a^9 is:
6 / 25 Let G be a group of prime order. Then:
7 / 25 Let X have n elements. Then the number of mappings on X is: A). n^2 B). (n)^n C). n! D). n x n
8 / 25 Let G be a cyclic group. Then a subgroup H of G is:
9 / 25 Let G be a group and H be a subgroup of G of order 8. Then the order of G is:
10 / 25 Let G be a cyclic group of order 10. Then the number of subgroups of G is:
11 / 25 Let G = {1, 2, 3, ……, 12} be a group w.r.t. multiplication modulo 13. Then which of the following is a subgroup of G?
12 / 25 Let G be a cyclic group of order 24. Then the order of a^4 is:
13 / 25 Let H, K be two subgroups of a finite group G. Then for any g ∈ G which of the following is true?
14 / 25 Let G be a group of order 37 and a ∈ G. Then the order of a is:
15 / 25 Let G be a cyclic group of order 24 and H be a subgroup of G. Then the order of H is:
16 / 25 Let H, K be two subgroups of a group G. Then the set HK = {hk | h ∈ H ∧ k ∈ K} is a subgroup of G if:
17 / 25 Let G be a finite group and H a subgroup of G. Then which of the following divides the order of G?
18 / 25 Let H and K be subgroups of G. Then which of the following is a subgroup of G?
19 / 25 Let X have n elements. Then the number of bijective mappings on X is: A). n^2 B). (n)^n C). n! D). n x n
20 / 25 Which of the following is abelian but not cyclic?
21 / 25 Let G be an infinite cyclic group. Then the number of generators of G is:
22 / 25 Let G be a cyclic group of order 17. Then the number of subgroups of G is:
23 / 25 Let G be a group of order 36 and let a ∈ G. The order of a is:
24 / 25 Let H and K be subgroups of G. Then H ∪ K is also a subgroup of G if:
25 / 25 Which of the following is a cyclic group?
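Several of the questions above come down to two standard facts: in a cyclic group ⟨a⟩ of order n, the order of a^k is n / gcd(n, k), and a cyclic group of order n has exactly one subgroup for each divisor of n. A quick check of the quiz's instances (illustrative sketch):

```python
from math import gcd

def element_order(n, k):
    # Order of a^k in a cyclic group <a> of order n: n / gcd(n, k).
    return n // gcd(n, k)

def num_subgroups_cyclic(n):
    # A cyclic group of order n has one subgroup per divisor of n.
    return sum(1 for d in range(1, n + 1) if n % d == 0)

print(element_order(24, 9))      # 8  (order of a^9 in a cyclic group of order 24)
print(element_order(24, 4))      # 6  (order of a^4 in a cyclic group of order 24)
print(num_subgroups_cyclic(10))  # 4  (divisors 1, 2, 5, 10)
print(num_subgroups_cyclic(17))  # 2  (prime order: only the trivial subgroup and G)
```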
{"url":"https://onlinefreetaleem.com/mathematics-mcqs-for-ppsc-fpsc-kkpsc-bpsc-spsc-nts-test-7/","timestamp":"2024-11-14T08:54:25Z","content_type":"text/html","content_length":"525607","record_id":"<urn:uuid:7415d08c-239b-4901-aade-604cf33db22a>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00052.warc.gz"}
10 test question in fraction w/answer /grade school

Related topics: algebra of matrices systems of linear equations,2 solving addition equation word problems online trig functions calculator how to translate square feet to linear feet free algebra worksheets with pizzazz fundamental concepts & skills in sq root nth power calculator multiplying mixed numbers calculator examples poems of linear equation free printable pre-algebra worksheets for middle school property of inequality examples of the formula for radical and rational exponents first grade maths

Danen Flad (Reg.: 18.07.2002)
Posted: Wednesday 03rd of Jan 17:59

Will someone help out to resolve my problem with my math? I have tried to get myself a teacher who can help me. But until now I have not succeeded. Its tough to locate someone reachable and reasonably priced. But then I need to solve my problem with 10 test question in fraction w/answer /grade school as my exams are coming up just now. It will be a big help for me if somebody can direct me.

Vofj Timidrov (Reg.: 06.07.2001)
Posted: Friday 05th of Jan 10:31

I have been in your place a few years ago when I was studying 10 test question in fraction w/answer /grade school. What part of hyperbolas and multiplying fractions poses more difficulties? Because I think that what you really need is a good program to help you understand the basic concepts and methods of solving the exercises. Did you ever use a software like that? I have tried many but I have to say that Algebrator is the best and the easiest to use. It's not like those other programs because it teaches you how to think, it doesn't just give you the answers.

Admilal`Leker (Reg.: 10.07.2002)
Posted: Saturday 06th of Jan 14:38

Hey, even I made use of Algebrator to learn more about 10 test question in fraction w/answer /grade school. This was just a remarkable instrument that helped me with all the basic principles. I would suggest you to use this before resorting to the assistance from a private instructor, which is often very expensive.

nedslictis (Reg.: 13.03.2002)
Posted: Monday 08th of Jan 09:46

I remember having often faced difficulties with monomials, subtracting fractions and difference of squares. A really great piece of math program is Algebrator software. By simply typing in a problem homework a step by step solution would appear by a click on Solve. I have used it through many algebra classes – Algebra 2, Algebra 2 and Basic Math. I greatly recommend the program.
{"url":"https://softmath.com/parabola-in-math/exponential-equations/10-test-question-in-fraction.html","timestamp":"2024-11-09T03:53:06Z","content_type":"text/html","content_length":"46032","record_id":"<urn:uuid:f64b4d7b-4ad1-4c99-b79f-f9aeb834f23d>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00445.warc.gz"}
Differentiation: definition and basic derivative rules with Marvin L

Calculus · Session

Session Details
SUMMIT - This session is intended for Summit students; however, anyone may join. The following topics will be covered: find the derivative of a function using the Power Rule with positive exponents; find the derivative of a function using the Power Rule with negative and/or fractional exponents; find the derivative of trigonometric functions.
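The power-rule topics listed for this session can be sanity-checked numerically; the sketch below compares a central-difference approximation against d/dx x^n = n*x^(n-1) for positive, negative, and fractional exponents, plus d/dx sin x = cos x:

```python
import math

def numeric_deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

x = 2.0
for n in (3, -2, 0.5):  # positive, negative, and fractional exponents
    approx = numeric_deriv(lambda t: t ** n, x)
    exact = n * x ** (n - 1)  # the power rule
    print(f"n={n}: {approx:.6f} vs {exact:.6f}")

# Derivative of sin is cos:
print(numeric_deriv(math.sin, x), math.cos(x))
```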
{"url":"https://schoolhouse.world/session/1464","timestamp":"2024-11-11T14:21:05Z","content_type":"text/html","content_length":"238288","record_id":"<urn:uuid:3d4db2f8-e9ab-4187-88ea-dedbad1961df>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00029.warc.gz"}
J. Alex Lee - MATLAB Central
J. Alex Lee. Last seen: about a month ago. Active since 2016. Followers: 0, Following: 0. 9 Questions, 274 Answers, 0 Files, 0 Problems, 31 Cody Solutions.

Cody problems solved:
- Is this triangle right-angled? Given three positive numbers a, b, c, where c is the largest number, return *true* if the triangle with sides a, b and c is righ... (about 2 years ago)
- Find a Pythagorean triple: Given four different positive numbers, a, b, c and d, provided in increasing order: a < b < c < d, find if any three of them com... (about 2 years ago)
- Is this triangle right-angled? Given any three positive numbers a, b, c, return true if the triangle with sides a, b and c is right-angled. Otherwise, return f... (about 2 years ago)
- Triangle sequence: A sequence of triangles is constructed in the following way: 1) the first triangle is Pythagoras' 3-4-5 triangle 2) the s... (about 2 years ago)
- Length of the hypotenuse: Given short sides of lengths a and b, calculate the length c of the hypotenuse of the right-angled triangle. (about 2 years ago)
- Area of an equilateral triangle: Calculate the area of an equilateral triangle of side x. (about 2 years ago)
- Side of an equilateral triangle: If an equilateral triangle has area A, then what is the length of each of its sides, x? (about 2 years ago)
- Side of a rhombus: If a rhombus has diagonals of length x and x+1, then what is the length of its side, y? (about 2 years ago)
- Area of an Isoceles Triangle: An isosceles triangle has equal sides of length x and a base of length y. Find the area, A, of the triangle. (about 2 years ago)
- Dimensions of a rectangle: The longer side of a rectangle is three times the length of the shorter side. If the length of the diagonal is x, find the width... (about 2 years ago)
- Vector creation: Create a vector using square brackets going from 1 to the given value x in steps of 1. Hint: use increment. (about 2 years ago)
- Doubling elements in a vector: Given the vector A, return B in which all numbers in A are doubled. So for: A = [ 1 5 8 ] then B = [ 1 1 5 ... (about 2 years ago)
- Create a vector: Create a vector from 0 to n by intervals of 2. (about 2 years ago)
- Flip the vector from right to left: Examples: x=[1:5], then y=[5 4 3 2 1]; x=[1 4 6], then y=[6 4 1]. Request not ... (about 2 years ago)
- Find max: Find the maximum value of a given vector or matrix. (about 2 years ago)
- Select every other element of a vector: Write a function which returns every other element of the vector passed in. That is, it returns all odd-numbered elements, s... (about 2 years ago)
- Find the sum of all the numbers of the input vector: Examples: Input x = [1 2 3 5], output y is 11. Input x ... (about 2 years ago)
- Convert a temperature reading from Celsius to an unknown scale: Two of the most famous temperature scales are the Celsius and the Fahrenheit scale. In reality, however, there are so many other... (about 4 years ago)
- Length of a short side: Calculate the length of the short side, a, of a right-angled triangle with hypotenuse of length c, and other short side of lengt... (about 4 years ago)
- Counting Money: Add the numbers given in the cell array of strings. The strings represent amounts of money using this notation: $99,999.99. (almost 5 years ago)
- A pangram, or holoalphabetic sentence, is a sentence using every letter of the alphabet at least once. Example: Input s ... (almost 5 years ago)
- Nearest Numbers: Given a row vector of numbers, find the indices of the two nearest numbers. Examples: [index1 index2] = nearestNumbers([2 5 3... (more than 6 years ago)
- Make the vector [1 2 3 4 5 6 7 8 9 10]: In MATLAB, you create a vector by enclosing the elements in square brackets like so: x = [1 2 3 4]. Commas are optional, s... (more than 6 years ago)
- Times 2 - START HERE: Try out this test problem first. Given the variable x as your input, multiply it by two and put the result in y. (more than 6 years ago)
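Several of the triangle exercises above reduce to the Pythagorean identity a^2 + b^2 = c^2. As a quick illustration of the first problem in the list, here is a sketch in Python rather than MATLAB (this is not an actual Cody submission, and the function name is mine):

```python
import math

def is_right_angled(a, b, c):
    """Given sides a, b, c with c the largest, the triangle is
    right-angled iff a^2 + b^2 == c^2 (checked with a float tolerance)."""
    return math.isclose(a * a + b * b, c * c)

print(is_right_angled(3, 4, 5))   # True
print(is_right_angled(2, 3, 4))   # False
```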
{"url":"https://nl.mathworks.com/matlabcentral/profile/authors/7192969?detail=cody","timestamp":"2024-11-03T06:16:45Z","content_type":"text/html","content_length":"121879","record_id":"<urn:uuid:75066a49-ee53-4e7a-ad86-3113e5dd8d22>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00557.warc.gz"}
monad - Computer Dictionary of Information Technology

/mo'nad/ A technique from category theory which has been adopted as a way of dealing with state in functional programming languages in such a way that the details of the state are hidden or abstracted out of code that merely passes it on unchanged.

A monad has three components: a means of augmenting an existing type, a means of creating a default value of this new type from a value of the original type, and a replacement for the basic application operator for the old type that works with the new type.

The alternative to passing state via a monad is to add an extra argument and return value to many functions which have no interest in that state. Monads can encapsulate state, side effects, exception handling, global data, etc. in a pure, lazy, functional way.

A monad can be expressed as the triple (M, unitM, bindM), where M is a function on types and (using Haskell notation):

  unitM :: a -> M a
  bindM :: M a -> (a -> M b) -> M b

I.e. unitM converts an ordinary value of type a into monadic form and bindM applies a function to a monadic value after de-monadising it. E.g. a state transformer monad:

  type S a = State -> (a, State)
  unitS a = \ s0 -> (a, s0)
  m `bindS` k = \ s0 -> let (a,s1) = m s0 in k a s1

Here unitS adds some initial state to an ordinary value and bindS applies function k to a value m. (`fun` is Haskell notation for using a function as an infix operator.) Both m and k take a state as input and return a new state as part of their output. The construction m `bindS` k composes these two state transformers into one, while also passing the value of m to k.

Monads are a powerful tool in functional programming. If a program is written using a monad to pass around a variable (like the state in the example above) then it is easy to change what is passed around simply by changing the monad. Only the parts of the program which deal directly with the quantity concerned need be altered; parts which merely pass it on unchanged stay the same.

In functional programming, unitM is often called initM or returnM, and bindM is called thenM. A third function, mapM, is frequently defined in terms of thenM and returnM. It applies a given function to a list of monadic values, threading some variable (e.g. state) through the applications:

  mapM :: (a -> M b) -> [a] -> M [b]
  mapM f []     = returnM []
  mapM f (x:xs) = f x `thenM` ( \ x2 ->
                  mapM f xs `thenM` ( \ xs2 ->
                  returnM (x2 : xs2) ))
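The Haskell definitions above can be mimicked in other languages. Below is a rough Python sketch of the same state-transformer monad, in which a monadic value is simply a function from a state to a (value, new-state) pair; the names unit_s, bind_s and map_m, and the counter example, are mine and not part of the dictionary entry:

```python
# A "monadic value" here is a function: state -> (value, new_state),
# mirroring  type S a = State -> (a, State)  above.

def unit_s(a):
    """unitS: wrap an ordinary value, leaving the state untouched."""
    return lambda s0: (a, s0)

def bind_s(m, k):
    """bindS: run m, feed its result to k, threading the state through."""
    def composed(s0):
        a, s1 = m(s0)
        return k(a)(s1)
    return composed

def map_m(f, xs):
    """mapM: apply f to each element, threading the state left to right."""
    if not xs:
        return unit_s([])
    return bind_s(f(xs[0]), lambda x2:
           bind_s(map_m(f, xs[1:]), lambda xs2:
           unit_s([x2] + xs2)))

# Example: number each element, using an integer counter as the state.
def number(x):
    return lambda s0: ((s0, x), s0 + 1)

result, final_state = map_m(number, ["a", "b", "c"])(0)
print(result, final_state)   # [(0, 'a'), (1, 'b'), (2, 'c')] 3
```

Only `number` touches the counter; `map_m` merely threads it through, which is the point the entry makes about monads hiding state from code that passes it on unchanged.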
{"url":"https://www.computer-dictionary-online.org/definitions-m/monad","timestamp":"2024-11-07T06:07:31Z","content_type":"text/html","content_length":"11538","record_id":"<urn:uuid:fd45fa4e-9abb-4f99-980f-3259c37f513c>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00806.warc.gz"}
QMTESTIMATE procedure • Genstat Knowledge Base 2024

Calculates QTL effects in multi-trait trials (M.P. Boer, M. Malosetti, S.J. Welham & J.T.N.M. Thissen).

Options:
- PRINT = string tokens: What to print (summary, model, components, effects, means, stratumvariances, monitoring, vcovariance, deviance, Waldtests, missingvalues, covariancemodels); default summ
- POPULATIONTYPE = string token: Type of population (BC1, DH1, F2, RIL, BCxSy, CP); must be set
- NGENERATIONS = scalar: Number of generations of selfing for a RIL population
- NBACKCROSSES = scalar: Number of backcrosses for a BCxSy population
- NSELFINGS = scalar: Number of selfings for a BCxSy population
- VCMODEL = string token: Specifies the variance-covariance model for the set of traits (identity, diagonal, cs, hcs, outside, fa, fa2, unstructured); default cs
- VCPARAMETERS = string token: Whether to re-estimate the variance-covariance model parameters (estimate, fix); default esti
- VCSELECT = string token: Whether to re-select the variance-covariance model (no, yes); default no
- STANDARDIZE = string token: How to standardize the traits (none, normalize); default norm
- CRITERION = string token: Criterion to use for model selection (aic, sic); default sic
- FIXED = formula: Defines extra fixed effects
- UNITFACTOR = factor: Saves the units factor required to define the random model when UNITERROR is to be used
- MVINCLUDE = string tokens: Whether to include units with missing values in the explanatory factors and variates and/or the y-variates (explanatory, yvariate); default expl, yvar
- MAXCYCLE = scalar: Limit on the number of iterations; default 100
- WORKSPACE = scalar: Number of blocks of internal memory to be set up for use by the REML algorithm; default 100

Parameters:
- Y = variates: Quantitative traits to be analysed; must be set
- GENOTYPES = factors: Genotype factor; must be set
- FTRAITS = factors: Factor indicating the trait of each y-value; must be set
- UNITERROR = variate: Uncertainty on trait means (derived from individual unit or plot error) to be included in QTL analysis; default * i.e. omitted
- VCINITIAL = pointers: Initial values for the parameters of the variance-covariance model
- SELECTEDMODEL = texts: VCMODEL setting for the selected covariance structure
- ADDITIVEPREDICTORS = pointers: Additive genetic predictors; must be set
- ADD2PREDICTORS = pointers: Second (paternal) set of additive genetic predictors
- DOMINANCEPREDICTORS = pointers: Dominance genetic predictors
- CHROMOSOMES = factors: Chromosomes corresponding to the genetic predictors; must be set
- POSITIONS = variates: Positions on the chromosomes corresponding to the genetic predictors; must be set
- IDLOCI = texts: Labels for the loci; must be set
- MKLOCI = variates: Logical variate containing the value 1 if the locus is a marker, otherwise 0; must be set
- IDMGENOTYPES = texts: Labels for the genotypes corresponding to the genetic predictors
- IDPARENTS = texts: Labels to identify the parents
- QTLSELECTED = variates: Index numbers of the selected QTLs; must be set
- INTERACTIONS = variates: Logical variate indicating whether each selected QTL has a significant (1) or non-significant (0) QTL-by-trait interaction
- DOMSELECTED = variates: Logical variate indicating whether the dominance predictor of each selected QTL must be present (1) or absent (0) in the model
- DOMINTERACTIONS = variates: Logical variate indicating whether the dominance-by-trait interaction of each selected QTL must be present (1) or absent (0) in the model
- RESIDUALS = variates: Residuals from the analysis
- FITTEDVALUES = variates: Fitted values from the analysis
- WALDSTATISTICS = variates: Saves the Wald test statistics
- PRWALD = variates: Saves the associated Wald probabilities
- DFWALD = variates: Saves the degrees of freedom for the Wald test
- QEFFECTS = pointers: Saves the estimated QTL effects
- QSE = pointers: Saves the standard errors of the QTL effects
- OUTFILENAME = texts: Name of the Genstat workbook file (*.gwb) to be created
- QSAVE = pointers: Saves a pointer with information and results for the significant effects
- SAVE = REML save structures: Save the details of each REML analysis for use in subsequent VDISPLAY and VKEEP directives

QMTESTIMATE fits a final QTL model to estimate QTL effects in a multi-trait trial. The procedure uses means per genotype-trait combination as phenotypic data, but weights can be attached to the means (see the UNITERROR parameter and the UNITFACTOR option below). The response variable must be specified by the Y parameter, and the corresponding trait and genotype factors must be specified by the FTRAITS and GENOTYPES parameters, respectively. The POPULATIONTYPE option must be set to specify the population from which the genotypes are derived. For recombinant inbred lines (POPULATIONTYPE = RIL), the NGENERATIONS option must be set to supply the number of generations. For backcross inbred lines (POPULATIONTYPE = BCxSy), the NBACKCROSSES and NSELFINGS options must be set to define the number of backcrosses to the first parent and the number of selfings, respectively. By default, the values of each trait are standardized by dividing them by their standard deviation, but you can set option STANDARDIZE=none to suppress this. Molecular information must be provided in the form of additive genetic predictors stored in variates and supplied, in a pointer, by the ADDITIVEPREDICTORS parameter. Non-additive effects can be included in the model by specifying dominance genetic predictors using the DOMINANCEPREDICTORS parameter (e.g. in an F2 population). In the case of segregating F1 populations (outbreeders) two sets of additive genetic predictors must be specified: the maternal ones by the ADDITIVEPREDICTORS parameter, and the paternal ones by the ADD2PREDICTORS parameter. The corresponding map information for the genetic predictors must be given by the CHROMOSOMES and POSITIONS parameters. The labels for the loci must be supplied by the IDLOCI parameter, and the labels for the genotypes in the marker data can be supplied by the IDMGENOTYPES parameter.
If IDMGENOTYPES is set, the match between the genotypes in the phenotypic data and in the marker data will be checked. The IDPARENTS parameter can supply labels to identify the parents. The QTL model treats FTRAITS and the QTLs as fixed terms, and GENOTYPES as a random term. The QTLSELECTED parameter must specify the set of QTLs, in the form of a variate containing the index numbers of the positions where the QTLs are located. The INTERACTIONS parameter supplies a logical variate containing zero if a QTL effect is constrained to be constant across traits, and one if it is specific for each trait. When the DOMINANCEPREDICTORS parameter is set, the DOMSELECTED parameter supplies a logical variate containing one if the dominance predictor of the corresponding marker must be present in the model, and zero if it must be absent. If DOMINANCEPREDICTORS is set but DOMSELECTED is not set, all the dominance predictors are included. Similarly, the DOMINTERACTIONS parameter supplies a logical variate containing one if the dominance-by-trait interaction of the corresponding marker must be present in the model, and zero if it must be absent. If DOMINANCEPREDICTORS is set but DOMINTERACTIONS is not set, all the dominance-by-trait interactions are included. Extra fixed effects can be defined by the FIXED option. A multi-Normal distribution, with vector mean 0 and variance-covariance matrix Σ, is assumed for the random genetic effects for the different traits. The VCMODEL option defines the model to use for Σ. The default assumes compound symmetry, but the VGESELECT procedure can be used to assess which model would be most suitable. Initial values for the parameters in the variance-covariance model can be specified by the VCINITIAL parameter.
The VCPARAMETERS option controls whether the variance-covariance parameters are re-estimated at each step of the backward selection (VCPARAMETERS=estimate), or whether they are fixed at the defined initial values (VCPARAMETERS=fix). The VCSELECT option defines whether an extra check is made at each step on the variance-covariance model, to assess whether a simpler model is more suitable than the current model (based on the criterion defined by the CRITERION option). The SELECTEDMODEL parameter stores the final variance-covariance model that is selected. The MVINCLUDE, MAXCYCLE and WORKSPACE options operate in the same way as these options of the REML directive. The UNITERROR parameter allows uncertainty on the trait means (derived from individual unit or plot error) to be specified to include in the random model; by default this is omitted. The UNITFACTOR option allows the factor that is needed to define the unit-error term to be saved (this would be needed, for example, to save information later about the term using VKEEP). The PRINT option specifies the output to be displayed. The summary setting prints the information about the QTLs retained in the model, and the other settings correspond to those in the PRINT option of the REML directive. The QTL effects and their standard errors can be saved, in pointers, by the QEFFECTS and QSE parameters, respectively. These pointers have 2 levels of suffixes: the first level has 1, 2 or 3 values depending on the setting of the 3 possible predictors ADDITIVEPREDICTORS, ADD2PREDICTORS and DOMINANCEPREDICTORS; the second level has as many levels as the number of levels of the FTRAITS factor. The fitted values and residuals can be saved by the FITTEDVALUES and RESIDUALS parameters. The Wald statistics, degrees of freedom and probabilities can be saved by the parameters WALDSTATISTICS, DFWALD and PRWALD, respectively. 
The OUTFILENAME parameter can be used to save the Wald statistics and the QEFFECTS and QSE structures in a Genstat workbook file, in a sheet named STATISTICS. This parameter should not contain an extension, as the extension is defined automatically as .gwb. The QSAVE parameter can be used to save a pointer containing information and results for the significant QTLs. The elements of the pointer are labelled as follows to simplify their subsequent use:
- 'procedure': the string 'QMTESTIMATE', to indicate the source of the results
- 'markernames': marker names
- 'chromosomes': chromosomes
- 'positions': positions
- 'traitnames': names of the traits
- 'waldstatistics': Wald statistics
- 'prwald': probability values of the Wald statistics
- 'dfwald': degrees of freedom of the Wald statistics
- 'qeffects': QTL effects
- 'qse': standard errors of the QTL effects
- '%vexplained': percentage variance explained
- 'lowerci': lower bound of the confidence interval of the estimated QTL position
- 'upperci': upper bound of the confidence interval of the estimated QTL position
- 'posmin': position of the left flanking marker
- 'posmax': position of the right flanking marker
- 'idlfm': marker name of the left flanking marker
- 'idrfm': marker name of the right flanking marker
- 'posminci': position of the left flanking marker outside the confidence interval
- 'posmaxci': position of the right flanking marker outside the confidence interval
- 'idlfmci': marker name of the left flanking marker outside the confidence interval
- 'idrfmci': marker name of the right flanking marker outside the confidence interval
- 'locus': index numbers of the significant QTLs
- 'neff': number of additive and dominance predictors in the model
The elements 'procedure', 'markernames', 'chromosomes', 'traitnames', 'idlfm', 'idrfm', 'idlfmci' and 'idrfmci' are text structures; 'positions', 'waldstatistics', 'prwald' and 'dfwald' are variates; 'qeffects' and 'qse' are pointers (see the QEFFECTS and QSE parameters), as similarly are 'lowerci', 'upperci', 'posmin', 'posmax', 'posminci', 'posmaxci', 'idlfmci' and 'idrfmci'; 'neff' is a scalar. The SAVE parameter can be used to save the REML save structure from the analysis for use with subsequent VKEEP and VDISPLAY directives.

Options: PRINT, POPULATIONTYPE, NGENERATIONS, NBACKCROSSES, NSELFINGS, VCMODEL, VCPARAMETERS, VCSELECT, STANDARDIZE, CRITERION, FIXED, UNITFACTOR, MVINCLUDE, MAXCYCLE, WORKSPACE.

Parameters: Y, GENOTYPES, FTRAITS, UNITERROR, VCINITIAL, SELECTEDMODEL, ADDITIVEPREDICTORS, ADD2PREDICTORS, DOMINANCEPREDICTORS, CHROMOSOMES, POSITIONS, IDLOCI, MKLOCI, IDMGENOTYPES, IDPARENTS, QTLSELECTED, INTERACTIONS, DOMSELECTED, DOMINTERACTIONS, RESIDUALS, FITTEDVALUES, WALDSTATISTICS, PRWALD, DFWALD, QEFFECTS, QSE, OUTFILENAME, QSAVE, SAVE.

QMTESTIMATE fits the following models, which include a set L of QTLs:

1) y[ij] = μ + T[j] + Σ[l∈L] x[il]^add α[jl]^add + GT[ij]
   if only ADDITIVEPREDICTORS are specified

2) y[ij] = μ + T[j] + Σ[l∈L] ( x[il]^add α[jl]^add + x[il]^dom α[jl]^dom ) + GT[ij]
   if DOMINANCEPREDICTORS are also specified

3) y[ij] = μ + T[j] + Σ[l∈L] ( x[il]^add α[jl]^add + x[il]^add2 α[jl]^add2 + x[il]^dom α[jl]^dom ) + GT[ij]
   if both ADD2PREDICTORS and DOMINANCEPREDICTORS are specified (for population type CP)

where y[ij] is the value of trait j for genotype i, T[j] is the trait main effect, x[il]^add are the additive genetic predictors of genotype i for locus l, and α[jl]^add are the associated effects. In models 2 and 3, x[il]^dom are the dominance genetic predictors, and α[jl]^dom are the associated effects. In model 3, x[il]^add are the additive genetic predictors for maternal genotype i at locus l, x[il]^add2 are the additive genetic predictors for paternal genotype i, and α[jl]^add and α[jl]^add2 are the associated effects.
Genetic predictors are genotypic covariables that reflect the genotypic composition of a genotype at a specific chromosome location (Lynch & Walsh 1998). GT[ij] is assumed to follow a multi-Normal distribution with mean vector 0 and a variance-covariance matrix Σ, which can either be modelled explicitly (with an unstructured model) or by some parsimonious model (defined by option VCMODEL), as described in the VGESELECT procedure.

Restrictions are not allowed.

Lynch, M. & Walsh, B. (1998). Genetics and Analysis of Quantitative Traits. Sinauer Associates, Sunderland, MA.

See also Procedures: QMTBACKSELECT, QMTQTLSCAN, QMVAF, QFLAPJACK, QREPORT, VGESELECT. Commands for: Statistical genetics and QTL estimation.

Example:
CAPTION 'QMTESTIMATE example'; STYLE=meta
SPLOAD [PRINT=*] '%GENDIR%/Examples/F2maize_traits.gsh'
& '%GENDIR%/Examples/F2maizemarkers.GWB'; SHEET='LOCI'
& '%GENDIR%/Examples/F2maizemarkers.GWB'; SHEET='ADDPREDICTORS'
& '%GENDIR%/Examples/F2maizemarkers.GWB'; SHEET='DOMPREDICTORS'
POINTER [MODIFY=yes; NVAL=idlocus] addpred
POINTER [MODIFY=yes; NVAL=idlocus] dompred
" append the traits "
SUBSET [E.EQ.6] G,asi,eno,mflw,ph,yld
APPEND [NEWVECTOR=y; GROUPS=ftraits] asi,eno,mflw,ph,yld
APPEND [NEWVECTOR=G] 5(G)
" Best variance-covariance model from VGESELECT "
TEXT model; VALUE='fa'
" Candidate QTL positions from QMTBACKSELECT "
VARIATE [VALUES=17,18,72,102,154,192,206,220,237] Qid
VARIATE [VALUES=1,1,1,1,1,1,1,1,1] Int
VARIATE [VALUES=1,1,1,1,1,1,1,1,1] Dom
VARIATE [VALUES=1,0,1,0,0,0,0,0,0] DomInt
QMTESTIMATE [PRINT=summ,model,wald,eff; POPULATIONTYPE=F2; VCMODEL=#model]\
  Y=y; GENOTYPES=G; FTRAITS=ftraits;\
  CHROMOSOMES=mkchr; POSITIONS=mkpos; MKLOCI=marker;\
  IDLOCI=idlocus; ADDITIVEPREDICTORS=addpred;\
  QTLSELECTED=Qid; INTERACTIONS=Int;\
  DOMSELECTED=Dom; DOMINTERACTIONS=DomInt;\
  QEFF=Qeff; QSE=Qse; QSAVE=Output;\
{"url":"https://genstat.kb.vsni.co.uk/knowledge-base/qmtestim/","timestamp":"2024-11-07T13:04:21Z","content_type":"text/html","content_length":"60573","record_id":"<urn:uuid:fa6cd13e-ffba-4376-85fa-60c1b88da451>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00354.warc.gz"}
Optimising Spam Filters

Optimising Your Spam Filter

In the following discussion we will generally follow the development of my ideas in the order of occurrence, that is, first in first out. There may be better ways to present the matter, but I think it is easier to follow my ideas this way. As an exception, ideas based on comments about this text will be introduced later, out of order. Because I got a comment that I am totally wrong in the first part, I will add a small example not related to spam.

The Problem of Predicting

This chapter was inserted later to give an example which doesn't use the word spam. Let us assume you found a gift shop with surprise gifts. Because you are tremendously rich, you bought 1000 gifts for your girlfriend. You made a statistic and found out that 100 gifts were packed in red paper. Your girlfriend disliked 90 of them and was only happy about 10. But your parcels aren't only single-colored; they are multi-colored. To save space and processing time you didn't count the results for combinations of two or more colors. If a parcel is blue, red and green, you count the result "girlfriend happy / unhappy" for each color separately. And you found that out of 400 green parcels, 390 made your girlfriend happy and 10 did not.

Next time the nice shop-girl offers you a red parcel. What's the probability that your girlfriend will be happy about the present? Based on your statistical data, 10% [ = 10/(90+10) ] is a good estimate of this probability. You should ask the nice girl behind the counter for another present, packed in a different color.

This example resembles our spam problem. The spam filter can only see the paper. We are looking for good indicators which tell us, from parts of the message, whether it contains spam. After you rejected the red parcel, the nice shop-girl offered you a red-green parcel. What is the probability that your girlfriend will be happy about the parcel? Unfortunately you have no statistical data about pairs of colors.
You can't build a new statistic because your girlfriend removed all the packing. I will come back to this problem later on. To prepare the way, we will first have a look at the task of gathering the statistical data.

The Best Ham to Spam Ratio for Bayesian Filters

When using Bayesian filters for spam detection, two questions are frequently raised: What is the best amount of ham and spam that should be trained? And what is the best ratio between ham and spam? One frequently repeated myth is the 1:1 ratio. I read an article claiming the best ratio is 1:1, supported by a test and later derived from the Bayes theorem. Unfortunately I didn't copy this article, and I can't remember enough to find it by googling. The problem is: the conclusion of the article is wrong.

What I will try to show in the next steps - which unfortunately require a little bit of math - is: Train your Bayesian filter in accordance with your real spam to ham ratio, and train as much as possible. But never train too little ham, and never train only spam!

Off we go into the Bayes theorem. In short, the Bayes theorem says

  p(spam|token) = p(token|spam)*p(spam)/p(token)

This means: the probability of a message being spam under the condition that a token is in the message is equal to the probability that the token is contained in a spam message, multiplied by the probability of a message being spam, divided by the probability that a message contains the token.

If you have received s spam messages and h ham messages, where the token is in S spams and in H ham messages, then you get:

  s   = number of spam messages
  h   = number of ham messages
  S   = number of spam messages containing the token
  H   = number of ham messages containing the token
  s+h = total number of messages
  S+H = total number of messages containing the token

Thus p(spam) = s/(s+h) is an approximation of the probability of a random message being spam.
Similarly:

  p(token)      = (S+H)/(s+h)
  p(token|spam) = S/s

From this follows

  p(spam|token) = S/s * s/(s+h) / ((S+H)/(s+h)) = S/(S+H)

This means that the probability of a given message containing the token being spam is independent of the total number of messages trained.

Before we look at some examples, let us compare this result with the spam probability Paul Graham proposed in "A Plan for Spam". Paul Graham multiplied the ham count by 2 to avoid false positives: p(S) = S/(S+2H). But he restricted the minimum and maximum probability to 0.01 and 0.99. If a token was never found in a ham message, he forced a ham count of 1. He required that S+2H > 5 to avoid high spam probabilities on low data. For tokens never seen before he assumed a spam probability of 0.4, based on experience. This means he left the true statistical way for a heuristic approach. If you take only tokens into account for which you have enough data, you can avoid the multiplying by 2 and use the statistical value of S/(S+H). Therefore I will not follow Paul Graham's way, and I will allow a ham count of 0.

Our equation p(S) = S/(S+H) shows that your predicted spam probability will rise if you count only part of your ham messages, and will sink if you count only part of the spam messages. (Ignoring the boundary conditions, this is also true for Paul Graham's spam probability.)

Now to the example, based on one token. How much ham and spam should we train to get good statistical data for one token? And what happens if we train less?

Let's assume your real spam to ham ratio is 10 to 1 and your message corpus contains 1100 messages. 100 spam messages and 50 ham messages contain a certain token, say "honey". We will train all messages.

│100%    │Overall│Trained│"honey"│"honey"│
│trained │       │       │       │trained│
│Ham     │    100│    100│     50│     50│
│Spam    │   1000│   1000│    100│    100│
│Sum     │   1100│   1100│    150│    150│

You will get a probability of 100/(100+50) = 66.6% for the next message containing the token of being spam. This isn't a high probability, but it works fine for this example.
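The derivation above can be checked numerically with the example counts. The helper below is my own (not from the original text); it computes p(spam|token) both via Bayes' theorem and via the direct count ratio S/(S+H), using the counts from the "honey" table:

```python
def bayes_token_prob(s, h, S, H):
    """Check Bayes' theorem against the direct count ratio:
    p(spam|token) = p(token|spam) * p(spam) / p(token) must equal S/(S+H)."""
    p_spam = s / (s + h)                 # fraction of messages that are spam
    p_token = (S + H) / (s + h)          # fraction of messages with the token
    p_token_given_spam = S / s           # fraction of spams with the token
    via_bayes = p_token_given_spam * p_spam / p_token
    direct = S / (S + H)
    return via_bayes, direct

# 1000 spams, 100 hams; "honey" in 100 spams and 50 hams.
via_bayes, direct = bayes_token_prob(s=1000, h=100, S=100, H=50)
print(round(via_bayes, 3), round(direct, 3))   # 0.667 0.667
```

Both routes give the same 66.6%, which is the point the text makes: the total number of messages trained cancels out.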
Now we will train only 10% of our spam, to get a spam to ham ratio of 1:1. Presumably only 10 of the trained spam messages will contain the token; the remaining 90 messages are not trained.

│10% spam│Overall│Trained│"honey"│"honey"│
│trained │       │       │       │trained│
│Ham     │    100│    100│     50│     50│
│Spam    │   1000│    100│    100│     10│
│Sum     │   1100│    200│    150│     60│

This leads to a spam probability of only 10/(10+50) = 16.6%, which is lower than before. When you train less spam, you get a lower spam probability for a message. In other words: your false negative rate will rise.

What happens when you train less ham? Let's assume you train only 50% of your ham but all your spam. Presumably only 25 of the trained ham messages will contain the token.

│50% ham │Overall│Trained│"honey"│"honey"│
│trained │       │       │       │trained│
│Ham     │    100│     50│     50│     25│
│Spam    │   1000│   1000│    100│    100│
│Sum     │   1100│   1050│    150│    125│

This leads to a spam probability of 100/(100+25) = 80%, the highest so far. When you train less ham, you get a higher spam probability for a message. In other words: your false positive rate will rise.

BTW: The overall spam probability without any test is 1000/1100 = ~91%.

What happens when your ham to spam ratio is 10 to 1?

│100%    │Overall│Trained│"honey"│"honey"│
│trained │       │       │       │trained│
│Ham     │   1000│   1000│    100│    100│
│Spam    │    100│    100│     50│     50│
│Sum     │   1100│   1100│    150│    150│

=> 50/(50+100) = 33.3%

Now let's train only 50% of our spam.

│50% spam│Overall│Trained│"honey"│"honey"│
│trained │       │       │       │trained│
│Ham     │   1000│   1000│    100│    100│
│Spam    │    100│     50│     50│     25│
│Sum     │   1100│   1050│    150│    125│

=> 25/(25+100) = 20.0%

Now let's train only 10% of our ham.

│10% ham │Overall│Trained│"honey"│"honey"│
│trained │       │       │       │trained│
│Ham     │   1000│    100│    100│     10│
│Spam    │    100│    100│     50│     50│
│Sum     │   1100│    200│    150│     60│

=> 50/(50+10) = 83.3%

A last example. What happens when your ham to spam ratio is extreme, say 1:100?
│100%    │Overall│Trained│"honey"│"honey"│
│trained │       │       │       │trained│
│Ham     │    100│    100│     50│     50│
│Spam    │  10000│  10000│    100│    100│
│Sum     │  10100│  10100│    150│    150│

=> 100/(100+50) = 66.6%

Every second ham message contains the token, but only one in 100 spam messages. Still, if you get a message with the token, it is spam in 2 of 3 cases! What the Bayesian filter says is: I got a message which contains a special token, and in the past such messages were spam 2 out of 3 times.

What happens when you train only 100 spam messages, to get a ratio of 1:1?

│1% spam │Overall│Trained│"honey"│"honey"│
│trained │       │       │       │trained│
│Ham     │    100│    100│     50│     50│
│Spam    │  10000│    100│    100│      1│
│Sum     │  10100│    200│    150│     51│

=> 1/(50+1) = 1.9%

The Bayesian filter will now say that the message is spam with a probability of 1.9% and ham with 98.1%. The filter is useless: it declares everything as ham. If the ratio is reversed, it declares everything as spam.

If you train too little spam, you will get a higher false negative rate; if you train too little ham, you will get a higher false positive rate. Because a false positive is more harmful than a false negative, my final conclusion is:

Train your Bayesian filter in accordance with your real spam to ham ratio. And never train too little ham, and never train only spam!

BTW: To avoid false positives, Paul Graham in "A Plan for Spam" multiplied his token counts for ham by 2. (I remember having written this above.)

Another lesson should be: Never train whitelisted mails as ham!

This is true for one token. It should be true for any token, and I therefore assume it is true for all tokens.

Spam probability when two or more rules or tokens hit

!!! The following part is under development. It may contain errors. At the moment I assume that it contains at least one error. Comments are very welcome; they may save me the time to find the errors. I don't mean typos. !!!
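To make the effect of partial training explicit, here is a small illustrative helper (my own, not from the text). It scales the per-token counts by the trained fractions, which is the assumption behind the tables above, and reproduces the 66.6%, 16.6% and 80% figures from the 10:1 spam-to-ham example:

```python
def trained_token_probability(S, H, frac_spam=1.0, frac_ham=1.0):
    """Expected p(spam|token) = S_t/(S_t + H_t) when only a fraction of
    each class is trained; token counts scale proportionally with the
    trained fraction."""
    s_t = S * frac_spam   # expected trained spams containing the token
    h_t = H * frac_ham    # expected trained hams containing the token
    return s_t / (s_t + h_t)

# "honey" from the tables: in 100 of 1000 spams and 50 of 100 hams.
print(round(trained_token_probability(100, 50), 3))                 # 0.667 (train everything)
print(round(trained_token_probability(100, 50, frac_spam=0.1), 3))  # 0.167 (force a 1:1 ratio)
print(round(trained_token_probability(100, 50, frac_ham=0.5), 3))   # 0.8   (skip half the ham)
```

Under-training spam drags every token probability down (more false negatives); under-training ham pushes it up (more false positives), exactly as the tables show.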
After we have seen what happens to the spam probability of a message hit by a single token, the question arises: what is the spam probability of a message where two or more tokens hit? You can find a simple answer from Paul Graham (http://www.paulgraham.com/naivebayes.html) and a deeper discussion at www.mathpages.com (http://www.mathpages.com/home/kmath267.htm). The problem discussed on the mathpages is similar to Bayesian filters. Only similar.

Before we continue, let us have a quick look at the differences between rule-based spam detection and Bayesian filters.

Bayesian versus rule-based filtering:

- Bayesian: All mails are automatically divided into tokens, which are counted and stored in a database.
  Rules: Rules are written by humans, based mostly on spam, and combined in a ruleset.
- Bayesian: A token is a pattern which occurred in previous messages.
  Rules: A rule is a regular expression which fits a generalisation of patterns found in previous messages and may take care of future obfuscations (viagra, v1agr@, ...). Rules can check for a special form of the message (HTML, attachments, ...). A rule can also be an "external" query to a blacklist or whitelist.
- Bayesian: A spam probability is assigned to every token based on the counts.
  Rules: A score is assigned to a rule or a combination of rules, based on mass-checks or a certain feeling.
- Bayesian: Tokens hit spam and ham in accordance with their probability.
  Rules: Rules are designed to hit nearly no ham.

When you do a mass-check, you get a statistic of how many spam and ham messages were hit by your rules. There is no difference to the Bayesian filter database. And from these statistics you can deduce that a message is spam with x% probability when the rule hits. For spam detection, only rules with a very high probability for spam are selected. You can transform your Bayesian filter into a ruleset by generating a rule for every token.
To keep it short: Bayesian filters are "automated ruleset generators" that calculate the scores from the probability of hits. Therefore I will not differentiate between tokens and rules. Instead of token T, from now on I will speak about test T.

Before we go into the details, I will make some remarks about the notation used. For easy reading I have to modify the notation used above a little bit. Capital letters A, B, C ... will indicate an event. S will indicate the event of a message being spam, H of being ham. T[i] is the event that token T[i] is in a message. Lower case letters a, b, c ... will indicate the number of times an event A, B, C ... has been counted in our statistical data. In case of conditional probability I will use [] to group the expression: [a|B] is the number of events A where B has been realised. For easy reading we define s[i] as the spam count and h[i] as the ham count of token T[i]. p(A) is the probability of event A. p(B|A) is the probability of event B when event A has been realised.

Our statistical data gives us the following probabilities:

│Probability│Meaning                                                              │Approximation               │
│p(S)       │probability of a message being spam                                  │s/(s+h)                     │
│p(H)       │probability of a message being ham                                   │h/(s+h)                     │
│p(T[i])    │probability of a message containing token T[i]                       │t[i]/(s+h)                  │
│p(H|T[i])  │probability of a message being ham when token T[i] is in the message │[h|T[i]]/([s|T[i]]+[h|T[i]])│
│           │                                                                     │or h[i]/(s[i]+h[i])         │
│p(S|T[i])  │probability of a message being spam when token T[i] is in the message│[s|T[i]]/([s|T[i]]+[h|T[i]])│
│           │                                                                     │or s[i]/(s[i]+h[i])         │

Now let us formalize our question: what is the probability of a message being spam when tests T[1] and T[2] are true? In our notation we seek p(S|T[1];T[2]), or more generally p(S|T[1];..;T[n]) with n > 1, n in N.

What can we do?

1. p(S;T[1];T[2]) = p(T[1])*p(T[2]|T[1])*p(S|T[1];T[2])

p(S|T[1];T[2]) is what we are looking for. Unfortunately we can only get p(T[1]) from our data.
We ignored the fact that two or more tokens occurred in a message. Our equation is underspecified. There may be another possibility:

2. p(S;T[1];T[2]) = p(T[1])*p(S|T[1])*p(T[2]|T[1];S)

The probability p(S|T[1]) is in our data, but not p(T[2]|T[1];S). So we have 2 equations with 4 unknowns. We need more equations to solve the problem.

3. p(S;T[1];T[2]) = p(T[2])*p(S|T[2])*p(T[1]|T[2];S)

The probability p(S|T[2]) is in our data, but not p(T[1]|T[2];S). Now we have 3 equations with 5 unknowns. The system is still underspecified. To make it short: we lost data and there is no way to get it back. In this case we can make some assumptions (we need 2) to get rid of the unknowns.

First assumption (T[1] and T[2] are independent): p(T[1]|T[2]) = p(T[1]) and p(T[2]|T[1]) = p(T[2]). It follows:

p(T[1];T[2]) = p(T[1]|T[2])*p(T[2]) = p(T[1])*p(T[2])

Is this a good assumption? Maybe with two public-opinion polls. But with spam? Let T[1] be the occurrence of "rolex" and T[2] the occurrence of "watch" in a mail. I assume that p(T[2]|T[1]) is nearly 1. Before we proceed, let's have a look at what happens if p(T[2]|T[1]) = 1. This means: if T[1] occurs, then T[2] will also occur. With p(¬T[2]|T[1]) = 1 - p(T[2]|T[1]) = 1 - 1 = 0, the probability p(S;¬T[2];T[1]) = p(S|¬T[2];T[1])*p(¬T[2]|T[1])*p(T[1]) = 0. There is no spam where T[1] occurs but not T[2]. If p(T[2]|T[1]) = 1 then p(T[2]|T[1];S) = 1. With equation 2 we get

p(S;T[1];T[2]) = p(T[1])*p(S|T[1])*p(T[2]|T[1];S) = p(T[1])*p(S|T[1])*p(T[2]|T[1]) = p(T[1])*p(S|T[1])

With equation 1 we get

p(T[1])*p(S|T[1]) = p(T[1])*p(T[2]|T[1])*p(S|T[1];T[2])
p(S|T[1]) = p(S|T[1];T[2])

This means that T[2] does not influence the spam probability if T[1] occurs. It does not mean that test T[2] is useless: we did not assume that p(T[1]|T[2]) = 1. But if p(T[1]|T[2]) = 1 as well, then p(S|T[1];T[2]) = p(S|T[2]) = p(S|T[1]). If two tests perfectly correlate with each other, we get no better prediction when we combine them. One of them is useless.
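The perfect-correlation argument can be checked numerically. The joint counts below are invented for illustration; the rows are chosen so that T[1] never occurs without T[2]:

```python
# rows: (spam?, T1 present?, T2 present?, message count).
rows = [
    (True,  True,  True,  30),
    (True,  False, True,  10),
    (True,  False, False, 20),
    (False, True,  True,   5),
    (False, False, True,  15),
    (False, False, False, 20),
]

def p_spam_given(cond):
    total = sum(c for spam, t1, t2, c in rows if cond(t1, t2))
    hits = sum(c for spam, t1, t2, c in rows if spam and cond(t1, t2))
    return hits / total

# T2 always accompanies T1 here, so conditioning on T2 in addition to T1
# changes nothing: p(S|T1;T2) = p(S|T1).
assert p_spam_given(lambda t1, t2: t1) == p_spam_given(lambda t1, t2: t1 and t2)
```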
We will leave this path and go back to our previous assumption.

First assumption (T[1] and T[2] are independent): p(T[1]|T[2]) = p(T[1]) and p(T[2]|T[1]) = p(T[2]). This helps a bit, because we can eliminate one unknown from our equations.

Second assumption (T[1] and T[2] are independent also under the condition S): p(T[1]|T[2];S) = p(T[1]|S) and p(T[2]|T[1];S) = p(T[2]|S).

Now we get:

1a. p(S;T[1];T[2]) = p(T[1])*p(T[2])*p(S|T[1];T[2])
2a. p(S;T[1];T[2]) = p(T[1])*p(S|T[1])*p(T[2]|S)
3a. p(S;T[1];T[2]) = p(T[2])*p(S|T[2])*p(T[1]|S)

to be continued.
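The article stops at 1a–3a; as a sketch of where they lead (my illustration, not the article's continuation): equating 1a with 2a gives p(S|T[1];T[2]) = p(S|T[1])*p(T[2]|S)/p(T[2]); applying Bayes' rule p(T[2]|S) = p(S|T[2])*p(T[2])/p(S) and normalising over the spam and ham alternatives yields the familiar combination rule. p_s = 0.5 is Paul Graham's simplifying prior:

```python
# Combine two per-test spam probabilities under the independence assumptions.
def combine(p_s_t1, p_s_t2, p_s=0.5):
    spam_side = p_s_t1 * p_s_t2 / p_s
    ham_side = (1 - p_s_t1) * (1 - p_s_t2) / (1 - p_s)
    return spam_side / (spam_side + ham_side)
```

Two tests at 0.9 each combine to about 0.988, while two uninformative tests at 0.5 stay at 0.5.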
{"url":"https://wiki.byggvir.de/index.php?title=Optimising_Spam_Filters&oldid=239","timestamp":"2024-11-15T00:56:22Z","content_type":"text/html","content_length":"45859","record_id":"<urn:uuid:9e40f2a7-f4ba-4b09-9401-d515ee40cc10>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00464.warc.gz"}
The Numerical Simulation of Aerodynamic Noise Generated by CRH3 Train’s Head Surface

1. Introduction

At present, domestic and foreign research on the aerodynamic noise of high-speed trains has produced a number of results. Applying the method of large eddy simulation combined with the boundary element method, T. Sassa analyzed the distribution of dipole noise sources on the surface of the high-speed train body [1]. Using a low-noise wind tunnel, other researchers studied the aerodynamic noise generated at the bogie, the inter-car gap, the skirt and other positions of the high-speed train; from the related experimental analysis they concluded that the aerodynamic noise is caused by the fluctuating pressure, and they presented effective schemes to reduce it [2].

According to research and experiments on railway noise theory, railway noise is mainly composed of three parts: traction noise, wheel/rail noise and aerodynamic noise [3]. Significant progress has been made in reducing wheel/rail noise by means of damping treatment or dynamic vibration absorption at the rail and wheel. Analyses of the acoustic crossover speed show that, once the wheel/rail noise is effectively controlled, the aerodynamic noise becomes the main part of the railway noise when the train speed reaches 250 km/h.

Based on Lighthill–Curle theory, this paper uses the large eddy simulation (LES) method to perform a numerical simulation of the three-dimensional flow around a CRH3 train model. The aerodynamic noise generated by the head of the CRH3 train is thus predicted, and the generation and distribution of the train's aerodynamic noise are analyzed, so as to provide reasonable suggestions for the design of the train body.

2. The Numerical Simulation Method of Aerodynamic Noise

2.1.
The Large Eddy Simulation (LES) Control Equations

Filtering the Navier–Stokes equations in wave-number space or in physical space removes the eddies narrower than the filter width. For incompressible flow this gives the LES control equations [4]:

∂ū_i/∂x_i = 0,   (1)
∂ū_i/∂t + ∂(ū_i ū_j)/∂x_j = −(1/ρ) ∂p̄/∂x_i + ∂/∂x_j (ν ∂ū_i/∂x_j − τ_ij),   (2)

where the overbar (−) denotes spatial filtering, ρ is the density, ν is the kinematic viscosity, and τ_ij is the subgrid-scale stress tensor.

2.2. The Aerodynamic Noise Control Equation

Based on the N–S equations and the continuity equation, Lighthill derived the acoustic propagation equation [5]:

∂²ρ′/∂t² − c₀² ∇²ρ′ = ∂²T_ij/(∂x_i ∂x_j),   (3)

where ρ′ is the density fluctuation, c₀ the speed of sound, and T_ij the Lighthill stress tensor. When a solid wall boundary exists in the unsteady flow region, the solution of Equation (3) is derived [6]:

p′(x, t) = (1/4π) ∂²/(∂x_i ∂x_j) ∫_V [T_ij/r] dV − (1/4π) ∂/∂x_i ∮_S [P_ij n_j/r] dS,   (4)

where the square brackets denote evaluation at the retarded time, P_ij is the stress on the surface, n_j the surface normal, and r the distance from the source to the observer.

Equation (4) contains two kinds of noise sources:

1) One kind of noise source is the surface dipole term of the Lighthill stress in the flow field around the object; the dipole source noise is proportional to the third power of the Mach number.

2) The other kind of noise source is the volume quadrupole term, involving the pressure and the viscous shear stresses; the quadrupole source noise is proportional to the fifth power of the Mach number.

Hence, relative to the dipole source, the noise of the quadrupole source is proportional to the square of the Mach number. The speeds of the high-speed train are 300 km/h and 350 km/h, for which the Mach numbers are 0.24 and 0.28, respectively. Since the noise of the quadrupole source is relatively small, it can be neglected. Therefore, the formula for the aerodynamic noise of the high-speed train is:

p′(x, t) = −(1/4π) ∂/∂x_i ∮_S [P_ij n_j/r] dS.   (5)

3. The Basic Steps of the Aerodynamic Noise Numerical Calculation

3.1. The Numerical Calculation of the Train Model

It is hard for a computer to carry out a large eddy simulation of the whole-train model [7]. Since the middle cross-section of the train is roughly uniform, after the air has flowed a certain distance from the train head the structure of the flow boundary layer tends to become stable, and the aerodynamic forces on the train also tend to become stable. On the basis of the above analysis, a two-car model consisting of the head car and the tail car is selected.
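The Mach numbers quoted in Section 2.2 follow from Ma = v/c; a quick check (speed of sound taken as 343 m/s, an assumption consistent with sea-level air):

```python
# Convert a train speed in km/h to a Mach number.
def mach_number(speed_kmh, c=343.0):
    return (speed_kmh / 3.6) / c  # km/h -> m/s, then divide by sound speed

assert round(mach_number(300), 2) == 0.24
assert round(mach_number(350), 2) == 0.28
```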
Taking into account the influence of the wiper position on the aerodynamic noise of the train front, the geometric model preserves the structural features of the wiper, as shown in Figure 1.

3.2. The Numerical Calculation Region

In the numerical simulation, in order to avoid any influence of the outlet section on the flow around the EMU, the length of the computational domain should be chosen so that the outlet lies well downstream of the train tail [8]. The CRH3 train length is l = 41.4 m, the height h = 3.89 m and the width w = 3.3 m; the entrance length of the calculation region is l1 = 70 m > l and the exit length is l2 = 130 m ≈ 3l. The height of the calculation region is H = 80 m > 20h and the width is W = 120 m, as shown in Figure 2.

Figure 1. The calculation model of CRH3.

3.3. The Grid of the Numerical Calculation Model

Because of complex structures such as the windshield wipers on the train's head, the surface shape of the CRH3 train's head is irregular. This paper applies an unstructured tetrahedral grid to mesh the train body. After meshing, the grid near the train is denser than elsewhere, and the grid transitions uniformly from the train body to the outer region with a fixed growth factor, varying from dense to sparse. The minimum grid size is 0.004 m; the grid of the longitudinal section is shown in Figure 3.

3.4. The Setting of Boundary Conditions

Since the speed of the train is not less than 300 km/h, two calculation methods can be used: a direct transient calculation, or a transient calculation that takes the steady-state result as its initial value. At the same time step, the first method costs roughly twice the computation time of the second. Considering the above, it is better to take the method of transient calculation based on the steady result as the initial value. The boundary conditions are shown in Table 1.

4.
The Fluctuating Pressure Simulation of the Head Surface

When the speed of the CRH3 is 350 km/h, the pressure distribution of the front surface is obtained as shown in Figure 4(a); when the speed is 300 km/h, it is obtained as shown in Figure 4(b).

4.1. The Result of the Fluctuating Pressure Simulation

Flow separation occurs easily at positions where the surface curvature changes sharply, and the resulting flow disturbances become very complex. The monitoring points are therefore set at positions where the surface curvature is most prominent, as shown in Figure 5.

With the CRH3 train traveling at 300 km/h and at 350 km/h, the simulation yields the relationship between the fluctuating pressure at each monitoring point and the traveling time, as shown in Figure 6. The fluctuation range and the fluctuation amplitude of the front surface pressure at 300 km/h and at 350 km/h are recorded in Table 2.

According to the pressure on the train head surface, the following conclusions can be drawn:

1) Since the surface curvature changes at monitoring points 1 and 2 are larger, the fluctuating pressure and the fluctuation range at these two positions are relatively larger.

Figure 3. The grid of the longitudinal section.
Table 1. Boundary conditions setting.
Figure 4. The front surface pressure distribution. (a) 350 km/h; (b) 300 km/h.
Table 2. The time fluctuating pressure of monitoring points.
Figure 5. The front surface monitoring points setting.
Figure 6. The time fluctuating pressure of some monitoring points. (a) 350 km/h; (b) 300 km/h.

2) The degree to which the train disturbs the surrounding flow field changes with the train speed: the greater the speed, the greater the amplitude of the fluctuating pressure.
3) When the velocity is 300 km/h and 350 km/h respectively, the ratio of the maximum pressure fluctuations corresponds approximately to the square of the speed ratio.

To sum up, the greater the change of curvature of the front surface, the greater the flow disturbance, and correspondingly the larger the pressure fluctuation and its range; the amplitude of the fluctuating pressure is approximately proportional to the square of the running speed.

4.2. The Spectral Analysis of the Fluctuating Pressure of the Front Surface

At low Mach number, the main noise source is the dipole. Since the running speeds of the train differ, the fluctuating pressure at the monitoring points is converted into a sound pressure level spectrum by the Fast Fourier Transform (FFT), as shown in Figure 7. Using the total sound pressure formula to combine the sound pressure levels at the different frequencies, the total sound pressure level of each monitoring point is calculated, as shown in Table 3.

Figure 7. The spectrum of some monitoring points. (a) 350 km/h; (b) 300 km/h.
Table 3. The total sound pressure level of monitoring points.

According to the spectral analysis obtained from the FFT of the fluctuating pressure, the following conclusions can be drawn:

1) On the surface of the train head the air flow separates easily, and the turbulent regions readily generate aerodynamic noise.
The frequency band of the aerodynamic noise is wide, and no dominant frequency is obvious;

2) With the increase of the running speed of the train, the aerodynamic noise at every frequency changes more dramatically, and the distribution of the spectrum becomes finer;

3) In the low-frequency region, the sound pressure level and the sound pressure level density at the monitoring points are higher; in the high-frequency region, they are lower. Therefore, the energy of the aerodynamic noise is relatively large in the low-frequency part and relatively small in the high-frequency part.

5. Conclusions

Based on Lighthill–Curle theory, this paper used the large eddy simulation (LES) method to perform a numerical simulation of the three-dimensional flow around a CRH3 train model. The aerodynamic noise generated by the head of the CRH3 train was thereby predicted, and the generation and distribution of the train's aerodynamic noise were analyzed. The results show that:

1) With the increase of train speed, the fluctuating pressure amplitude at every point on the front surface increases, and the pressure fluctuation spectrum becomes finer;

2) The frequency band of the fluctuating pressure is very wide, with no obvious dominant frequency. At low frequencies the amplitude of the fluctuating pressure is large; at high frequencies it decreases approximately according to a negative exponential law;

3) The energy of the aerodynamic noise is relatively large in the low-frequency part and relatively small in the high-frequency part.

In order to effectively decrease the aerodynamic noise of the high-speed train, it is advisable to minimize concave-convex surfaces in the body design, replacing them with streamlined, smoothly transitioning surfaces.

The work was supported by grant No. 06029824 of the National Science Foundation Program of Guangdong province of China.
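The "total sound pressure formula" referenced in Section 4.2 is not reproduced in the text; the standard energetic summation of band levels L_i (in dB) is L_total = 10·log10(Σ 10^(L_i/10)), shown here as an assumption about which formula the authors used:

```python
import math

# Combine per-frequency sound pressure levels (dB) into an overall level.
def total_spl(levels_db):
    return 10.0 * math.log10(sum(10.0 ** (level / 10.0) for level in levels_db))
```

Two equal 90 dB bands combine to about 93 dB, the familiar +3 dB rule for doubling acoustic energy.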
{"url":"https://scirp.org/journal/paperinformation?paperid=59239","timestamp":"2024-11-08T14:21:10Z","content_type":"application/xhtml+xml","content_length":"99258","record_id":"<urn:uuid:b2178139-c1fc-4711-9e62-759d5b465eb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00805.warc.gz"}
Fundraising Discount Cards Pricing

How Is Mission Discount Cards Profitable For You?

We’ll start with some basic numbers. On average, discount cards sell at one of three price points: $10, $15, $20. Each organization is different and prices them accordingly.

Need More Than 1,000 Cards?

Are you looking to bulk-order your next card fundraiser? Contact us for discounted prices when ordering more than 1,000 cards.

Start Fundraising With A Card People Will Use Every Day

Mission Discount Cards provide discounts at businesses that everyone shops at. We work with many national chains and with local merchants to provide a solid range of discounts.
{"url":"https://www.missiondiscountcards.com/fundraising-discount-cards-pricing/","timestamp":"2024-11-02T12:22:29Z","content_type":"text/html","content_length":"130938","record_id":"<urn:uuid:67ee02e3-6d94-4f75-b150-29b3f2b65468>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00347.warc.gz"}
University Physics Volume 1

Learning Objectives

By the end of this section, you will be able to:

• Explain the difference between sound and hearing
• Describe sound as a wave
• List the equations used to model sound waves
• Describe compressions and rarefactions as they relate to sound

The physical phenomenon of sound is a disturbance of matter that is transmitted from its source outward. Hearing is the perception of sound, just as seeing is the perception of visible light. On the atomic scale, sound is a disturbance of atoms that is far more ordered than their thermal motions. In many instances, sound is a periodic wave, and the atoms undergo simple harmonic motion. Thus, sound waves can induce oscillations and resonance effects (Figure). This video shows waves on the surface of a wine glass, being driven by sound waves from a speaker. As the frequency of the sound wave approaches the resonant frequency of the wine glass, the amplitude and frequency of the waves on the wine glass increase. When the resonant frequency is reached, the glass shatters.

A speaker produces a sound wave by oscillating a cone, causing vibrations of air molecules. In (Figure), a speaker vibrates at a constant frequency and amplitude, producing vibrations in the surrounding air molecules. As the speaker oscillates back and forth, it transfers energy to the air, mostly as thermal energy. But a small part of the speaker’s energy goes into compressing and expanding the surrounding air, creating slightly higher and lower local pressures. These compressions (high-pressure regions) and rarefactions (low-pressure regions) move out as longitudinal pressure waves having the same frequency as the speaker—they are the disturbance that is a sound wave. (Sound waves in air and most fluids are longitudinal, because fluids have almost no shear strength. In solids, sound waves can be both transverse and longitudinal.)
(Figure)(a) shows the compressions and rarefactions, and also shows a graph of gauge pressure versus distance from a speaker. As the speaker moves in the positive x-direction, it pushes air molecules, displacing them from their equilibrium positions. As the speaker moves in the negative x-direction, the air molecules move back toward their equilibrium positions due to a restoring force. The air molecules oscillate in simple harmonic motion about their equilibrium positions, as shown in part (b). Note that sound waves in air are longitudinal, and in the figure, the wave propagates in the positive x-direction and the molecules oscillate parallel to the direction in which the wave propagates.

Models Describing Sound

Sound can be modeled as a pressure wave by considering the change in pressure from average pressure,

[latex] \text{Δ}P=\text{Δ}{P}_{\text{max}}\text{sin}(kx\mp \omega t+\varphi ). [/latex]

This equation is similar to the periodic wave equations seen in Waves, where [latex] \text{Δ}P [/latex] is the change in pressure, [latex] \text{Δ}{P}_{\text{max}} [/latex] is the maximum change in pressure, [latex] k=\frac{2\pi }{\lambda } [/latex] is the wave number, [latex] \omega =\frac{2\pi }{T}=2\pi f [/latex] is the angular frequency, and [latex] \varphi [/latex] is the initial phase. The wave speed can be determined from [latex] v=\frac{\omega }{k}=\frac{\lambda }{T}. [/latex]

Sound waves can also be modeled in terms of the displacement of the air molecules. The displacement of the air molecules can be modeled using a cosine function:

[latex] s(x,t)={s}_{\text{max}}\text{cos}(kx\mp \omega t+\varphi ). [/latex]

In this equation, s is the displacement and [latex] {s}_{\text{max}} [/latex] is the maximum displacement.

Not shown in the figure is the amplitude of a sound wave as it decreases with distance from its source, because the energy of the wave is spread over a larger and larger area. The intensity decreases as it moves away from the speaker, as discussed in Waves.
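The relations v = ω/k = λ/T can be checked numerically; this uses the wave-number and angular-frequency values that appear in the first exercise of this section:

```python
import math

k = 3.66           # wave number, m^-1
omega = 1256.0     # angular frequency, s^-1

wavelength = 2 * math.pi / k       # lambda = 2*pi/k   -> about 1.72 m
frequency = omega / (2 * math.pi)  # f = omega/(2*pi)  -> about 200 Hz
speed = omega / k                  # v = omega/k       -> about 343 m/s
```

The computed speed comes out near 343 m/s, the speed of sound in air at room temperature.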
The energy is also absorbed by objects and converted into thermal energy by the viscosity of the air. In addition, during each compression, a little heat transfers to the air; during each rarefaction, even less heat transfers from the air, and these heat transfers reduce the organized disturbance into random thermal motions. Whether the heat transfer from compression to rarefaction is significant depends on how far apart they are—that is, it depends on wavelength. Wavelength, frequency, amplitude, and speed of propagation are important characteristics for sound, as they are for all waves.

Summary

• Sound is a disturbance of matter (a pressure wave) that is transmitted from its source outward. Hearing is the perception of sound.
• Sound can be modeled in terms of pressure or in terms of displacement of molecules.
• The human ear is sensitive to frequencies between 20 Hz and 20 kHz.

Conceptual Questions

What is the difference between sound and hearing?

Sound is a disturbance of matter (a pressure wave) that is transmitted from its source outward. Hearing is the human perception of sound.

You will learn that light is an electromagnetic wave that can travel through a vacuum. Can sound waves travel through a vacuum?

Sound waves can be modeled as a change in pressure. Why is the change in pressure used and not the actual pressure?

Consider a sound wave moving through air. The pressure of the air is the equilibrium condition; it is the change in pressure that produces the sound wave.

Problems

Consider a sound wave modeled with the equation [latex] s(x,t)=4.00\,\text{nm}\,\text{cos}(3.66\,{\text{m}}^{-1}\,x-1256\,{\text{s}}^{-1}t). [/latex] What is the maximum displacement, the wavelength, the frequency, and the speed of the sound wave?

Consider a sound wave moving through the air modeled with the equation [latex] s(x,t)=6.00\,\text{nm}\,\text{cos}(54.93\,{\text{m}}^{-1}x-18.84\,×\,{10}^{3}\,{\text{s}}^{-1}t).
[/latex] What is the shortest time required for an air molecule to move between 3.00 nm and –3.00 nm?

Consider a diagnostic ultrasound of frequency 5.00 MHz that is used to examine an irregularity in soft tissue. (a) What is the wavelength in air of such a sound wave if the speed of sound is 343 m/s? (b) If the speed of sound in tissue is 1800 m/s, what is the wavelength of this wave in tissue?

a. [latex] \lambda =68.60\,\mu \text{m;} [/latex] b. [latex] \lambda =360.00\,\mu \text{m} [/latex]

A sound wave is modeled as [latex] \text{Δ}P=1.80\,\text{Pa}\,\text{sin}(55.41\,{\text{m}}^{-1}\,x-18,840\,{\text{s}}^{-1}t). [/latex] What is the maximum change in pressure, the wavelength, the frequency, and the speed of the sound wave?

A sound wave is modeled with the wave function [latex] \text{Δ}P=1.20\,\text{Pa}\,\text{sin}(kx-6.28\,×\,{10}^{4}{\text{s}}^{-1}t) [/latex] and the sound wave travels in air at a speed of [latex] v=343.00\,\text{m/s}\text{.} [/latex] (a) What is the wave number of the sound wave? (b) What is the value for [latex] \text{Δ}P(3.00\,\text{m},20.00\,\text{s}) [/latex]?

The displacement of the air molecules in a sound wave is modeled with the wave function [latex] s(x,t)=5.00\,\text{nm}\,\text{cos}(91.54\,{\text{m}}^{-1}x-3.14\,×\,{10}^{4}\,{\text{s}}^{-1}t) [/latex]. (a) What is the wave speed of the sound wave? (b) What is the maximum speed of the air molecules as they oscillate in simple harmonic motion? (c) What is the magnitude of the maximum acceleration of the air molecules as they oscillate in simple harmonic motion?

A speaker is placed at the opening of a long horizontal tube. The speaker oscillates at a frequency f, creating a sound wave that moves down the tube. The wave moves through the tube at a speed of [latex] v=340.00\,\text{m/s}\text{.} [/latex] The sound wave is modeled with the wave function [latex] s(x,t)={s}_{\text{max}}\text{cos}(kx-\omega t+\varphi ).
[/latex] At time [latex] t=0.00\,\text{s} [/latex], an air molecule at [latex] x=3.5\,\text{m} [/latex] is at the maximum displacement of 7.00 nm. At the same time, another molecule at [latex] x=3.7\,\text{m} [/latex] has a displacement of 3.00 nm. What is the frequency at which the speaker is oscillating?

A 250-Hz tuning fork is struck and begins to vibrate. A sound-level meter is located 34.00 m away. It takes the sound [latex] \text{Δ}t=0.10\,\text{s} [/latex] to reach the meter. The maximum displacement of the tuning fork is 1.00 mm. Write a wave function for the sound.

A sound wave produced by an ultrasonic transducer, moving in air, is modeled with the wave equation [latex] s(x,t)=4.50\,\text{nm}\,\text{cos}(9.15\,×\,{10}^{4}\,{\text{m}}^{-1}x-2\pi (5.00\,\text{MHz})t). [/latex] The transducer is to be used in nondestructive testing to test for fractures in steel beams. The speed of sound in the steel beam is [latex] v=5950\,\text{m/s}\text{.} [/latex] Find the wave function for the sound wave in the steel beam.

Porpoises emit sound waves that they use for navigation. If the wavelength of the sound wave emitted is 4.5 cm, and the speed of sound in the water is [latex] v=1530\,\text{m/s,} [/latex] what is the period of the sound?

Bats use sound waves to catch insects. Bats can detect frequencies up to 100 kHz. If the sound waves travel through air at a speed of [latex] v=343\,\text{m/s,} [/latex] what is the wavelength of the sound waves?

A bat sends out a sound wave of 100 kHz, and the sound waves travel through air at a speed of [latex] v=343\,\text{m/s}\text{.} [/latex] (a) If the maximum pressure difference is 1.30 Pa, what is a wave function that would model the sound wave, assuming the wave is sinusoidal? (Assume the phase shift is zero.) (b) What are the period and wavelength of the sound wave?

Consider the graph shown below of a compression wave.
Shown are snapshots of the wave function for [latex] t=0.000\,\text{s} [/latex] (blue) and [latex] t=0.005\,\text{s} [/latex] (orange). What are the wavelength, maximum displacement, velocity, and period of the compression wave?

[latex] \begin{array}{cc} \lambda =6.00\,\text{m}\hfill \\ \\ {s}_{\text{max}}=2.00\,\text{mm}\hfill \\ \\ v=600\,\text{m/s}\hfill \\ \\ T=0.01\,\text{s}\hfill \end{array} [/latex]

Consider the graph in the preceding problem of a compression wave. Shown are snapshots of the wave function for [latex] t=0.000\,\text{s} [/latex] (blue) and [latex] t=0.005\,\text{s} [/latex] (orange). Given that the displacement of the molecule at time [latex] t=0.00\,\text{s} [/latex] and position [latex] x=0.00\,\text{m} [/latex] is [latex] s(0.00\,\text{m},0.00\,\text{s})=1.08\,\text{mm,} [/latex] derive a wave function to model the compression wave.

A guitar string oscillates at a frequency of 100 Hz and produces a sound wave. (a) What do you think the frequency of the sound wave is that the vibrating string produces? (b) If the speed of the sound wave is [latex] v=343\,\text{m/s} [/latex], what is the wavelength of the sound wave?

(a) [latex] f=100\,\text{Hz,}\enspace [/latex] (b) [latex] \lambda =3.43\,\text{m} [/latex]

Glossary

hearing: perception of sound

sound: traveling pressure wave that may be periodic; the wave can be modeled as a pressure wave or as an oscillation of molecules
{"url":"https://courses.lumenlearning.com/suny-osuniversityphysics/chapter/17-1-sound-waves/","timestamp":"2024-11-07T20:47:03Z","content_type":"text/html","content_length":"67273","record_id":"<urn:uuid:c796724d-c0d1-4e85-bbba-66aced4906d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00745.warc.gz"}
layout: doc_page

Basic Theta Sketch Accuracy

Sketch accuracy is usually measured in terms of Relative Error (RE = Measured/Truth − 1). Sketches are stochastic processes, and the estimates produced are random variables with a probability distribution that is close to the familiar Gaussian. The sketch estimator algorithm examines the internal state of the sketch and returns an estimate of the mean of the probability distribution that includes the actual value. When the sketch contains more than a hundred or so values, we can assume that the shape is pretty close to Gaussian due to the Central Limit Theorem. It is important to understand that the sketch has no idea what the true value is; it only knows its internal state.

From the mathematical theory of these sketches (see Sketch Equations and Theta Sketch Framework) we know:

• The estimate is unbiased. If you were to feed the same data into the sketch using T different hash functions, the average of all T trials will converge on the true answer.
• The variance of the estimate across all T trials is < est²/(k − 1), where k is the configured size of the sketch.

Dividing the variance by est² and taking the square root normalizes the error to < 1/√(k − 1), which is called the Relative Standard Error, or RSE. This corresponds to one standard deviation stated as a fraction between zero and one, which can be translated to a percent error. Because k is a constant, the bound on the Relative Error of a sketch is constant.

The area under the curve of the Standard Normal (Gaussian) Distribution is defined to be 1.0. The fractional area between two points on the X-axis is referred to as the confidence level. Thus, the confidence level (the fractional area) between +1 RSE (+1 SD) and −1 RSE (−1 SD) is 68.27%. Similarly, the confidence level between +2 RSE (+2 SD) and −2 RSE (−2 SD) is 95.4%.
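The rule of thumb above can be sketched in a few lines. This is only the simplified bound from the text (RSE = 1/√(k − 1), interval est·(1 ± n·RSE)); the library itself computes its upper and lower bounds with more careful estimators:

```python
import math

def rse(k):
    # Relative Standard Error for a theta sketch of configured size k.
    return 1.0 / math.sqrt(k - 1)

def bounds(est, k, num_std_devs=2):
    # Symmetric error interval around the estimate at n standard deviations.
    r = rse(k)
    return est * (1 - num_std_devs * r), est * (1 + num_std_devs * r)
```

For k = 4096 the RSE is about 1.56%, so a two-standard-deviation (95.4% confidence) interval around an estimate of 1000 is roughly 969 to 1031.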
{"url":"https://apache.googlesource.com/datasketches-website/+/refs/heads/3p-links/docs/Theta/ThetaAccuracy.md","timestamp":"2024-11-12T04:33:28Z","content_type":"text/html","content_length":"4053","record_id":"<urn:uuid:08e336e0-c91d-4612-b077-73686ca710e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00027.warc.gz"}
On the Persistence of the Eigenvalues of a Perturbed Fredholm Operator of Index Zero under Nonsmooth Perturbations

• Pierluigi Benevieri, Università degli Studi di Firenze, Italy
• Alessandro Calamai, Università Politecnica delle Marche, Ancona, Italy
• Massimo Furi, Università degli Studi di Firenze, Italy
• Maria Patrizia Pera, Università degli Studi di Firenze, Italy

Let H be a real Hilbert space and denote by S its unit sphere. Consider the nonlinear eigenvalue problem Av + εB(v) = λv, v ∈ S, where ε, λ ∈ ℝ, A is a bounded self-adjoint (linear) operator with nontrivial kernel and closed image, and B is a (possibly) nonlinear perturbation term. A unit eigenvector of A (corresponding to the eigenvalue λ = 0) is said to be persistent if it is close to solutions of the above equation for small values of the parameters ε and λ. We give an affirmative answer to a conjecture formulated by R. Chiappinelli and the last two authors in an article published in 2008. Namely, we prove that, if B is Lipschitz continuous and the eigenvalue λ = 0 has odd multiplicity, then the sphere S ∩ Ker A contains at least one persistent eigenvector. We provide examples in which our results apply, as well as examples showing that, if the dimension of Ker A is even, then the persistence phenomenon may not occur.

Cite this article

Pierluigi Benevieri, Alessandro Calamai, Massimo Furi, Maria Patrizia Pera, On the Persistence of the Eigenvalues of a Perturbed Fredholm Operator of Index Zero under Nonsmooth Perturbations. Z. Anal. Anwend. 36 (2017), no. 1, pp. 99–128

DOI 10.4171/ZAA/1581
{"url":"https://ems.press/journals/zaa/articles/14534","timestamp":"2024-11-14T16:42:43Z","content_type":"text/html","content_length":"95033","record_id":"<urn:uuid:66b27f8c-f5fc-49c2-9bdb-ac011f80101f>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00414.warc.gz"}
Locally Defined Operators and Locally Lipschitz Composition Operators in the Space WBVp(·)([a, b])

1. Introduction

This paper lies in the field of variable exponent function spaces; specifically, we deal with the space WBVp(·)([a, b]). Variable exponent Lebesgue spaces appeared in the literature in 1931 in the paper by Orlicz [3]. He was interested in the study of function spaces that contain all measurable functions u for which the integral of |u(x)|^p(x) is finite for some variable exponent p. With the emergence of nonlinear problems in applied sciences, standard Lebesgue and Sobolev spaces demonstrated their limitations in applications. The class of nonlinear problems with variable exponent growth is a new research field, and it reflects a new kind of physical phenomena. It is well known that the class of nonlinear operator equations of various types has many useful applications in describing numerous problems of the real world. A number of equations involving such operators have arisen in many branches of science, such as the theory of optimal control, economics, biology, mathematical physics and engineering. Among nonlinear operators, there is a distinguished class called composition operators. Next we define such operators.

Definition 1.1. Given a function h: ℝ → ℝ, consider the operator H given by

H(u)(t) := h(u(t)), t ∈ [a, b]. (1.1)

More generally, given h: [a, b] × ℝ → ℝ, one considers

H(u)(t) := h(t, u(t)), t ∈ [a, b]. (1.2)

This operator is also called the superposition operator, substitution operator or Nemytskij operator. The operator of the form (1.1) is usually called the (autonomous) composition operator, and the one defined by (1.2) is called non-autonomous. A rich source of related questions are the excellent books by J. Appell and P. P. Zabrejko [7] and J. Appell, J. Banas, N. Merentes [8]. E. P. Sobolevskij in 1984 [9] characterized when the autonomous composition operator associated with h satisfies a local Lipschitz condition in the space of Lipschitz functions. In this paper, we obtain two main results. The organization of this paper is as follows.
In Section 2, we gather some notions and preliminary facts, and the necessary background about the class of functions of bounded 2. Preliminaries Throughout this paper, we use the following notation: Let a function meter of the image In 2013 R. Castillo, N. Merentes and H. Rafeiro [1] introduced the notion of bounded variation space in the Wiener sense with variable exponent on Definition 2.1 (See [1] ). Given a function is called Wiener variation with variable exponent (or In case that Definition 2.2. (Norm in Theorem 2.3 (See [1] ). Every sequence in In 2015, O. Mejía, N. Merentes and J. L. Sánchez [2] showed the following properties of elements of Lemma 2.4 (General properties of the (P1) minimality: if (P2) monotonicity: if (P3) semi-additivity: if (P4) change of a variable: if strictly) monotone function, then (P5) regularity: The following structural theorem is taken from [2] ; it gives us a characterization of the members of Theorem 2.5 (see [2] ). The map Proposition 2.6. Suppose that Proof. Let Afterwards, we choose Then for these y, we have Lemma 2.7. Let Proof. Let Proposition 2.8. Let that is, the Luxemburg norm is lower semi-continuous on Proof. Let By the pointwise convergence of for all that is, Passing the limit as semicontinuous, i.e., Lemma 2.9 (Invariance Principle). Let Proof. The function is an affine homeomorphism with inverse the function such that: defines a 1-1 correspondence between all partitions 3. Locally Lipschitz Composition Operators In this section, we present one of the main results of this paper. We demonstrate that a result of the Sobolevskij type is also valid in the space Theorem 3.1. Let Proof. First let us assume that The finiteness of By the classical mean value theorem we find Now, by definition of I we have Making a simple calculation Again by the mean value theorem we find By definition of J we have Again a simple calculation shows that Summing up both partial sums and observing that which proves the assertion. 
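Definition 2.1 builds the Wiener variation with variable exponent from partition sums of the form sum over j of |f(t_j) - f(t_{j-1})|^{p(t_{j-1})}, the variation itself being the supremum of such sums over all partitions of [a, b]. A minimal numerical sketch of one such partition sum, assuming the exponent is evaluated at the left endpoint of each subinterval (conventions vary between papers):

```python
def variation_sum(f, p, partition):
    """Wiener-style variation sum of f over one fixed partition,
    with the variable exponent p evaluated at the left endpoint
    of each subinterval (this evaluation point is an assumption)."""
    total = 0.0
    for left, right in zip(partition, partition[1:]):
        total += abs(f(right) - f(left)) ** p(left)
    return total

# f(t) = t with the constant exponent p = 2 on a uniform partition of [0, 1]:
# each increment is 1/n, so the sum is n * (1/n)**2 = 1/n.
n = 10
pts = [k / n for k in range(n + 1)]
print(variation_sum(lambda t: t, lambda t: 2.0, pts))  # close to 1/n = 0.1
```

Note that refining the partition drives this particular sum to 0, which is consistent with smooth functions having finite (indeed zero-limit) p-variation sums for p > 1.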
Conversely, suppose that H satisfies a Lipschitz condition. By assumption, the constant is finite for each This shows that h is locally Lipschitz, and so the derivative creasing sequence of positive real numbers converging to 0; without loss of generality, we may assume that Since the composition operator H associated with h acts in the space Now, we show that the sequences in Wiener’s sense for all obtain the estimates Since the partition holds for every 2.7, the definition of the function which shows that the sequence Theorem 2.3 ensures the existence of a pointwise convergent subsequence of verges pointwise on Now setting we note that for almost all It remains to prove that whenever the sequence 4. Locally Defined Operators In this section, we present our second main result, which is related to the notion of locally defined operator. We prove that every locally defined operator mapping the space of continuous and Definition 4.1. Let holds true. Remark 4.1. For some pairs Definition 4.2. (See [13] ) An operator 1) left-hand defined, if and only if for every 2) right-hand defined, if and only if for every From now on, let Theorem 4.3. (See [13] ) The operator Theorem 4.4. Let Proof. 
We begin by showing that for every implies that To this end choose arbitrary belongs to Since, for all the condition (4.2) implies that according to Definition 4.2, we get Therefore, by the continuity of Suppose now that The sequence of functions for all for all Similar reasoning shows that From (4.4) and (4.5), we obtain that Let us observe that and for all and for every From (4.7), (4.8) and (4.9) the function To show that Take an arbitrary therefore, by (4.10) and (4.12) in the case when in the case when By the lower semicontinuity of and the convergence of series Thus there exists a function According to the first part of the proof, we have Hence, by continuity of To define the function Of course Since, by (4.13), for all functions f, according to what has already been proved, we have To prove the uniqueness of h, assume that for all which proves the uniqueness of h. 5. Conclusion In this paper, we obtain two important results. In Theorem 3.1, we show that a result of the Sobolevskij type is valid for the space of functions of bounded
{"url":"https://scirp.org/journal/paperinformation?paperid=70894","timestamp":"2024-11-09T06:15:11Z","content_type":"application/xhtml+xml","content_length":"151747","record_id":"<urn:uuid:e081c911-2fe7-407d-9c5e-ecb4ce2c1ba7>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00775.warc.gz"}
Revenge of the Fallen Optimus Prime image gallery and review | www.transformertoys.co.uk Revenge of the Fallen - Optimus Prime Welcome to the toy review, image gallery and information page for Revenge of the Fallen Optimus Prime . Along with images of Optimus Prime you can also find information about this Transformers figure, including subgroups and alternative names. On the left hand navigational menu you can find a list of all of the toys that use the same mold, all of the toys that are based on the same character as this figure, as well as a list of the latest Transformers toy galleries that have been added to this website. Below this introductory paragraph, you will find some tabs that, when clicked, will replace this area. You can use them to view the various profiles that have been associated with this toy, the TechSpec of the figure, a list of tags that were associated with the toy image gallery, as well as a tab dedicated to all of the toys that are associated with this one. That tab will allow you to jump between "linked" toys. Optimus Prime Profile 1. US Toy TechSpec □ Function: □ Motto: Freedom is the right of all sentient beings □ Profile: Optimus Prime Tech Specs Coming Soon Toys associated with Optimus Prime The following toys are associated with Optimus Prime Image Tab Navigation There are 2 image tabs containing 25 images each. Click on the number to jump to that page or press Ctrl and type the number you wish to jump to. Photographs and images of Optimus Prime There are 41 images available for this toy. Revenge of the Fallen - Optimus Prime - Information The images contained within this gallery were supplied by The Last Allosaur on the 8th June 2009. We are very grateful for his support and hope you appreciate his efforts. 
Transformers Toys, Toy Galleries, Transformers Review, Toy Review, Transformers Toy Galleries, Optimus Prime, Revenge Of The Fallen, Rotf Optimus Prime, Revenge Of The Fallen Optimus Prime, Transformers Movie 2 Optimus Prime Other Transformers toys and figures called Optimus Prime. Stats: The toy "Optimus Prime" has been used on 100 occasions out of a total of 4,032 toys within our database. Rating: 4.00/1 vote(s) Why not share your thoughts on this article by granting it a rating out of 5 stars. You can edit your rating at any time by re-visiting this page and re-rating the content or, if you are a site member, through your control panel.
{"url":"http://www.transformertoys.co.uk/resources/review/revenge+of+the+fallen/+optimus+prime+/3812","timestamp":"2024-11-03T10:03:14Z","content_type":"text/html","content_length":"158339","record_id":"<urn:uuid:7a4855b1-97b4-4c57-ad0d-89cd09ecae60>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00024.warc.gz"}
multivariable critical points calculator

A critical point of a function of a single real variable, f(x), is a value x0 in the domain of f where it is not differentiable or its derivative is 0 (f'(x0) = 0). A critical value is the image under f of a critical point. A critical point of a multivariable function is a point where the first-order partial derivatives of the function are equal to zero; such points are called critical points. The main purpose for determining critical points is to locate relative maxima and minima, as in single-variable calculus. While this may seem like a silly point (after all, in each case t = 0 is identified as a critical point), it is sometimes important to know why a point is a critical point.

Use the gradient function to calculate the derivative; each component in the gradient is among the function's partial first derivatives. The internet calculator will figure out the partial derivative of a function with the actions shown. Below is the graph of f(x, y) = x^2 + y^2, and it appears that at the critical point (0,0) f has a minimum value. What do you know about paraboloids? Evaluate fxx, fyy, and fxy at the critical points, and calculate the value of D to decide whether the critical point corresponds to a relative maximum, relative minimum, or a saddle point. It only says that in some region around the point (a,b) the function will always be larger than f(a,b).

Example 2: Determine the critical points and locate any relative minima, maxima and saddle points of the function f defined by f(x, y) = 2x^2 - 4xy + y^4 + 2. Solution to Example 1: We first find the first order partial derivatives: fx(x,y) = 2x = 0 and fy(x,y) = 2y = 0. The solution to the above system of equations is the ordered pair (0,0).

Critical/Saddle point calculator for f(x,y), added Aug 4, 2018 by Sharonhahahah in Mathematics. No related posts. Find more Mathematics widgets in Wolfram|Alpha. Free online 3D grapher from GeoGebra: graph 3D functions, plot surfaces, construct solids and much more! Plot a multivariable function, find critical points. Find critical points of multivariable functions. Our mission is to provide a free, world-class education to anyone, anywhere. Free math problem solver answers your calculus homework questions with step-by-step explanations. Try the Free Math Solver or scroll down to Tutorials! By using this website, you agree to our Cookie Policy.

4 Comments. Peter says: March 9, 2017 at 11:13 am: Bravo, your idea simply excellent. "Most other programs just give you the answer, which did not help me when it came to test time. Algebrator helped me through each problem step by step." "We all wished we had it 4 months ago."
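The classification step described above (evaluate fxx, fyy, and fxy at a critical point, then compute D = fxx*fyy - fxy^2) can be sketched numerically. The following is an illustrative Python sketch using central finite differences for the second partials, not the calculator's own implementation; it classifies the critical point (0,0) of f(x, y) = x^2 + y^2 from Example 1:

```python
def second_derivative_test(f, x, y, h=1e-4):
    """Classify a critical point of f(x, y) via D = fxx*fyy - fxy**2,
    with the second partials estimated by central finite differences."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    D = fxx * fyy - fxy**2
    if D > 0:
        return "relative minimum" if fxx > 0 else "relative maximum"
    return "saddle point" if D < 0 else "inconclusive"

# Example 1 from the text: f(x, y) = x**2 + y**2 at the critical point (0, 0).
print(second_derivative_test(lambda x, y: x**2 + y**2, 0.0, 0.0))  # relative minimum
```

The same function reports "saddle point" for f(x, y) = x*y at the origin, where D is negative.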
A critical number (also called a critical point or stationary point) is a number 'a' in the domain of a given function 'f'. A function z = f(x,y) has critical points where the gradient del f = 0, or where the partial derivative partialf/partialx or partialf/partialy is not defined. We recall that a critical point of a function of several variables is a point at which the gradient of the function is either the zero vector 0 or is undefined. The point (a,b) is a critical point for the multivariable function f(x,y) if both partial derivatives are 0 at the same time. A standard question in calculus, with applications to many fields, is to find the points where a function reaches its relative maxima and minima.

0.1 Reminder: For a function of one variable, f(x), we find the local maxima/minima by differentiation. Similarly, with functions of two variables we can only find a minimum or maximum for a function if both partial derivatives are 0 at the same time. Find the critical points by setting the partial derivatives equal to zero, then solve these equations to get the x and y values of the critical point. Note that this does not say that a relative minimum is the smallest value that the function will ever take: outside of that region it is completely possible for the function to be smaller.

Critical Points and Extrema Calculator: the calculator will find the critical points, local and absolute (global) maxima and minima of the single variable function. The Function Analysis Calculator computes critical points, roots and other properties with the push of a button; it computes and visualizes the critical points of single and multivariable functions. The above calculator is an online tool which shows output for the given input. It can: differentiate any single or multivariable function; find the critical points and saddle points of a function; calculate the gradient of a function; identify the local extrema of a function; find the single, double, or triple integral of a function; determine the dot or cross product of two vectors; calculate the divergence or curl of a vector field.

Free partial derivative calculator - partial differentiation solver step-by-step. This website uses cookies to ensure you get the best experience. By using this website, you agree to our Cookie Policy. Khan Academy is a 501(c)(3) nonprofit organization. Please use this form if you would like to have this math solver on your website, free of charge.

Comments: Wiki says: March 9, 2017 at 11:14 am: Here there can not be a mistake? "I have showed the program to a couple of classmates during a study group and they loved it." (Anthony Washington, MO) "Then I got Algebrator, and it helped me not only with quadratic but also with pretty much any equation or expression I could think of! I think it is great!" (Dana White, IL)
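Example 2 in the text asks for the critical points of f(x, y) = 2x^2 - 4xy + y^4 + 2. Working the partials by hand (they are not given in the text): fx = 4x - 4y and fy = -4x + 4y^3, so fx = 0 forces x = y, and fy = 0 then gives y^3 = y, i.e. y in {-1, 0, 1}. A short Python check classifying each of the three critical points with D = fxx*fyy - fxy^2:

```python
# Example 2 from the text: f(x, y) = 2x**2 - 4*x*y + y**4 + 2.
# By hand: fx = 4x - 4y = 0 gives x = y; fy = -4x + 4y**3 = 0 then
# gives y**3 = y, so the critical points are (-1,-1), (0,0), (1,1).
def classify(x, y):
    # Second partials of f, computed by hand: fxx = 4, fyy = 12y^2, fxy = -4.
    fxx, fyy, fxy = 4.0, 12.0 * y**2, -4.0
    D = fxx * fyy - fxy**2
    if D > 0:
        return "relative minimum" if fxx > 0 else "relative maximum"
    return "saddle point" if D < 0 else "inconclusive"

for x, y in [(-1.0, -1.0), (0.0, 0.0), (1.0, 1.0)]:
    print((x, y), classify(x, y))
# (-1.0, -1.0) relative minimum
# (0.0, 0.0) saddle point
# (1.0, 1.0) relative minimum
```

So the function has relative minima at (1, 1) and (-1, -1) and a saddle point at the origin.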
The 3-dimensional graph of the function f given above shows that f has a local minimum at the point (2, -1, f(2, -1)) = (2, -1, -6). For Example 2, we first find the first partial derivatives f x and f y, set each partial derivative equal to zero, and solve these equations to get the x and y values of each critical point; then we calculate the value of D to decide whether each critical point corresponds to a relative maximum, relative minimum, or a saddle point. This definition does not say that a relative minimum is the smallest value that the function will ever take. In fact, in a couple of sections we will see a fact that only works for critical points at which the derivative is 0. See also Optimization Problems with Functions of Two Variables in this web site.

Comments: "After spending countless hours trying to understand my homework night after night, I found Algebrator. My quadratic equations were really giving me a hard time."
{"url":"http://inwestbudwl.pl/4kb0d7/43e592-multivariable-critical-points-calculator","timestamp":"2024-11-11T21:29:17Z","content_type":"text/html","content_length":"113188","record_id":"<urn:uuid:63fe3a10-58c1-4fe5-bc93-30945241edd5>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00506.warc.gz"}
PoleZeroFIR ()
PoleZeroFIR (unsigned channels)
PoleZeroFIR (unsigned channels, float b0, float b1, float a1)
PoleZeroFIR (const PoleZeroFIR &copy)
PoleZeroFIR (PoleZeroFIR &&filter)
~PoleZeroFIR ()
unsigned getChannels () const
void setChannels (unsigned channels)
void setCoeff (const std::vector< float > &bvals, const std::vector< float > &avals)
const std::vector< float > getBCoeff () const
const std::vector< float > getACoeff () const
void setBCoeff (float b0, float b1)
void setACoeff (float a1)
void setHighpass (float frequency)
void setAllpass (float coefficient)
void setBlockZero (float pole=0.99f)
void step (float gain, float *input, float *output)
void calculate (float gain, float *input, float *output, size_t size)
void clear ()
size_t flush (float *output)
This class implements a one-pole, one-zero digital filter. This filter is the preferred way to create a first-order highpass filter, and so we provide the method setHighpass for setting the highpass frequency. In addition, there are methods for creating an allpass filter, as well as a DC blocking filter. Frequencies are specified in "normalized" format. A normalized frequency is frequency/sample rate. For example, a 7 kHz frequency with a 44100 Hz sample rate has a normalized value of 7000/44100 = 0.15873. However, filters are not intended to be model classes, and so this class does not save the defining frequency. This class supports vector optimizations for SSE and Neon 64. In timed simulations, these optimizations provide at least a 3-4x performance increase (and for 4 or 8 channel audio, much higher). These optimizations make use of the matrix precomputation outlined in "Implementation of Recursive Digital Filters into Vector SIMD DSP Architectures". The algorithm in this paper performs extremely well in our tests, and even out-performs Apple's Acceleration library. However, our implementation is limited to 128-bit words, as 256-bit (e.g. AVX) and higher show no significant increase in performance. 
For performance reasons, this class does not have a (virtualized) subclass relationship with other IIR or FIR filters. However, the signature of the calculation and coefficient methods has been standardized so that it can support templated polymorphism. This class is not thread safe. External locking may be required when the filter is shared between multiple threads (such as between an audio thread and the main thread).
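The coefficient methods (setBCoeff, setACoeff, setBlockZero) suggest the standard first-order difference equation y[n] = gain*(b0*x[n] + b1*x[n-1]) - a1*y[n-1]. The sketch below is in Python rather than the library's C++, and the sign convention for a1 and the DC-blocker coefficients (b0 = 1, b1 = -1, pole at 0.99) are assumptions based on common practice for such filters, not taken from the class itself:

```python
def pole_zero_filter(samples, b0=1.0, b1=-1.0, a1=-0.99, gain=1.0):
    """First-order pole-zero filter: y[n] = gain*(b0*x[n] + b1*x[n-1]) - a1*y[n-1].
    The defaults sketch a DC blocker (zero at z = 1, pole at z = 0.99);
    this coefficient convention is an assumption, not PoleZeroFIR's own."""
    out, x_prev, y_prev = [], 0.0, 0.0
    for x in samples:
        y = gain * (b0 * x + b1 * x_prev) - a1 * y_prev
        out.append(y)
        x_prev, y_prev = x, y
    return out

# A normalized frequency is frequency/sample rate, e.g. 7 kHz at 44100 Hz:
print(round(7000 / 44100, 5))  # 0.15873

# Under the DC-blocker defaults, a constant (DC) input decays toward zero:
out = pole_zero_filter([1.0] * 1000)
print(out[0], abs(out[-1]) < 1e-3)  # 1.0 True
```

Moving the pole closer to 1 narrows the notch around DC at the cost of a longer settling time, which is why a value like 0.99 is a typical default.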
{"url":"https://www.cs.cornell.edu/courses/cs5152/2023sp/resources/engine/api/classcugl_1_1dsp_1_1_pole_zero_f_i_r.html","timestamp":"2024-11-14T21:03:19Z","content_type":"application/xhtml+xml","content_length":"39516","record_id":"<urn:uuid:f97b0511-5787-428c-8963-1f9211906fc9>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00124.warc.gz"}
Designing Assignments Initial Publication Date: October 22, 2013 "[R]eal problems are messy and not amenable to unequivocal final answers...." Grant Wiggins in "'Get Real!' Assessing for Quantitative Literacy," Quantitative Literacy: Why Numeracy Matters for Schools and Colleges. Jump down to: Keep your eye on your course goals | Backward-design your course and assignments | Be very clear on your goals | Set assignments in an explicit, real-world context | Plan your assessments and assignments | Tell students what you are up to The principles for designing strong quantitative reasoning (QR) assignments are similar to those for designing strong assignments of any other type. Because of this, much of what follows is drawn from John Bean's take on designing strong quantitative writing assignments. There are a number of tips that are specific to teaching students to work with quantitative evidence in particular: Of course, one of the best suggestions for assignment redesign is simple: borrow and steal! 1. Keep your eye on your course goals and align QR activities accordingly When was the last time you got toward the end of the term and thought, "Gee, I have no idea how I will fill the last days of the class. I have completely achieved all I had hoped to for this term!"? The reality is that time is scarce. We never cover everything we had hoped to cover in our courses. We struggle to teach everything that is presumed by courses that list our own courses as pre-reqs. And now we are being asked to take on general education learning goals like QR. The solution to this tension is not to speed up the class to squeeze in 10% more content. And it isn't to jettison the disciplinary content expectations of our colleagues. Instead, we need to think hard about how QR naturally fits in the context of our course. We need to make the introduction of QR a means for doing better what we already are trying to achieve rather than a competitive threat to our primary priorities. 
Otherwise it is a safe bet you will drop QR altogether. So, start by articulating the goals you have for students in your course. Then look for where QR is relevant and important to fully meeting those objectives. In some fields it is obvious how teaching QR complements traditional course goals. In other fields it may take a little more thought. To jumpstart that process, look through the collection of Quantitative Reasoning in Writing assignments from disciplines close to your own. Even if you don't intend to give writing assignments, these existing assignments can spark ideas of how QR can be connected to your discipline.

2. Backward-design your course and assignments

It is generally best practice to start course planning at the end. Broadly speaking, start with the course goals, then move back to how you will know if those goals are met (i.e. assessments), and finally consider how you will bring students to a successful outcome (activities). You might think about answering the following questions: What do I want my students to learn by the end of the term? How will I know if they achieve those goals? In order to get to that point, what will students need to learn first? How can the assignments and activities in early stages of the course support later learning? Similarly, complex assignments may benefit from backward-design. What class activities must happen first to arm students with the tools to complete the assignment? Should the assignment be broken into sub-parts so that later sub-assignments can build on previous activity? These same general principles apply to teaching QR. It is worth thinking a little, though, about the specific way these principles play out in the QL context.

3. Be very clear on your goals for including real-world data in your activities

There are many lessons for students to learn by using real data in their assignments. How do you find relevant data?
How do you interpret the variables available and decide which items to use and which to avoid? How do you clean data, with an eye toward both practicality and ethics? How do you analyze raw data? How do you present data in text, charts, and tables? Any one of these would be worth the time of an assignment. As you design your assignment, ask yourself which of these are the goals you have for your students. Then focus the assignment on those aspects. For example, if finding and cleaning data isn't critical to your course goals, consider offering students a clean version of the data they need with the assignment so that they don't spend time on the data gathering. Once you decide on your goals for the assignment, be sure to share your intentions with reference library staff. If you really want students to struggle through the process of finding data, you don't want reference librarians resolving that struggle too early in the process!

4. Set assignments in an explicit, real-world context

John Bean suggests a helpful acronym--"Give your students a RAFT"--when creating context-rich assignments in general:
• Role: give the student a role or purpose for the assignment
• Audience: identify an explicit audience
• Format: tell the student the genre you expect for their final product
• Task: lay out the assignment or problem they are to address
For example, you might provide the following prompt: "You are an aide to a US Senator (role and audience) who must vote on guest worker immigration reform legislation. She has asked you to prepare a white paper to help inform her thinking (format). She would like to understand what the model of supply and demand suggests concerning the effects of this bill on the labor market (task part a). She doesn't want pure theory, however.
She reminds you that she is an elected official and so in addition to knowing the directions of the predicted changes to employment and wages she also could use help finding data that suggests the magnitudes of those effects (task part b). Because she is busy, the white paper can be no longer than five pages in length." In addition to these generic context principles, remember that QR includes sifting through conflicting data and weighing source credibility. Consider providing more information than is needed for the assignment and requiring students to make reasoned judgements about what information to use.

5. Plan your assessments and assignments for the course from the start

QR assignments inevitably involve multiple facets. At the very least, the assignment includes disciplinary content in addition to QR components. It is very likely that some students will succeed in one dimension while struggling in another. To avoid difficult grading dilemmas at the end of the assignment, decide in advance how you will grade the work. One way to do this is to create a rubric that identifies the criteria to be evaluated and describes varying levels of student success for each. Such rubrics can include numerical scores which are added up to arrive at a final score. Or they can be qualitative. Whatever the choice, making these decisions up front and communicating the criteria to students in advance will save trouble in the long run. (Also, check out the discussion of designing assessments to aid this decision process.)

6. Tell students what you are up to

For many students, the introduction of QR in your course may seem foreign. Not surprisingly, in their first encounter with QR practices students will bump into many challenges. You can minimize their sense of frustration with this learning process by using the suggestions above to support them along the way. But it is also very helpful to tell your students what you are trying to achieve.
In your syllabus and at the top of your assignments, explicitly articulate your QR objectives. After students turn in their assignments, devote 5 or 10 minutes of class time to talking about their experience with the assignment. What did they find interesting? What was challenging? What was frustrating? What was rewarding? Not only will this conversation give you ideas about how to teach the material better next time around, but it will also give you an opportunity to articulate how QR assignments give students a deeper understanding of the course material and its applications. While nothing can guarantee student good will, open communication about course objectives and your reasons for setting those objectives can earn you the benefit of the doubt.
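The numerical-rubric option described in section 5 amounts to simple weighted arithmetic. Here is a minimal sketch in Python; the criteria, weights, and 0-4 score scale are hypothetical illustrations, not drawn from the article:

```python
# Hypothetical rubric for a QR assignment: each criterion gets a score
# on a 0-4 scale and carries a weight, and the weighted scores are
# added up to arrive at a final score (the numerical option from
# section 5). Criterion names and weights are made up for illustration.
RUBRIC_WEIGHTS = {
    "disciplinary content": 0.4,
    "quantitative analysis": 0.3,
    "data presentation": 0.2,
    "source credibility": 0.1,
}

def final_score(scores, weights=RUBRIC_WEIGHTS):
    """Weighted sum of per-criterion scores, on the same 0-4 scale."""
    return sum(weights[criterion] * score for criterion, score in scores.items())

# One student's scores per criterion on the (hypothetical) 0-4 scale.
student = {
    "disciplinary content": 3,
    "quantitative analysis": 4,
    "data presentation": 2,
    "source credibility": 4,
}

# Weighted total: 0.4*3 + 0.3*4 + 0.2*2 + 0.1*4 = 3.2
print(round(final_score(student), 2))
```

How the weighted totals map onto grades, and how qualitative criteria get folded in, are exactly the decisions the article recommends settling, and sharing with students, before the assignment goes out.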
{"url":"https://serc.carleton.edu/sp/qssdl/qr/designing_assignments.html","timestamp":"2024-11-09T20:23:24Z","content_type":"text/html","content_length":"66222","record_id":"<urn:uuid:6ff56560-f467-4b85-a675-83ff1d447e79>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00638.warc.gz"}
Environmental Science AS-T - Guided Pathways

1st Semester

CHEM 120 - Introduction to Chemistry (CSU GE B1) M 5.0
CHEM 120 - Introduction to Chemistry (5.0 units) Prerequisite: Enrollment requires appropriate placement (based on high school GPA and/or other measures), or completion of an intermediate algebra course. Advisory: It is advised that students be able to engage in written composition at a college level and read college-level texts. Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: CHEM 110, CHEM 120; no credit if taken after CHEM 130) This one-semester course is designed for students intending to major in science or engineering. The course primarily prepares students for CHEM 130; additionally, it fulfills the General Education requirement in the physical sciences. This course introduces the fundamental principles of general chemistry, with emphasis on chemical nomenclature and quantitative problems in chemistry. The lecture presents classical and modern chemistry, including atomic theory, periodic properties, chemical bonding, chemical reactions, stoichiometry, acids and bases, gas laws, and solutions. The laboratory introduces the techniques of experimental chemistry with examples from all areas of chemistry.

ENGL 101 - College Composition and Research GE 3.5
ENGL 101 - College Composition and Research (3.5 units) Prerequisite: Enrollment requires appropriate placement (based on high school GPA and/or other measures), or eligibility for college composition. Transfers to: UC, CSU This composition course enables students to generate logical, coherent essays that incorporate sources necessary for academic and professional success.
Students become proficient in researching, evaluating, and incorporating sources, and in learning critical reading and thinking skills through expository and persuasive reading selections before applying these skills to creating original documented essays. The writing workshop component of the course is designed to assist students with improving and refining their writing and language skills: Students complete writing workshop activities that enhance their ability to compose logical, well-supported arguments that exhibit grammatical fluency and correct citation styles. Students meet with composition instructors through individual or small group conferences that address students’ specific writing concerns. This course is designed for students who wish to fulfill the General Education requirement for Written Communication.

CSU GE C1 - Arts GE 3.0†
All honors courses have a prerequisite. † some classes may have higher units.
Select one: ARCH 103 ART 101, 104, 105, 105H, 106, 106H, 107, 108, 109, 110, 112, 113, 115, 117, 120, 121, 130, 135, 140 DANC 179, 179H, 199, 199H GDSN 110 MUS 101, 129, 130, 131, 132, 133, 135, 136 MUST 151, 152 PHTO 110, 130 THTR 101, 105, 105H, 110, 150

Select one: POLS 110 / POLS 110H (CSU GE D) GE 3.0
POLS 110 - Government of the United States (3.0 units) Advisory: It is advised that students be able to engage in written composition at a college level and read college-level texts. Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: POLS 110 or POLS 110H) This course surveys and analyzes the origins, principles, institutions, policies, and politics of U.S. National and California State Governments, including their constitutions. Emphasis is placed on the rights and responsibilities of citizens, and an understanding of the political processes and issues involved in the workings of government. This course fulfills the American Institutions requirement for the Associate Degree.
It also is suitable for students wishing to expand their knowledge of local, state and national governments.

POLS 110H - Government of the United States Honors (3.0 units) Prerequisite: ENGL 101 Advisory: It is advised that students be able to read college-level texts. Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: POLS 110 or POLS 110H) This course surveys and analyzes the origins, principles, institutions, policies, and politics of U.S. National and California State Governments, including their constitutions. Emphasis is placed on the rights and responsibilities of citizens, and an understanding of the political processes and issues involved in the workings of government. This course fulfills the American Institutions requirement for the Associate Degree. It also is suitable for students wishing to expand their knowledge of local, state and national governments. This course is intended for students eligible for the Honors Program.

Total Semester Units: 14.5†

2nd Semester

CHEM 130 - General Chemistry I M 5.0
CHEM 130 - General Chemistry I (5.0 units) Prerequisite: CHEM 120 Advisory: ENGL 101; MATH 175 Transfers to: UC, CSU This course is the first semester of a two-semester sequence designed for students intending to major in science and engineering. The lecture course covers classical and modern chemistry, with applications in stoichiometry and classical atomic theory of chemistry, periodic properties, gas laws, modern quantum theory of atomic and molecular structure and periodic properties, thermochemistry, liquids and solids, and solution chemistry. The laboratory introduces experimental chemistry with examples from all areas of chemistry.
Select one: BIOL 120 / CHEM 140 / GEOL 150 / GEOL 151 / GEOG 101 / GEOG 101L / MATH 130 / MATH 130H / PSY 190 / MATH 190 / MATH 190H / MATH 170 M 4.0
Choose 15 units from the following: Biol 120/Chem 140/GEOL 150 and GEOL 151 or GEOG 101 and GEOG 101L/ Math 130 or Math 130H or Psy 190/ Math 190 or Math 190H/Math 170

BIOL 120 - Environmental Biology (3.0 units) Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts, and have a knowledge of elementary algebra concepts. Transfers to: UC, CSU In this course, students utilize basic biological concepts and an interdisciplinary approach to determine how to address environmental challenges. Topics may include ecosystem characteristics and functions, population dynamics, energy and material resource use, pollution, and alternative energy sources. Because the course takes up the social, political, and economic implications of environmental decisions, it is intended for students from many disciplines, including non-STEM disciplines. This course fulfills the general education requirement for life sciences majors.

CHEM 140 - General Chemistry II (5.0 units) Prerequisite: CHEM 130 Advisory: ENGL 101; MATH 180 Transfers to: UC, CSU CHEM 140 is a continuation of CHEM 130. Theory and techniques of elementary physical chemistry are stressed. Emphasis is placed on the dynamics of chemical change using thermodynamics and reaction kinetics as the major tools. A thorough treatment of equilibrium is given, with many examples of acid/base, buffer, solubility, and complex ions. Entropy and free energy, electrochemistry, coordination compounds and a brief introduction to organic chemistry and nuclear chemistry are presented. Various analytical techniques used in modern chemistry are introduced. Descriptive chemistry of representative metallic and nonmetallic elements is included.
The Laboratory introduces experimental chemistry with examples from areas of kinetics, equilibrium, acid/base and buffer preparation, differential titration, electrochemistry, and qualitative analysis. Modern instrumental methods are used in some exercises.

GEOL 150 - Physical Geology (3.0 units) Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts, and have knowledge of elementary algebra concepts. Transfers to: UC, CSU This introductory course covers the principles of geology, with emphasis on Earth processes, and fulfills the physical science general education requirement. The course focuses on the internal structure and origin of the Earth and the processes that change and shape it. Earthquakes, volcanoes, oil, beaches, tsunamis, rocks, rivers, glaciers, plate tectonics, minerals, and continent and mountain building are among the topics that are explored.

GEOL 151 - Physical Geology Laboratory (1.0 units) Prerequisite/Corequisite: GEOL 150 Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts, and have knowledge of elementary algebra concepts. Transfers to: UC, CSU This lab engages students with a hands-on review of the principles presented in Geology 150 and their application to everyday life. Laboratory exercises will include but are not limited to the identification of minerals; igneous, metamorphic, and sedimentary rocks; topographic and geologic map exercises demonstrating the work of water, wind, ice, and gravity; and the effects of tectonic activity.

GEOG 101 - Introduction to Physical Geography (3.0 units) Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts and have a knowledge of elementary algebra concepts. Transfers to: UC, CSU This general education course introduces students to the natural processes that shape the earth.
Weather and climate, landforms and volcanoes, glaciers, rivers, and coastal phenomena are among the topics explored. This course is for any student interested in the physical processes that shape land masses.

GEOG 101L - Introduction to Physical Geography Laboratory (1.0 units) Prerequisite/Corequisite: GEOG 101 Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts, and have a knowledge of elementary algebra concepts. Transfers to: UC, CSU The physical geography laboratory is designed to acquaint students with the methods, techniques, and procedures used by geographers in the study and analysis of the physical environment. Students use maps, the Internet, and other tools to work with real-world geographic data. This course fulfills the general education lab requirement in physical sciences when taken with or after the Introduction to Physical Geography course (GEOG 101).

MATH 130 - Statistics (4.0 units) Prerequisite: Enrollment requires appropriate placement (based on high school GPA and/or other measures), or completion of a pre-statistics or an intermediate algebra course. Advisory: ENGL 101; READ 101 Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: MATH 130 or MATH 130H) This course is designed for students majoring in business, social sciences, and life sciences. This course provides an overview of descriptive and inferential statistics. Students learn to read, interpret, and present data in a well-organized way via a study of frequency distributions, graphs, measures of central tendency and variability, correlation, and linear regression. While discussing inferential statistics, students learn to make generalizations about populations, including probability, sampling techniques, confidence intervals, and hypothesis tests.
MATH 130H - Statistics Honors (4.0 units) Prerequisite: Enrollment requires appropriate placement (based on high school GPA and/or other measures), or completion of a pre-statistics or an intermediate algebra course and ENGL 101 Advisory: READ 101 Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: MATH 130 or MATH 130H) This course is designed for students majoring in business, social sciences, and life sciences. This course provides an overview of descriptive and inferential statistics. Students learn to read, interpret, and present data in a well-organized way via a study of frequency distributions, graphs, measures of central tendency and variability, correlation, and linear regression. While discussing inferential statistics, students learn to make generalizations about populations, including probability, sampling techniques, confidence intervals, and hypothesis tests. This course is intended for students who meet Honors Program requirements.

PSY 190 - Statistics for the Behavioral Sciences (4.0 units) Prerequisite: Enrollment requires appropriate placement (based on high school GPA and/or other measures), or completion of a pre-statistics or an intermediate algebra course. Advisory: ENGL 101 and the ability to read college-level texts. Transfers to: UC (*credit limit), CSU (*The UC will grant credit for only one of the following courses: MATH 130 or MATH 130H or PSY 190) This course provides an overview of the types of statistics that are important in the behavioral sciences. It is designed to teach students majoring in psychology, sociology, political science, and anthropology how to present and interpret experimental data. The course focuses on hypothesis testing and the statistics used to analyze assumptions, with topics including basic probability, measures of central tendency, measures of variance, sampling, and inferential statistics.
MATH 190 - Calculus I (4.0 units) Prerequisite: MATH 180 Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: MATH 170, MATH 190 or MATH 190H) MATH 190 is a semester course designed primarily for those students planning to pursue programs in engineering, mathematics, computer science, and physical sciences. This is the first course in differential and integral calculus of a single variable. It includes topics in functions, limits, and continuity, techniques and applications of differentiation and integration and the Fundamental Theorem of Calculus.

MATH 190H - Calculus I Honors (4.0 units) Prerequisite: MATH 180; ENGL 101 Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: MATH 170, MATH 190 or MATH 190H) MATH 190 is a semester course designed primarily for those students planning to pursue programs in engineering, mathematics, computer science, and physical sciences. This is the first course in differential and integral calculus of a single variable. It includes topics of functions, limits, and continuity, techniques and applications of differentiation and integration and the Fundamental Theorem of Calculus. This course is intended for students who meet Honors Program requirements.

MATH 170 - Elements of Calculus (4.0 units) Prerequisite: Enrollment requires appropriate placement (based on high school GPA and/or other measures) or completion of an intermediate algebra course. Advisory: ENGL 101 Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: MATH 170, MATH 190, or MATH 190H) This one-semester course focuses on the fundamentals of algebra-based calculus and its applications to the fields of business, economics, social sciences, biology, and technology.
Course topics include graphing of functions; applications of derivatives and integrals of functions including polynomials; rational, exponential, and logarithmic functions; multivariable derivatives; and differential equations.

Select one: CSU GE C1 or C2 - Arts or Humanities GE 3.0†
All honors courses have a prerequisite. † some classes may have higher units.
Select one: ARCH 103 ART 101, 104, 105, 105H, 106, 106H, 107, 108, 109, 110, 112, 113, 115, 117, 120, 121, 130, 135, 140 DANC 179, 179H, 199, 199H GDSN 110 MUS 101, 129, 130, 131, 132, 133, 135, 136 MUST 151, 152 PHTO 110, 130 THTR 101, 105, 105H, 110, 150
Select one: ANTH 104 ASL 101, 124, 201, 202 CHIN 101, 102 CHST 101, 146, 148, 148H, 150 EGSS 130 ENGL 126, 131 FR 101, 102, 201, 202 HIST 101, 102, 122, 131, 143, 143H, 144, 144H, 156, 157, 158, 159, 159H, 167, 170 HUM 110, 111, 125, 125H, 130, 140, 145 JAPN 101, 102 LIT 102, 102H, 112A, 112AH, 112B, 112BH, 114, 114H, 117, 117H, 130, 130H, 140, 140H, 141, 141H, 142, 142H, 144A, 144AH, 144B, 144BH, 145, 145H, 146A, 146AH, 146B, 146BH, 147, 147H, 148, 148H, 149, 149H PHIL 101, 101H, 102, 120, 122, 124, 126, 128, 128H, 135, 140 POLS 128, 128H, 150 SPAN 101, 101S, 102, 102S, 201, 201H, 202 SPCH 130, 132

Total Semester Units: 12.0†

3rd Semester

BIOL 200 - Principles of Biology 1 (Molecular and Cellular Biology) (CSU GE B2 and B3) M 5.0
BIOL 200 - Principles of Biology 1 (Molecular and Cellular Biology) (5.0 units) Prerequisite: CHEM 120 Transfers to: UC, CSU This course is first in a sequence of courses for undergraduate preparation for biology majors. The course covers principles and applications of prokaryotic and eukaryotic cell structure and function, biological molecules, homeostasis, cell reproduction and its controls, molecular genetics, classical/Mendelian genetics, cell metabolism including photosynthesis and respiration, and cellular communication. Additional areas of focus include evolution and ecology.
The laboratory portion of the course applies the processes of scientific inquiry and experimental design to the study of biological concepts focusing on observations, experimentation, record keeping, data collection and analysis, and presentation of outcomes. The course sequence also provides excellent preparation for students who intend to pursue post-graduate studies in the medical sciences. Select one: BIOL 120 / CHEM 140 / GEOL 150 / GEOL 151 / GEOG 101 / GEOG 101L / MATH 130 / MATH 130H / PSY 190 / MATH 190 / MATH 190H / MATH 170 M 4.0 Choose 15 units from the following: Biol 120/Chem 140/GEOL 150 and GEOL 151 or GEOG 101 and GEOG 101L/ Math 130 or Math 130H or Psy 190/ Math 190 or Math 190H/Math 170 BIOL 120 - Environmental Biology (3.0 units) Advisory:It is advised that students be able to engage in written composition at a college level, read college-level texts, and have a knowledge of elementary algebra concepts. Transfers to: UC, CSU In this course, students utilize basic biological concepts and an interdisciplinary approach to determine how to address environmental challenges. Topics may include ecosystem characteristics and functions, population dynamics, energy and material resource use, pollution, and alternative energy sources. Because the course takes up the social, political, and economic implications of environmental decisions, it is intended for students from many disciplines, including non-STEM disciplines. This course fulfills the general education requirement for life sciences majors. CHEM 140 - General Chemistry II (5.0 units) Prerequisite: CHEM 130 Advisory: ENGL 101; MATH 180 Transfers to:UC, CSU CHEM 140 is a continuation of CHEM 130. Theory and techniques of elementary physical chemistry are stressed. Emphasis is placed on the dynamics of chemical change using thermodynamics and reaction kinetics as the major tools. A thorough treatment of equilibrium is given, with many examples of acid/base, buffer, solubility, and complex ions. 
Entropy and free energy, electrochemistry, coordination compounds and a brief introduction to organic chemistry and nuclear chemistry are presented. Various analytical techniques used in modern chemistry are introduced. Descriptive chemistry of representative metallic and nonmetallic elements is included. The Laboratory introduces experimental chemistry with examples from areas of kinetics, equilibrium, acid/base and buffer preparation, differential titration, electrochemistry, and qualitative analysis. Modern instrumental methods are used in some exercises. GEOL 150 - Physical Geology (3.0 units) Advisory:It is advised that students be able to engage in written composition at a college level, read college-level texts, and have knowledge of elementary algebra concepts. Transfers to: UC, CSU This introductory course covers the principles of geology, with emphasis on Earth processes, and fulfills the physical science general education requirement. The course focuses on the internal structure and origin of the Earth and the processes that change and shape. Earthquakes, volcanoes, oil, beaches, tsunamis, rocks, rivers, glaciers, plate tectonics, minerals, and continent and mountain building are among the topics that are explored. GEOL 151 - Physical Geology Laboratory (1.0 units) Prerequisite/Corequisite: GEOL 150 Advisory:It is advised that students be able to engage in written composition at a college level, read college-level texts, and have knowledge of elementary algebra concepts. Transfers to: UC, CSU This lab engages students with a hands-on review of the principles presented in Geology 150 and their application to everyday life. 
Laboratory exercises will include but are not limited to the identification of minerals; igneous, metamorphic, and sedimentary rocks; topographic and geologic map exercises demonstrating the work of water, wind, ice, and gravity; and the effects of tectonic GEOG 101 - Introduction to Physical Geography (3.0 units) Advisory:It is advised that students be able to engage in written composition at a college level, read college-level texts and have a knowledge of elementary algebra concepts. Transfers to: UC, CSU This general education course introduces students to the natural processes that shape the earth. Weather and climate, landforms and volcanoes, glaciers, rivers, and coastal phenomena are among the topics explored. This course is for any students interested in the physical processes that shape land masses. GEOG 101L - Introduction to Physical Geography Laboratory (1.0 units) Prerequisite/Corequisite: GEOG 101 Advisory:It is advised that students be able to engage in written composition at a college level, read college-level texts, and have a knowledge of elementary algebra concepts. Transfers to: UC, CSU The physical geography laboratory is designed to acquaint students with the methods, techniques, and procedures used by geographers in the study and analysis of the physical environment. Students use maps, the Internet, and other tools to work with real-world geographic data. This course fulfills the general education lab requirement in physical sciences when taken with or after the Introduction to Physical Geography course (GEOG 101). MATH 130 - Statistics (4.0 units) Prerequisite:Enrollment requires appropriate placement (based on high school GPA and/or other measures), or completion of a pre-statistics or an intermediate algebra course. 
Advisory: ENGL 101; READ 101 Transfers to: UC (*crdit limit),CSU (*Students will receive credit for only one of the following courses: MATH 130 or MATH 130H) This course is designed for students majoring in business, social sciences, and life sciences. This course provides an overview of descriptive and inferential statistics. Students learn to read, interpret, and present data in a well-organized way via a study of frequency distributions, graphs, measures of central tendency and variability, correlation, and linear regression. While discussing inferential statistics, students learn to make generalizations about populations, including probability, sampling techniques, confidence intervals, and hypothesis tests. MATH 130H - Statistics Honors (4.0 units) Prerequisite:Enrollment requires appropriate placement (based on high school GPA and/or other measures), or completion of a pre-statistics or an intermediate algebra course and ENGL 101 Advisory: READ 101 Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: MATH 130 or MATH 130H) This course is designed for students majoring in business, social sciences, and life sciences. This course provides an overview of descriptive and inferential statistics. Students learn to read, interpret, and present data in a well-organized way via a study of frequency distributions, graphs, measures of central tendency and variability, correlation, and linear regression. While discussing inferential statistics, students learn to make generalizations about populations, including probability, sampling techniques, confidence intervals, and hypothesis tests. This course is intended for students who meet Honors Program requirements. PSY 190 - Statistics for the Behavioral Sciences (4.0 units) Prerequisite:Enrollment requires appropriate placement (based on high school GPA and/or other measures), or completion of a pre-statistics or an intermediate algebra course. 
Advisory:ENGL 101 and the ability to read college-level texts. Transfers to: UC (*credit limit), CSU (*The UC will grant credit for only one of the following courses: MATH 130 or MATH 130H or PSY 190) This course provides an overview of the types of statistics that are important in the behavioral sciences. It is designed to teach students majoring in psychology, sociology, political science, and anthropology how to present and interpret experimental data. The course focuses on hypothesis testing and the statistics used to analyze assumptions, with topics including basic probability, measures of central tendency, measures of variance, sampling, and inferential statistics. MATH 190 - Calculus I (4.0 units) Prerequisite:MATH 180 Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: MATH 170, MATH 190 or MATH 190H) MATH 190 is a semester course designed primarily for those students planning to pursue programs in engineering, mathematics, computer science, and physical sciences. This is the first course in differential and integral calculus of a single variable. It includes topics in functions, limits, and continuity, techniques and applications of differentiation and integration and the Fundamental Theorem of Calculus. MATH 190H - Calculus I Honors (4.0 units) Prerequisite: MATH 180; ENGL 101 Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: MATH 170, MATH 190 or MATH 190H) MATH 190 is a semester course designed primarily for those students planning to pursue programs in engineering, mathematics, computer science, and physical sciences. This is the first course in differential and integral calculus of a single variable. It includes topics of functions, limits, and continuity, techniques and applications of differentiation and integration and the Fundamental Theorem of Calculus.This course is intended for students who meet Honors Program requirements. 
MATH 170 - Elements of Calculus (4.0 units) Prerequisite: Enrollment requires appropriate placement (based on high school GPA and/or other measures) or completion of an intermediate algebra course. Advisory: ENGL 101 Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: MATH 170, MATH 190, or MATH 190H) This one-semester course focuses on the fundamentals of algebra-based calculus and its applications to the fields of business, economics, social sciences, biology, and technology. Course topics include graphing of functions; applications of derivatives and integrals of functions including polynomials; rational, exponential, and logarithmic functions; multivariable derivatives; and differential equations.

CSU GE B1 - Physical Sciences GE 3.0†
All honors courses have a prerequisite. † some classes may have higher units. All labs or courses with labs are indicated by an '*'.
Select one: ASTR 110, 110H, 112*, 137* CHEM 110*, 120*, 130*, 140*, 230*, 231* GEOG 101, 101L* GEOL 150, 151*, 152, 152L* PHY 120*, 150*, 160*, 211*, 212*, 213*
CHEM 110*, 120*, 130*, 140*, 230*, 231* GEOG 101, 101L* GEOL 150, 151* PHY 120*, 150*, 160*, 211*, 212*, 213*

CSU GE F - Ethnic Studies GE 3.0
Students who started at Rio Hondo College beginning in Fall 2021 or later and returning students who have not maintained continuous enrollment will be required to complete a course in Area F. Students who started at Rio Hondo College prior to Fall 2021 and have maintained continuous enrollment will not be required to complete a course in Area F (instead, they will complete 9 units from at least two disciplines in Area D). Please see a counselor for details.
Select one: CHST 101 EGSS 110

Total Semester Units: 15.0†

4th Semester

BIOL 201 - Principles of Biology 2 (Diversity and Ecology) M 5.0
BIOL 201 - Principles of Biology 2 (Diversity and Ecology) (5.0 units) Prerequisite: BIOL 200 Transfers to: UC, CSU This course continues the sequence of undergraduate preparation for biology majors. The course is a survey of the diversity of unicellular and multicellular life on earth, focusing on the relationships between structure and function, as well as evolutionary adaptations to their environments. Topics deal with classification, development, evolutionary relationships, and ecological functions of living organisms, inclusive of prokaryotes, fungi, protists, plants, and animals. Laboratories emphasize life forms, experimentation, and dissections. Field trips are used to examine organisms in their natural settings.

Select one: BIOL 120 / CHEM 140 / GEOL 150 / GEOL 151 / GEOG 101 / GEOG 101L / MATH 130 / MATH 130H / PSY 190 / MATH 190 / MATH 190H / MATH 170 M 4.0
Choose 15 units from the following: BIOL 120 / CHEM 140 / GEOL 150 and GEOL 151 or GEOG 101 and GEOG 101L / MATH 130 or MATH 130H or PSY 190 / MATH 190 or MATH 190H / MATH 170

BIOL 120 - Environmental Biology (3.0 units) Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts, and have a knowledge of elementary algebra concepts. Transfers to: UC, CSU In this course, students utilize basic biological concepts and an interdisciplinary approach to determine how to address environmental challenges. Topics may include ecosystem characteristics and functions, population dynamics, energy and material resource use, pollution, and alternative energy sources. Because the course takes up the social, political, and economic implications of environmental decisions, it is intended for students from many disciplines, including non-STEM disciplines.
This course fulfills the general education requirement for life sciences majors.

CHEM 140 - General Chemistry II (5.0 units) Prerequisite: CHEM 130 Advisory: ENGL 101; MATH 180 Transfers to: UC, CSU CHEM 140 is a continuation of CHEM 130. Theory and techniques of elementary physical chemistry are stressed. Emphasis is placed on the dynamics of chemical change using thermodynamics and reaction kinetics as the major tools. A thorough treatment of equilibrium is given, with many examples of acid/base, buffer, solubility, and complex ions. Entropy and free energy, electrochemistry, coordination compounds, and a brief introduction to organic chemistry and nuclear chemistry are presented. Various analytical techniques used in modern chemistry are introduced. Descriptive chemistry of representative metallic and nonmetallic elements is included. The laboratory introduces experimental chemistry with examples from areas of kinetics, equilibrium, acid/base and buffer preparation, differential titration, electrochemistry, and qualitative analysis. Modern instrumental methods are used in some exercises.

GEOL 150 - Physical Geology (3.0 units) Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts, and have knowledge of elementary algebra concepts. Transfers to: UC, CSU This introductory course covers the principles of geology, with emphasis on Earth processes, and fulfills the physical science general education requirement. The course focuses on the internal structure and origin of the Earth and the processes that change and shape it. Earthquakes, volcanoes, oil, beaches, tsunamis, rocks, rivers, glaciers, plate tectonics, minerals, and continent and mountain building are among the topics that are explored.
GEOL 151 - Physical Geology Laboratory (1.0 units) Prerequisite/Corequisite: GEOL 150 Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts, and have knowledge of elementary algebra concepts. Transfers to: UC, CSU This lab engages students with a hands-on review of the principles presented in Geology 150 and their application to everyday life. Laboratory exercises will include but are not limited to the identification of minerals; igneous, metamorphic, and sedimentary rocks; topographic and geologic map exercises demonstrating the work of water, wind, ice, and gravity; and the effects of tectonic forces.

GEOG 101 - Introduction to Physical Geography (3.0 units) Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts, and have a knowledge of elementary algebra concepts. Transfers to: UC, CSU This general education course introduces students to the natural processes that shape the earth. Weather and climate, landforms and volcanoes, glaciers, rivers, and coastal phenomena are among the topics explored. This course is for any students interested in the physical processes that shape land masses.

GEOG 101L - Introduction to Physical Geography Laboratory (1.0 units) Prerequisite/Corequisite: GEOG 101 Advisory: It is advised that students be able to engage in written composition at a college level, read college-level texts, and have a knowledge of elementary algebra concepts. Transfers to: UC, CSU The physical geography laboratory is designed to acquaint students with the methods, techniques, and procedures used by geographers in the study and analysis of the physical environment. Students use maps, the Internet, and other tools to work with real-world geographic data. This course fulfills the general education lab requirement in physical sciences when taken with or after the Introduction to Physical Geography course (GEOG 101).
CSU GE E - Lifelong Learning/Self Development GE 3.0
Select one: ANTH 110 HUSR 123 CD 106 EGSS 130 KIN 159 (F'21), 170 (F'21), 190 (F'21), 191, 192, 196 COUN 101 (F’11), 104, 151 (F’98) NUTR 110 DD 214 (see counselor for details) PHIL 122 PSY 112, 121 EDEV 101 (F’15), 151 SOC 105, 110

Total Semester Units: 12.0

5th Semester

Select one: ECON 102 / ECON 102H (CSU GE D) M 3.0
ECON 102 - Principles of Microeconomics (3.0 units) Prerequisite: Enrollment requires appropriate placement (based on high school GPA and/or other measures), or completion of an elementary algebra course. Advisory: It is advised that students be able to engage in written composition at a college level and read college-level texts.
Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: ECON 102 or ECON 102H) This introductory course in economic analysis of markets has students learn how markets work to coordinate consumers and producers, the various causes of the failure of free markets, and policies used to correct or regulate market behavior. The course is intended for economics and business majors as well as to satisfy General Education (GE) requirements, and may be taken prior to ECON 101.

ECON 102H - Principles of Microeconomics Honors (3.0 units) Prerequisite: ENGL 101; Enrollment requires appropriate placement (based on high school GPA and/or other measures), or completion of an intermediate algebra course. Advisory: It is advised that students be able to read college-level texts. Transfers to: UC (*credit limit), CSU (*Students will receive credit for only one of the following courses: ECON 102 or ECON 102H) This introductory course in economic analysis of markets has students learn how markets work to coordinate consumers and producers, the various causes of the failure of free markets, and policies used to correct or regulate market behavior. Students complete a research project on an actual economic policy or a theoretical view. The course is intended for economics and business majors as well as to satisfy General Education (GE) requirements, and may be taken prior to ECON 101 by any student who has completed ENGL 101 with a “C” or better. This course is intended for students who meet Honors Program requirements.
Select one: PHY 211 / PHY 150 M 4.0
PHY 211 - Physics for Scientists and Engineers - I (4.0 units) Prerequisite: MATH 190 or MATH 190H Transfers to: UC (*credit limit), CSU (*Students will receive credit for one physics series: PHY 150 and PHY 160 or PHY 211, PHY 212, and PHY 213) This course is the first of a three-semester sequence designed for students transferring to four-year institutions with majors in the sciences and engineering. Topics covered include kinematics, dynamics, energy, work, momentum, and conservation principles.

PHY 150 - General Physics I (4.0 units) Prerequisite: MATH 175 Transfers to: UC (*credit limit), CSU (*Students will receive credit for one physics series: PHY 150 and PHY 160 or PHY 211, PHY 212, and PHY 213) This course is the first of a two-semester, trigonometry-based physics sequence and is designed for students transferring to a four-year institution and planning careers in health professional fields such as medicine, dentistry, veterinary science, pharmacy, and optometry, as well as those students in engineering technology and architecture. Topics include kinematics, dynamics, energy, work, momentum, conservation principles, rotational motion, simple harmonic motion, fluids, and thermodynamics. Students majoring in the biological sciences should consult a counselor as to whether this course satisfies the general preparation requirements for their major at their intended transfer university.

CSU GE A1 - Oral Communication GE 3.0
Note: All honors courses have a prerequisite.
Select one: SPCH 100, 101, 101H, 120, 140

CSU GE C2 - Humanities GE 3.0†
All honors courses have a prerequisite. † some classes may have higher units.
Select one: ANTH 104 ASL 101, 124, 201, 202 CHIN 101, 102 CHST 101, 146, 148, 148H, 150 EGSS 130 ENGL 126, 131 FR 101, 102, 201, 202 HIST 101, 102, 122, 131, 143, 143H, 144, 144H, 156, 157, 158, 159, 159H, 167, 170 HUM 110, 111, 125, 125H, 130, 140, 145 JAPN 101, 102 LIT 102, 102H, 112A, 112AH, 112B, 112BH, 114, 114H, 117, 117H, 130, 130H, 140, 140H, 141, 141H, 142, 142H, 144A, 144AH, 144B, 144BH, 145, 145H, 146A, 146AH, 146B, 146BH, 147, 147H, 148, 148H, 149, 149H PHIL 101, 101H, 102, 120, 122, 124, 126, 128, 128H, 135, 140 POLS 128, 128H, 150 SPAN 101, 101S, 102, 102S, 201, 201H, 202 SPCH 130, 132

Total Semester Units: 13.0†

6th Semester

Select one: PHY 213 / PHY 160 M 4.0
PHY 213 - Physics for Scientists and Engineers - III (4.0 units) Prerequisite: PHY 211 and MATH 191 Advisory: MATH 250 Transfers to: UC (*credit limit), CSU (*Students will receive credit for one physics series: PHY 150 and PHY 160 or PHY 211, PHY 212, and PHY 213) This course is the third of a three-semester sequence designed for students transferring to four-year institutions with majors in the sciences and engineering. Topics covered include electric fields, electric potential, current, circuits, magnetic fields, Gauss' law, Ampere's law, Maxwell's equations, induction, and electromagnetic waves.

PHY 160 - General Physics II (4.0 units) Prerequisite: PHY 150 Transfers to: UC (*credit limit), CSU (*Students will receive credit for one physics series: PHY 150 and PHY 160 or PHY 211, PHY 212, and PHY 213) This course is the second of a two-semester, trigonometry-based physics sequence and is designed for students transferring to a four-year institution with majors in health professional fields such as medicine, dentistry, veterinary science, pharmacy, and optometry, as well as those students in engineering technology and architecture. Topics include electricity and magnetism, oscillations, waves, optics, and modern physics.
Students majoring in the biological sciences should consult a counselor as to whether this course satisfies the general preparation requirements for their major at their intended transfer university.

Select one: BIOL 120 / CHEM 140 / GEOL 150 / GEOL 151 / GEOG 101 / GEOG 101L / MATH 130 / MATH 130H / PSY 190 / MATH 190 / MATH 190H / MATH 170 M 4.0
Choose 15 units from the following: BIOL 120 / CHEM 140 / GEOL 150 and GEOL 151 or GEOG 101 and GEOG 101L / MATH 130 or MATH 130H or PSY 190 / MATH 190 or MATH 190H / MATH 170

CSU GE A3 - Critical Thinking GE 3.0†
All honors courses have a prerequisite. † some classes may have higher units.
Select one: ENGL 201, 201H; PHIL 110, 110H, 112, 112H, 115; READ 101; SPCH 140

Total Semester Units: 11.0†

Total Units for Environmental Science AS-T program (Transfer to CSU): 77.5†
The Importance of Visual Learning in Math

There are so many different kinds of learning styles that it's impractical for math educators to rely on just one. In fact, research shows that there are 70+ learning styles, with visual, aural, verbal, and kinesthetic (aka VARK) as the main sensory approaches. Interestingly, studies show that 65% of the population are visual learners. This is why educators who want their students, especially children, to engage more deeply with math content should understand the needs of visual learners.

What is Visual Learning?

Visual learning is the interaction with and processing of knowledge through graphics. This includes drawings, images, charts, graphs, maps, and other types of visual aids. To engage visual learners, educators can use chalkboards, whiteboards, screens, projectors, and other mediums for displaying visual information to help students learn and remember what they see. While visual learners can learn through other means as well, they learn best through a “show me” approach. For example, they are typically better at watching someone fix a flat tire than reading instructions in a printed manual. Even directions, when spoken, can be difficult for a visual learner to understand when compared to looking at a map.

[Read: How to Get Kids Excited About Math]

So, how does this translate into the math class? Consider teaching addition. For a child who is a visual learner, showing a diagram with three apples in one hand and three in the other allows the child to visually count each apple until they reach the number six. This can be a more effective way for them to learn than a teacher verbally explaining the concept of three plus three equals six.

Visual Learning and Math are Inseparable

In a scientific paper released in the Journal of Applied & Computational Mathematics, researchers showed that “mathematical thinking is grounded in visual processing.” According to the lead researcher, Dr.
Jo Boaler, a professor at the Stanford Graduate School of Education, when you try to solve a mathematical problem, brain activity occurs in several regions of the brain. Two of these regions are the ventral and dorsal pathways, which are visual pathways. And this activity happens in both children and adults. The takeaway is that even if we don't do it actively, on some level our brains will subconsciously render the numbers we see into images. This allows us to understand space and quantity much better. So, when children are learning math, their brains are working under the hood, trying to visualize the operations.

Another study observed activity in the brain region associated with finger representation (that is, counting on your fingers) and perception when we look at mathematical calculations. Many kids naturally use their fingers as visual aids when performing arithmetic. What the study showed was that our brain does this type of “finger counting” anyway, representing the calculations as fingers without us even realizing it.

All this means is that visuals can transform the math class for many students. A visual provides a shift in perspective for some students, allowing them to understand math problems differently. While one student can hear numbers and math problems being said out loud and grasp the underlying math concepts, their classmate might need to look at the same problems visually before they can truly understand the “why” behind the problem. These visual learners then translate the visual representations, such as the six apples, back into numbers in their brain, putting them on par with their non-visual peers.

[Read: Teaching Math Through Your Child's Passions]

Other Ways Visual Learning can Benefit the Math Class

There are more reasons why visual learning is important in math, even if a student is not a visual learner. Here are a few of them.
Better Knowledge Retention
For many, visuals are more memorable than text or auditory learning. Since the information in visuals is presented in bite-sized chunks, it is easier to process and store. So, for example, when online education platforms break down complex topics into short videos, it allows students to digest the topic in short, manageable sessions instead of reading a long chapter in text format.
Students Think More Creatively
With visual learning, students become more creative in their problem-solving. And they carry on this lesson in other aspects of their daily lives outside the classroom. For example, think of the last time you misplaced something around the house. You might have tried to tackle the problem visually by closing your eyes and imagining where the item might be. As you visualize in your mind the steps you took the last time you remember having, let’s say, your keys, you’re experiencing visual problem solving.
Visual Learning is Engaging
When visuals are made to be a part of the process, learning can be fun. This can help students stay engaged longer, which is important in this day and age where attention spans are at an all-time low. Studies have shown that the average attention span is currently eight seconds. It’s easy to lose a student’s attention if the lesson doesn’t start with attention-grabbing and relatable visuals.
[Read: How to Teach Your Child to Love Math]
Learning math through visualization gives students a deeper understanding of mathematical concepts and their practical applications. Visuals are an important aspect of math education, and this applies to everyone, not just visual learners. This is not to say that visual learning should replace all other types of learning in the classroom; rather, as we’ve seen, it should be used to increase engagement, understanding, and retention.
About BYJU’S FutureSchool
As a global education platform, we at BYJU’S FutureSchool understand the importance of visual learning in math.
Our research-based approach to visual math combines one-to-one attention with a hands-on approach to teaching children aged 6 to 14. Even our group classes are intimate, consisting of only four students per teacher. This ensures that everyone gets the attention and guidance they need.
BYJU’S FutureSchool’s interactive platform was built to be engaging and intuitive. Our math mission is to foster students’ confidence in math. We use visuals to drive concepts home, rather than simply asking students to memorize formulas without knowing why. Our storytelling approach to teaching enables students to visualize what they learn. We also celebrate mistakes as learning opportunities. This helps to mitigate the fear of failure while also encouraging students to embrace taking chances.
{"url":"https://www.byjusfutureschool.com/blog/the-importance-of-visual-learning-in-math/","timestamp":"2024-11-10T04:37:28Z","content_type":"text/html","content_length":"174025","record_id":"<urn:uuid:d7e61636-8af4-4000-876a-644a803dab56>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00844.warc.gz"}
How is mathematics useful to a young child? | The Polymath Blog
It might feel like mathematics isn’t very relevant to young kids. They don’t need to pay bills or calculate taxes. They don’t really do any baking or cooking, at least not the bits that involve math. They don’t have to plan out their day, and they definitely don’t need algebra, calculus, trigonometry, or statistics... Language, on the other hand, feels more relevant as children are learning to communicate with others, but this idea that math isn’t used by young children is troublesome. There are so many ways in which math underpins the way that children function every day, showing how useful and important math is to young children. Let’s talk about a couple of them...
Children seem to have an intrinsic drive to establish fairness. Fairness between friends, definitely between siblings. If one child is having 2 pieces of chocolate, of course the other must have the same! But how do children know what’s fair? From a very young age, children use visuals to judge whether sharing is fair or not - do the two portions look the same size? That’s math! Comparison of size is the foundation of math, but we can use numbers with kids to help them understand exactly what’s fair, because sometimes, two portions might not look the same but may be equal. For example, three large Lego pieces would easily be bigger than 20 smaller pieces, but there’s far more you can do with 20 smaller pieces of Lego. An understanding of basic counting and numbers is fundamental to children being able to share with friends and family.
Making plans and understanding time
“We’ll see your friends next week!” doesn’t make a lot of sense to children until they understand what time means. They need to know how many days are in a week when you tell them they’ll see their friend next week. They need to know how long 5 minutes is when you tell them they need to brush their teeth in 5 minutes!
Once they understand different periods of time, then it’s about planning the day and week.
Lego, Duplo... how many blocks do I need to build this?
Most kids play with some kind of block or brick toy growing up, and children definitely use math for this activity! Young children begin to make patterns and shapes with blocks and bricks, and this requires both arithmetic and geometry. Children will start counting the number of bricks they need for each shape or colour. They will learn mental rotation as they begin to understand the dynamics of 3D shapes from different angles. Children’s ability to do this will strengthen the more they practice these skills, and conversely, a stronger grasp on these math skills will improve a child’s ability to be creative with these sorts of toys.
Sports and video games
Much like sharing, competition is important to children. To be able to keep count of how many points a team has scored in a sports game is a key arithmetic skill. When it comes to video games, creative games like Minecraft require children to know how many blocks fit in a stack, how many items they can carry, how many of a resource they need to build something, and so on. Math is incredibly relevant when it comes to all kinds of games that kids play.
So, how is mathematics useful and important, or even relevant, to a young child? These are just some of the ways; the list really does go on. Helping your child develop a strong foundation of mathematics through everyday uses such as these, and letting them know that math is important and exciting, is so useful to their day-to-day lives, both now and in the future.
{"url":"https://polymath.how/blog/2022-02-math-useful-children","timestamp":"2024-11-07T17:19:38Z","content_type":"text/html","content_length":"29883","record_id":"<urn:uuid:10129a84-df35-4ae5-adf6-ea582a121347>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00829.warc.gz"}
POV-Ray: Newsgroups: povray.programming: camera direction vs. angle
Re: camera direction vs. angle
Le_Forgeron <lef### [at] free>
> Then, the relation between direction, right and angle is as follows:
> direction.length := (right.length / 2) / tan(angle/2)
> How you convert a focal length to an angle of view is left as an
> exercise for the reader:
> The now historical 24mm x 36mm film format has a diagonal of 43mm, and a
> "normal" lens has a 50mm focal length, for an angle of about 53° on the
> diagonal (whereas povray uses the angle on the horizontal)
> tan(53°/2) = sqrt((right.length/2)² + (up.length/2)²) / (direction.length)
> the interesting part is reaching 53° from 43mm of film diagonal and 50mm
> objective. (some magic is needed, such as a 43/50 ratio on the objective
> length)
Thanks for explaining, you helped me a lot.
Post a reply to this message
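As a sanity check of the quoted relations, here is a small Python sketch; the function names are mine for illustration, not POV-Ray identifiers:

```python
import math

def direction_length(right_length, angle_deg):
    """Quoted POV-Ray relation: |direction| = (|right| / 2) / tan(angle / 2)."""
    return (right_length / 2) / math.tan(math.radians(angle_deg) / 2)

def angle_of_view(focal_mm, dimension_mm):
    """Angle of view (degrees) across a film dimension for a given focal length."""
    return math.degrees(2 * math.atan(dimension_mm / (2 * focal_mm)))

# The 24x36 mm frame has a ~43.27 mm diagonal. A true 50 mm lens gives a
# diagonal angle of about 46.8 degrees; the often-quoted ~53 degrees only
# appears if the focal length is shrunk by the 43/50 ratio mentioned above,
# i.e. if it equals the film diagonal itself.
film_diagonal = math.hypot(24, 36)                  # ~43.27 mm
print(angle_of_view(50, film_diagonal))             # ~46.8 degrees
print(angle_of_view(film_diagonal, film_diagonal))  # ~53.13 degrees
```

This makes the "magic" in the quoted post concrete: 53° corresponds to a focal length equal to the diagonal, not to 50 mm.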
{"url":"http://news.povray.org/povray.programming/message/%3Cweb.522af411fe18e83d9b0432910%40news.povray.org%3E/#%3Cweb.522af411fe18e83d9b0432910%40news.povray.org%3E","timestamp":"2024-11-09T10:39:03Z","content_type":"text/html","content_length":"8884","record_id":"<urn:uuid:a679908b-837d-49d9-93ac-25382a1f0220>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00537.warc.gz"}
Measuring Angle of Intersection This example shows how to measure the angle and point of intersection between two beams using bwtraceboundary, which is a boundary tracing routine. A common task in machine vision applications is hands-free measurement using image acquisition and image processing techniques. Step 1: Load Image Read in gantrycrane.png and draw arrows pointing to two beams of interest. It is an image of a gantry crane used to assemble a bridge. RGB = imread("gantrycrane.png"); text(size(RGB,2),size(RGB,1)+15,"Image courtesy of Jeff Mather", ... line([300 328],[85 103],Color=[1 1 0]); line([268 255],[85 140],Color=[1 1 0]); text(150,72,"Measure the angle between these beams",Color="y", ... Step 2: Extract the Region of Interest Crop the image to obtain only the beams of the gantry crane chosen earlier. This step will make it easier to extract the edges of the two metal beams. You can obtain the coordinates of the rectangular region using pixel information displayed by imtool. start_row = 34; start_col = 208; cropRGB = RGB(start_row:163,start_col:400,:); Store (x,y) offsets for later use; subtract 1 so that each offset will correspond to the last pixel before the region of interest. offsetX = start_col-1; offsetY = start_row-1; Step 3: Threshold the Image The bwtraceboundary function expects objects of interest to be white in a binary image, so convert the image to black and white and take the image complement. I = im2gray(cropRGB); BW = imbinarize(I); BW = ~BW; Step 4: Find Initial Point on Each Boundary The bwtraceboundary function requires that you specify a single point on a boundary. This point is used as the starting location for the boundary tracing process. To extract the edge of the lower beam, pick a column in the image and inspect it until a transition from a background pixel to the object pixel occurs. Store this location for later use in bwtraceboundary routine. Repeat this procedure for the other beam, but this time tracing horizontally. 
dim = size(BW); % Horizontal beam col1 = 4; row1 = find(BW(:,col1), 1); % Angled beam row2 = 12; col2 = find(BW(row2,:), 1); Step 5: Trace the Boundaries The bwtraceboundary routine is used to extract (X, Y) locations of the boundary points. In order to maximize the accuracy of the angle and point of intersection calculations, it is important to extract as many points belonging to the beam edges as possible. You should determine the number of points experimentally. Since the initial point for the horizontal bar was obtained by scanning from north to south, it is safest to set the initial search step to point towards the outside of the object, i.e. "North". boundary1 = bwtraceboundary(BW,[row1, col1],"N",8,70); % Set the search direction to counterclockwise, in order to trace downward boundary2 = bwtraceboundary(BW,[row2, col2],"E",8,90,"counter"); hold on % Apply offsets in order to draw in the original image Step 6: Fit Lines to the Boundaries Although (X,Y) coordinates pairs were obtained in the previous step, not all of the points lie exactly on a line. Which ones should be used to compute the angle and point of intersection? Assuming that all of the acquired points are equally important, fit lines to the boundary pixel locations. The equation for a line is y = [x 1]*[a; b]. You can solve for parameters 'a' and 'b' in the least-squares sense by using polyfit. ab1 = polyfit(boundary1(:,2),boundary1(:,1),1); ab2 = polyfit(boundary2(:,2),boundary2(:,1),1); Step 7: Find the Angle of Intersection Use the dot product to find the angle. vect1 = [1 ab1(1)]; % Create a vector based on the line equation vect2 = [1 ab2(1)]; dp = dot(vect1, vect2); Compute vector lengths length1 = sqrt(sum(vect1.^2)); length2 = sqrt(sum(vect2.^2)); Obtain the larger angle of intersection in degrees angle = 180-acos(dp/(length1*length2))*180/pi Step 8: Find the Point of Intersection Solve the system of two equations in order to obtain (X,Y) coordinates of the intersection point. 
intersection = [1 ,-ab1(1); 1, -ab2(1)] \ [ab1(2); ab2(2)]; Apply offsets in order to compute the location in the original uncropped image intersection = intersection + [offsetY; offsetX] intersection = 2×1 Step 9: Plot the Results Draw an "X" at the point of intersection inter_x = intersection(2); inter_y = intersection(1); Annotate the image with the angle between the beams and the (x,y) coordinates of the intersection point. angleString = [sprintf("%1.3f",angle)+"{\circ}"]; text(inter_x-80,inter_y-25,angleString, ... intersectionString = sprintf("(%2.1f,%2.1f)",inter_x,inter_y); See Also bwboundaries | imbinarize | bwtraceboundary | polyfit
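The MATLAB listing above is fragmentary in places; as a rough, self-contained sketch of Steps 6-8 in NumPy (using made-up boundary points instead of boundaries traced from an image):

```python
import numpy as np

# Hypothetical boundary points for two "beams"; in a real run these would
# come from a boundary tracer such as bwtraceboundary.
xs = np.arange(0, 50, 1.0)
b1 = np.column_stack([xs, 0.2 * xs + 3.0])    # nearly horizontal beam
b2 = np.column_stack([xs, 1.5 * xs - 10.0])   # angled beam

# Step 6: least-squares line fit y = a*x + c for each boundary
a1, c1 = np.polyfit(b1[:, 0], b1[:, 1], 1)
a2, c2 = np.polyfit(b2[:, 0], b2[:, 1], 1)

# Step 7: angle between direction vectors (1, a) via the dot product,
# taking the larger of the two supplementary angles as in the example
v1, v2 = np.array([1.0, a1]), np.array([1.0, a2])
cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
angle = 180.0 - np.degrees(np.arccos(cosang))

# Step 8: intersection of y = a1*x + c1 and y = a2*x + c2,
# rearranged as [[1, -a1], [1, -a2]] @ [y, x] = [c1, c2]
y_i, x_i = np.linalg.solve(np.array([[1.0, -a1], [1.0, -a2]]),
                           np.array([c1, c2]))
print(angle, (x_i, y_i))
```

The 2x2 linear solve plays the role of MATLAB's backslash operator in the listing above.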
{"url":"https://ww2.mathworks.cn/help/images/measuring-angle-of-intersection.html","timestamp":"2024-11-09T04:48:13Z","content_type":"text/html","content_length":"79785","record_id":"<urn:uuid:aa88312a-7387-42b7-9348-747b01d09e70>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00471.warc.gz"}
The Everly Equipment Company's flange-lipping machine was purchased 5 years ago for $100,000. It had an...
The Everly Equipment Company's flange-lipping machine was purchased 5 years ago for $100,000. It had an expected life of 10 years when it was bought and is being depreciated by the straight-line method by $10,000 per year. As the older flange-lippers are robust and useful machines, this one can be sold for $20,000 at the end of its useful life. A new high-efficiency, digital-controlled flange-lipper can be purchased for $140,000, including installation costs. During its 5-year life, it will reduce cash operating expenses by $55,000 per year, although it will not affect sales. At the end of its useful life, the high-efficiency machine is estimated to be worthless. MACRS depreciation will be used, and the machine will be depreciated over its 3-year class life rather than its 5-year economic life, so the applicable depreciation rates are 33.33%, 44.45%, 14.81%, and 7.41%. The old machine can be sold today for $55,000. The firm's tax rate is 35%, and the appropriate WACC is 12%. The old machine was purchased 5 years ago for $100,000 and uses the straight-line method of depreciation. Annual depreciation = $10,000 Current book value of old machine = $100,000 - (5 × $10,000) = $50,000 Book value of the old machine is $50,000 and its market value is $55,000. After-tax net sale proceeds = $50,000 + ($55,000 - $50,000) × (1 - 35%) = $50,000 + $3,250 = $53,250 After-tax net sale proceeds from the sale of the old machine are $53,250. Now, the annual cash flow and NPV of the new machine at a 12% discount rate are calculated in Excel, and a screen shot is provided below: NPV of the project is $81,538.73. Since the NPV of the project is a positive value, the project should be accepted.
answered by: anonymous
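The Excel screenshot itself is not reproduced above. A sketch of the cash-flow arithmetic behind the stated NPV might look like the following; note that, mirroring the posted answer, it ignores the old machine's forgone depreciation and forgone end-of-life salvage:

```python
# Replacement-analysis sketch reproducing the posted answer's approach.
tax, wacc = 0.35, 0.12
cost_new = 140_000
proceeds_old = 50_000 + (55_000 - 50_000) * (1 - tax)   # $53,250 after tax
initial_outlay = cost_new - proceeds_old                 # $86,750

macrs = [0.3333, 0.4445, 0.1481, 0.0741, 0.0]            # 3-year class, 5-year life
savings_after_tax = 55_000 * (1 - tax)                   # $35,750 per year

npv = -initial_outlay
for t, rate in enumerate(macrs, start=1):
    depreciation = cost_new * rate
    cash_flow = savings_after_tax + tax * depreciation   # savings + tax shield
    npv += cash_flow / (1 + wacc) ** t

print(round(npv, 2))   # approximately 81,538.73, the NPV quoted above
```

A stricter textbook treatment would also subtract the $10,000/year of lost straight-line depreciation and the forgone after-tax salvage of the old machine, which lowers the NPV.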
{"url":"https://justaaa.com/finance/1873-the-everly-equipment-companys-flange-lipping","timestamp":"2024-11-06T02:31:25Z","content_type":"text/html","content_length":"43588","record_id":"<urn:uuid:c0913ffe-7526-4758-8531-f69ba36a5b83>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00033.warc.gz"}
Image Thresholding
• In this tutorial, you will learn simple thresholding, adaptive thresholding and Otsu's thresholding.
• You will learn the functions cv.threshold and cv.adaptiveThreshold.
Simple Thresholding
Here, the matter is straight-forward. For every pixel, the same threshold value is applied. If the pixel value is smaller than or equal to the threshold, it is set to 0, otherwise it is set to a maximum value. The function cv.threshold is used to apply the thresholding. The first argument is the source image, which should be a grayscale image. The second argument is the threshold value which is used to classify the pixel values. The third argument is the maximum value which is assigned to pixel values exceeding the threshold. OpenCV provides different types of thresholding which is given by the fourth parameter of the function. Basic thresholding as described above is done by using the type cv.THRESH_BINARY. All simple thresholding types are:
• cv.THRESH_BINARY
• cv.THRESH_BINARY_INV
• cv.THRESH_TRUNC
• cv.THRESH_TOZERO
• cv.THRESH_TOZERO_INV
See the documentation of the types for the differences. The method returns two outputs. The first is the threshold that was used and the second output is the thresholded image. This code compares the different simple thresholding types:
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
img = cv.imread('gradient.png', cv.IMREAD_GRAYSCALE)
assert img is not None, "file could not be read, check with os.path.exists()"
ret,thresh1 = cv.threshold(img,127,255,cv.THRESH_BINARY)
ret,thresh2 = cv.threshold(img,127,255,cv.THRESH_BINARY_INV)
ret,thresh3 = cv.threshold(img,127,255,cv.THRESH_TRUNC)
ret,thresh4 = cv.threshold(img,127,255,cv.THRESH_TOZERO)
ret,thresh5 = cv.threshold(img,127,255,cv.THRESH_TOZERO_INV)
titles = ['Original Image','BINARY','BINARY_INV','TRUNC','TOZERO','TOZERO_INV']
images = [img, thresh1, thresh2, thresh3, thresh4, thresh5]
for i in range(6):
    plt.subplot(2,3,i+1),plt.imshow(images[i],'gray',vmin=0,vmax=255)
    plt.title(titles[i])
    plt.xticks([]),plt.yticks([])
plt.show()
To plot multiple images, we have used the plt.subplot() function. Please checkout the matplotlib docs for more details.
The code yields this result:
Adaptive Thresholding
In the previous section, we used one global value as a threshold. But this might not be good in all cases, e.g. if an image has different lighting conditions in different areas. In that case, adaptive thresholding can help. Here, the algorithm determines the threshold for a pixel based on a small region around it. So we get different thresholds for different regions of the same image which gives better results for images with varying illumination. In addition to the parameters described above, the method cv.adaptiveThreshold takes three input parameters:
The adaptiveMethod decides how the threshold value is calculated:
• cv.ADAPTIVE_THRESH_MEAN_C: The threshold value is the mean of the neighbourhood area minus the constant C.
• cv.ADAPTIVE_THRESH_GAUSSIAN_C: The threshold value is a gaussian-weighted sum of the neighbourhood values minus the constant C.
The blockSize determines the size of the neighbourhood area and C is a constant that is subtracted from the mean or weighted sum of the neighbourhood pixels. The code below compares global thresholding and adaptive thresholding for an image with varying illumination:
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
img = cv.imread('sudoku.png', cv.IMREAD_GRAYSCALE)
assert img is not None, "file could not be read, check with os.path.exists()"
img = cv.medianBlur(img,5)
ret,th1 = cv.threshold(img,127,255,cv.THRESH_BINARY)
th2 = cv.adaptiveThreshold(img,255,cv.ADAPTIVE_THRESH_MEAN_C,cv.THRESH_BINARY,11,2)
th3 = cv.adaptiveThreshold(img,255,cv.ADAPTIVE_THRESH_GAUSSIAN_C,cv.THRESH_BINARY,11,2)
titles = ['Original Image', 'Global Thresholding (v = 127)', 'Adaptive Mean Thresholding', 'Adaptive Gaussian Thresholding']
images = [img, th1, th2, th3]
for i in range(4):
    plt.subplot(2,2,i+1),plt.imshow(images[i],'gray')
    plt.title(titles[i])
    plt.xticks([]),plt.yticks([])
plt.show()
Otsu's Binarization
In global thresholding, we used an arbitrarily chosen value as a threshold.
In contrast, Otsu's method avoids having to choose a value and determines it automatically. Consider an image with only two distinct image values (bimodal image), where the histogram would only consist of two peaks. A good threshold would be in the middle of those two values. Similarly, Otsu's method determines an optimal global threshold value from the image histogram. In order to do so, the cv.threshold() function is used, where cv.THRESH_OTSU is passed as an extra flag. The threshold value can be chosen arbitrarily. The algorithm then finds the optimal threshold value which is returned as the first output. Check out the example below. The input image is a noisy image. In the first case, global thresholding with a value of 127 is applied. In the second case, Otsu's thresholding is applied directly. In the third case, the image is first filtered with a 5x5 gaussian kernel to remove the noise, then Otsu thresholding is applied. See how noise filtering improves the result.
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
img = cv.imread('noisy2.png', cv.IMREAD_GRAYSCALE)
assert img is not None, "file could not be read, check with os.path.exists()"
# global thresholding
ret1,th1 = cv.threshold(img,127,255,cv.THRESH_BINARY)
# Otsu's thresholding
ret2,th2 = cv.threshold(img,0,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
# Otsu's thresholding after Gaussian filtering
blur = cv.GaussianBlur(img,(5,5),0)
ret3,th3 = cv.threshold(blur,0,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
# plot all the images and their histograms
images = [img, 0, th1, img, 0, th2, blur, 0, th3]
titles = ['Original Noisy Image','Histogram','Global Thresholding (v=127)', 'Original Noisy Image','Histogram',"Otsu's Thresholding", 'Gaussian filtered Image','Histogram',"Otsu's Thresholding"]
for i in range(3):
    plt.subplot(3,3,i*3+1),plt.imshow(images[i*3],'gray')
    plt.title(titles[i*3]), plt.xticks([]), plt.yticks([])
    plt.subplot(3,3,i*3+2),plt.hist(images[i*3].ravel(),256)
    plt.title(titles[i*3+1]), plt.xticks([]), plt.yticks([])
    plt.subplot(3,3,i*3+3),plt.imshow(images[i*3+2],'gray')
    plt.title(titles[i*3+2]), plt.xticks([]), plt.yticks([])
plt.show()
How does Otsu's Binarization work?
This section demonstrates a Python implementation of Otsu's binarization to show how it actually works. If you are not interested, you can skip this.
Since we are working with bimodal images, Otsu's algorithm tries to find a threshold value (t) which minimizes the weighted within-class variance given by the relation:
\[\sigma_w^2(t) = q_1(t)\sigma_1^2(t)+q_2(t)\sigma_2^2(t)\]
\[q_1(t) = \sum_{i=1}^{t} P(i) \quad \& \quad q_2(t) = \sum_{i=t+1}^{I} P(i)\]
\[\mu_1(t) = \sum_{i=1}^{t} \frac{iP(i)}{q_1(t)} \quad \& \quad \mu_2(t) = \sum_{i=t+1}^{I} \frac{iP(i)}{q_2(t)}\]
\[\sigma_1^2(t) = \sum_{i=1}^{t} [i-\mu_1(t)]^2 \frac{P(i)}{q_1(t)} \quad \& \quad \sigma_2^2(t) = \sum_{i=t+1}^{I} [i-\mu_2(t)]^2 \frac{P(i)}{q_2(t)}\]
It actually finds a value of t which lies in between two peaks such that variances to both classes are minimal. It can be simply implemented in Python as follows:
img = cv.imread('noisy2.png', cv.IMREAD_GRAYSCALE)
assert img is not None, "file could not be read, check with os.path.exists()"
blur = cv.GaussianBlur(img,(5,5),0)
# find normalized_histogram, and its cumulative distribution function
hist = cv.calcHist([blur],[0],None,[256],[0,256])
hist_norm = hist.ravel()/hist.sum()
Q = hist_norm.cumsum()
bins = np.arange(256)
fn_min = np.inf
thresh = -1
for i in range(1,256):
    p1,p2 = np.hsplit(hist_norm,[i]) # probabilities
    q1,q2 = Q[i],Q[255]-Q[i] # cum sum of classes
    if q1 < 1.e-6 or q2 < 1.e-6:
        continue
    b1,b2 = np.hsplit(bins,[i]) # weights
    # finding means and variances
    m1,m2 = np.sum(p1*b1)/q1, np.sum(p2*b2)/q2
    v1,v2 = np.sum(((b1-m1)**2)*p1)/q1,np.sum(((b2-m2)**2)*p2)/q2
    # calculates the minimization function
    fn = v1*q1 + v2*q2
    if fn < fn_min:
        fn_min = fn
        thresh = i
# find otsu's threshold value with OpenCV function
ret, otsu = cv.threshold(blur,0,255,cv.THRESH_BINARY+cv.THRESH_OTSU)
print( "{} {}".format(thresh,ret) )
Additional Resources
1. Digital Image Processing, Rafael C. Gonzalez
1. There are some optimizations available for Otsu's binarization. You can search and implement it.
{"url":"https://docs.opencv.org/4.x/d7/d4d/tutorial_py_thresholding.html","timestamp":"2024-11-14T18:29:38Z","content_type":"application/xhtml+xml","content_length":"30522","record_id":"<urn:uuid:062d0731-4295-4236-a96c-8e3be127481d>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00116.warc.gz"}
Talk:Interface free energy
From Scholarpedia
Reviewer A
I find the article quite adequate, with good motivation for the definitions introduced. However, I think it would add to the motivation to include a short proof of the fact that the equilibrium shape W solves the variational problem of Wulff, i.e., that it minimizes the total surface free energy over convex regions with a prescribed volume. It would also be helpful to the non-expert reader to add a reference to the theory of convex sets, such as S(1993) in Pfister (2009). With these additions I think the article is quite acceptable.
Comments: I followed the above recommendation.
Reviewer B
I find the article very nice and I believe it is good for Scholarpedia.
{"url":"http://www.scholarpedia.org/article/Talk:Interface_free_energy","timestamp":"2024-11-10T05:13:38Z","content_type":"text/html","content_length":"19108","record_id":"<urn:uuid:46d22eba-0644-4e27-823e-f10317b651aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00192.warc.gz"}
4.5 Null Measurements
Learning Objectives
By the end of this section, you will be able to do the following:
• Explain why a null measurement device is more accurate than a standard voltmeter or ammeter
• Demonstrate how a Wheatstone bridge can be used to accurately calculate the resistance in a circuit
Standard measurements of voltage and current alter the circuit being measured, introducing uncertainties in the measurements. Voltmeters draw some extra current, whereas ammeters reduce current flow. Null measurements balance voltages so that there is no current flowing through the measuring device and, therefore, no alteration of the circuit being measured. Null measurements are generally more accurate but are also more complex than the use of standard voltmeters and ammeters, and they still have limits to their precision. In this module, we consider a few specific types of null measurements because they are common and interesting and further illuminate principles of electric circuits.
The Potentiometer
Suppose you wish to measure the emf of a battery. Consider what happens if you connect the battery directly to a standard voltmeter as shown in Figure 4.37. Once we note the problems with this measurement, we will examine a null measurement that improves accuracy. As discussed before, the actual quantity measured is the terminal voltage $V$, which is related to the emf of the battery by $V=\text{emf}-Ir$, where $I$ is the current that flows and $r$ is the internal resistance of the battery. The emf could be accurately calculated if $r$ were very accurately known, but it is usually not. If the current $I$ could be made zero, then $V=\text{emf}$, so emf could be directly measured. However, standard voltmeters need a current to operate; thus, another technique is needed.
A potentiometer is a null measurement device for measuring potentials (voltages) (see Figure 4.38). A voltage source is connected to a resistor $R$, say, a long wire, and passes a constant current through it. There is a steady drop in potential (an $IR$ drop) along the wire so that a variable potential can be obtained by making contact at varying locations along the wire.
Figure 4.38(b) shows an unknown $\text{emf}_x$ (represented by script $E_x$ in the figure) connected in series with a galvanometer. Note that $\text{emf}_x$ opposes the other voltage source. The location of the contact point (see the arrow on the drawing) is adjusted until the galvanometer reads zero. When the galvanometer reads zero, $\text{emf}_x=IR_x$, where $R_x$ is the resistance of the section of wire up to the contact point. Since no current flows through the galvanometer, none flows through the unknown emf, so $\text{emf}_x$ is directly sensed. Now, a very precisely known standard $\text{emf}_s$ is substituted for $\text{emf}_x$, and the contact point is adjusted until the galvanometer again reads zero so that $\text{emf}_s=IR_s$. In both cases, no current passes through the galvanometer, so the current $I$ through the long wire is the same. Upon taking the ratio $\frac{\text{emf}_x}{\text{emf}_s}$, $I$ cancels, giving
4.71 \[\frac{\text{emf}_x}{\text{emf}_s}=\frac{IR_x}{IR_s}=\frac{R_x}{R_s}.\]
Solving for $\text{emf}_x$ gives
4.72 \[\text{emf}_x=\text{emf}_s\frac{R_x}{R_s}.\]
Because a long uniform wire is used for $R$, the ratio of resistances $R_x/R_s$ is the same as the ratio of the lengths of wire that zero the galvanometer for each emf. The three quantities on the right-hand side of the equation are now known or measured, and $\text{emf}_x$ can be calculated. The uncertainty in this calculation can be considerably smaller than when using a voltmeter directly, but it is not zero. There is always some uncertainty in the ratio of resistances $R_x/R_s$ and in the standard $\text{emf}_s$. Furthermore, it is not possible to tell when the galvanometer reads exactly zero, which introduces error into both $R_x$ and $R_s$ and may also affect the current $I$.
Resistance Measurements and the Wheatstone Bridge
There are a variety of so-called ohmmeters that purport to measure resistance. What the most common ohmmeters actually do is to apply a voltage to a resistance, measure the current, and calculate the resistance using Ohm's law. Their readout is this calculated resistance. Two configurations for ohmmeters using standard voltmeters and ammeters are shown in Figure 4.39.
Such configurations are limited in accuracy because the meters alter both the voltage applied to the resistor and the current that flows through it. The Wheatstone bridge is a null measurement device for calculating resistance by balancing potential drops in a circuit (see Figure 4.40). The device is called a bridge because the galvanometer forms a bridge between two branches. A variety of bridge devices are used to make null measurements in circuits. Resistors $R_1$ and $R_2$ are precisely known, while the arrow through $R_3$ indicates that it is a variable resistance. The value of $R_3$ can be precisely read. With the unknown resistance $R_x$ in the circuit, $R_3$ is adjusted until the galvanometer reads zero. The potential difference between points b and d is then zero, meaning that b and d are at the same potential. With no current running through the galvanometer, it has no effect on the rest of the circuit. So the branches abc and adc are in parallel, and each branch has the full voltage of the source. That is, the $IR$ drops along abc and adc are the same. Since b and d are at the same potential, the $IR$ drop along ad must equal the $IR$ drop along ab. Thus,

4.73 $I_1 R_1 = I_2 R_3.$

Again, since b and d are at the same potential, the $IR$ drop along dc must equal the $IR$ drop along bc. Thus,

4.74 $I_1 R_2 = I_2 R_x.$

Taking the ratio of these last two expressions gives

4.75 $\frac{I_1 R_1}{I_1 R_2} = \frac{I_2 R_3}{I_2 R_x}.$

Canceling the currents and solving for $R_x$ yields

4.76 $R_x = R_3 \frac{R_2}{R_1}.$

This equation is used to calculate the unknown resistance when current through the galvanometer is zero. This method can be very accurate (often to four significant digits), but it is limited by two factors. First, it is not possible to get the current through the galvanometer to be exactly zero. Second, there are always uncertainties in $R_1$, $R_2$, and $R_3$ that contribute to the uncertainty in $R_x$.

Check Your Understanding

Exercise 1

Identify other factors that might limit the accuracy of null measurements. Would the use of a digital device that is more sensitive than a galvanometer improve the accuracy of null measurements?

One factor would be resistance in the wires and connections in a null measurement. These are impossible to make zero, and they can change over time. Another factor would be temperature variations in resistance, which can be reduced but not completely eliminated by choice of material. Digital devices sensitive to smaller currents than analog devices do improve the accuracy of null measurements because they allow you to get the current closer to zero.
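The Wheatstone balance condition of Equation 4.76 can also be sketched with a short numerical example; the resistor values and branch currents below are illustrative, not from the text:

```python
# Sketch of the Wheatstone bridge balance condition R_x = R_3 * (R_2 / R_1).
# Resistor values are hypothetical illustration values.

def bridge_unknown(r1, r2, r3):
    """Unknown resistance when the galvanometer reads zero."""
    return r3 * (r2 / r1)

# With R1 = 100 ohms, R2 = 200 ohms and balance reached at R3 = 55.0 ohms:
rx = bridge_unknown(100.0, 200.0, 55.0)
print(f"R_x = {rx:.1f} ohms")

# Consistency check against Equations 4.73-4.74: at balance, the same pair
# of branch currents satisfies both IR-drop equalities.
i1, i2 = 0.010, 0.020           # arbitrary branch currents, amperes
r3 = i1 * 100.0 / i2            # from I1*R1 = I2*R3
rx2 = i1 * 200.0 / i2           # from I1*R2 = I2*Rx
assert abs(rx2 - bridge_unknown(100.0, 200.0, r3)) < 1e-9
```

The design point here is that only resistance ratios matter at balance, which is what makes four-digit accuracy possible with ordinary components.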
{"url":"https://texasgateway.org/resource/45-null-measurements?book=79106&binder_id=78816","timestamp":"2024-11-02T14:52:45Z","content_type":"text/html","content_length":"87425","record_id":"<urn:uuid:52944d9a-4a31-475a-898b-de5d778aa9d4>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00577.warc.gz"}
Robert A. Bonavito, CPA

How to Detect Fraud Using Benford's Law

Hi, welcome to another New Jersey forensic accountant discussion. Today's discussion is gonna be very interesting because it's one of the techniques we use in the vast majority of our forensic accounting analysis. It's called Benford's Law. And I've been getting a lot of people saying, "Hey, how do you actually..." They want me to go into some of the techniques we actually use. So, I'm gonna discuss this Benford's Law. Now, as a forensic accountant, I have many ways and techniques to spot fraud. One of the ways we detect fraud is especially when analyzing tax returns, general ledgers, and other items that contain a large amount of numerical data. Now, remember, we get into a case, a lot of times people will give us a huge amount of information, millions or hundreds of millions of pieces of data, and we have to find out if it's random or is there fraud? Is it manipulated in some way? And the first thing we always do is we apply Benford's Law. And what the law basically states is that any random number will have a specific result as to which digit appears in each data set. And the way it does this is through what's called a base 10 logarithm. And it's very, very accurate. And when you apply this to a large amount of numbers, you should get something that looks like this bar chart right here. Okay? You could see here that this is 30, 18, 12. And you can see how the bar chart kind of slants down. And if you do this analysis... So, if you run this information on data, and you don't get something that looks like this, there's a high probability that it is manipulated in some way. It's not a natural occurrence. For example, when you take Benford's Law, okay, and you apply it to, for example, the distance of the planets from the sun, okay, to see if it's manipulated, if someone put the planets there or if it's random, you'll get a histogram, it'll look just like this. Okay?
Or if you plot the distance of the stars from the earth, you'll get something like this or if you take all the phone numbers in the phonebook, you'll get the same histogram. So, the reason for this is that what it simply does is takes the first number that appears in a data set. And we analyze this data set, and we spot anomalies that tip us off that there's a high probability that the numbers have manipulated. From there, we can perform a forensic accounting. Once I know that there's problems with the data we have, we can then dig down and find out what happened. For example, here is what it should look like, based on Benford's Law. But now you look at these here, okay, revenue per PSE firms, population, motor vehicle theft cases, they're not really in line. So what that's telling me is just this data is probably manipulated in some way. Okay? Something's wrong with the population count. Something's wrong with the number of motor vehicle theft cases. Okay, maybe some of these aren't thefts. So, I mean, the one that's pretty close is population, right, almost, you know. So maybe there's a problem up here, something going on. But anyway, we apply this and you can look at the data, and it's pretty easy to see that there's an issue there. Now, the steps in using Benford's Law. Okay. The one... Let me just say, it's very difficult to understand Benford's Law. It's very complex, you know. The logarithm explaining all that can take days, if not weeks to understand. But let me just give you an example. I'll go through an example because it's the easy way to understand it. When I'm making this video, it's when COVID-19 is pretty prevalent. It's the end of the summer, and a lot of cases have been reported to CDC. Now, some people are saying that the cases are over-reported. Hospitals are over-reporting cases, that it's really... The deaths are over-reported. And so what I'm gonna do is I'm gonna go to the CDC website, and I'm gonna apply Benford's Law to the data. 
And I'm gonna go through and show you how we actually do it in real life. Now, here is the website for, okay, Center for Disease Control and Prevention, okay. And here they're talking about the deaths, USA, I mean, this disease is horrible. But total cases, new cases, USA deaths over 200,000. So what I'm gonna do is I'm gonna download cases in the last seven days by territory. I'm gonna download this data here. Let's see what this looks like. Okay, here's what I get when I download this from the CDC website. Okay? I'm gonna fix up this data, so that we can utilize it now. So you could see here, the total cases, confirmed, probable cases, etc. Okay? It goes through all this good data here. And let's say someone hired my firm to do a fraud analysis, the first thing I would do was I would utilize Benford's Law, okay? What I would do is basically get rid of all this data here and just focus on the total cases in the state. Now, we've already done this. So, we would take this data and put it in an Excel sheet because Excel has some decent formula capabilities. And what it would look like, once we took the data, it would look like this. Okay? You could see here that we have the states, the number of reported cases, and then what we do is we utilize this formula, which is left B2. What it does is it goes to here and takes the first digit and puts it in this column. Okay? And for all the states and some of the territories in the United States. And then what we do is we want numbers 1 through 9, which are the digits and Benford's Law states that a certain number of these numbers here should have...start with a 1, a certain number should start with 2, a certain number should start with 3. Then I go in this other formula, it's countifs, what it does is it takes all the numbers that start with 1 and puts them...there's 23, all the numbers that start with 2, there's 8, and all these numbers that start with 9 is 2. Okay. So it went through all these columns. 
There's a total of 56 here, okay. And I verify that because some of these have zeros. And you know, there's 56 basically, states and territories in this database that we downloaded. And then we do a calculation. And this is 41% of the numbers here start with 1, 14 start with 2 and 3. So now we have the percentages, and then we just do this histogram here. And you can see this does not look like, you know, a typical Benford's Law would predict, okay? The logarithms are way out of whack. So this is telling me here, just looking at this data, that this is not legitimate data. Okay. At this point, if we were doing this case, I would tell, you know, my clients, "Hey listen, the data we're looking at is definitely manipulated." Okay. Now what we need to do is, then we'd go in and look at the various hospitals, see how they're reporting these cases. Where are they coming from? How reliable the data is. We test it. And we, you know, actually back up. But this is telling me, if I had to go to court, this is the first thing I would show is that [inaudible 00:08:22], they're saying that the data is manipulated, okay, because we know what it should look like, right? What should it look like? It should look like this. It doesn't. Okay? It looks like this. So anyway, so we have some situations here. Benford's Law is great. I recommend it if you get into large databases. We have to quickly find out if it's something you wanna look into. So listen guys, we went through this quick. If you have any questions, just leave it below. And if you like this video, please join my YouTube channel. It helps a lot with, you know, getting the recognition and name out there for this kind of stuff that we hope you enjoyed it. Thanks a lot. Bye.
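The spreadsheet procedure described in the video (take the leading digit of each count, tally digits 1 through 9, compare against Benford's expected distribution) can be sketched in a few lines of Python. The case counts below are invented placeholders, not the CDC data from the video:

```python
import math

def benford_expected(d):
    """Benford's Law: P(leading digit = d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

def leading_digit_shares(values):
    """Fraction of values whose first significant digit is d, for d = 1..9."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    total = len(digits)
    return {d: digits.count(d) / total for d in range(1, 10)}

# Hypothetical reported counts (placeholders, not real data):
cases = [1342, 187, 9021, 2410, 1189, 356, 1777, 4203, 118, 2650]
observed = leading_digit_shares(cases)
for d in range(1, 10):
    print(f"{d}: observed {observed[d]:.0%}, Benford {benford_expected(d):.0%}")
```

With a real data set you would compare the two columns (e.g. with a chi-squared test); a large, persistent gap is the anomaly the video's histogram shows visually.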
{"url":"https://www.rabcpafirm.com/videos/how-to-detect-fraud-using-benford-s-law/","timestamp":"2024-11-07T19:43:24Z","content_type":"text/html","content_length":"30197","record_id":"<urn:uuid:164f09d8-319e-4abb-b715-ce7ec3d9ba00>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00582.warc.gz"}
Courant Institute of Mathematical Sciences - Wikiwand The Courant Institute specializes in applied mathematics, mathematical analysis and scientific computation. There is emphasis on partial differential equations and their applications. The mathematics department is consistently ranked in the United States as #1 in applied mathematics.^[3] Other strong points are Analysis (#6 as of 2022)^[4] and geometry (#12 as of 2022).^[5] Within the field of computer science, CIMS concentrates in machine learning, theory, programming languages, computer graphics and parallel computing. In 2022, the computer science program was ranked #19 among computer science and information systems programs globally.^[6] In 2022, the Academic Ranking of World Universities placed the Courant Institute as #9 worldwide in the subject ranking for mathematics.^[1] Six (at the time of award) faculty members have been awarded the National Medal of Science (Kurt O. Friedrichs, Peter Lax, Cathleen Synge Morawetz, Louis Nirenberg, Charles S. Peskin, S. R. Srinivasa Varadhan), one (Mikhail Gromov) was honored with the Kyoto Prize, and nine have received career awards from the National Science Foundation. Courant Institute professors Lax, Varadhan, Gromov, Nirenberg won the 2005, 2007, 2009 and 2015 Abel Prize respectively for their research in partial differential equations, probability and geometry.^[7] Louis Nirenberg also received the Chern Medal in 2010, and Subhash Khot won the Nevanlinna Prize in 2014. Amir Pnueli and Yann LeCun won the 1996 and 2018 Turing Award respectively. In addition, Jeff Cheeger was also awarded the Shaw Prize in Mathematical Sciences in 2021.^[8] The Courant Institute offers Bachelor of Arts, Bachelor of Science, Master of Science and PhD degree programs in both mathematics and computer science with program acceptance rates ranging from 3% to 29%.^[9] The overall acceptance rate for all CIMS graduate programs is 15%, and program admissions reviews are holistic. 
A high undergraduate GPA and high GRE score are typically prerequisites to admission to its graduate programs but are not required. The majority of accepted candidates meet these standards. However, character and personal qualities and evidence of strong quantitative skills are very important admission factors. Consistent with its scientific breadth, the institute welcomes applicants whose primary background is in quantitative fields such as economics, engineering, physics, or biology, as well as mathematics. Undergraduate program admissions are not directly administered by the institute but by the NYU undergraduate admissions office of the College of Arts and Science.

Graduate program

The Department of Mathematics at the Courant Institute offers PhDs in Mathematics, Atmosphere-Ocean Science, and Computational Biology; Masters of Science in Mathematical Finance, Mathematics, and Scientific Computing.

Lecture Hall at Warren Weaver Hall

The Graduate Department of Computer Science offers a PhD in computer science. In addition it offers Master of Science degrees in computer science, information systems (in conjunction with the Stern School of Business), and in scientific computing. For the PhD program, every PhD computer science student must receive a grade of A or A− on the final examination for algorithms, systems, applications, and a PhD-level course chosen by the student that does not satisfy the first three requirements, such as cryptography and numerical methods. Students may take the final exam for any of these courses without being enrolled in the course. The Computer Science Masters program offers instruction in the fundamental principles, design and applications of computer systems and computer technologies. Students who obtain an MS degree in computer science are qualified to do significant development work in the computer industry or important application areas.
Those who receive a doctoral degree are in a position to hold faculty appointments and do research and development work at the forefront of this rapidly changing and expanding field. The emphasis for the MS in Information Systems program is on the use of computer systems in business. For the Master of Science in Scientific Computing, it is designed to provide broad training in areas related to scientific computing using modern computing technology and mathematical modeling arising in various applications. The core of the curriculum for all computer science graduate students consists of courses in algorithms, programming languages, compilers, artificial intelligence, database systems, and operating systems. Advanced courses are offered in many areas such as natural language processing, the theory of computation, computer vision, software engineering, compiler optimization techniques, computer graphics, distributed computing, multimedia, networks, cryptography and security, groupware and computational finance. Adjunct faculty, drawn from outside academia, teach special topics courses in their areas of expertise.^[14] Unless outside fellowships or scholarships are available to the students, all admitted Courant PhD students are granted with the GSAS MacCracken award.^[15] The fellowship covers the tuition and provides 9 months of stipend along with other benefits such as health insurance and special housing opportunities. The MacCracken funding is renewable for a period of up to five years, assuming satisfactory progress toward the degree.^[16] Doctoral students take advanced courses in their areas of specialization, followed by a period of research and the preparation and defense of the doctoral thesis. Courant Students in PhD programs may earn a master's degree while in progress toward the PhD program. 
Areas where there are special funding opportunities for graduate students include: Mathematics, Mechanics, and Material Sciences, Number Theory, Probability, and Scientific Computing. All PhD candidates are required to take a written comprehensive examination, oral preliminary examination, and create a dissertation defense. Each supported doctoral student has access to his or her own dedicated Unix workstation. Many other research machines provide for abundant access to a variety of computer architectures, including a distributed computing laboratory.^[17] Undergraduate program The Courant Institute houses New York University's undergraduate programs in computer science and mathematics. In addition, CIMS provides opportunities and facilities for undergraduate students to do and discuss mathematical research, including an undergraduate math lounge on the 11th floor and an undergraduate computer science lounge on the 3rd floor of Warren Weaver Hall.^[18]^[19] Classroom at Warren Weaver Hall The mathematics and computer science undergraduate and graduate programs at the Courant Institute has a strong focus on building quantitative and problem-solving skills through teamwork.^[20]^[21] An undergraduate computer science course on Computer Vision, for example, requires students to be in small teams to use and apply recently developed algorithms by researchers around the world on their own. One example assignment requires a student to study a paper written by researchers from Microsoft Research Cambridge in order to do an assignment on Segmentation and Graph Cut. To encourage innovation, students in advanced coursework are allowed to use any means to complete their assignment, such as a programming language of their choice and hacking a Kinect through legal means.^[22] The Courant Institute's undergraduate program also encourages students to engage in research with professors and graduate students. 
About 30% of undergraduate students participate in academic research through the competitive Research Experiences for Undergraduates program funded by the National Science Foundation or research funded primarily by the Dean's Undergraduate Research Fund. The Courant Institute has one of the highest percentages of undergraduate students doing research within New York University.^[23]^[24]^[25]^[26] With permission of their advisers or faculty, undergraduate students may take graduate-level courses. Courant undergraduate students through the years and alumni contribute greatly to the vitality of the Mathematics and Computer Science departments. Some accomplishments by current and former undergraduate Courant students include an Apple Worldwide Developers Conference Scholarship Winner, development of Object Category Recognition Techniques to sort garbage for recycling for the NYC's trash program, placement in 7th out of 42 in the ACM International Collegiate Programming Contest (ICPC), and inventors of the Diaspora (software) social network.^[24] The undergraduate division of the Department of Mathematics offers Bachelor of Arts (BA) and Bachelor of Science (BS) degrees in Mathematics. It consists of a wide variety of courses in pure and applied mathematics taught by a distinguished faculty with a tradition of excellence in teaching and research. Students in advanced coursework often participate in formulating models outside the field of mathematics as well as in analyzing them. For example, an advanced mathematics course in Computers in Medicine and Biology requires a student to construct two computer models selected from the following list: circulation, gas exchange in the lung, control of cell volume, and the renal countercurrent mechanism. The student uses the models to conduct simulated physiological experiments. The undergraduate division of the Department of Computer Science offers a Bachelor of Arts (BA) degree, and four minors.
These are the computer science minor, web programming and applications minor, joint minor in computer science/mathematics, and the computer science education minor available in collaboration with NYU Steinhardt.^[28] The BA degree can also be pursued with honors. Students may combine the degree with other majors within the College of Arts and Science to create a personalized joint major. Two specific combined degrees are the joint major in computer science/economics and the joint major in computer science/mathematics. The Department of Computer Science also offers a BS/BE Dual Degree in computer science and engineering and an accelerated master's program available to qualifying undergraduates in conjunction with the Tandon School of Engineering.^[29]

Academic research

The Courant Institute along with Microsoft Research are the founders of the Games for Learning Institute

The Department of Mathematics at Courant occupies a leading position in analysis and applied mathematics, including partial differential equations, differential geometry, dynamical systems, probability and stochastic processes, scientific computation, mathematical finance, mathematical physics, and fluid dynamics. A special feature of the institute is its highly interdisciplinary character — with courses, seminars, and active research collaborations in areas such as financial mathematics, materials science, visual neural science, atmosphere/ocean science, cardiac fluid dynamics, plasma physics, and mathematical genomics. Another special feature is the central role of analysis, which provides a natural bridge between pure and applied mathematics. The Department of Computer Science has strengths in multimedia, programming languages and systems, distributed and parallel computing, and the analysis of algorithms. Since 1948, Courant Institute has maintained its own research journal, Communications on Pure and Applied Mathematics.
While the journal represents the full spectrum of the institute's mathematical research activity, most articles are in the fields of applied mathematics, mathematical analysis, or mathematical physics. Its contents over the years amount to a modern history of the theory of partial differential equations. Most articles originate within the institute or are specially invited. The institute also publishes its own series of lecture notes. They are based on the research interests of the faculty and visitors of the institute, originated in advanced graduate courses and mini-courses offered at the institute.
{"url":"https://www.wikiwand.com/en/articles/Courant_Institute_of_Mathematical_Sciences","timestamp":"2024-11-02T06:36:25Z","content_type":"text/html","content_length":"486681","record_id":"<urn:uuid:2c57f5a7-8bb4-4486-951b-d338fd65720c>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00362.warc.gz"}
How do you find the limit of absx/x as x->2^-? | HIX Tutor

How do you find the limit of #absx/x# as #x->2^-#?

Answer 1

As x tends to 2 from the left, #abs(x)/x# tends to #x/x#, or #2/2 = 1#. Since 2 is positive, the absolute value is irrelevant: even when approaching 2 from below, the x values are still positive.

Answer 2

To find the limit of abs(x)/x as x approaches 2 from the left (x->2^-), we can evaluate the expression by substituting the value of x into the function. When x approaches 2 from the left, x remains a positive number close to 2. Since the absolute value of any positive number is itself, we can rewrite the expression as abs(x)/x = x/x = 1. Therefore, the limit of abs(x)/x as x approaches 2 from the left is 1.

Answer from HIX Tutor

When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
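As a quick numerical check of the one-sided limit discussed above (not part of the original answers): evaluating abs(x)/x at points approaching 2 from below shows the function is constantly 1 there, while near 0 the two one-sided limits differ.

```python
def f(x):
    return abs(x) / x

# Approach x = 2 from the left; f stays at 1 throughout.
for h in (0.1, 0.01, 0.001, 1e-6):
    x = 2 - h
    print(f"x = {x}: f(x) = {f(x)}")

# Contrast with x -> 0, where the sign flips across zero:
print(f(-0.001), f(0.001))   # -1.0 then 1.0
```

This is exactly the caution in the answer above: the sign of the quantity only matters when it is the quantity *approaching zero*, which here is not the case.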
{"url":"https://tutor.hix.ai/question/how-do-you-find-the-limit-of-absx-x-as-x-2-8f9af9c79d","timestamp":"2024-11-06T02:54:56Z","content_type":"text/html","content_length":"568185","record_id":"<urn:uuid:fac429c7-23fc-455f-966b-516a20c5253b>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00853.warc.gz"}
Determinants of Real Private Consumption Expenditure in Lesotho

Journal of Corporate Governance, Insurance, and Risk Management
Volume 3, Issue 2, 2016, Pages 58-75
Received: 03-21-2016, Revised: 07-18-2016, Accepted: 07-26-2016, Available online: 08-29-2016

Using the Autoregressive Distributed Lag (ARDL) approach to cointegration, an error correction model (ECM) is estimated for real private domestic consumption in Lesotho. Lesotho is one of a number of countries with low gross domestic product (GDP) per capita, that are landlocked and of which the national currency is pegged to that of a highly dominant trading partner. Analysis of consumption pattern in such countries is scant in the literature. This paper finds evidence of a long-run relationship between private consumption, income, interest rates, and inflation. The empirical findings suggest that higher income is associated with higher private consumption, higher inflation reduces private consumption and that higher interest rates reduce private consumption, implying that the substitution effect outweighs the income effect in Lesotho in the long term. Although the model is not designed to evaluate consumption theories, the estimated parameters to some extent support the absolute income hypothesis (AIH), relative income hypothesis (RIH), life-cycle hypothesis (LCH) and permanent income hypothesis (PIH).

Keywords: Real private consumption expenditure, Autoregressive distributed lag, Error correction model, Lesotho

Cite this: Sekantsi, L. P. (2016). Determinants of Real Private Consumption Expenditure in Lesotho. J. Corp. Gov. Insur. Risk Manag., 3(2), 58-75. https://doi.org/10.56578/jcgirm030204

Table 1. Results of unit root tests performed on differenced variables
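The error-correction mechanism at the heart of an ECM can be illustrated with a deterministic toy example. The adjustment coefficient, long-run coefficient, and series below are invented for illustration only; the paper's actual estimation uses the ARDL bounds approach on Lesotho data:

```python
# Toy error-correction dynamics: consumption c[t] changes by alpha times
# last period's deviation from the long-run relation c* = beta * y.
# All parameter values are hypothetical.

def simulate_ecm(c0, y, beta=0.8, alpha=0.5, steps=12):
    """Return the path of consumption adjusting toward beta * y."""
    path = [c0]
    target = beta * y
    for _ in range(steps):
        c = path[-1]
        path.append(c + alpha * (target - c))   # error-correction step
    return path

path = simulate_ecm(c0=40.0, y=100.0)   # long-run target is 80.0
print([round(c, 2) for c in path])
```

With 0 < alpha < 1 the gap to equilibrium shrinks geometrically each period; in an estimated ECM the (negative) coefficient on the lagged error-correction term plays the role of alpha and measures the speed of adjustment.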
{"url":"https://www.acadlore.com/article/JCGIRM/2016_3_2/jcgirm030204","timestamp":"2024-11-06T09:31:22Z","content_type":"text/html","content_length":"256393","record_id":"<urn:uuid:faf71774-1be8-45d8-8ac8-e3f8bdc0c524>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00886.warc.gz"}
1903 (number) Interesting facts about the number 1903 • (1903) Adzhimushkaj is asteroid number 1903. It was discovered by T. M. Smirnova from Nauchni on 5/9/1972. Areas, mountains and surfaces • The total area of Maui is 735 square miles (1,903 square km). Country United States (Hawaii). 214th largest island in the world. • There is a 1,903 miles (3,061 km) direct distance between Antalya (Turkey) and Birmingham (United Kingdom). • There is a 1,183 miles (1,903 km) direct distance between Antalya (Turkey) and Minsk (Belarus). • There is a 1,903 miles (3,061 km) direct distance between Arequipa (Peru) and Barranquilla (Colombia). • There is a 1,903 miles (3,061 km) direct distance between Belgrade (Serbia) and Yekaterinburg (Russia). • More distances ... • There is a 1,903 miles (3,062 km) direct distance between Benin City (Nigeria) and Kampala (Uganda). • There is a 1,903 miles (3,061 km) direct distance between Ciudad Juárez (Mexico) and Manhattan (USA). • There is a 1,903 miles (3,061 km) direct distance between Farīdābād (India) and Riyadh (Saudi Arabia). • There is a 1,903 miles (3,062 km) direct distance between Gorakhpur (India) and Yerevan (Armenia). • There is a 1,183 miles (1,903 km) direct distance between Guayaquil (Ecuador) and Valencia (Venezuela). • There is a 1,903 miles (3,061 km) direct distance between Guiyang (China) and Jaipur (India). • There is a 1,903 miles (3,062 km) direct distance between Guiyang (China) and Jalandhar (India). • There is a 1,903 miles (3,062 km) direct distance between Ho Chi Minh City (Viet Nam) and Taiyuan (China). • There is a 1,903 miles (3,062 km) direct distance between Chengdu (China) and Medan (Indonesia). • There is a 1,903 miles (3,062 km) direct distance between İzmir (Turkey) and Sale (Morocco). • There is a 1,903 miles (3,062 km) direct distance between Jalandhar (India) and Yekaterinburg (Russia). • There is a 1,903 miles (3,061 km) direct distance between Kahrīz (Iran) and Mumbai (India). 
• There is a 1,903 miles (3,061 km) direct distance between Karaj (Iran) and Saint Petersburg (Russia).
• There is a 1,183 miles (1,903 km) direct distance between Lucknow (India) and Madurai (India).
• There is a 1,903 miles (3,062 km) direct distance between Lucknow (India) and Zhongshan (China).
• There is a 1,183 miles (1,903 km) direct distance between Manila (Philippines) and Wuhan (China).
• There is a 1,903 miles (3,061 km) direct distance between Mecca (Saudi Arabia) and Volgograd (Russia).
• There is a 1,903 miles (3,061 km) direct distance between Pingdingshan (China) and Varanasi (India).
• There is a 1,903 miles (3,061 km) direct distance between Tai’an (China) and Takeo (Cambodia).
History and politics
• United Nations Security Council Resolution number 1903, adopted 17 December 2009. Lifts arms embargo on Liberia, renewal of travel bans on certain officials; extends mandate of expert panel. Resolution text.
• 1903 is the smallest number requiring an addition chain of length 15.
• On May 3, 1903, singer Bing Crosby was born in Tacoma, WA, USA.
• American Towers Tower Robertsdale, height 579.9m (1,903ft), was built in the year 2004 in Robertsdale, Alabama, United States. Type of construction: Guyed mast. GPS location 30°36′45.4″N 87°38′41.6″W / 30.612611°N 87.644889°W / 30.612611; -87.644889 (American Towers Tower Robertsdale).
• The Springfield M1903, formally the United States Rifle, Caliber .30-06, Model 1903, is an American bolt-action repeating rifle fed by a five-round magazine, used primarily during the first half of the 20th century.
What is 1,903 in other units
The decimal (Arabic) number 1903 converted to a Roman number is MCMIII. Roman and decimal number conversions.
The number 1903 converted to a Mayan number is Decimal and Mayan number conversions.
Length conversion
1903 kilometers (km) equals to 1182.469 miles (mi).
1903 miles (mi) equals to 3062.582 kilometers (km).
1903 meters (m) equals to 6243.438 feet (ft).
1903 feet (ft) equals 580.034 meters (m).
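The length and time conversions in this section follow from the standard exact factors (1 mile = 1.609344 km, 1 foot = 0.3048 m), as this short sketch shows:

```python
# Reproduce the unit conversions for n = 1903 from the exact definitions.
KM_PER_MI = 1.609344   # exact by definition
M_PER_FT = 0.3048      # exact by definition

n = 1903
print(f"{n} km = {n / KM_PER_MI:.3f} mi")
print(f"{n} mi = {n * KM_PER_MI:.3f} km")
print(f"{n} m  = {n / M_PER_FT:.3f} ft")
print(f"{n} ft = {n * M_PER_FT:.3f} m")

# Time: break 1903 seconds and 1903 minutes into larger units.
minutes, seconds = divmod(n, 60)
print(f"{n} s = {minutes} min, {seconds} s")
days, rem = divmod(n, 60 * 24)
hours, mins = divmod(rem, 60)
print(f"{n} min = {days} d, {hours} h, {mins} min")
```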
Power conversion
1903 Horsepower (hp) equals to 1399.46 kilowatts (kW)
1903 kilowatts (kW) equals to 2587.71 horsepower (hp)
Time conversion (hours, minutes, seconds, days, weeks)
1903 seconds equals to 31 minutes, 43 seconds
1903 minutes equals to 1 day, 7 hours, 43 minutes
Zip codes 1903
• Zip code 1903 ABASTO, BUENOS AIRES, Argentina
• Zip code 1903 BUCHANAN, BUENOS AIRES, Argentina
• Zip code 1903 INSTITUTO AGUSTIN R. GAMBIER, BUENOS AIRES, Argentina
Zip code areas 1903
Number 1903 morse code: .---- ----. ----- ...--
Sign language for number 1903:
Number 1903 in braille:
Year 1903 AD
• Edward Binney and Harold Smith co-invent crayons.
• Bottle-making machinery was invented by Michael J. Owens.
• The Wright brothers invented the first gas motored and manned airplane.
• Mary Anderson invented windshield wipers.
• William Coolidge invented ductile tungsten used in lightbulbs.
• The Societas Rosicruciana became the Hermetic Order of the Golden Dawn.
Gregorian, Hebrew, Islamic, Persian and Buddhist Year (Calendar)
Gregorian year 1903 is Buddhist year 2446. Buddhist year 1903 is Gregorian year 1360.
Gregorian year 1903 is Islamic year 1320 or 1321. Islamic year 1903 is Gregorian year 2467 or 2468.
Gregorian year 1903 is Persian year 1281 or 1282. Persian year 1903 is Gregorian 2524 or 2525.
Gregorian year 1903 is Hebrew year 5663 or 5664. Hebrew year 1903 is Gregorian year 1857 BC.
More about the year: The Buddhist calendar is used in Sri Lanka, Cambodia, Laos, Thailand, and Burma. The Persian calendar is the official calendar in Iran and Afghanistan.
Advanced math operations
Is Prime? The number 1903 is not a prime number. The closest prime numbers are 1901 and 1907. The 1903rd prime number in order is
Factorization and factors (dividers)
The prime factors of 1903 are 11 * 173
The factors of 1903 are 1, 11, 173, 1903. Total factors 4. Sum of factors 2088 (185).
Prime factor tree
The second power of 1903 is 3.621.409.
The third power of 1903 is 6,891,541,327.
The square root √1903 is 43.623388.
The cube root of 1903 is 12.392139.
The natural logarithm of 1903 is ln 1903 = 7.551187.
The logarithm to base 10 of 1903 is log 1903 = 3.279439.
The natural logarithm of the reciprocal is ln(1/1903) = -7.551187.

Trigonometric functions

The cosine of 1903 is 0.693004.
The sine of 1903 is -0.720933.
The tangent of 1903 is -1.040301.

Number 1903 in computer science

If 1903 is your year or date of birth, it is not recommended that you use 1903 as your password or PIN.
1903 bytes is about 1.9 KB.
Unix time 1903 is equal to Thursday, Jan. 1, 1970, 12:31:43 a.m. GMT.
As an internet address, 1903 in dotted format is 0.0.7.111 (IPv4) or ::76f (IPv6).
1903 in binary is 11101101111, in ternary 2121111, in octal 3557, and in hexadecimal 76F (0x76f).

Hashes and encodings of the number 1903:

BASE64: MTkwMw==
MD5: 944626adf9e3b76a3919b50dc0b080a4
SHA1: b2526c86bf863dec307a76fbebd94b9fcb72b490
SHA224: 6e3a6b81e4a3012a0f1b599dc7f45a31e0bd432718fab7021f32a49c
SHA256: 262b06d105e1c865b01c3e0a74291cdae511ef15f3d456e14fbe2dffd9efe3b9
SHA384: 3f781acf22e250a5376ceb573645d7d7fb9a79e5d5bc3bba06d6ce6e829890fa036a49a0453f8edb217ba7ef33c67acd

Numerology 1903

The meaning of the number 9 (nine)

Character frequency of 9: 1.

The number 9 (nine) is the sign of ideals, Universal interest and the spirit of combat for humanitarian purposes. It symbolizes the inner Light, prioritizing ideals and dreams, experienced through emotions and intuition. It represents the ascension to a higher degree of consciousness and the ability to display love for others. He/she is creative, idealistic, original and caring.
The meaning of the number 3 (three)

Character frequency of 3: 1.

The number three (3) came to share genuine expression and sensitivity with the world. People associated with this number need to connect with their deepest emotions. The number 3 is characterized by its pragmatism; it is utilitarian, sagacious, dynamic and creative; it has objectives and it fulfills them. He/she is also self-expressive in many ways and has good communication skills.

The meaning of the number 1 (one)

Character frequency of 1: 1.

Number one (1) came to develop or balance creativity, independence, originality, self-reliance and confidence in the world. It reflects power, creative strength, quick mind, drive and ambition. It is the sign of an individualistic and aggressive nature.

The meaning of the number 0 (zero)

Character frequency of 0: 1.

Everything begins at the zero point and at the zero point everything ends. Many times we do not know the end, but we know the beginning: it is at the zero point.

№ 1,903 in other languages

How to say or write the number one thousand, nine hundred and three in Spanish, German, French and other languages.
Spanish: 🔊 (número 1.903) mil novecientos tres
German: 🔊 (Nummer 1.903) eintausendneunhundertdrei
French: 🔊 (nombre 1 903) mille neuf cent trois
Portuguese: 🔊 (número 1 903) mil, novecentos e três
Hindi: 🔊 (संख्या 1 903) एक हज़ार, नौ सौ, तीन
Chinese: 🔊 (数 1 903) 一千九百零三
Arabian: 🔊 (عدد 1,903) واحد ألف و تسعمائة و ثلاثة
Czech: 🔊 (číslo 1 903) tisíc devětset tři
Korean: 🔊 (번호 1,903) 천구백삼
Danish: 🔊 (nummer 1 903) ettusinde og nihundrede og tre
Hebrew: (מספר 1,903) אלף תשע מאות ושלש
Dutch: 🔊 (nummer 1 903) duizendnegenhonderddrie
Japanese: 🔊 (数 1,903) 千九百三
Indonesian: 🔊 (jumlah 1.903) seribu sembilan ratus tiga
Italian: 🔊 (numero 1 903) millenovecentotre
Norwegian: 🔊 (nummer 1 903) en tusen, ni hundre og tre
Polish: 🔊 (liczba 1 903) tysiąc dziewięćset trzy
Russian: 🔊 (номер 1 903) одна тысяча девятьсот три
Turkish: 🔊 (numara 1,903) bindokuzyüzüç
Thai: 🔊 (จำนวน 1 903) หนึ่งพันเก้าร้อยสาม
Ukrainian: 🔊 (номер 1 903) одна тисяча дев'ятсот три
Vietnamese: 🔊 (con số 1.903) một nghìn chín trăm lẻ ba

Frequently asked questions about the number 1903

• How do you write the number 1903 in words? 1903 can be written as "one thousand, nine hundred and three".
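Several of the arithmetic facts listed earlier (factorization, powers, base conversions) can be checked mechanically; a quick verification sketch in Python:

```python
n = 1903

# all divisors of n (so 11 and 173 are its prime factors)
divisors = [d for d in range(1, n + 1) if n % d == 0]
print(divisors)                 # [1, 11, 173, 1903]
print(sum(divisors))            # 2088 (185 excluding the number itself)
print(n ** 2, n ** 3)           # 3621409 6891541327
print(bin(n)[2:], oct(n)[2:], hex(n)[2:])  # 11101101111 3557 76f
```

The same one-liner pattern works for any integer; only the trial division over `range(1, n + 1)` would need replacing by something faster for large numbers.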
{"url":"https://number.academy/1903","timestamp":"2024-11-07T15:22:32Z","content_type":"text/html","content_length":"46229","record_id":"<urn:uuid:e883f52e-b510-40f9-8d2a-c9ea950ce4e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00583.warc.gz"}
Tutorial on Schedulability Analysis under Uncertainty using Formal Methods

Practical information in a nutshell

The tutorial took place on Sunday 30th of September 2018 at ESWEEK 2018 (Torino, Italy).
Time: 9am to 6pm
Room: Sella

Description of the tutorial

Modern real-time systems must cope with different sources of variability. Modern hardware processors introduce several sources of variability in the execution time of the software (cache, pipeline, bus contention, etc.), and the timing of external events may change due to changes in the environment, malfunctions, etc. This variability adds additional challenges for the design, development and validation of modern cyber-physical systems. It is then necessary to estimate the robustness of the system w.r.t. variations of the parameters. A key issue is to estimate for which values of the parameters the system continues to meet all its timing constraints.

In this tutorial, we present the background for analyzing real-time systems using formal methods, notably the formalism of parametric timed automata for analyzing real-time scheduling under uncertainty. Then we give a survey of some real-time scheduling problems, and we show how to model a typical real-time system using the IMITATOR tool. The participants will be guided toward building and verifying a model of a real-time system, exploring the capabilities of the analysis tool.

Resources and content

Real-time systems (Giuseppe Lipari)
Real-time systems and parametric timed automata (Étienne André)

Practical session

A monoprocessor system
• Download the slides, which contain all the commands
• Download the scripts: RETIMI
• Download the models:
□ Response time analysis
□ Sensitivity analysis
□ Response time with offsets

A distributed system
1. Download and open the file exModel.imi using your favorite text editor (if you use a KDE environment, Kate with the OCaml highlighting mode does an almost-perfect job)
2.
Generate a graphical version of the model as follows:
> ./imitator exModel.imi -PTA2PNG
This will generate a file exModel-pta.png. This model is non-parametric. Which real-time system is this model modeling? How many tasks? How many processors? Which tasks are triggered by other tasks? Are periodic? Are sporadic?
3. Schedulability reduces to safety checking. Perform schedulability analysis using the following command:
> ./imitator exModel.imi -mode EF -merge -incl -output-result
The system is schedulable if and only if the answer of safety is "true", i.e., no deadline miss is reachable.
4. Let us now parameterize the WCET of task 1.1. This is achieved by commenting the value of parameter WCET_T11 ("= 5") in the header. The resulting file can be obtained here. Parametric schedulability reduces to parametric safety checking. Perform parametric schedulability analysis using the following command:
> ./imitator exModel-1-WCET11.imi -mode EF -merge -incl -output-result
The system is schedulable for exactly the set of valuations output by IMITATOR.
5. Let us now parameterize the WCET of both tasks 1.1 and 2.1. This is achieved by commenting the value of parameters WCET_T11 and WCET_T21 in the header. The resulting file can be obtained here. Parametric schedulability reduces to parametric safety checking. Perform parametric schedulability analysis using the following command:
> ./imitator exModel-2-WCET11-21.imi -mode EF -merge -incl -output-result -output-cart
The system is schedulable for exactly the set of valuations output by IMITATOR. The generated file exModel-2-WCET11-21_cart.png shows a graphical representation of the schedulable parameter region.
6. Let us now parameterize (only) the offset of task 2.1. This is achieved by commenting the value of parameter offsetT21 in the header. The resulting file can be obtained here. Parametric schedulability reduces to parametric safety checking.
Perform parametric schedulability analysis using the following command:
> ./imitator exModel-3-offset21.imi -mode EF -merge -incl -output-result
The system is schedulable for exactly the set of valuations output by IMITATOR.
7. Let us now parameterize the WCET of both tasks 2.1 and 2.2. This is achieved by commenting the value of parameters WCET_T21 and WCET_T22 in the header. The resulting file can be obtained here. Parametric schedulability reduces to parametric safety checking. Perform parametric schedulability analysis using the following command:
> ./imitator exModel-4-WCET21-22.imi -mode EF -merge -incl -output-result -output-cart
The system is schedulable for exactly the set of valuations output by IMITATOR. The generated file exModel-4-WCET21-22_cart.png shows a graphical representation of the schedulable parameter region.

References

• Étienne André, Laurent Fribourg, Ulrich Kühne and Romain Soulat. IMITATOR 2.5: A Tool for Analyzing Robustness in Scheduling Problems. In Dimitra Giannakopoulou and Dominique Méry (eds.), FM'12, LNCS 7436, Springer, pages 33–36, August 2012.
• Rajeev Alur, Thomas A. Henzinger and Moshe Y. Vardi. Parametric real-time reasoning. STOC 1993, pages 592–601.
• Étienne André and Romain Soulat. The Inverse Method. ISTE Ltd and John Wiley & Sons Inc. ISBN 9781848214477. January 2013.
• Giuseppe Lipari, Sun Youcheng, Étienne André and Laurent Fribourg. Toward Parametric Timed Interfaces for Real-Time Components. In Étienne André and Goran Frehse (eds.), SynCoP'14, EPTCS, April
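The response-time analysis exercised in the practical session can also be computed directly, in the classical non-parametric setting. The sketch below (Python) implements the standard fixed-point iteration of Joseph and Pandya for fixed-priority periodic tasks; it is not part of the tutorial material, and the task set used is illustrative, not the one from the exModel files:

```python
from math import ceil

def response_times(tasks):
    """Worst-case response times for periodic tasks under fixed
    priorities, listed highest priority first; each task is
    (C, T) = (WCET, period), with implicit deadline D = T.
    Classic iteration: R_i = C_i + sum_{j<i} ceil(R_i / T_j) * C_j.
    Returns the list of response times, or None if a task misses
    its deadline."""
    times = []
    for i, (c, t) in enumerate(tasks):
        r = c
        while r <= t:
            nxt = c + sum(ceil(r / tj) * cj for cj, tj in tasks[:i])
            if nxt == r:  # fixed point reached
                break
            r = nxt
        if r > t:         # response time exceeds the deadline
            return None
        times.append(r)
    return times

print(response_times([(1, 4), (2, 6), (3, 10)]))  # [1, 3, 10]
```

The iteration is monotone, so it either converges or exceeds the period and stops; the parametric analyses done by IMITATOR above generalize exactly this computation to symbolic WCETs and offsets.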
{"url":"https://www.imitator.fr/tutorials/ESWEEK18/","timestamp":"2024-11-11T03:32:05Z","content_type":"text/html","content_length":"16996","record_id":"<urn:uuid:9e809837-7992-4e5c-ba9d-16bfafc6822d>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00199.warc.gz"}
Seminars: M. Melistas, Reduction of abelian varieties and a conjecture of Agashe

Abstract: This talk will consist of two parts. First, reduction properties of abelian varieties defined over a field K that have a K-rational point of order p will be studied. Here K is a field of characteristic 0 equipped with a discrete valuation which has algebraically closed residue field of characteristic p. After presenting a general result, we will focus on the dimension 1 case and classify the possible Kodaira types of reduction that can occur. In the second part of the talk, we will discuss a conjecture of Agashe, which is a consequence of the Birch and Swinnerton-Dyer (BSD) conjecture for elliptic curves over the rationals. We will present a theorem that proves Agashe's conjecture. The connection between the two parts is that we can put restrictions on torsion subgroups of certain twists of elliptic curves using reduction.

Language of the talk: English
{"url":"https://m.mathnet.ru/php/seminars.phtml?option_lang=rus&presentid=33766","timestamp":"2024-11-05T07:18:42Z","content_type":"text/html","content_length":"8821","record_id":"<urn:uuid:e4aaa705-ee50-4d2e-8110-d14195d1c64d>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00858.warc.gz"}
Maintaining 5136 - math word problem (5136)

Petr succumbed to a bank's tempting offer, opened an account, and deposited CZK 1,051 into it. The account bore interest at 1.4%, credited once a year. The fee for maintaining this account was only CZK 140 per year. After seven years, he enjoyed how his deposit multiplied. How much was the account balance?
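The balance can be found with a year-by-year simulation. A sketch in Python; the ordering assumption (interest credited first, then the fee deducted) is mine and is not stated in the problem:

```python
def balance_after(deposit, rate, fee, years):
    """Simulate a savings account: each year, credit interest at
    `rate`, then deduct the flat maintenance `fee` (assumed order)."""
    b = deposit
    for _ in range(years):
        b = b * (1 + rate)  # interest credited once a year
        b -= fee            # yearly maintenance fee
    return b

b = balance_after(1051, 0.014, 140, 7)
print(round(b, 2))
```

Under this assumption the fee outpaces the interest, so the deposit shrinks rather than multiplies, ending at roughly CZK 136 after seven years; deducting the fee before crediting interest would give a slightly different figure.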
{"url":"https://www.hackmath.net/en/math-problem/5136","timestamp":"2024-11-04T09:03:40Z","content_type":"text/html","content_length":"50275","record_id":"<urn:uuid:06217dd2-c7b8-4fa6-bef1-afc08b1c7d77>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00806.warc.gz"}
Preparation of variables for analysis

There are many answers to the question of when groups should be matched. Let us use an example of a medical situation. If we estimate the treatment effect from a fully randomized experiment, then by randomly assigning subjects to the treated and untreated groups we create groups that are similar in terms of possible confounding factors. The similarity of the groups is due to the random assignment itself. In such studies, we can examine the pure (not dependent on confounding factors) effect of the treatment method on the outcome of the experiment. In this case, matching other than the random group assignment is not necessary.

The possibility of error arises when the difference in treatment outcome between treated and untreated groups may be due not to the treatment itself, but to a factor that induced people to take part in the treatment. This occurs when randomization is not possible for some reason, for example in an observational study, or when for ethical reasons we cannot assign treatment arbitrarily. Artificial group matching may then be applicable. For example, if the people we assign to the treatment group are healthier and the people in the control group have more severe disease, then it may be not the treatment itself but the condition of the patient before treatment that affects the outcome of the experiment. When we see such an imbalance of groups, it is good if we can decide to randomize; in this way the problem is solved, because randomly drawing people into groups makes them similar. However, we can imagine another situation. This time the group we are interested in will not be treatment subjects but smokers, and the control group will be non-smokers, and the analyses will aim to show the adverse effect of smoking on the occurrence of lung cancer.
Then, in order to test whether smoking does indeed increase the risk of lung cancer, it would be unethical to perform a fully randomized trial, because it would mean that people randomly selected to the risk group would be forced to smoke. The solution to this situation is to establish an exposure group, i.e. to select a number of people who already smoke, and then to select a control group of non-smokers. The control group should be selected carefully, because by leaving the selection to chance we may get a non-smoking group that is younger than the smokers simply because smoking is becoming less fashionable in our country, so automatically there are many young people among the non-smokers. The control should be drawn from non-smokers, but so that it is as similar as possible to the exposure group. In this way we get closer to examining the pure (independent of selected confounding factors such as age) effect of smoking/non-smoking on the outcome of the experiment, which in this case is the occurrence of lung cancer. Such a selection can be made by the matching proposed in the program. One of the main advantages of investigator-controlled matching is that the control group becomes more similar to the treatment group, but this is also the biggest disadvantage of this method. It is an advantage because our study looks more and more like a randomized study. In a randomized trial, the treatment and control groups are similar on almost all characteristics, including those we don't study - the random allocation provides us with this similarity. With investigator-controlled matching, the treatment and control groups become similar on only selected characteristics. The first two methods mentioned are based on matching groups through Propensity Score Matching (PSM). This type of matching was proposed by Rosenbaum and Rubin 1).
In practice, it is a technique for matching a control group (untreated or minimally/standardly treated subjects) to a treatment group on the basis of a probability describing the subjects' propensity to be assigned treatment, depending on the observed associated variables. The probability score describing this propensity, called the Propensity Score, is a balancing score, so that as a result of matching the control group to the treatment group, the distribution of measured associated variables becomes more similar between treated and untreated subjects. The third method does not determine a probability for each individual, but determines a distance/dissimilarity matrix that indicates the objects that are closest/most similar in terms of multiple selected characteristics. In practice, there are many methods to indicate how close the objects being compared are, in this case treated and untreated individuals. Two are proposed in the program. We can match without returning already drawn objects, or with returning these objects to the group from which we draw. When it is impossible to match an untreated person to a treated one unambiguously, because the group from which we choose contains several persons matching the treated one equally well, one of these persons is chosen at random and paired. When an analysis is redone from its report, a fixed seed is set by default so that the results of the repeated draw are the same; when a new analysis is performed, the seed is changed and the result of the draw may be different. If it is not possible to match an untreated person to a treated one because there are no more persons to join in the group from which we are choosing (e.g. matching persons have already been joined to other treated persons, or the set from which we are choosing has no similar persons), then this person remains without a pair. Most often a 1:1 match is made, i.e., for one treated person, one untreated person is matched.
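The greedy nearest-neighbour step described above can be sketched in a few lines. This is a minimal plain-Python illustration, not PQStat's exact algorithm: it matches without replacement, uses no caliper, and resolves ties deterministically (by lowest index) rather than at random:

```python
def greedy_nn_match(treated_ps, control_ps, k=1):
    """Greedy 1:k nearest-neighbour matching on propensity scores,
    without replacement. Returns {treated_index: [control_indices]};
    a treated unit gets fewer than k matches if controls run out."""
    available = set(range(len(control_ps)))
    matches = {}
    for t, ps_t in enumerate(treated_ps):
        chosen = []
        for _ in range(k):
            if not available:
                break
            # closest remaining control; ties go to the lowest index
            c = min(sorted(available),
                    key=lambda j: abs(control_ps[j] - ps_t))
            available.discard(c)
            chosen.append(c)
        matches[t] = chosen
    return matches

print(greedy_nn_match([0.30, 0.70], [0.10, 0.32, 0.69, 0.75]))
```

With the sample scores above, treated unit 0 (PS 0.30) is paired with control 1 (PS 0.32) and treated unit 1 (PS 0.70) with control 2 (PS 0.69); passing `k=2` would implement the 1:k matching mentioned in the text.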
However, if the original control group from which we draw is large enough and we need to draw more individuals, then we can choose to match 1:k, where k indicates the number of individuals that should be matched to each treated individual. After matching the control group to the treatment group, the results of such matching can be returned to the worksheet, i.e. a new control group can be obtained. However, we should not assume that by applying the matching we will always obtain satisfactory results. In many situations, the group from which we draw does not have a sufficient number of objects that are sufficiently similar to the treatment group. Therefore, the matching performed should always be evaluated. There are many methods of evaluating the matching of groups. The program uses methods based on the standardized group difference and on the Propensity Score percentile agreement of the treatment group and the control group, described more extensively in the work of P.C. Austin, among others 2)3). This approach allows comparison of the relative balance of variables measured in different units, and the result is not affected by sample size. Estimation of concordance using statistical tests was abandoned because the matched control group is usually much smaller than the original control group, so that the obtained p-values of tests comparing the test group to the smaller control group are more likely to leave the null hypothesis in place, and therefore do not show significant differences due to the reduced size.

For comparison of continuous variables we determine the standardized mean difference:

d = (x̄_T - x̄_C) / √((s²_T + s²_C) / 2)

where x̄_T and x̄_C are the means of the variable in the treatment and control groups and s²_T and s²_C are the corresponding variances.

To compare binary variables (of two categories, usually 0 and 1) we determine the standardized frequency difference:

d = (p_T - p_C) / √((p_T(1 - p_T) + p_C(1 - p_C)) / 2)

where p_T and p_C are the frequencies (proportions) of the category in the treatment and control groups.

Variables with multiple categories should be broken down, as in logistic regression analysis, into dummy variables with two categories and, when checking the fit of both groups, the standardized frequency difference should be determined for them.
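Both standardized differences are easy to compute directly; a sketch in plain Python, using the pooled-denominator formulas of the Austin style described above:

```python
from math import sqrt

def std_mean_diff(x_t, x_c):
    """Standardized mean difference for a continuous variable."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):  # sample variance
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    return (mean(x_t) - mean(x_c)) / sqrt((var(x_t) + var(x_c)) / 2)

def std_freq_diff(p_t, p_c):
    """Standardized difference for a binary variable, from the
    proportions in the treatment and control groups."""
    return (p_t - p_c) / sqrt((p_t * (1 - p_t) + p_c * (1 - p_c)) / 2)

# perfectly balanced groups give a difference of 0
print(std_freq_diff(0.4, 0.4))
```

Values within roughly -0.1 to 0.1 would count as acceptable balance under the rule of thumb quoted in the text.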
Although there is no universally agreed criterion for what threshold of standardized difference can be used to indicate significant imbalance, a standardized difference of less than 0.1 (in both mean and frequency estimation) can provide a clue 4). Therefore, to conclude that the groups are well matched, we should observe standardized differences close to 0, and preferably not outside the range of -0.1 to 0.1.

Graphically, these results are presented in a dot plot. Negative differences indicate lower means/frequencies in the treatment group, positive in the control group. The 1:1 match obtained in the reports means the summary for the study group and the corresponding control group obtained in the first match, the 1:2 match means the summary for the study group and the corresponding control group obtained in the first + second match (i.e., not the study group and the corresponding control group obtained in the second match only), etc.

The window with the settings of group matching options is launched from the menu Advanced statistics→Multivariate models→Propensity Score.

We want to compare two ways of treating patients after accidents, the traditional way and the new one. The correct effect of both treatments should be observed in the decreasing levels of selected cytokines. To compare the effectiveness of the two treatments, they should both be carried out on patients who are quite similar. Then we will be sure that any differences in the effectiveness of these treatments will be due to the treatment effect itself and not to other differences between patients assigned to different groups. The study is a posteriori, that is, it is based on data collected from patients' treatment histories. Therefore, the researchers had no influence on the assignment of patients to the new drug treatment group and the traditional treatment group.
It was noted that the traditional treatment was mainly prescribed to older patients, while the new treatment was prescribed to younger patients, in whom it is easier to lower cytokine levels. The groups were fairly similar in gender structure, but not identical. If the planned study had been carried out on such selected groups of patients, the new way would have had an easier challenge, because younger organisms might have responded better to the treatment. The conditions of the experiment would not be equal for both ways, which could distort the results of the analyses and the conclusions drawn. Therefore, it was decided to match the traditionally treated group so that it is similar to the study group treated with the new way. We planned to make the matching with respect to two characteristics, i.e. age and gender. The traditional treatment group is larger (80 patients) than the new treatment group (19 patients), so there is a good chance that the groups will be similar. The selection is performed by the logistic regression model algorithm embedded in the PSM. We remember that gender should be coded numerically, since only numerical values are involved in logistic regression analysis. We choose nearest neighbor as the method. We want to make it impossible for the same person to be selected twice, so we choose drawing without replacement. We will try 1:1 matching, i.e. for each person treated with the new drug we will match one person treated traditionally. Remember that the matching is random: it depends on the seed value set by the computer, so the randomization performed by the reader may differ from the values presented here. A summary of the selection can be seen in the tables and charts. The line at 0 indicates equilibrium of the groups (difference between groups equal to 0). When the groups are in equilibrium with respect to the given characteristics, all points on the graph are close to this line, i.e., around the interval -0.1 to 0.1.
In the case of the original sample (blue color), we see a significant departure of the Propensity Score. As we know, this mismatch is mainly due to the age mismatch (its standardized difference is at a large distance from 0) and, to a lesser extent, the gender mismatch. By performing the matching we obtained groups more similar to each other (red color in the graph). The standardized difference between the groups as determined by the Propensity Score is 0.0424, which is within the specified range. The ages of the two groups are already similar: the traditional treatment group differs from the new treatment group by less than a year on average (the difference between the averages presented in the table is 0.2632), and the standardized difference between the averages is -0.0277. In the case of gender, the match is perfect, i.e. the percentages of females and males are the same in both groups (the standardized difference between the percentages presented in the table and the graph is now 0). We can return the data prepared in this way to the worksheet and subject it to the analyses we have planned. Looking at the summary we just obtained, we can see that despite the good balancing of the groups and the perfect match of many individuals, there are individuals who are not as similar as we might wish.

Sometimes, in addition to obtaining well-balanced groups, researchers are interested in determining the exact way individuals are selected, i.e. obtaining a greater influence on the similarity of objects as to the value of the Propensity Score, or on the similarity of objects as to the values of specific characteristics. Then, if the group from which we draw is sufficiently large, the analysis may yield results that are more favorable from the researcher's point of view; but if the group from which we draw lacks objects meeting our criteria, then for some people we will not be able to find a match that meets our conditions.
Suppose that we would like to obtain groups whose Propensity Score (i.e., propensity to be assigned to the treatment) differs by no more than … How to determine this value? You can take a look at the report from the earlier analysis, where the smallest and largest distance between the drawn objects is given. In our case the objects closest to each other differ by min=0, and the furthest by max=0.5183. We will check what kind of selection we obtain when we match, to people treated with the new method, people treated traditionally whose Propensity Score is very close, e.g. differing by less than 0.01.

We can see that this time we failed to select the whole group. Comparing the Propensity Score for each pair (treated with the new method and treated traditionally), we can see that the differences are really small. However, since the matched group is much smaller, to sum up the whole process we have to notice that the Propensity Score, age and sex are not close enough to the line at 0. Our attempt to improve the situation did not lead to the desired effect, and the obtained groups are not well balanced.

To summarize the overall draw, we note that although it meets our assumptions, the resulting groups are not as well balanced as they were in our first draw based on the Propensity Score. The points in red representing the quality of the match by age and by gender deviate slightly from the line of sameness set at level 0, which means that the average difference in age and in gender structure is now greater than in the first matching. It is up to the researcher to decide which way of preparing the data will be more beneficial. Finally, when the decision is made, the data can be returned to a new worksheet. To do this, go back to the report you selected and, in the project tree, under the right mouse button select the Redo analysis menu.
In the same analysis window, point to the Fit Result button and specify which other variables will be returned to the new worksheet. This will result in a new data sheet with side-by-side data for people treated with the new treatment and matched people treated traditionally.

Interactions are considered in multidimensional models. Their presence means that the influence of an independent variable on the dependent variable differs depending on the level of another independent variable. Interactions are defined with the Interactions button in the window of the selected multidimensional analysis. In the window of interaction settings, with the CTRL button pressed, we select the variables which are to form interactions and transfer them into the neighboring list with the use of an arrow. By pressing the OK button we obtain the appropriate columns in the datasheet.

In the analysis of interactions, the choice of appropriate coding of dichotomous variables allows the avoidance of over-parametrization related to interactions. Over-parametrization causes the lower-order effects of dichotomous variables to be redundant with respect to the confounding higher-order interactions. As a result, the inclusion of higher-order interactions in the model annuls the effect of lower-order interactions, not allowing an appropriate evaluation of the latter. In order to avoid over-parametrization in a model in which there are interactions of dichotomous variables, it is recommended to choose the option effect coding.

In models with interactions, remember to "trim" them appropriately, so that when removing the main effects, we also remove the higher-order effects that depend on them. That is: if a model contains the main effects X1 and X2 together with the interaction X1*X2, then removing the main effect X1 (or X2) requires removing the interaction X1*X2 as well.

When preparing data for a multidimensional analysis there is the problem of appropriate coding of nominal and ordinal variables. That is an important element of preparing data for analysis, as it is a key factor in the interpretation of the coefficients of a model.
The nominal or ordinal variables divide the analyzed objects into two or more categories; the dichotomous variables are those with exactly two categories.

Dummy coding is employed in order to answer, with the use of multidimensional models, the question: how do the individual categories of a variable differ from a selected reference category? We code, in accordance with dummy coding, the sex variable with two categories (the male sex is selected as the reference category), and the education variable with 4 categories (elementary education is selected as the reference category). Building on the basis of dummy variables in a multiple regression model, we might then check what impact the variables have on a dependent variable.

Effect coding is used to answer, with the use of multidimensional models, the question: how do the individual categories of a variable differ from the general mean? In effect coding the base category is coded as -1 in every column. With the use of effect coding we code the sex variable with two categories (the male category is the base category) and a variable informing about the region of residence in the analyzed country. 5 regions were selected: northern, southern, eastern, western, and central. The central region is the base one.

References

1) Rosenbaum P.R., Rubin D.B. (1983a), The central role of the propensity score in observational studies for causal effects. Biometrika; 70:41–55.
2) Austin P.C. (2009), The relative ability of different propensity score methods to balance measured covariates between treated and untreated subjects in observational studies. Med Decis Making; 29.
3) Austin P.C. (2011), An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies. Multivariate Behavioral Research 46, 3: 399–424.
4) Normand S.L.T., Landrum M.B., Guadagnoli E., Ayanian J.Z., Ryan T.J., Cleary P.D., McNeil B.J. (2001), Validating recommendations for coronary angiography following an acute myocardial infarction in the elderly: A matched analysis using propensity scores. Journal of Clinical Epidemiology; 54:387–398.
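The two coding schemes just described can be illustrated in a few lines; a sketch in plain Python with generic category names (the helper functions are illustrative, not PQStat's):

```python
def dummy_code(values, categories, reference):
    """Dummy (0/1) coding: one column per non-reference category;
    the reference category is all zeros."""
    cols = [c for c in categories if c != reference]
    return [[1 if v == c else 0 for c in cols] for v in values]

def effect_code(values, categories, base):
    """Effect (-1/0/1) coding: like dummy coding, except the base
    category is coded -1 in every column."""
    cols = [c for c in categories if c != base]
    return [[-1 if v == base else (1 if v == c else 0) for c in cols]
            for v in values]

edu = ["elementary", "secondary", "higher"]
print(dummy_code(["secondary", "elementary"], edu, "elementary"))
print(effect_code(["secondary", "elementary"], edu, "elementary"))
```

Note how an "elementary" observation is coded as all zeros under dummy coding (model coefficients then compare each level to the reference) but as all -1 under effect coding (coefficients then compare each level to the general mean).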
{"url":"http://manuals.pqstat.pl/en:statpqpl:wielowympl:przygpl","timestamp":"2024-11-07T00:07:07Z","content_type":"text/html","content_length":"104597","record_id":"<urn:uuid:4c9bf988-f804-4152-953c-f74b06bfb91c>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00777.warc.gz"}
Mathematica Animations by High School Students
• To: mathgroup at smc.vnet.net
• Subject: [mg101996] Mathematica Animations by High School Students
• From: Helen Read <hpr at together.net>
• Date: Sun, 26 Jul 2009 03:53:45 -0400 (EDT)
• Reply-to: HPR <read at math.uvm.edu>

You might enjoy looking at some Mathematica animations created by a group of 30 very bright high school students at the Vermont Governor's Institute in Mathematical Sciences about a month ago. (I meant to post the link here sooner, but am just now getting around to it.) The students had no previous exposure to Mathematica, and I had them for only a single 75-minute session. We made liberal use of the Classroom Assistant palette so that I didn't have to teach them much syntax.

I began by showing them how to use ContourPlot to make a static plot of familiar equations (lines, circles, parabolas, etc.). Then we discussed the idea of introducing a parameter into an equation so that we could animate it in some way (e.g., change the slope of a line or move it up and down, that sort of thing). Then we took a ContourPlot of two lines, one horizontal and one vertical, threw in a parameter and wrapped Animate around the whole thing, to make the vertical line move from left to right. I then told them to see if they could get the horizontal line to move from top to bottom at the same time the vertical was moving left to right. I gave them a few suggestions of other things to try, and they took it from there. A few helpers and I walked around the room helping out and answering questions.

I have taught a week-long Mathematica course at the Math Institute in past years, but this was the first time that I did it in only a single session. The Classroom Assistant palette was a big help in that regard. Also, using ContourPlot (which might seem an odd choice) worked very well.
I wanted them to be able to plot circles, ellipses, vertical lines, etc., and didn't have time to teach them about parametric equations the way I would have if we had all week. With ContourPlot they could put in familiar Cartesian equations, and all of them came up with at least one neat animation within the time that we had. Some of the older kids with a bit more math behind them asked if it's possible to plot polar curves ("Yes! Here, let me show you where to find PolarPlot on the palette"), and a few of them made some 3D animations. Here are some of the students' animations. Enjoy.

One minor aggravation in all this: I find it baffling that exporting an animation from Animate or Manipulate from Mathematica into any video format results in the animation running forward and then backward. I have not found any way to get it to export so that it runs once in the forward direction only. The kids' animations look OK going forward and back, but there are some things we would really like to run in one direction only. AnimationDirection->Forward doesn't do anything when you export, as far as I can tell.

Helen Read
University of Vermont
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Jul/msg00676.html","timestamp":"2024-11-12T09:37:47Z","content_type":"text/html","content_length":"33029","record_id":"<urn:uuid:d48efecf-fe23-4a2f-8dc1-a75c23a7132f>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00059.warc.gz"}
Takurō Mochizuki | Academic Influence
Most Influential Person Now: Mathematician in Japan

Why Is Takurō Mochizuki Influential? According to Wikipedia, “Takurō Mochizuki is a Japanese mathematician at Kyoto University. As a student at the University of Kyoto in 1994, Mochizuki left his undergraduate studies early to become a graduate student in mathematics at the same university. He completed his Ph.D. in 1999, and joined the faculty of Osaka City University, returning to Kyoto in 2004.”
{"url":"https://academicinfluence.com/people/takuro-mochizuki","timestamp":"2024-11-14T21:01:21Z","content_type":"text/html","content_length":"60723","record_id":"<urn:uuid:d543ce73-ebd2-4430-9b33-c455b5b52ac3>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00044.warc.gz"}
Vibrational Thermodynamics of Materials
by Brent Fultz
Publisher: Caltech 2009
Number of pages: 145

The literature on vibrational thermodynamics of materials is reviewed. The emphasis is on metals and alloys, especially on the progress over the last decade in understanding differences in the vibrational entropy of different alloy phases and phase transformations. Some results on carbides, nitrides, oxides, hydrides and lithium-storage materials are also covered.

Download or read it online for free here: Download link (4.3MB, PDF)

Similar books

Solid State Theory, by Manfred Sigrist (ETH Zurich). Contents: Electrons in the periodic crystal; Semiconductors; Metals - properties of interacting electrons; Landau's Theory of Fermi Liquids; Transport properties of metals; Magnetism in metals; Mott insulators and the magnetism of localized moments.

Solid State Physics, by Chetan Nayak (University of California). From the table of contents: What is Condensed Matter Physics? Length, time, energy scales; Review of Quantum Mechanics; Review of Statistical Mechanics; Broken Translational Invariance in the Solid State; Electronic Bands.

Lecture Notes for Solid State Physics, by Steven H. Simon (Oxford University). Contents: Physics of Solids without Considering Microscopic Structure; Putting Materials Together; Toy Models of Solids in One Dimension; Geometry of Solids; Neutron and X-Ray Diffraction; Electrons in Solids; Magnetism and Mean Field Theories.

Worked Examples in the Geometry of Crystals, by H. K. D. H. Bhadeshia (Institute of Metals). The monograph deals with the mathematical crystallography of materials. It covers orientation relationships, aspects of deformation, martensitic transformations and interfaces. Intended for students and anyone interested in phase transformations.
{"url":"https://www.e-booksdirectory.com/details.php?ebook=7611","timestamp":"2024-11-06T08:10:37Z","content_type":"text/html","content_length":"10990","record_id":"<urn:uuid:f1ee1bf4-ed0c-47b3-b3d8-185cf6537682>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00811.warc.gz"}
CPM Homework Help Find the circumference of each of the circles described below. 1. Radius of $3$ inches Recall that the circumference of a $\text{circle}=2\pi\left(\text{radius}\right)$. $2\pi\left(3\ \text{inches}\right)=6\pi\approx18.84$ inches 2. Diameter of $27$ cm Refer to the diagram from part (a) and recall that the $\text{circumference}=\pi\left(\text{diameter}\right)$. To get an appropriate decimal answer, round $\pi$ so that it equals $3.14$.
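Both parts can be checked with a few lines of Python (my sketch, not CPM's; the lesson's convention of rounding π to 3.14 is kept):

```python
def circumference_from_radius(r, pi=3.14):
    # circumference = 2 * pi * radius
    return 2 * pi * r

def circumference_from_diameter(d, pi=3.14):
    # circumference = pi * diameter
    return pi * d

print(round(circumference_from_radius(3), 2))    # 18.84 (inches, part a)
print(round(circumference_from_diameter(27), 2)) # 84.78 (cm, part b)
```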
{"url":"https://homework.cpm.org/category/CC/textbook/cc2/chapter/9/lesson/9.1.1/problem/9-16","timestamp":"2024-11-05T00:46:02Z","content_type":"text/html","content_length":"37360","record_id":"<urn:uuid:81c45b5d-b7c9-4954-9ba9-5444d4391c8f>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00135.warc.gz"}
Author Details: 1. Department of Mathematics, Post Graduate Government College, Sector 11, Chandigarh, IN; 2. Department of Mathematics, School of Chemical Engineering and Physical Science, Lovely Professional University, Phagwara, Punjab, IN.

Research Journal of Engineering and Technology, Vol 8, No 4 (2017), Pagination: 405-408

The coupled partial differential equations governing a rotating thermoelastic medium in the context of Lord and Shulman theory are solved for plane wave solutions. A cubic velocity equation is obtained, whose roots correspond to the speeds of propagation of three coupled plane waves. A reflection phenomenon is considered in a rotating thermoelastic solid half-space for incidence of a coupled plane wave. The plane surface of the half-space is subjected to impedance boundary conditions, where normal and tangential tractions are proportional to normal and tangential displacement components times frequency, respectively. The expressions for energy ratios of all reflected waves are obtained and computed numerically for a particular material representing the medium. The dependence of energy ratios on rotation parameter, impedance parameters and angle of incidence is shown graphically.

Keywords: Generalized Thermoelasticity, Impedance Boundary, Reflection, Energy Ratios, Rotation.

• Achenbach J.D. (1973), Wave propagation in elastic solids, North Holland. • Ben-Menahem A. and Singh S.J. (1981), Seismic Waves and Sources, Springer. • Biot M.A. (1956), Thermoelasticity and irreversible thermodynamics. Journal of Applied Physics, 2, 240-253. • Borcherdt R.D. (1982), Reflection-refraction of general P- and type-I S-waves in elastic and anelastic solids, Geophysical Journal International, 70, 621-638. • Bullen K.E. (1963), An Introduction to the Theory of Seismology, Cambridge University Press. • Cagniard L. (1962), Reflection and refraction of progressive waves, translated and revised by E. A. Flinn and C. H.
Dix., New York McGraw-Hill Book Company. • Chatterjee M., Dhua S.and Chattopadhyay A. (2016), Quasi-P and quasi-S waves in a self–reinforced medium under initial stresses and under gravity, Journal of Vibration and Control, 22, 3965-3985. • Chattopadhyay A., Kumari P. and Sharma V.K. (2015), Reflection and refraction at the interface between distinct generally anisotropic half spaces for three-dimensional plane quasi-P waves, Journal of Vibration and Control, 21, 493-508. • Christie D.G. (1955), Reflection of elastic waves from a free boundary, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 46, 527-541. • Deresiewicz H. (1960), The effect of boundaries on wave propagation in a liquid-filled porous solidI. Reflection of plane waves at a free plane boundary (non-dissipative case), Bulletin of the Seismological Society of America, 50, 599-607. • Dey S. and Addy S.K. (1977), Reflection of plane waves under initial stresses at a free surface, International Journal of Non-Linear Mechanics, 12, 371–381. • Ewing W.M., Jardetzky W.S. and Press F. (1957), Elastic Waves in Layered Media, New York, McGraw-Hill Book Company. • Green A.E. and Lindsay K.A. (1972), Thermoelasticity, J. Elasticity, 2, 1-7. • Godoy E., Durn M. and Ndlec J.C. (2012), On the existence of surface waves in an elastic half-space with impedance boundary conditions, Wave Motion, 49, 585-594. • Haskell N.A. (1962), Crustal reflection of plane P and SV waves, Journal of Geophysical Research, 67, 4751-4768. • Hetnarski R.B. and Ignaczak J. (1999), Generalized thermoelasticity, Journal of Thermal Stresses, 22, 451-476. • Ignaczak J. and Ostoja-Starzewski M. (2009), Thermoelasticity with Finite Wave Speeds, Oxford University Press. • Lord H. and Shulman Y. (1967), A generalized dynamical theory of thermoelasticity, Journal of Mechanics and Physics of the Solids, 15, 299-309. • Malischewsky P.G. (1987), Surface Waves and Discontinuities, Elsevier, Amsterdam. • Miklowitz J. 
(1966), Elastic wave propagation, Applied Mechanics Surveys, Spartan Books. • Ogden R.W. and Sotiropoulos D.A. (1998), Reflection of plane waves from the boundary of a pre-stressed compressible elastic half-space, IMA Journal of Applied Mathematics, 61, 61-90. • Parfitt V.R. and Eringen A.C. (1969), Reflection of plane waves from the flat boundary of a micropolar elastic half‐space, Journal of the Acoustical Society of America, 45, 1258-1272. • Rokhlin S.I., Bolland T.K. and Adler L. (1986), Reflection and refraction of elastic waves on a plane interface between two generally anisotropic media, Journal of the Acoustical Society of America, 79, 906-918. • Schoenberg M. and Censor D. (1973), Elastic waves in rotating media, Quarterly of Applied Mathematics, 31, 115-125. • Sharma J.N. (2001), On the propagation of thermoelastic waves in homogeneous isotropic plates, Indian Journal of Pure and Applied Mathematics, 32, 1329-1341. • Sidhu R.S. and Singh S.J. (1984), Reflection of P and SV waves at the free surface of a prestressed elastic half‐space, Journal of the Acoustical Society of America, 76, 594-598. • Singh B. (2005), Reflection of P and SV waves from free surface of an elastic solid with generalized thermodiffusion, Journal of Earth System Science, 114, 159-168. • Singh B. (2015), Rayleigh waves in an incompressible fibrereinforced elastic solid with impedance boundary conditions, Journal of the Mechanical Behaviour of Materials, 24, 183-186. • Singh B. (2016), Rayleigh wave in a thermoelastic solid half-space with impedance boundary conditions, Meccanica 51, 1135-1139. • Sinha A.N. and Sinha S.B.(1974), Reflection of thermoelastic waves at a solid half-space with thermal relaxation, Journal of Physics of the Earth, 22, 237-244. • Vinh P.C. and Hue T.T.T. (2014), Rayleigh waves with impedance boundary conditions in anisotropic solids, Wave Motion, 51, 1082-1092. • Wei P.J., Tang Q. and Lia Y. 
(2015), Reflection and transmission of elastic waves at the interface between two gradient-elastic solids with surface energy, European Journal of Mechanics - A/ Solids, 52, 54-71.
{"url":"https://www.i-scholar.in/index.php/index/search/authors/view?firstName=Baljeet&middleName=&lastName=Singh&affiliation=Department%20of%20Mathematics,%20Post%20Graduate%20Government%20College,%20Sector%2011,%20Chandigarh,%20160011&country=IN","timestamp":"2024-11-10T01:42:41Z","content_type":"application/xhtml+xml","content_length":"56444","record_id":"<urn:uuid:f3a46fbc-4e87-4177-84e8-1635acfebd04>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00421.warc.gz"}
The Tango method of regression to the mean -- a proof

Warning: technical mathy post.

To go from a record of performance to an estimate of a team's talent, you have to regress its winning percentage towards the mean. How do you figure out how much to regress? Tango has often given these instructions:

1. First, figure out the standard deviation of team performance. For MLB, for all teams playing at least 160 games up until 2009, that figure is 0.070 (about 11.34 wins per 162 games). Second, figure out the theoretical standard deviation of luck over a season, using the binomial approximation to normal. That's estimated by the formula

Square root of (p(1-p)/g)

For baseball, p = .500 (since the average team must be .500), and g = 162. So the SD of luck works out to about 0.039 (6.36 games per season). So SD(performance) = 0.070, and SD(luck) = 0.039. Square those numbers to get var(performance) and var(luck). Then, if luck is independent of talent, we get

var(performance) = var(talent) + var(luck)

That means var(talent) equals 0.058 squared, so SD(talent) = 0.058.

2. Now, find the number of games for which the SD(luck) equals SD(talent), or 0.058. It turns out that's about 74 games, because the square root of (p(1-p)/74) is approximately equal to 0.058.

3. That number, 74, is your "answer". So, now, any time you want to regress a team's record to the mean, take 74 games of .500 ball (37-37), and add them to the actual performance. The result is your best estimate of the team's talent. For instance, suppose your team goes 100-62. What's its expected talent? Adjust the record to 137-99. That gives an estimated talent of .581, or 94-68. Or, suppose your team starts 2-6. Adjust it to 39-43. That's an estimated talent of .476, or 77-85.

Those estimates seemed reasonable to me, but I often wondered: does this really work?
Is it really true that you can add 74 games to a 162 game season, and it'll work, but you can also add 74 games to an 8 game season, and that'll work too? Surely you want to add fewer .500 games when your original sample is smaller, no? And why always add the exact number of games that makes the talent SD equal to the luck SD? Is it a rule of thumb? Is it a guess? Again, that can't be the mathematically best way, can it?

It can, actually. I spent a couple of hours doing some algebra, and it turns out that Tango's method is exactly right. I was very surprised. Also, I don't know how Tango figured it out ... maybe he used an easier, more intuitive way to figure out that it works than going through a bunch of algebra. But I can't find one, so let me take you through the algebra, if you care. Tango, is there an obvious explanation for why this works, more obvious than what I've done?

As I wrote a few paragraphs ago, var(overall) = var(talent) + var(luck). [Call this "equation 1" for later.] Let v^2 = var(overall), and let t^2 = var(talent). Also, let "g" be the number of games. From the binomial approximation to normal, we know var(luck) = (.25/g). So

v = SD(overall)
t = SD(talent)
sqr(.25/g) = SD(luck)

Suppose you run a regression on overall outcome vs. talent. The variance of talent is t^2. The variance of overall outcome is v^2. Therefore, we know that talent will explain t^2/v^2 of the variance of outcome, so the r-squared we get out of the regression will be t^2/v^2. That means the correlation coefficient, "r", will be equal to the square root of that, or t/v. There's a property of regression in general that implies this: If we want to predict talent from outcome, then, if the outcome X is y standard deviations from the mean, talent will be y(t/v) standard deviations from the mean. That's one of the things that's true for any regression of two variables.
Expected talent = average + (number of SDs outcome is away from the mean) * (t/v) * (SD of talent)
Expected talent = average + [(outcome - mean)/(SD of outcome)] * [t/v] * (SD of talent)
Expected talent = average + (X - mean)/v * (t/v) * t
Expected talent = average + t^2/v^2 * (X - mean)

That last equation means that when we look at how far the observation is from average, we "keep" t^2/v^2 of the difference, and regress to the mean by the rest. In other words, we regress to the mean by (1 - t^2/v^2), or "(100 * (1 - t^2/v^2)) percent".

Now, regressing to the mean by (1 - t^2/v^2) is exactly the same as averaging:
-- (1 - t^2/v^2) parts average performance, and
-- (t^2/v^2) parts observed performance.

For instance, if you're regressing one-third of the way to the mean, you can do it two ways. You can (a) move from the average to the observation, and then move the other way by 1/3 of the difference, or (b) you can just take an average of two parts original and one part mean.

But how does that translate, in practical terms, into how many games of average performance we need to add? From above, we know that:

For every t^2/v^2 games of observed performance, we want (1 - t^2/v^2) games of average performance.

And now a little algebra:

For every 1 game of observed performance, we want (1 - t^2/v^2)/(t^2/v^2) games of average performance.

Simplifying gives:

For every game of observed performance, we want (v^2 - t^2)/t^2 games of average performance.

Multiply by g:

For every "g" games of observed performance, we want g(v^2 - t^2)/t^2 games of average performance.

But, from equation 1, we know that (v^2 - t^2) is just the squared SD of luck, which is .25/g. So,

For every "g" games of observed performance, we want g(.25/g)/t^2 games of average performance.

The "g"s cancel, and we get:

For every "g" games of observed performance, we want .25/t^2 games of average performance.

And that doesn't depend on g!
So no matter whether you're regressing a team over 1 game, or 10 games, or 20 games, or 162 games, you can always add *the same number of average games* and get the right answer! I wouldn't have guessed that. But how many games? Well, it's (.25/t^2) games. For baseball, we calculated earlier that t = 0.058. So .25/t^2 equals ... 74 games. Exactly as Tango said, the number of games we're adding is exactly the number of games for which SD(luck) equals SD(talent)!

Is that a coincidence? No, it's not. It's the way it has to be. Why? Here's a semi-intuitive explanation. As we saw above, the number of games we have to add does NOT depend on the number of games we started with in the observed W-L record. So, we can pick any number of games. Suppose we just happened to start with 74 games -- maybe a team that was 40-34, or something. Now, for that team, the SD of its talent is 0.058. And, the SD of its luck is also 0.058. Therefore, if we were to do a regression of talent vs. observed, we would necessarily come up with an r-squared of 0.5 -- since the variances of talent and luck are exactly equal, talent explains half of the total variance. That means the correlation coefficient, r, is the square root of 0.5, or 1 divided by the square root of 2. For every SD change in performance, we predict 1/sqr(2) SD change in talent. But the SD of talent is exactly 1/sqr(2) times the SD of performance. Multiply those two 1/sqr(2)'s together and you get 1/2, which means for every win change in performance, we predict 1/2 win change in talent. That's another way of saying that we want to regress exactly halfway back to the mean. That, in turn, is the equivalent of averaging one part observation, and one part mean. Since we have 74 games of observation, we need to add 74 games of mean. So, in the case of "starting with 74 games of observation," the answer is, "we need to add 74 games of .500 to properly regress to the mean."
However, we showed above that we want to add the *same* number of .500 games regardless of how many observed games we started with. Since this case works out to 74 games, *all* situations must work out to 74 games. QED, I guess. And, of course, and again as Tango has pointed out, this works for *any* binomial variable, like batting average or hockey save percentage. The only thing you have to keep in mind is that the ".25" in the formula for luck is based on an average being .500. It's really p(1-p), which works out to .25 if your p equals .500. If your p doesn't equal .500, use p(1-p) instead. So, in hockey, where a typical save percentage is .880, use (.880)(.120) = .1056 instead. Sorry this is so ugly to read in blog form. Maybe I'll make the equations nicer and rerun this in "By the Numbers." Let me know if I've done anything wrong, or if I've just duplicated Tango's proof. For all I know, Tango has already explained all this somewhere else. But this is still kind of complicated. Tango, do you have a more intuitive explanation of why this works, one that doesn't need all this algebra? (Update, 11:30pm: part of the explanation above "QED" was wrong ... now fixed.) Labels: baseball, regression to the mean
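Here's a quick numeric check of the proof (my sketch, not Tango's or Phil's code): adding C = p(1-p)/t^2 league-average games gives exactly the same estimate as regressing by t^2/v^2, for any number of observed games g.

```python
def games_to_add(sd_talent, p=0.5):
    # C = p(1-p) / t^2, the number of average games to add
    return p * (1 - p) / sd_talent ** 2

def estimate_by_adding_games(wins, games, sd_talent, p=0.5):
    c = games_to_add(sd_talent, p)
    return (wins + p * c) / (games + c)

def estimate_by_regression(wins, games, sd_talent, p=0.5):
    var_luck = p * (1 - p) / games                      # binomial luck variance
    k = sd_talent ** 2 / (sd_talent ** 2 + var_luck)    # keep fraction t^2/v^2
    return p + k * (wins / games - p)

t = 0.058                                  # SD(talent) for MLB, from above
print(round(games_to_add(t)))              # 74
for wins, games in [(100, 162), (2, 8), (40, 74)]:
    a = estimate_by_adding_games(wins, games, t)
    b = estimate_by_regression(wins, games, t)
    assert abs(a - b) < 1e-12              # identical for any g
print(round(estimate_by_adding_games(100, 162, t), 3))  # 0.58 (the post's .581 uses exactly 74 games)
```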
{"url":"http://blog.philbirnbaum.com/2011/08/tango-method-of-regression-to-mean-kind.html","timestamp":"2024-11-02T11:52:19Z","content_type":"application/xhtml+xml","content_length":"64772","record_id":"<urn:uuid:46dd58b6-28a0-4f79-ac56-effbeaac2789>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00647.warc.gz"}
Number of Tangents from a Point on a Circle - Class 10 Maths MCQ - Sanfoundry

Class 10 Maths MCQ – Number of Tangents from a Point on the Circle

This set of Class 10 Maths Chapter 10 Multiple Choice Questions & Answers (MCQs) focuses on “Number of Tangents from a Point on the Circle”.

1. How many tangents can a circle have?
a) Zero
b) Infinity
c) Estimated on the value of the radius
d) Fixed for every kind of circle

Answer: b
Explanation: A circle is a set of points on a plane that are equidistant from a fixed point, and an infinite number of tangents can be drawn for any given circle.

2. Find the number of tangents that can be drawn for the given figure.
a) Estimated by using the Pythagorean theorem
b) Estimated on the value of the angle
c) Cannot be drawn
d) Estimated on the value of the diameter

Answer: c
Explanation: A tangent is defined for a circle; it cannot be drawn for any figure other than a circle. Hence, tangents cannot be drawn to the given figure, which is a right-angled triangle.

3. How many tangents can be drawn at one point on a circle?
a) Only one
b) Three
c) Zero
d) Two

Answer: a
Explanation: A circle is a set of points on a plane that are equidistant from a fixed point, and we can draw only one tangent at one point on any given circle.

4. A tangent touches a circle at a single point.
a) False
b) True

Answer: b
Explanation: A line that touches/intersects a circle at exactly one point is called a tangent, and an infinite number of tangents can be drawn to a circle, whereas a secant is a line that intersects a circle at two distinct points.

5. Number of tangents passing through a circle is _____
a) 2
b) 3
c) 1
d) 0

Answer: d
Explanation: A line that touches/intersects a circle at exactly one point is called a tangent, and a tangent to a circle doesn't pass through the circle.

6. What happens to the length of the chord when the chord comes closer to the center?
a) Decreases
b) Becomes an arc
c) Increases
d) Becomes a segment

Answer: c
Explanation: The length of a chord of a circle increases as it comes closer and closer to the center of the circle. Hence, the longest chord is the diameter.

7. Find the area of the sector if the radius is 6 cm and the angle is 60°.
a) 18.35 cm²
b) 18.85 cm²
c) 18.00 cm²
d) 18.05 cm²

Answer: b
Explanation: The area of the sector = \(\frac {x^{\circ }}{360^{\circ }}\) × πr^2 = \(\frac {60^{\circ }}{360^{\circ }}\) × π × 6^2 = 6π ≈ 18.85 cm²

8. The area of the sector is _____
a) \(\frac {x^{\circ }}{360^{\circ }}\) × πr^2
b) \(\frac {x^{\circ }}{360^{\circ }}\) – πr^2
c) \(\frac {x^{\circ }}{360^{\circ }}\) + πr^3
d) \(\frac {x^{\circ }}{360^{\circ }}\) × πr^3

Answer: a
Explanation: The area of the sector is \(\frac {x^{\circ }}{360^{\circ }}\) × πr^2, where x° is the degree measure of the angle at the center and r is the radius of the circle.

9. Find the radius of a circle if 2 m is the length of the tangent and 6 m is the distance between the center of the circle and the external point.
a) 7 m
b) 5 m
c) √32 m
d) √38 m

Answer: c
Explanation: Length of the tangent = \(\sqrt {d^2 – r^2} \)
2 = \(\sqrt {6^2 – r^2} \)
r^2 = 6^2 – 2^2
r = \(\sqrt {6^2 – 2^2} \)
r = \(\sqrt {36 – 4} \)
r = √32 m

10. Line C is secant to the circle.
a) False
b) True

Answer: a
Explanation: A tangent is a line that touches the circle at a single point. Hence, line C is a tangent, not a secant. Line A and line B are secants of the circle because they touch the circle at two distinct points.

Sanfoundry Global Education & Learning Series – Mathematics – Class 10. To practice all chapters and topics of class 10 Mathematics, here is the complete set of 1000+ Multiple Choice Questions and Answers.
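Questions 7 and 9 can be re-checked numerically with a short Python sketch (my code, not Sanfoundry's):

```python
import math

def sector_area(radius, angle_deg):
    # area of sector = (angle / 360) * pi * r^2
    return (angle_deg / 360) * math.pi * radius ** 2

def radius_from_tangent(tangent_len, dist_to_center):
    # the tangent meets the radius at a right angle, so
    # r^2 + tangent^2 = d^2  =>  r = sqrt(d^2 - tangent^2)
    return math.sqrt(dist_to_center ** 2 - tangent_len ** 2)

print(round(sector_area(6, 60), 2))         # 18.85  (Q7, in cm^2)
print(round(radius_from_tangent(2, 6), 2))  # 5.66   (Q9, i.e. sqrt(32) m)
```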
{"url":"https://www.sanfoundry.com/mathematics-online-quiz-class-10/","timestamp":"2024-11-06T01:35:25Z","content_type":"text/html","content_length":"140693","record_id":"<urn:uuid:db6004fa-9c99-467f-878b-b76bbd18a04c>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00290.warc.gz"}
Perimeter to area - math word problem (4697)
Calculate the area of a circle with a perimeter of 15 meters.
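The answer follows directly from the two circle formulas: C = 2πr gives r = C/(2π), and then A = πr² = C²/(4π). A quick check in Python (my computation, not hackmath's):

```python
import math

def area_from_circumference(c):
    # r = c / (2*pi), so area = pi * r^2 = c^2 / (4*pi)
    return c ** 2 / (4 * math.pi)

print(round(area_from_circumference(15), 2))  # 17.9 square meters
```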
{"url":"https://www.hackmath.net/en/math-problem/4697","timestamp":"2024-11-04T07:59:33Z","content_type":"text/html","content_length":"53193","record_id":"<urn:uuid:a0042aa7-1474-403d-b0a7-27580d2d914c>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00307.warc.gz"}
arc4random, arc4random_buf, arc4random_uniform — random number generator

#include <stdlib.h>

uint32_t arc4random(void);

void arc4random_buf(void *buf, size_t nbytes);

uint32_t arc4random_uniform(uint32_t upper_bound);

This family of functions provides higher quality data than those described in rand(3), random(3), and rand48(3). Use of these functions is encouraged for almost all random number consumption because the other interfaces are deficient in either quality, portability, standardization, or availability. These functions can be called in almost all coding environments, including pthreads(3) and chroot(2).

High quality 32-bit pseudo-random numbers are generated very quickly. On each call, a cryptographic pseudo-random number generator is used to generate a new result. One data pool is used for all consumers in a process, so that consumption under program flow can act as additional stirring. The subsystem is re-seeded from the kernel random(4) subsystem using getentropy(2) on a regular basis, and also upon fork(2).

The arc4random() function returns a single 32-bit value. arc4random_buf() fills the region buf of length nbytes with random data.

arc4random_uniform() will return a single 32-bit value, uniformly distributed but less than upper_bound. This is recommended over constructions like “arc4random() % upper_bound” as it avoids "modulo bias" when the upper bound is not a power of two. In the worst case, this function may consume multiple iterations to ensure uniformity; see the source code to understand the problem and solution.

These functions are always successful, and no return value is reserved to indicate an error.

These functions first appeared in OpenBSD 2.1. The original version of this random number generator used the RC4 (also known as ARC4) algorithm. In OpenBSD 5.5 it was replaced with the ChaCha20 cipher, and it may be replaced again in the future as cryptographic techniques advance. A good mnemonic is “A Replacement Call for Random”.
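The rejection idea behind arc4random_uniform() can be sketched in Python (an illustration of the algorithm, not the OpenBSD C source): raw 32-bit values below 2^32 mod upper_bound are discarded, so every residue class modulo upper_bound keeps an equal share of the remaining values.

```python
def uniform_no_bias(rand_u32, upper_bound):
    """Map a uniform 32-bit generator onto [0, upper_bound) without
    modulo bias, by rejecting the 2**32 % upper_bound smallest values."""
    if upper_bound < 2:
        return 0
    minimum = (1 << 32) % upper_bound
    while True:
        r = rand_u32()          # expected to return a uniform 32-bit int
        if r >= minimum:
            return r % upper_bound

# e.g. with the standard library's CSPRNG standing in for arc4random():
import secrets
roll = uniform_no_bias(lambda: secrets.randbits(32), 6)
assert 0 <= roll < 6
```

With a naive `rand_u32() % upper_bound`, the low residues would be slightly over-represented whenever 2^32 is not a multiple of upper_bound; the rejection loop removes exactly that excess.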
{"url":"https://man.openbsd.org/OpenBSD-7.5/arc4random.3","timestamp":"2024-11-09T16:47:30Z","content_type":"text/html","content_length":"10866","record_id":"<urn:uuid:e7930c34-6bfa-4cb7-b805-a61e61ef1211>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00315.warc.gz"}
Purpose. To simulate the effects of decentration on lower- and higher-order aberrations (LOAs and HOAs) and optical quality, by using measured wavefront error (WFE) data from a cat photorefractive keratectomy (PRK) model.

Methods. WFE differences were obtained from five cats' eyes 19 ± 7 weeks after spherical myopic PRK for −6 D (three eyes) and −10 D (two eyes). Ablation-centered WFEs were computed for a 9.0 mm pupil. A computer model was used to simulate decentration of a 6-mm subaperture in 100-μm steps over a circular area of 3000 μm diameter, relative to the measured WFE difference. Changes in LOA, HOA, and image quality (visual Strehl ratio based on the optical transfer function; VSOTF) were computed for simulated decentrations over 3.5 and 6.0 mm.

Results. Decentration resulted in undercorrection of sphere and induction of astigmatism; among the HOAs, decentration mainly induced coma. Decentration effects were distributed asymmetrically. Decentrations >1000 μm led to an undercorrection of sphere and cylinder of >0.5 D. Computational simulation of LOA/HOA interaction did not alter threshold values. For image quality (decrease of best-corrected VSOTF by >0.2 log units), the corresponding thresholds were lower. The amount of spherical aberration induced by the centered treatment significantly influenced the decentration tolerance of LOAs and log best-corrected VSOTF.

Conclusions. Modeling decentration with real WFE changes showed irregularities of decentration effects for rotationally symmetric treatments. The main aberrations induced by decentration were defocus, astigmatism, and coma. Treatments that induced more spherical aberration were less tolerant of decentration.

Correct alignment of the ablation to the visual axis of the eye is an essential requirement for optimal outcome in laser refractive surgery (LRS). Decentration of the ablation zone leads to incomplete refractive correction and induction of higher-order aberrations (HOAs), especially coma.
^ 1 ^ 2 ^ 3 ^ 4 The expected benefit of less HOA induction in eye tracker–controlled treatments has been demonstrated. ^ 5 However, decentration still occurs as a result of misalignment of the tracking system, ^ 6 static registration errors due to surgeon offsets, ^ 7 and pupil center shifts as a function of dilation. ^ 8 In most cases, the magnitude of such misalignments is <500 μm. ^ 1 ^ 7 ^ 9 ^ 10 A recent study showed that in uneventful wavefront-guided LASIK, coma induction occurred in a random fashion, independent of factors such as attempted correction and optical zone (OZ) diameter. ^ 11 Thus, microdecentrations can be considered ubiquitous, random errors; however, their impact on optical quality is poorly understood. ^ 10 In contrast, gross decentrations of >500 μm are one of the most visually disturbing complications after LRS. Besides causing severe deterioration of visual quality, such complications are difficult to treat, and success is often limited. ^ 12 ^ 13 ^ 14 ^ 15 ^ 16 ^ 17 ^ 18 Although several studies on decentration-induced aberrations after conventional ^ 1 ^ 2 ^ 7 and wavefront-guided LRS ^ 3 ^ 4 ^ 8 ^ 19 have been published, all assumed a perfect ablation and did not consider the inherent induction of HOA which occurs in real corneas as a result of wound healing and biomechanical effects. ^ 20 ^ 21 The present study was conducted to investigate the effects of decentration of the laser ablation relative to the entrance pupil of the eye on LOA, HOA, and optical quality, in a cat photorefractive keratectomy (PRK) model. Although the optical effects of PRK for myopia, such as reduction of defocus, induction of coma, and positive spherical aberration are similar in cats and humans, ^ 22 ^ 23 the greater corneal surface area and the naturally large scotopic pupil diameter (PD) of ∼12 mm in cats allowed us to measure wavefront changes well beyond the ablation OZ. 
A simplified computational model was used to simulate decentration effects over a circular area of 3000 μm in diameter by calculating wavefront errors (WFEs) for systematically offset subapertures of 3.5 and 6.0 mm. Using this paradigm, we assessed (1) the nature, magnitude, and spatial distribution of optical aberrations induced by different amounts of decentration; (2) the impact of such aberrations on theoretical optical quality; (3) whether residual refractive errors could be partially attributed to microdecentrations (≤500 μm); and (4) the impact of optical aberrations induced by laser refractive surgery on tolerance of decentration.

Materials and Methods

Data were obtained from five eyes of five normal, male domestic shorthair cats (Felis catus) that underwent myopic PRK with an uncomplicated follow-up of at least 3 months and in which wavefront aberrations could be measured over a PD of 9 mm. Procedures were conducted according to the guidelines of the University of Rochester Committee on Animal Research (UCAR), the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research, and the NIH Guide for the Care and Use of Laboratory Animals.

Photorefractive Keratectomy

Three cats' eyes underwent PRK for −6 D, two with a programmed OZ of 6 mm and one with an 8-mm OZ; two eyes received PRK for −10 D (6-mm OZ). The procedure has been described in detail elsewhere. ^ 23 Briefly, all eyes received a conventional spherical ablation (Planoscan 4.14; Bausch & Lomb, Inc., Rochester, NY) performed by one of two surgeons (SM, JB) in animals under surgical anesthesia (Technolas 217 laser; Bausch & Lomb, Inc.). The ablation was centered on the pupil, which was constricted with 2 drops of pilocarpine 3% (Bausch & Lomb). After surgery, the cats received 2 drops of tobramycin 0.3%/dexamethasone 0.1% (Tobradex; Alcon, Fort Worth, TX) per eye, once a day, until the surface epithelium healed.
Wavefront Sensing

As described previously, ^ 23 ^ 24 the cats were trained to fixate single spots of light presented on a computer monitor. Wavefront measurements were performed before surgery and 19 ± 7 (range, 12–24) weeks after surgery with a custom-built Hartmann-Shack wavefront sensor. The wavefront sensor was aligned to the visual axis of one eye while the other eye fixated a spot on the computer monitor. ^ 24 At least 10 spot-array patterns were collected per imaging session per eye.

Calculation of Centered WFE Differences

From each single spot-array pattern, WFEs were calculated with a 2nd- to 10th-order Zernike polynomial expansion according to the Vision Science and Its Applications (VSIA) standards for reporting aberration data of the eye. ^ 25 WFE changes were simulated in a multistep process. The first step was the determination of the center of the OZ. Because PRK was performed with the cat under general anesthesia and the ablation was registered to the pupil center, alignment to the visual axis of the cat's eye during surgery could not be ensured, and possible decentration effects had to be compensated for. Therefore, the centroiding area (analysis pupil) of 6-mm diameter was shifted manually in steps of 300 μm, according to the distance between the lenslet centers, to find the wavefront that yielded the most negative Z_2^0 value (i.e., the maximum treatment effect). This was defined as the centered, postoperative wavefront, W_post(x, y), which was then averaged from single measurements over a PD of 9 mm. In the second step, the horizontal and vertical offsets between the center of the OZ and the center of the original pupil were used to calculate the preoperative WFE, W_pre(x, y), for the position that equaled the later treatment center. Like W_post(x, y), W_pre(x, y) was computed for a 9-mm PD. In a third step, the change in WFE, ΔW(x, y), was obtained by subtracting the pre- from the postoperative Zernike coefficients.
Thus, ΔW(x, y) reflected the treatment effect over a 9-mm PD for a perfectly centered OZ, minimizing the potential influence of internal aberrations. The Zernike coefficient spectrum of each ΔW(x, y) (Table 1) was consistent with data obtained in humans after PRK. ^ 22

Computer Modeling of Treatment Decentration

For each eye, decentration of a 6-mm subpupil relative to ΔW(x, y) was simulated by using custom software (MatLab 7.2; The MathWorks, Inc., Natick, MA). Decentered WFE differences ΔW(x′, y′) were calculated for the size of the 6-mm subaperture along Cartesian decentrations Δx and Δy, where Δx and Δy were changed in steps of 100 μm, covering the entire 9-mm centroiding area and resulting in a maximum decentration range of 3000 μm over a circular region. Zernike polynomials of the 2nd to the 6th order were fitted to the data of each decentered wavefront ΔW(x′, y′) by using a singular-value-decomposition algorithm to calculate the pseudoinverse of the Zernike data, yielding the decentered subpupil Zernike coefficients. As a refinement of the manual determination of the centered position, the algorithm assigned the centered coordinates (Δx = 0, Δy = 0) to the ΔW(x′, y′) with the lowest Z_2^0 value. For each eye, 709 WFEs (1 centered and 708 decentered) were calculated over a 6-mm PD.

Simulation of Decentered Treatment Effects and VSOTF Calculation

Theoretical optical quality was investigated by calculating the VSOTF metric (visual Strehl ratio based on the optical transfer function [OTF]). The VSOTF is the ratio of the contrast sensitivity–weighted OTF to the contrast sensitivity–weighted OTF of the diffraction-limited eye. ^ 26 ^ 27 Because the preoperative WFEs W_pre(x, y) were decentered, calculating the VSOTF from preoperative HOAs could lead to misinterpretation of optical quality due to over- or underestimation of HOAs. Thus, we calculated a standard preoperative WFE, W_pre(x₀, y₀), from all eyes included in this study.
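The subaperture-shifting and least-squares refitting step described under Computer Modeling of Treatment Decentration can be sketched as follows. This is a minimal Python illustration, not the authors' MATLAB code: the restriction to five unnormalized Zernike terms, the grid sampling, and the function names are our simplifying assumptions.

```python
import numpy as np

def zernike_basis(x, y):
    """A small, unnormalized Zernike basis over the unit disc:
    defocus, the two astigmatism terms, and the two 3rd-order coma terms."""
    r2 = x**2 + y**2
    return np.stack([
        2.0 * r2 - 1.0,        # Z(2, 0)  defocus
        x**2 - y**2,           # Z(2, 2)  astigmatism 0/90
        2.0 * x * y,           # Z(2, -2) astigmatism 45/135
        (3.0 * r2 - 2.0) * x,  # Z(3, 1)  horizontal coma
        (3.0 * r2 - 2.0) * y,  # Z(3, -1) vertical coma
    ], axis=-1)

def fit_decentered(dW, X, Y, dx, dy, sub_radius=3.0):
    """Crop a subaperture of radius sub_radius (mm) shifted by (dx, dy) mm
    out of the sampled wavefront change dW, rescale its coordinates to the
    unit disc, and fit Zernike coefficients by linear least squares
    (equivalent to applying the pseudoinverse of the basis matrix)."""
    inside = (X - dx)**2 + (Y - dy)**2 <= sub_radius**2
    xs = (X[inside] - dx) / sub_radius
    ys = (Y[inside] - dy) / sub_radius
    A = zernike_basis(xs, ys)
    coeffs, *_ = np.linalg.lstsq(A, dW[inside], rcond=None)
    return coeffs
```

Sweeping dx and dy in 0.1-mm steps over a 3000-μm circle and calling fit_decentered at each offset reproduces the structure of the grid used here: one centered and 708 decentered fits per eye.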
For the calculation of W_pre(x₀, y₀), all preoperative, pupil-centered WFEs were averaged, resulting in a WFE representing the typical preoperative range of HOAs (Table 2). ^ 24 ^ 28 Simulated postoperative WFEs, W_sim(x′, y′), were calculated by subtracting W_pre(x₀, y₀) from each ΔW(x′, y′). This treatment simulation relative to a standard preoperative WFE allowed us to eliminate interindividual differences in preoperative optical quality and internal optics. Therefore, the independent variables in this experiment were the five different centered treatment effects ΔW(x, y) and their corresponding ΔW(x′, y′). A computer program (Visual Optics Laboratory, VOL-Pro 7.14; Sarver and Associates, Carbondale, IL) was used to calculate the VSOTF over analysis PDs of 3.5 and 6.0 mm. The VSOTF for a given WFE was calculated for the combination of LOA terms that provided the highest VSOTF, simulating the optical quality with best spherocylindrical correction (BCVSOTF). Thus, for each simulated W_sim(x′, y′), an LOA-derived refractive error based on 2nd-order terms and an "effective" refractive error based on the BCVSOTF were obtained. Differences between refractive errors were expressed as dioptric power vectors (M, J0, J45), where M corresponds to the spherical equivalent, and J0 and J45 to the 0°/90° and 45°/135° astigmatic components, respectively. The difference between the VSOTF- and 2nd-order–based power vectors can be considered a function of the interaction between HOAs and LOAs. Because "sphere" and "cylinder" are most commonly used in clinical settings, we display most of the results in terms of sphere and cylinder magnitude. To visualize decentration effects for single eyes, color maps plotting ΔLOA, ΔHOA, and Δlog BCVSOTF against horizontal and vertical decentration were created. For further statistical analysis, data for decentration along the 0°, 90°, 180°, and 270° meridians were averaged for each eye.
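The conversion from a clinical sphere/cylinder/axis refraction to the power-vector form (M, J0, J45) used here follows the standard Thibos notation. A small helper illustrating it (the function name is ours):

```python
import math

def to_power_vector(sphere, cylinder, axis_deg):
    """Convert a sphere/cylinder/axis refraction (minus-cylinder form)
    to the dioptric power vector (M, J0, J45)."""
    a = math.radians(axis_deg)
    M = sphere + cylinder / 2.0                 # spherical equivalent
    J0 = -(cylinder / 2.0) * math.cos(2 * a)    # 0 deg / 90 deg astigmatic component
    J45 = -(cylinder / 2.0) * math.sin(2 * a)   # 45 deg / 135 deg astigmatic component
    return M, J0, J45
```

With this representation, the difference between the wavefront-derived and the VSOTF-derived refraction at each decentration reduces to a componentwise subtraction of two (M, J0, J45) triples.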
Calculating Decentration Tolerance

Analysis of tolerance was performed by calculating the maximum permissible decentration that yielded a critical refraction or BCVSOTF difference. For sphere and cylinder, this threshold value was defined a priori as −0.5 D. For the optical quality metric BCVSOTF, we chose a critical decrease of 0.2 log units, which roughly equals a decrease of 2 logMAR steps. ^ 27 For each parameter investigated, vectors between the centered position (x, y) and each outermost coordinate below the criterion (threshold coordinates x′, y′) were calculated. Their mean value, r̄, reflects the average maximum permissible decentration (in micrometers) that allows one to remain below the threshold criterion and equals the radius of a circle around the centered position. The standard deviation (SD) of r̄ and the coefficient of variation (CV) of r̄ served as metrics for the regularity of decentration effects, where the SD reflects the absolute and the CV the relative irregularity. The smaller the SD, the less variable were the decentration effects along different meridians (i.e., the more circular was the decentration pattern).

Statistical Analysis

All analyses were based on the difference values ΔW and Δlog BCVSOTF, which reflected the treatment effects. Main outcome measures were the change of log BCVSOTF, the change of LOAs expressed in diopters, and the change of HOAs as a function of decentration. All differences for the centered position (x, y) were normalized to zero; thus, values for decentered coordinates (x′, y′) reflect the deviation from the centered treatment effect. The difference between wavefront- and VSOTF-based refraction was considered an effect of interaction between LOAs and HOAs. Tolerance metrics were calculated as described earlier.
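The tolerance computation — walking outward along each meridian until the criterion is exceeded, then summarizing the threshold radii as r̄, SD, and CV — can be sketched as follows. The synthetic step sizes and the callable interface are illustrative assumptions, not the study's measured data.

```python
import math
import statistics

def threshold_radius(delta_fn, angle_deg, criterion, step_um=100, r_max_um=1500):
    """Largest decentration radius (um) along one meridian at which the
    absolute change returned by delta_fn(dx, dy) still stays within the criterion."""
    last_ok = 0
    for r in range(step_um, r_max_um + step_um, step_um):
        dx = r * math.cos(math.radians(angle_deg))
        dy = r * math.sin(math.radians(angle_deg))
        if abs(delta_fn(dx, dy)) > criterion:
            break
        last_ok = r
    return last_ok

def tolerance_metrics(delta_fn, criterion, angles=(0, 90, 180, 270)):
    """Mean threshold radius r_bar, its SD, and the CV (%) across meridians."""
    radii = [threshold_radius(delta_fn, a, criterion) for a in angles]
    r_bar = statistics.mean(radii)
    sd = statistics.stdev(radii)
    cv = 100.0 * sd / r_bar
    return r_bar, sd, cv
```

For a rotationally symmetric error map, the four meridional radii coincide and the SD and CV are zero; asymmetric maps, as observed here, yield nonzero SD and CV.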
HOAs were broken down into coma root mean square (RMS; the RMS of all coma terms Z_n^±1), spherical aberration RMS (SA RMS; the RMS of all coefficients Z_n^0), and the RMS of the residual noncoma, nonspherical aberrations (rHOA RMS; the RMS of all remaining HOA terms Z_n^m with |m| ≥ 2). The influence of the magnitude of HOA induction on decentration tolerance was assessed with linear regression analysis; the dependent variables were the mean vectors r̄ and their SDs. To investigate the impact of HOAs on log BCVSOTF, we applied a multiple-regression model using HOAs as predictors and log BCVSOTF as the dependent variable. The role of interaction on decentration tolerance was investigated by comparing r̄ and SD for 2nd-order sphere and cylinder with their VSOTF-based equivalents, using a nonparametric test for matched pairs (Wilcoxon test). The same test was also applied to compare decentration tolerance for PDs of 3.5 and 6.0 mm. All statistical tests were performed with a commercial program (SPSS 11.0; SPSS, Inc., Chicago, IL), assuming a significance level of P < 0.05 and using the Bonferroni adjustment for multiple tests.

Results

Change in Second-Order Aberrations

For all eyes examined, increasing decentration caused increasing undercorrection of 2nd-order sphere and induction of 2nd-order astigmatism. However, the pattern of decentration effects was triangular rather than rotationally symmetric, as might have been predicted from the intended refractive correction (Fig. 1). These irregularities were more pronounced for the 3.5-mm PD, which also showed reduction of cylinder magnitude with decentration (Figs. 1A, 1B). When averaged over all five eyes, decentrations of ≤1000 μm had a limited effect on sphere and cylinder magnitude, since the average undercorrection and cylinder induction were > −0.5 D (Fig. 2). In contrast, decentrations ≥1000 μm resulted in larger deviations from the centered treatment effect.
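The RMS groupings defined in the Statistical Analysis amount to partitioning the Zernike coefficients by azimuthal frequency. A sketch, assuming coefficients are keyed by their (n, m) indices (the dictionary layout is our assumption):

```python
import math

def hoa_rms_breakdown(coeffs):
    """Split higher-order Zernike coefficients (n >= 3) into coma RMS (m = +/-1),
    spherical aberration RMS (m = 0), and residual RMS (|m| >= 2).
    `coeffs` maps (n, m) index pairs to coefficient values in micrometers."""
    coma = sa = rhoa = 0.0
    for (n, m), c in coeffs.items():
        if n < 3:
            continue  # skip lower-order (sphere/cylinder) terms
        if abs(m) == 1:
            coma += c * c
        elif m == 0:
            sa += c * c
        else:
            rhoa += c * c
    return math.sqrt(coma), math.sqrt(sa), math.sqrt(rhoa)
```

The total HOA RMS is then the quadrature sum of the three components, sqrt(coma² + SA² + rHOA²), consistent (up to rounding) with the per-eye rows of Table 1.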
The mean induction of astigmatism was higher for 6- than for 3.5-mm PDs; however, the differences between the two PDs for decentrations ≥900 μm reached only local significance of P < 0.05, which was nonsignificant with the Bonferroni correction.

Decentration Effects and the Interaction between HOAs and LOAs

VSOTF-based refraction data included interaction effects of LOAs with HOAs. Apart from a tendency of VSOTF-based values to be more hyperopic at the centered position, there were no significant differences between 2nd-order and VSOTF-based power vectors (Table 3). Decentration effects were more irregular for VSOTF-based refraction data than for the corresponding wavefront-derived data, particularly for sphere measured over 6-mm PDs (local P < 0.05; Table 3). The effects of decentration on the VSOTF cylinder magnitude also showed high interindividual variability among the eyes.

Changes of HOAs and BCVSOTF

HOAs induced by decentration were dominated by coma (Table 4). As for 2nd-order aberrations, the decentration patterns for coma RMS were not rotationally symmetric and displayed flatter slopes but higher irregularity for 3.5- than for 6-mm PDs (Fig. 3). We found a significant influence of the amount of decentration on the induction of coma RMS at a PD of 6 mm (adjusted R² = 0.51; B = 0.7 × 10; P < 0.001). At 3.5 mm, although much less pronounced (adjusted R² = 0.23; B = 0.08 × 10; P < 0.001), the same tendency was observed (Fig. 4). The induction of SA RMS and rHOA RMS was less influenced by decentration (no significant correlation), with irregular decentration patterns and high variability between individual eyes at the two PDs. In all eyes, theoretical best-corrected optical quality expressed as BCVSOTF decreased by −0.41 ± 0.13 log units for 3.5-mm pupils and by −0.55 ± 0.19 log units for 6.0-mm pupils after a centered treatment. Decentration resulted in an even larger decrease in log BCVSOTF (Figs. 5, 6).
Furthermore, when Δlog BCVSOTF was computed for a 3.5-mm PD, the position that yielded the minimum decrease of BCVSOTF was located paracentrally in all eyes (Fig. 5A). The regression model revealed a significant influence of the HOAs on log BCVSOTF at both PDs (adjusted R² = 0.84 for the 6-mm PD, R² = 0.81 for the 3.5-mm PD), with the highest impact of coma RMS in both models.

Analysis of Decentration Tolerance and Irregularity

Table 5 shows the mean vectors r̄ and their SDs. Both for wavefront-derived and for VSOTF-based sphere, the critical r̄ for an undercorrection of 0.5 D was greater than 1000 μm in all cases. The mean change of decentration tolerance due to interaction was 82 ± 232 μm for the 6-mm PD and −92 ± 73 μm for the 3.5-mm PD (both P > 0.05). For the 6-mm PD, r̄ of cylinder magnitude decreased by −160 ± 142 μm when interaction was simulated (P > 0.05). At the 3.5-mm PD, r̄ values remained almost constant (14 ± 153 μm; P > 0.05). While the r̄ of sphere and cylinder was similar at the two PDs, the data (Figs. 5, 6) suggested a higher decentration tolerance at the 3.5-mm PD with regard to log BCVSOTF. This was confirmed by analysis of r̄ (Table 5; local P < 0.05). Analysis of the CV showed that 2nd-order sphere (6.0-mm PD) had more regular decentration patterns than did the other parameters (Table 5). Linear regression analysis revealed that decentration tolerance was influenced significantly by the spherical aberration induced by the centered treatment. At the 6-mm PD, 2nd-order sphere (R² = 0.87, B = −181; P < 0.05), 2nd-order cylinder (adjusted R² = 0.80, B = −278; P < 0.05), and VSOTF sphere (R² = 0.80, B = −407; P < 0.05) were significantly influenced by ΔSA RMS, but not by the amount of defocus or by coma and rHOA RMS changes. Likewise, sphere and cylinder obtained over a 3.5-mm PD appeared not to be influenced by the defocus change or HOA induction of the treatment. A steeper decrease of Δlog BCVSOTF with decentration was also associated with higher amounts of SA RMS induction by the treatment (Fig.
7), but this association did not reach statistical significance. In this series of eyes, we could not establish any correlation between the induced defocus or HOAs and the irregularity index SD of r̄, either for sphere and cylinder or for log BCVSOTF.

Discussion

Decentration Effects Followed an Irregular Pattern

The present experiments revealed that decentration effects were distributed asymmetrically, although the treatment involved only rotationally symmetric ablation patterns. This behavior affected all parameters investigated and was more pronounced at the smaller (3.5-mm) pupil size. The fact that only changes between post- and preoperative WFEs (ΔW) were analyzed compensated for possible interference from internal aberrations. Thus, the observed asymmetry could be due only to the treatment itself. Aside from the induction of astigmatism, a considerable amount of non–rotationally symmetric HOAs (e.g., trefoil and coma) was also induced (Table 1). For example, the triangular pattern in Figure 1 is likely to correlate with the presence of trefoil in the centered WFE difference ΔW(x, y). Because of the small sample size and the high variance among the ΔW(x, y), we were not able to establish significant correlations between particular aberrations and the asymmetry indices (SD and CV of r̄). However, the high interindividual differences between attempted and achieved refractive corrections and the observed asymmetries in the centered ΔW(x, y) may be explained by individual differences in laser ablation rates, ^ 23 local differences in laser energy, or irregularities in the biological response to PRK (i.e., wound healing and biomechanical changes in the cornea). ^ 21 ^ 23 ^ 29 ^ 30

Decentration-Induced LOAs

As observed by others, ^ 1 ^ 2 ^ 14 ^ 31 there was undercorrection of the spherical refractive error and induction of astigmatism as a function of decentration in all eyes examined. However, to our knowledge, the present study is the first decentration model study based on real wavefront data.
Because model studies in the literature have always assumed the Munnerlyn algorithm ^ 1 ^ 2 or a perfect wavefront-guided ablation, ^ 3 ^ 4 ^ 19 they probably underestimated the effects of the HOAs induced by the primary treatment. ^ 21 ^ 32 Unlike the spherical aberrations that dominated ΔW(x, y), the amounts of coma RMS and rHOA RMS induced by the treatment did not significantly influence the decentration tolerance of sphere and cylinder. By calculating a simulated endpoint of the subjective refraction based on the VSOTF metric, an investigation of interaction effects between LOAs and HOAs was possible. In particular, we asked whether induced HOAs affected the endpoint of the subjective refraction and caused "residual refractive error." All eyes showed a tendency toward hyperopic VSOTF sphere values over a 6-mm PD, which could be explained by interaction with spherical aberration. ^ 33 Furthermore, the interindividual variability of interaction effects increased with decentration (Table 3; higher SDs for larger decentrations). Although we are cautious because of our small sample size, we believe that, contrary to its effects on LOA induction, decentration did not consistently affect LOA/HOA interactions. However, VSOTF-based refraction results ^ 27 may differ from subjective refraction, particularly with HOA-related image distortion. Our finding that only decentrations ≥1000 μm caused spherical and cylindrical undercorrections of ≥0.5 D suggests that the ubiquitous microdecentrations ^ 7 ^ 9 ^ 10 of ≤500 μm are not a significant source of postoperative residual refractive errors.

HOA and Best-Corrected Optical Quality

Induction of HOAs, especially coma, is a key contributor to symptoms after LRS in humans. ^ 1 ^ 12 ^ 13 ^ 14 ^ 17 ^ 34 ^ 35 We noted that the induction of HOAs by decentration occurred in an irregular pattern that may have resulted from treatment-induced, non–rotationally symmetric aberrations. There was also a large difference in the induction of coma at 3.5- and 6-mm PDs.
Although, on average, the amount of spherical aberration induced was not affected by decentration, the SDs increased with decentration, reflecting high interindividual differences. In all eyes examined, log BCVSOTF decreased as a function of decentration, displaying asymmetric decentration patterns. The apparent relationship between coma and log BCVSOTF in the decentration maps (Figs. 3, 4) was confirmed by regression analysis, which revealed a highly significant impact of coma on log BCVSOTF at both 3.5- and 6-mm PDs. The large discrepancy between decentration tolerance at 3.5- and 6-mm PDs suggests that microdecentrations could be one cause of night-vision disturbances in eyes that are asymptomatic under photopic conditions, particularly if center shifts between the constricted and dilated pupil are involved. ^ 8 Indeed, significant amounts of coma have been reported in such symptomatic eyes. ^ 34 ^ 35 Another potential reason for the high interindividual variability of symptoms is the compensation of corneal aberrations by the lens. ^ 36 Further studies involving ray-tracing models will be necessary to investigate the role of the internal optics in decentration effects. Anatomy, optics, function, and subjective perception are key levels in the concept of quality of vision after refractive surgical procedures. ^ 37 The model described herein allowed us to investigate decentration tolerance as a novel dimension of the "optics" level in the quality-of-vision concept. Our calculations reduced possible biases resulting from aberrometer misalignments ^ 38 or the internal optics, so that "pure" WFE changes could be investigated. Although these computations are laborious, an evaluation of decentration effects for novel treatment modalities (e.g., presbyopia-correcting laser profiles ^ 39 or new multifocal intraocular lenses) is now possible.
As demonstrated in the context of image quality, ^ 33 it appears logical that different aberrations should interact, affecting decentration tolerance. A limitation of our computational model, however, is that it simulates decentration by pupil shifts rather than by shifts of the treatment zone. Given that some of our treatments were decentered themselves, this could be a problem, especially if different portions of the central cornea yield significantly different biological responses. Nevertheless, the computational model described in the present study is a powerful and versatile tool for the analysis of decentration effects on refractive outcome. Although based on a small sample of experimental eyes, it allowed us to reach conclusions of potential interest for refractive surgical practice: (1) Decentration of myopic treatments leads to consistent undercorrection of the defocus term and the induction of astigmatism. However, the decentration tolerance of sphere and cylinder (including simulated interaction effects) makes it unlikely that microdecentrations ≤500 μm are a significant cause of residual refractive errors in otherwise asymptomatic eyes. (2) In contrast to effects on LOAs, microdecentrations appear to be a source of HOA-related visual symptoms under mesopic conditions in a proportion of eyes that would be asymptomatic under photopic conditions. (3) Given the intra- and interindividual variability of effects in our model, it appears that only some eyes will experience symptoms in clinical practice. (4) Finally, our results suggest that minimizing the induction of spherical aberration by maximizing the functional optical zone of the cornea ^ 40 using aspheric ablation profiles ^ 41 ^ 42 or large OZ diameters ^ 43 could significantly increase decentration tolerance and by doing so, optimize refractive outcome. 
Supported by Deutsche Forschungsgemeinschaft Grant Bu 2163/1-1 (JB); National Eye Institute Grant NIH R01 EY015836 (KRH) and Core Grant 08P0EY01319F to the Center for Visual Science; a grant from Bausch & Lomb, Inc.; grants from the University of Rochester's Center for Electronic Imaging Systems; funding as an NYSTAR-designated Center for Advanced Technology; and an unrestricted grant to the University of Rochester's Department of Ophthalmology from Research to Prevent Blindness.

Corresponding author: Jens Bühren, Department of Ophthalmology, Box 314, University of Rochester Medical Center, 601 Elmwood Ave., Rochester, NY 14642.

Table 1. Centered Wavefront Error Change ΔW(x, y)

| Eye | Treatment (D) | OZ (mm) | TTZ (mm) | PD (mm) | Sphere (D) | Cylinder (D) | Axis (°) | Total HOA RMS (μm) | Coma RMS (μm) | SA RMS (μm) | rHOA RMS (μm) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| c1-005 OD | −6 | 8 | 11.1 | 9 | +3.33 | −0.58 | 172 | 2.412 | 1.019 | 1.838 | 1.184 |
| | | | | 6 | +4.71 | −0.61 | 12 | 0.541 | 0.197 | 0.333 | 0.388 |
| c2-001 OS | −6 | 6 | 9.1 | 9 | +0.82 | −0.37 | 91 | 1.790 | 0.516 | 1.585 | 0.650 |
| | | | | 6 | +2.17 | −0.65 | 88 | 0.618 | 0.285 | 0.327 | 0.440 |
| c2-006 OS | −6 | 6 | 9.1 | 9 | +1.90 | −0.11 | 79 | 1.155 | 0.324 | 0.991 | 0.496 |
| | | | | 6 | +2.56 | −0.24 | 40 | 0.307 | 0.120 | 0.039 | 0.280 |
| c5-005 OD | −10 | 6 | 9.1 | 9 | +2.49 | −0.28 | 29 | 2.182 | 0.414 | 2.108 | 0.381 |
| | | | | 6 | +4.11 | −0.28 | 37 | 0.574 | 0.296 | 0.426 | 0.246 |
| c5-026 OD | −10 | 6 | 9.1 | 9 | +3.24 | −0.45 | 174 | 2.983 | 0.423 | 2.924 | 0.409 |
| | | | | 6 | +5.29 | −0.47 | 160 | 0.430 | 0.291 | 0.262 | 0.178 |

OZ, diameter of the programmed optical zone; TTZ, diameter of the total treatment zone; total HOA RMS, root mean square value of 3rd- to 10th-order aberrations; coma RMS, RMS value of 3rd- to 9th-order coma; SA RMS, RMS value of 4th- to 10th-order spherical aberration; rHOA RMS, residual RMS of all noncoma, nonspherical HOA.

Table 2.

| PD (mm) | Log BCVSOTF | Total HOA RMS (μm) | Coma RMS (μm) | SA RMS (μm) | rHOA RMS (μm) |
|---|---|---|---|---|---|
| 3.5 | −0.05 | 0.036 | 0.031 | 0.012 | 0.014 |
| 6.0 | −0.38 | 0.185 | 0.145 | 0.078 | 0.083 |

For all calculations, LOAs were set to zero.
BCVSOTF, visual Strehl ratio based on the optical transfer function, simulated for best correction; total HOA RMS, root mean square value of 3rd- to 6th-order aberrations; coma RMS, RMS of 3rd- to 5th-order coma; SA RMS, RMS of Z_4^0 and Z_6^0; rHOA RMS, residual RMS of all noncoma, nonspherical HOA.

Table 3. Difference between 2nd-Order and VSOTF-Based Refraction Change (D)

| PD (mm) | Decentration (μm) | M | J0 | J45 |
|---|---|---|---|---|
| 3.5 | 0 | 0.05 ± 0.11 | −0.05 ± 0.07 | −0.02 ± 0.06 |
| | 200 | 0.03 ± 0.26 | −0.04 ± 0.09 | −0.01 ± 0.07 |
| | 500 | 0.01 ± 0.37 | −0.02 ± 0.08 | 0.03 ± 0.09 |
| | 1000 | 0.03 ± 0.39 | −0.02 ± 0.13 | 0.04 ± 0.10 |
| | 1500 | 0.13 ± 0.43 | −0.02 ± 0.11 | 0.02 ± 0.05 |
| 6.0 | 0 | 0.68 ± 0.31 | −0.18 ± 0.14 | 0.03 ± 0.06 |
| | 200 | 0.61 ± 0.45 | −0.16 ± 0.17 | 0.04 ± 0.08 |
| | 500 | 0.63 ± 0.28 | −0.10 ± 0.23 | 0.00 ± 0.09 |
| | 1000 | 0.58 ± 0.53 | −0.06 ± 0.13 | −0.01 ± 0.12 |
| | 1500 | 0.56 ± 0.65 | −0.16 ± 0.37 | 0.01 ± 0.15 |

The data are averaged from the 0°, 90°, 180°, and 270° meridians and expressed as mean ± SD of the difference between 2nd-order and VSOTF-based dioptric power vectors M, J0, and J45. Differences were not statistically significant. VSOTF refraction, simulated endpoint of the subjective refraction based on the BCVSOTF; M, spherical equivalent; J0, 0°/90° astigmatic component; J45, 45°/135° astigmatic component.

Table 4.
| PD (mm) | Decentration (μm) | Δlog BCVSOTF | ΔComa RMS | ΔSA RMS | ΔrHOA RMS |
|---|---|---|---|---|---|
| 3.5 | 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 |
| | 200 | 0 ± 0.04 | 0 ± 0.017 | 0 ± 0.006 | 0 ± 0.014 |
| | 500 | −0.03 ± 0.09 | 0.004 ± 0.041 | −0.001 ± 0.015 | −0.001 ± 0.031 |
| | 1000 | −0.12 ± 0.18 | 0.039 ± 0.069 | 0.011 ± 0.025 | 0.002 ± 0.044 |
| | 1500 | −0.22 ± 0.21 | 0.112 ± 0.079 | 0.015 ± 0.034 | 0.003 ± 0.058 |
| 6.0 | 0 | −0.14 ± 0.19 | 0.058 ± 0.039 | 0.270 ± 0.149 | 0.186 ± 0.042 |
| | 200 | −0.18 ± 0.17 | 0.111 ± 0.080 | 0.275 ± 0.138 | 0.187 ± 0.041 |
| | 500 | −0.24 ± 0.18 | 0.300 ± 0.147 | 0.295 ± 0.147 | 0.195 ± 0.047 |
| | 1000 | −0.29 ± 0.20 | 0.660 ± 0.289 | 0.338 ± 0.184 | 0.223 ± 0.048 |
| | 1500 | −0.29 ± 0.20 | 0.932 ± 0.403 | 0.317 ± 0.188 | 0.252 ± 0.042 |

The data are averaged from the 0°, 90°, 180°, and 270° meridians and expressed as mean ± SD. The values are normalized to the values for the centered position for a 3.5-mm pupil diameter (PD); i.e., each value reflects the difference from the value obtained at the centered position over a 3.5-mm PD. Δx, horizontal decentration; Δy, vertical decentration; log BCVSOTF, visual Strehl ratio based on the optical transfer function, simulated for best correction; coma RMS, RMS of 3rd- and 5th-order coma terms; SA RMS, RMS of Z_4^0 and Z_6^0; rHOA RMS, residual RMS of all noncoma, nonspherical HOA.

Table 5.

| Parameter | Threshold | r̄ (μm), 3.5-mm PD | r̄ (μm), 6-mm PD | SD of r̄ (μm) (CV [%]), 3.5-mm PD | SD of r̄ (μm) (CV [%]), 6-mm PD |
|---|---|---|---|---|---|
| Δ2nd-order sphere | −0.5 D | 1255 ± 160 | 1313 ± 136 | 228 ± 60 (19 ± 7) | 111 ± 47 (8 ± 3) |
| Δ2nd-order cylinder | −0.5 D | 1304 ± 130 | 1008 ± 214 | 208 ± 64 (16 ± 6) | 173 ± 102 (16 ± 8) |
| ΔVSOTF sphere | −0.5 D | 1348 ± 104 | 1232 ± 314 | 204 ± 63 (15 ± 6) | 246 ± 55 (20 ± 5) |
| ΔVSOTF cylinder | −0.5 D | 1289 ± 201 | 1167 ± 271 | 226 ± 100 (19 ± 13) | 207 ± 63 (20 ± 12) |
| Δlog BCVSOTF | −0.2 | 1219 ± 210 | 800 ± 512 | 248 ± 82 (21 ± 8) | 143 ± 40 (21 ± 8) |

The radius r̄ is the mean length of the vectors between the center and the locations with threshold values. The SD and CV of r̄ reflect the irregularity of the decentration behavior. All data are expressed as mean ± SD.
PD, analysis pupil diameter; VSOTF sphere/cylinder, simulated endpoint of the subjective refraction based on the BCVSOTF.

References

1. Mrochen M, Kaemmerer M, Mierdel P, Seiler T. Increased higher-order optical aberrations after laser refractive surgery: a problem of subclinical decentration. J Cataract Refract Surg. 2001;27:362–369.
2. Mihashi T. Higher-order wavefront aberrations induced by small ablation area and sub-clinical decentration in simulated corneal refractive surgery using a perturbed schematic eye model. Semin Ophthalmol. 2003;18:41–47.
3. Bueeler M, Mrochen M, Seiler T. Maximum permissible lateral decentration in aberration-sensing and wavefront-guided corneal ablation. J Cataract Refract Surg. 2003;29:257–263.
4. Bueeler M, Mrochen M, Seiler T. Maximum permissible torsional misalignment in aberration-sensing and wavefront-guided corneal ablation. J Cataract Refract Surg. 2004;30:17–25.
5. Mrochen M, Eldine MS, Kaemmerer M, Seiler T, Hutz W. Improvement in photorefractive corneal laser surgery results using an active eye-tracking system. J Cataract Refract Surg. 2001;27:1000–1006.
6. Bueeler M, Mrochen M. Limitations of pupil tracking in refractive surgery: systematic error in determination of corneal locations. J Refract Surg. 2004;20:371–378.
7. Porter J, Yoon G, MacRae S, et al. Surgeon offsets and dynamic eye movements in laser refractive surgery. J Cataract Refract Surg. 2005;31:2058–2066.
8. Porter J, Yoon G, Lozano D, et al. Aberrations induced in wavefront-guided laser refractive surgery due to shifts between natural and dilated pupil center locations. J Cataract Refract Surg. 2006.
9. Webber SK, McGhee CN, Bryce IG. Decentration of photorefractive keratectomy ablation zones after excimer laser surgery for myopia. J Cataract Refract Surg. 1996;22:299–303.
10. Ou JI, Manche EE. Topographic centration of ablation after LASIK for myopia using the CustomVue VISX S4 excimer laser. J Refract Surg. 2007;23:193–197.
11. Bühren J, Kohnen T. Factors affecting the change in lower-order and higher-order aberrations after wavefront-guided laser in situ keratomileusis for myopia with the Zyoptix 3.1 system. J Cataract Refract Surg. 2006;32:1166–1174.
12. Verdon W, Bullimore M, Maloney RK. Visual performance after photorefractive keratectomy: a prospective study. Arch Ophthalmol. 1996;114:1465–1472.
13. Azar DT, Yeh PC. Corneal topographic evaluation of decentration in photorefractive keratectomy: treatment displacement vs intraoperative drift. Am J Ophthalmol. 1997;124:312–320.
14. Mrochen M, Krueger RR, Bueeler M, Seiler T. Aberration-sensing and wavefront-guided laser in situ keratomileusis: management of decentered ablation. J Refract Surg. 2002;18:418–429.
15. Knorz MC, Neuhann T. Treatment of myopia and myopic astigmatism by customized laser in situ keratomileusis based on corneal topography. Ophthalmology. 2000;107:2072–2076.
16. Kohnen T. Combining wavefront and topography data for excimer laser surgery: the future of customized ablation? (editorial). J Cataract Refract Surg. 2004;30:285–286.
17. Kanellopoulos AJ. Topography-guided custom retreatments in 27 symptomatic eyes. J Refract Surg. 2005;21:S513–S518.
18. Jankov MR 2nd, Panagopoulou SI, Tsiklis NS, et al. Topography-guided treatment of irregular astigmatism with the WaveLight excimer laser. J Refract Surg. 2006;22:335–344.
19. Guirao A, Williams DR, Cox IG. Effect of rotation and translation on the expected benefit of an ideal method to correct the eye's higher-order aberrations. J Opt Soc Am A Opt Image Sci Vis. 2001.
20. Roberts C. Future challenges to aberration-free ablative procedures. J Refract Surg. 2000;16:S623–S629.
21. Yoon G, MacRae S, Williams DR, Cox IG. Causes of spherical aberration induced by laser refractive surgery. J Cataract Refract Surg. 2005;31:127–135.
22. Seiler T, Kaemmerer M, Mierdel P, Krinke HE. Ocular optical aberrations after photorefractive keratectomy for myopia and myopic astigmatism. Arch Ophthalmol. 2000;118:17–21.
23. Nagy LJ, MacRae S, Yoon G, et al. Photorefractive keratectomy in the cat eye: biological and optical outcomes. J Cataract Refract Surg. 2007;33:1051–1064.
24. Huxlin KR, Yoon G, Nagy L, Porter J, Williams D. Monochromatic ocular wavefront aberrations in the awake-behaving cat. Vision Res. 2004;44:2159–2169.
25. Thibos LN, Applegate RA, Schwiegerling JT, Webb R. Standards for reporting the optical aberrations of eyes. J Refract Surg. 2002;18:S652–S660.
26. Cheng X, Thibos LN, Bradley A. Estimating visual quality from wavefront aberration measurements. J Refract Surg. 2003;19:S579–S584.
27. Cheng X, Bradley A, Thibos LN. Predicting subjective judgment of best focus with objective image quality metrics. J Vision. 2004;4:310–321.
28. Porter J, Guirao A, Cox IG, Williams DR. Monochromatic aberrations of the human eye in a large population. J Opt Soc Am A Opt Image Sci Vis. 2001;18:1793–1803.
29. Møller-Pedersen T, Cavanagh HD, Petroll WM, Jester JV. Stromal wound healing explains refractive instability and haze development after photorefractive keratectomy: a 1-year confocal microscopic study. Ophthalmology. 2000;107:1235–1245.
30. Netto MV, Mohan RR, Ambrosio R Jr, et al. Wound healing in the cornea: a review of refractive surgery complications and new prospects for therapy. Cornea. 2005;24:509–522.
31. Kapadia MS, Krishna R, Shah S, Wilson SE. Surgically induced astigmatism after photorefractive keratectomy with the excimer laser. Cornea. 2000;19:174–179.
32. Cano D, Barbero S, Marcos S. Comparison of real and computer-simulated outcomes of LASIK refractive surgery. J Opt Soc Am A. 2004;21:926–936.
33. Applegate RA, Marsack JD, Ramos R, Sarver EJ. Interaction between aberrations to improve or reduce visual performance. J Cataract Refract Surg. 2003;29:1487–1495.
34. Chalita MR, Chavala S, Xu M, Krueger RR. Wavefront analysis in post-LASIK eyes and its correlation with visual symptoms, refraction, and topography. Ophthalmology. 2004;111:447–453.
35. McCormick GJ, Porter J, Cox IG, MacRae S. Higher-order aberrations in eyes with irregular corneas after laser refractive surgery. Ophthalmology. 2005;112:1699–1709.
36. Artal P, Benito A, Tabernero J. The human eye is an example of robust optical design. J Vision. 2006;6:1–7.
37. Kohnen T, Bühren J, Kasper T, Terzi E. Quality of vision after refractive surgery. In: Kohnen T, Koch DD, eds. Essentials of Ophthalmology: Cataract and Refractive Surgery. Berlin: Springer; 2004:303–314.
38. Davies N, Diaz-Santana L, Lara-Saucedo D. Repeatability of ocular wavefront measurement. Optom Vis Sci. 2003;80:142–150.
39. Koller T, Seiler T. Four corneal presbyopia corrections: simulations of optical consequences on retinal image quality. J Cataract Refract Surg. 2006;32:2118–2123.
40. Tabernero J, Klyce SD, Sarver EJ, Artal P. Functional optical zone of the cornea. Invest Ophthalmol Vis Sci. 2007;48:1053–1060.
41. Manns F, Ho A, Parel JM, Culbertson W. Ablation profiles for wavefront-guided correction of myopia and primary spherical aberration. J Cataract Refract Surg. 2002;28:766–774.
42. Mrochen M, Donitzky C, Wüllner C, Löffler J. Wavefront-optimized ablation profiles: theoretical background. J Cataract Refract Surg. 2004;30:775–785.
43. Bühren J, Kühne C, Kohnen T. Influence of pupil and optical zone diameter on higher-order aberrations after wavefront-guided myopic LASIK. J Cataract Refract Surg. 2005;31:2272–2280.
{"url":"https://iovs.arvojournals.org/article.aspx?articleid=2125624","timestamp":"2024-11-10T06:11:06Z","content_type":"text/html","content_length":"292086","record_id":"<urn:uuid:ed4875af-b719-4d70-8c23-27eb23ec80c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00501.warc.gz"}
sppequ(3) [centos man page]

sppequ.f(3)          LAPACK          sppequ.f(3)

NAME
       sppequ.f - subroutine sppequ (UPLO, N, AP, S, SCOND, AMAX, INFO)

Function/Subroutine Documentation
       subroutine sppequ (character UPLO, integer N, real, dimension( * ) AP, real, dimension( * ) S, real SCOND, real AMAX, integer INFO)

       SPPEQU computes row and column scalings intended to equilibrate a symmetric positive definite matrix A in packed storage and reduce its condition number (with respect to the two-norm). S contains the scale factors, S(i) = 1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings.

       UPLO is CHARACTER*1
              = 'U': Upper triangle of A is stored;
              = 'L': Lower triangle of A is stored.

       N is INTEGER
              The order of the matrix A. N >= 0.

       AP is REAL array, dimension (N*(N+1)/2)
              The upper or lower triangle of the symmetric matrix A, packed columnwise in a linear array. The j-th column of A is stored in the array AP as follows:
              if UPLO = 'U', AP(i + (j-1)*j/2) = A(i,j) for 1 <= i <= j;
              if UPLO = 'L', AP(i + (j-1)*(2n-j)/2) = A(i,j) for j <= i <= n.

       S is REAL array, dimension (N)
              If INFO = 0, S contains the scale factors for A.

       SCOND is REAL
              If INFO = 0, SCOND contains the ratio of the smallest S(i) to the largest S(i). If SCOND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by S.

       AMAX is REAL
              Absolute value of the largest matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled.

       INFO is INTEGER
              = 0: successful exit
              < 0: if INFO = -i, the i-th argument had an illegal value
              > 0: if INFO = i, the i-th diagonal element is nonpositive.

       Author: Univ. of Tennessee, Univ. of California Berkeley, Univ. of Colorado Denver, NAG Ltd.
       Date: November 2011
       Definition at line 117 of file sppequ.f.
       Generated automatically by Doxygen for LAPACK from the source code.

Version 3.4.2          Tue Sep 25 2012          sppequ.f(3)

Check Out this Related Man Page

cppequ.f(3)          LAPACK          cppequ.f(3)

NAME
       cppequ.f - subroutine cppequ (UPLO, N, AP, S, SCOND, AMAX, INFO)

Function/Subroutine Documentation
       subroutine cppequ (character UPLO, integer N, complex, dimension( * ) AP, real, dimension( * ) S, real SCOND, real AMAX, integer INFO)

       CPPEQU computes row and column scalings intended to equilibrate a Hermitian positive definite matrix A in packed storage and reduce its condition number (with respect to the two-norm). S contains the scale factors, S(i) = 1/sqrt(A(i,i)), chosen so that the scaled matrix B with elements B(i,j) = S(i)*A(i,j)*S(j) has ones on the diagonal. This choice of S puts the condition number of B within a factor N of the smallest possible condition number over all possible diagonal scalings.

       UPLO is CHARACTER*1
              = 'U': Upper triangle of A is stored;
              = 'L': Lower triangle of A is stored.

       N is INTEGER
              The order of the matrix A. N >= 0.

       AP is COMPLEX array, dimension (N*(N+1)/2)
              The upper or lower triangle of the Hermitian matrix A, packed columnwise in a linear array. The j-th column of A is stored in the array AP as follows:
              if UPLO = 'U', AP(i + (j-1)*j/2) = A(i,j) for 1 <= i <= j;
              if UPLO = 'L', AP(i + (j-1)*(2n-j)/2) = A(i,j) for j <= i <= n.

       S is REAL array, dimension (N)
              If INFO = 0, S contains the scale factors for A.

       SCOND is REAL
              If INFO = 0, SCOND contains the ratio of the smallest S(i) to the largest S(i). If SCOND >= 0.1 and AMAX is neither too large nor too small, it is not worth scaling by S.

       AMAX is REAL
              Absolute value of the largest matrix element. If AMAX is very close to overflow or very close to underflow, the matrix should be scaled.

       INFO is INTEGER
              = 0: successful exit
              < 0: if INFO = -i, the i-th argument had an illegal value
              > 0: if INFO = i, the i-th diagonal element is nonpositive.

       Author: Univ. of Tennessee, Univ. of California Berkeley, Univ. of Colorado Denver, NAG Ltd.
       Date: November 2011
       Definition at line 118 of file cppequ.f.
       Generated automatically by Doxygen for LAPACK from the source code.

Version 3.4.1          Sun May 26 2013          cppequ.f(3)
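As a rough illustration of what these routines compute (a pure-Python sketch of the math described above, not the LAPACK implementation itself; the example matrix is made up):

```python
import math

def ppequ_reference(uplo, n, ap):
    """Illustrative re-derivation of the xPPEQU outputs for a symmetric
    positive definite matrix stored in packed form (sketch only)."""
    # 0-based position of the diagonal element A(j,j), for j = 1..n,
    # following the packed-storage indexing given in the man page.
    if uplo.upper() == 'U':
        diag = [j * (j + 1) // 2 - 1 for j in range(1, n + 1)]
    else:  # 'L': AP(i + (j-1)*(2n-j)/2), so A(j,j) sits at i = j
        diag = [j + (j - 1) * (2 * n - j) // 2 - 1 for j in range(1, n + 1)]
    d = [ap[k] for k in diag]
    for i, dii in enumerate(d):
        if dii <= 0.0:                  # INFO > 0: nonpositive diagonal
            return None, 0.0, 0.0, i + 1
    s = [1.0 / math.sqrt(dii) for dii in d]   # S(i) = 1/sqrt(A(i,i))
    scond = min(s) / max(s)             # ratio of smallest to largest S(i)
    amax = max(abs(x) for x in ap)      # largest matrix element in magnitude
    return s, scond, amax, 0

# A = [[4, 1], [1, 9]], upper triangle packed column by column:
s, scond, amax, info = ppequ_reference('U', 2, [4.0, 1.0, 9.0])
print(s, scond, amax, info)
```

Here S comes out as [0.5, 1/3], SCOND as 2/3, and AMAX as 9.0, matching the definitions in the parameter list above.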
{"url":"https://www.unix.com/man-page/centos/3/sppequ","timestamp":"2024-11-14T05:32:16Z","content_type":"text/html","content_length":"34827","record_id":"<urn:uuid:5fff565e-0761-422c-a430-ec809245eea4>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00025.warc.gz"}
Linear Pair of Angles: Definition, Axiom, Examples - Grade Potential Centennial, CO

The linear pair of angles is an important topic in geometry. With its many real-world uses, you'd be surprised how applicable this figure can be. Even if you believe it has no application in your life, everyone should understand the concept to ace those tests in school.

To save you time and make this information easy to access, here is a preliminary look at the characteristics of a linear pair of angles, with images and examples to help with your private study sessions. We will also discuss some real-life and geometric applications.

What Is a Linear Pair of Angles?

Linearity, angles, and intersections are concepts that remain relevant as you go forward in geometry toward more complicated theorems and proofs. We will answer this question with a straightforward explanation.

A linear pair of angles is the name given to two angles that lie on a straight line and whose measures sum to 180 degrees. Put simply, linear pairs of angles are two adjacent angles on the same line that together form a straight line. The sum of the angles in a linear pair always produces a straight angle, equal to 180 degrees.

It is important to keep in mind that the angles of a linear pair are always adjacent. They share a common vertex and a common arm, so they always lie on a straight line and are always supplementary angles. It is also important to clarify that, although the angles of a linear pair are always adjacent, adjacent angles are not always linear pairs.

The Linear Pair Axiom

Beyond the precise definition, we will study the two axioms carefully so you can fully comprehend every example thrown at you. Let's start by defining what an axiom is.
It is a mathematical postulate or assumption that is accepted without proof; it is considered obvious and self-evident. A linear pair of angles has two axioms associated with it.

The first axiom states that if a ray stands on a line, the adjacent angles it forms make a straight angle, so they are a linear pair.

The second axiom states that if two angles form a linear pair, then the uncommon arms of the two angles make a straight angle between them. This is commonly called a straight line.

Examples of Linear Pairs of Angles

To visualize these axioms better, here are a few drawn examples with their answers.

Example One

In this example, we have two angles that are adjacent to each other. As the figure shows, the adjacent angles form a linear pair because the sum of their measures equals 180 degrees. They are also supplementary angles, since they share a side and a common vertex.

Angle A: 75 degrees
Angle B: 105 degrees
Sum of Angles A and B: 75 + 105 = 180

Example Two

In this example, two lines intersect, making four angles. Not every pair of angles forms a linear pair, but each angle and the one adjacent to it do.

∠A: 30 degrees
∠B: 150 degrees
∠C: 30 degrees
∠D: 150 degrees

In this example, the linear pairs are:

∠A and ∠B
∠B and ∠C
∠C and ∠D
∠D and ∠A

Example Three

This example shows the intersection of three lines. Let's check it against the axiom and properties of linear pairs.

∠A: 150 degrees
∠B: 50 degrees
∠C: 160 degrees

None of the angle combinations add up to 180 degrees. As a result, we can conclude that this diagram has no linear pair unless we extend a straight line.

Applications of Linear Pair of Angles

Now that we have explored what linear pairs are and have seen some examples, let's look at how this concept is used in geometry and the real world.

In Real-World Scenarios

There are many applications of linear pairs of angles in the real world.
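As a small aside (not part of the original lesson), the 180-degree check used in the examples above can be written as a tiny Python function:

```python
def is_linear_pair(angle_a, angle_b, tol=1e-9):
    """Two adjacent angles form a linear pair when their measures sum to 180 degrees."""
    return abs(angle_a + angle_b - 180) < tol

# Example One: 75 and 105 degrees form a linear pair
print(is_linear_pair(75, 105))   # -> True
# Example Three: 150 and 50 degrees do not
print(is_linear_pair(150, 50))   # -> False
```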
One example is architects, who apply these axioms in their daily work to check whether two lines are perpendicular and form a straight angle. Builders and construction professionals also use them to make their work simpler: they employ linear pairs of angles to ensure that two adjacent walls meet at a 90-degree angle. Engineers use linear pairs of angles frequently as well, for example when working out the forces on beams and trusses.

In Geometry

Linear pairs of angles also play a role in geometry proofs. A common proof that uses linear pairs is the alternate interior angles theorem, which states that if two parallel lines are intersected by a transversal line, the alternate interior angles formed are congruent.

The proof of vertical angles also relies on linear pairs of angles. While the adjacent angles are supplementary and sum to 180 degrees, the opposite vertical angles are always equal to each other. Because of these two rules, you only need to know the measure of one angle to determine the measures of the rest.

The concept of linear pairs is later used in more sophisticated applications, such as working out the angles in polygons. It's essential to understand the basics of linear pairs so you are ready for more advanced geometry.

As you can see, linear pairs of angles are a relatively simple concept with some engaging applications. Next time you're out and about, see if you can notice some linear pairs! And, if you're taking a geometry class, take notes on how linear pairs may be helpful in proofs.

Enhance Your Geometry Skills with Grade Potential

Geometry is entertaining and useful, especially if you are interested in architecture or construction. However, if you're having difficulty understanding linear pairs of angles (or any concept in geometry), consider signing up for a tutoring session with Grade Potential.
One of our expert tutors can help you grasp the material and ace your next examination.
{"url":"https://www.centennialinhometutors.com/blog/linear-pair-of-angles-definition-axiom-examples","timestamp":"2024-11-10T02:22:52Z","content_type":"text/html","content_length":"77308","record_id":"<urn:uuid:ae63f26d-a972-4b1b-ba88-373958dc4fb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00490.warc.gz"}
Pole placement in the case of control of MIMO-system with state derivative feedback

Authors: Zubov N.E., Mikrin E.A., Misrikhanov M.Sh., Ryabchenko V.N.

Published: 03.09.2015

Published in issue: #4(103)/2015

DOI: 10.18698/0236-3933-2015-4-3-12

Category: Aviation, Rocket and Space Engineering | Chapter: Dynamics, Ballistics, Flying Vehicle Motion Control

Keywords: pole placement, determined MIMO-system, derivative control, system decomposition

The paper presents a method of pole placement for a deterministic linear dynamical MIMO system controlled by state-derivative feedback. The method is based on an original decomposition of the initial system with the help of matrix semi-orthogonal zero divisors. The method is universal for both continuous and discrete descriptions of MIMO systems. It imposes no restrictions on the dimensions of the state and input vectors of the MIMO system, nor on the algebraic and geometric multiplicity of the assigned poles, and it allows analytical synthesis of regulators. The application of the proposed approach to a fourth-order dynamic system with two inputs is considered.
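The paper builds its regulator through a zero-divisor decomposition, which is not reproduced here. As an independent numerical illustration of the underlying problem only (the matrices A, B and the desired poles below are made-up values), one can exploit a standard reformulation: under state-derivative feedback u = -K x', the closed loop is x' = (I + BK)^(-1) A x, and for invertible A this matrix has poles λ_i exactly when A^(-1) + A^(-1) B K has eigenvalues 1/λ_i. For a 2x2 single-input plant the trace and determinant conditions then give a linear system for K:

```python
import numpy as np

# Plant: x' = A x + B u, with state-derivative feedback u = -K x'.
# Closed loop: x' = (I + B K)^(-1) A x.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # invertible, example values
B = np.array([[0.0], [1.0]])
poles = np.array([-1.0, -4.0])             # desired closed-loop poles

F = np.linalg.inv(A)                       # A^(-1)
g = F @ B                                  # A^(-1) B
T = np.sum(1.0 / poles)                    # required trace of F + g K
D = np.prod(1.0 / poles)                   # required determinant of F + g K

# trace(F + g K) = tr(F) + K.g          -> coefficients g1, g2
# det(F + g K)   = det(F) * (1 + K.B)   -> coefficients B1, B2
#   (since F^(-1) g = A A^(-1) B = B, by the rank-one determinant identity)
lhs = np.vstack([g.T, B.T])
rhs = np.array([T - np.trace(F), D / np.linalg.det(F) - 1.0])
K = np.linalg.solve(lhs, rhs).reshape(1, 2)

Acl = np.linalg.inv(np.eye(2) + B @ K) @ A
print(np.sort(np.linalg.eigvals(Acl).real))   # approx [-4, -1]
```

For these values K works out to [-0.5, -0.5], and the closed-loop matrix has exactly the requested poles -1 and -4; this is only a check of the derivative-feedback closed-loop formula, not the decomposition-based synthesis described in the abstract.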
{"url":"https://vestnikprib.bmstu.ru/eng/catalog/arse/ball/888.html","timestamp":"2024-11-11T21:23:56Z","content_type":"application/xhtml+xml","content_length":"11400","record_id":"<urn:uuid:4f1edc53-0d5a-4358-847c-b93ced5e028b>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00148.warc.gz"}
Current Developments in Mathematics, 2020

Published: 27 October 2022
Publisher: International Press of Boston, Inc.
List Price: $75.00

This volume presents four papers based on selected lectures given at the Current Developments in Mathematics conference, sponsored by Harvard University and Massachusetts Institute of Technology, in January 2021.

Pub. Date: 2022 Oct | ISBN-13: 9781571464217 | ISBN-10: 1571464212 | Medium: Print | Binding: paperback | Size: 7" x 10" | Status: In Print | List Price: US$75.00
{"url":"https://intlpress.com/site/pub/pages/books/items/00000578/index.php","timestamp":"2024-11-04T12:20:29Z","content_type":"text/html","content_length":"7812","record_id":"<urn:uuid:cd9832aa-b32c-4b65-bcbd-80efeeda679c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00802.warc.gz"}
Inverse Trigonometric Functions Worksheet PDF

Students should solve the CBSE-issued sample papers to understand the pattern of the questions asked in the board exams. Students can download these worksheets and practice them; this will help them get better marks in examinations.

Definition of the Trig Functions

Right triangle definition (for this definition we assume that 0 < θ < π/2):

sin θ = opposite/hypotenuse    csc θ = hypotenuse/opposite
cos θ = adjacent/hypotenuse    sec θ = hypotenuse/adjacent
tan θ = opposite/adjacent      cot θ = adjacent/opposite

Unit circle definition: for this definition θ is any angle.

The trigonometric functions are not one-to-one, but by restricting their domains we can construct one-to-one functions from them. The inverse sine function, denoted by sin⁻¹x or arcsin x, is defined to be the inverse of the sine function restricted to −π/2 ≤ x ≤ π/2. It is assumed that the student is familiar with the concept of inverse functions; for further review, please visit section 2.7 or the handout given in class on inverse functions.

Inverse Trigonometric Functions: Integration

Let u be a differentiable function of x, and let a > 0. Then:

1. ∫ du/√(a² − u²) = arcsin(u/a) + C
2. ∫ du/(a² + u²) = (1/a) arctan(u/a) + C
3. ∫ du/(u√(u² − a²)) = (1/a) arcsec(|u|/a) + C

Worksheets in this collection include:

- M110 Fa17, Page 1/6, Worksheet 18 - Inverse Trigonometric Functions (§7.4): in Exercises 1-40, compute the exact value.
- AP Calculus AB, Worksheet 33 - Derivatives of Inverse Trigonometric Functions: know the theorems listed above.
- AP Calculus AB, Worksheet 37 - Integration of Inverse Trigonometric Functions: evaluate each integral.
- Kuta Software, Infinite Calculus, Differentiation - Inverse Trigonometric Functions: differentiate each function with respect to x.
- Identify the domain and range of each: 1) tan⁻¹, 2) cos⁻¹, 3) sin⁻¹, 4) csc⁻¹. Then sketch the graph. Students will need to find the exact values without a calculator.
- Give the domain and range of f and the inverse function f⁻¹. Sample exercise: suppose a and b are positive real numbers, ln(ab) = 3 and ln(ab²) = 5.
- Solve for missing angles of a right triangle using inverse trigonometry.
- CBSE Class 12 Mathematics Worksheet - Inverse Trigonometric Functions: free printable worksheets of CBSE and Kendriya Vidyalaya Schools as per the latest syllabus in PDF.
- Math 109, T9 - Inverse Trigonometric Functions, pages 2-3.
- Inverse Trigonometric Functions, Precalculus Unit 4: Trigonometric Functions Smart Board lessons.
- View Inverse_Trig_Worksheet parte 1.pdf from MATH 101 at Miami Coral Park Senior High.

The quiz will mainly ask you to identify the various properties of certain inverse trigonometric functions. You will not gain much by just reading this booklet; rather, have pen and paper ready and try to work through the examples before reading their solutions.
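As a quick numerical sanity check of the standard antiderivative ∫ du/(a² + u²) = (1/a) arctan(u/a) + C (an illustrative snippet, not part of any of the worksheets; the values of a and b are arbitrary):

```python
import math

# Compare a midpoint-rule approximation of the integral of 1/(a^2 + u^2)
# on [0, b] against the closed form (1/a) * arctan(b/a).
a, b, n = 2.0, 3.0, 100_000
h = b / n
riemann = sum(h / (a * a + ((k + 0.5) * h) ** 2) for k in range(n))
exact = (1.0 / a) * math.atan(b / a)
print(abs(riemann - exact) < 1e-8)   # -> True
```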
Of inverse trigonometric func-tions from section 1.6 software llc kuta software - infinite Precalculus inverse trig Name_____... We ask you to confirm your identity as a person booklet you will gain. To practice simplifying 10 inverse trigonometric functions Precalculus Smart board Lessons Answers,! Take an angle of a triangle and find the side length to take an angle of a and... On inverse functions, to take an angle of a triangle and find the angle §7.4 ) Exercises1! There only three integrals and not six dx ³ sin2 3x 5 for class 12 inverse! Each expression or arcsinx is de ned to be the inverse sine function denoted by sin 1 x or is! 1 ) tan ( ) 2 ) cos 3 ) sin 4 ) csc identify the domain range... Dx 8. dx 1 4x2 ³ 9 t xm pa mddeo 7w liwtdhl gi nn pn9i. Quiz will mainly ask you to identify the domain and range of the! For the rest of the chapter wherein a list of functions is given with domain! Are two ways to graph its inverse 1 1 how to use our website, we construct. U arc C u u a a ³ 2 Ratios date period various properties of certain inverse trigonometric functions inverse... The restricted sine function sinx 2 cos 3 ) sin 4 ) csc identify the various of! Are there only three integrals and not six gives a brief explanation of the restricted function. X ) = 5 llc kuta software infinite geometry name inverse trigonometric functions ( §7.4 ) in Exercises1 40. Definitions of the topics covered under this chapter t9 inverse trigonometric functions the chapter wherein a list functions... To continue to use this booklet function sinx persevere in pdf comprises six inverse trig integrals we re a behind... 6- x lim cos x x π →∞ 8. sin lim functions ( §7.4 ) in Exercises1 - 40 compute. 12 - inverse trigonometric functions, and we know about trigonometric functions page 3. Given with corresponding domain and range of each expression need to find the exact values without a.... 
Complete your ideas Professor Davis ’ s lectures, inverse trigonometric functions in class on inverse functions take. These evaluating trigonometric worksheet focuses on the integrals - infinite Precalculus inverse trig functions to take an angle a... Is given with corresponding domain and range of each x or arcsinx is de ned to be inverse. Each expression 1 inverse trigonometric Ratios date period much by just reading this you... Through the examples before reading their solutions two ways to graph its inverse −1 ) 2. arcsin − 3! A a ³ 3 the quiz will mainly ask you to confirm your identity as person. A list of functions is given with corresponding domain and range of each 2 3! Dy yy View Inverse_Trig_Worksheet parte 1.pdf from math 101 at Miami Coral Park Senior High their. A person the side length integrals and not six x 6cos 2 sin dt... Their Answers to solve the math fun fact 1 tan 2 cos 3 sin 4 ) csc the... About inverse functions angles of a right triangle using inverse trigonometry the rest the! ³ 3 period 1 find the exact value of each activity allows student practice. Arccosx is de ned to be the inverse function f 1 in to! And find the side length construct one-to-one functions from them function denoted by sin 1 x or is. Mainly ask you to confirm your identity as a person your ideas we ’ re a little behind Davis... This section we review the definitions of the chapter wherein a list of functions is given with corresponding domain range! 109 t9 inverse trigonometric functions ( 1 ) sin2 3x 5 student to simplifying! Arctan du u arc C u u a a ³ 2 2 1 ³ 7. We talk concerning trigonometry Worksheets and Answers pdf, below we will several. Their do-mains, we can construct one-to-one functions from them a right triangle using inverse trigonometry functions Precalculus Unit trigonometric! Are two ways to graph its inverse inverse trig integrals we ’ re a behind... 
Inverse Trigonometric Functions - Worksheet Review

This worksheet set (e.g. "Worksheet 18: Inverse Trigonometric Functions") is introductory review material. It is assumed that the student is familiar with basic trigonometric identities; keep pen and paper ready and work through the examples before reading their solutions - you will not gain much by just reading the booklet. The topics covered are:

- Definitions and graphs of the inverse trigonometric functions: sin⁻¹x (arcsin x) is defined as the inverse of the restricted sine function, and cos⁻¹x (arccos x) as the inverse of the restricted cosine function. Restricting the domains makes the trigonometric functions one-to-one, so inverses can be constructed.
- Domain and range: for a one-to-one function f with inverse f⁻¹, the domain of f⁻¹ is the range of f, and the range of f⁻¹ is the domain of f. One exercise gives a list of functions with corresponding domains and ranges to identify.
- Evaluating the exact value of expressions involving arcsin (e.g. arcsin(−√3/2)), arctan, arccos, and arccsc.
- Derivatives: find dy/dx, the derivative of y with respect to the appropriate variable, for expressions built from inverse trigonometric functions.
- Integrals that produce inverse trigonometric functions, such as ∫ du/√(a²−u²) = arcsin(u/a) + C. Only three such integral forms are needed, not six, because the remaining three inverse functions differ from these by constants.
- Solving for missing angles and side lengths of a triangle using inverse trigonometry.

CBSE issues sample papers every year for Class 12 board exams; practising worksheets on inverse trigonometric functions will help students get better marks in examinations.
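One of the integral identities drilled in such worksheets, ∫ du/√(a²−u²) = arcsin(u/a) + C, can be checked numerically; a minimal Python sketch with a = 1 (the helper name is my own):

```python
import math

def integrate(f, a, b, n=100_000):
    """Midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# d/du arcsin(u) = 1 / sqrt(1 - u^2), so integrating 1/sqrt(1 - u^2)
# from 0 to 1/2 should give arcsin(1/2) - arcsin(0) = pi/6.
approx = integrate(lambda u: 1.0 / math.sqrt(1.0 - u * u), 0.0, 0.5)
print(approx, math.pi / 6)
```

The midpoint rule converges quadratically in the step size, so with this many subintervals the two printed values agree to many decimal places.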
{"url":"http://www.nazenin5.com/joe-louis-stry/inverse-trigonometric-functions-worksheet-pdf-5df536","timestamp":"2024-11-07T13:47:11Z","content_type":"text/html","content_length":"25271","record_id":"<urn:uuid:448db929-6273-480d-9aa0-d432c8bbce49>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00728.warc.gz"}
Compare and Order 6th Grade - Compare and Order

Efficiently compare and order fractions, decimals and percents; determine their approximate locations on a number line. 0606.2.1 Links verified on 7/7/2014

1. Comparing Exponential Expressions - select <, >, or =
2. Comparing Fractions - [Skillwise factsheet] Summary of ways of comparing fractions
3. Comparing Fractions and Percent - [Skillwise quiz] Entry level quiz
4. Comparing Fractions and Percent - [Skillwise quiz] Level 1 quiz
5. Comparing Fractions and Decimals - [Skillwise worksheet] Exercise in changing fractions to decimals and decimals to fractions
6. Comparing Fractions and Decimals: Money - [Skillwise worksheet] Exercise in recognizing the correspondence between fractions and percentages in the context of money
7. Comparing Fractions and Decimals: Place Value - [Skillwise worksheet] Exercise in expressing decimals as fractions or mixed numbers, simplifying if possible
8. Comparing Fractions and Decimals: Shapes - [Skillwise worksheet] Exercise in expressing the shaded areas of shapes as fractions and decimals
9. Comparing Fractions And Percentages - [Skillwise worksheet] Exercise in changing fractions to percentages and percentages to fractions
10. Comparing Fractions And Percentages: With Money - [Skillwise worksheet] Exercise in recognizing the correspondence between fractions and percentages in the context of money
11. Comparing Integers - Comparing integers with absolute values
12. Comparing Numbers Game - [Skillwise game] interactive game
13. Comparing Prices in Two Shop Sales - [Skillwise worksheet] Exercise in working with fractions and percentages to find the best monetary value
14. - Create your own math facts worksheets for comparing values. Problems can include calculations using positive and negative numbers, and two-digit decimal values.
15. Comparing Whole Numbers - [Skillwise worksheet] Exercise in assessing and comparing whole numbers for bakery figures
16. Computation Castle - a game that requires the utilization of several math skills: mixed numbers/improper fractions, equivalent fractions, metric conversions, exponents, rounding to the nearest thousands and thousandths, and place value
17. Evaluating Expressions - a four step lesson
18. Evaluating Expressions - Get out your pencils and calculators and give these a try!
19. Flower Power - position numbered flower buds to order decimals, fractions and percentages correctly - game instructions are available
20. Fraction Sorter - interactive site posted by Shodor
21. Fractions Side-by-Side - [Skillwise game] interactive game comparing fractions
22. Greater, Less Than, or the Same - Compare these decimals
23. One-Minute Video on Comparing Fractions and Percentages - [Skillwise video] Why compare fractions and percentages?
24. Ordering Decimal Numbers - click and drag the numbers to put them in order from least to greatest
25. Ordering Decimal Numbers II - click and drag the numbers to put them in order from least to greatest
26. Percent with a Calculator - after the lesson there are problems to work
27. Placing Calculations on a Number Line - four types to choose from; calculations involving addition, subtraction, multiplication, or division
28. Placing Fractions and Decimals on a Number Line - Place fractions and decimals on a number line. A wide choice of levels including proper fractions, improper fractions and mixed numbers.
29. Placing Numbers on a Number Line - five styles to choose from, including whole numbers, fractions and decimals
30. Put Whole Numbers in Order - Three or more numbers can be placed in order. A number may come before the other numbers or it may come between them or after them.
31. Symbols of Inclusion - simplify each expression and choose the correct comparison symbol
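Several of the activities above boil down to putting fractions, decimals and percents on a common scale before comparing. A small Python sketch of that idea (the helper name is my own):

```python
from fractions import Fraction

def to_fraction(value):
    """Convert '3/4', '0.7', or '65%' to an exact Fraction for comparison."""
    s = str(value).strip()
    if s.endswith('%'):
        return Fraction(s[:-1]) / 100
    return Fraction(s)  # Fraction accepts both '3/4' and '0.7' strings

values = ['3/4', '0.7', '65%', '4/5']
ordered = sorted(values, key=to_fraction)
print(ordered)  # smallest to largest: ['65%', '0.7', '3/4', '4/5']
```

Using exact `Fraction` arithmetic avoids the rounding pitfalls of comparing floating-point decimals directly.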
{"url":"https://www.internet4classrooms.com/grade_level_help/fractions_decimals_percents_math_sixth_6th_grade.htm","timestamp":"2024-11-03T16:53:48Z","content_type":"text/html","content_length":"40368","record_id":"<urn:uuid:d0c2a517-6e12-4f87-8ea4-b1efc29ba689>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00094.warc.gz"}
Consider the two idealized systems: (i) a parallel plate capacitor with large plates and small separation, and (ii) a long solenoid of length L >> R, where R is the radius of the cross-section. In (i), $\vec{E}$ is ideally treated as constant between the plates and zero outside. In (ii), the magnetic field is constant inside the solenoid and zero outside. These idealised assumptions, however, contradict fundamental laws, as:

(a) Case (i) contradicts Gauss's law for electrostatic fields
(b) Case (ii) contradicts Gauss's law for magnetic fields
(c) Case (i) agrees with $\oint \vec{E}\cdot d\vec{l} = 0$
(d) Case (ii) contradicts $\oint \vec{H}\cdot d\vec{l} = I_{en}$

The correct answer is (b). Gauss's law for electrostatics, $\oint_S \vec{E}\cdot d\vec{S} = q/\varepsilon_0$, is not contradicted, because electric field lines begin and end on charges and need not form continuous closed paths. Gauss's law for magnetism, $\oint_S \vec{B}\cdot d\vec{S} = 0$, is contradicted: there is a magnetic field inside the current-carrying solenoid and none outside in the idealization, yet magnetic field lines must form closed paths, so the field cannot simply terminate at the solenoid's boundary.
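The contradiction in case (ii) can be made concrete: with the idealized field — uniform inside a finite solenoid and exactly zero outside — a closed pillbox straddling an end face has nonzero net magnetic flux, violating Gauss's law for magnetism. A small numeric sketch (field strength and geometry are illustrative values of my own):

```python
import math

B0 = 1.0    # tesla -- idealized uniform field inside the solenoid (along z)
R = 0.01    # m -- solenoid (and pillbox) radius
A = math.pi * R ** 2

# Closed cylindrical pillbox straddling the end face of the solenoid:
# bottom cap just inside (B = B0 z-hat), top cap just outside (B = 0),
# curved side parallel to B, so it carries no flux.
flux_bottom = -B0 * A   # outward normal on the bottom cap points along -z
flux_top = 0.0          # field is identically zero outside, by assumption
flux_side = 0.0
net_flux = flux_bottom + flux_top + flux_side
print(net_flux)  # nonzero, contradicting Gauss's law for magnetism
```

For a real solenoid the fringing field at the ends carries exactly the flux needed to make the net flux through any closed surface zero.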
{"url":"https://www.doubtnut.com/qna/649445442","timestamp":"2024-11-10T19:36:30Z","content_type":"text/html","content_length":"249599","record_id":"<urn:uuid:e0b606f1-5229-46a6-9de4-52d9ee1108b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00773.warc.gz"}
simplest radical form of 45

A square-root radical is in simplest form when the radicand has no perfect-square factors and is not a fraction. To simplify, factor out the largest perfect square under the radical.

For 45: since 45 = 9 × 5, √45 = √9 × √5 = 3√5. There is no perfect-square factor left in 5, so 3√5 is the simplest radical form. For example, 2√45 = 2 × 3√5 = 6√5.

Related examples:
- √12 = √(4 × 3) = 2√3 ≈ 3.46410; in exponent form, 12^(1/2) = 2 × 3^(1/2).
- √25 = 5.
- √28 = √(4 × 7) = 2√7.
- √245 = √(49 × 5) = 7√5.
- √65 is already in simplest radical form, since 65 = 5 × 13 has no perfect-square factors.
- In a 45-45-90 right triangle the sides are in the ratio x : x : x√2, and cos 45° = √2/2.

One objection to simplest radical form, from a student's perspective, is that a decimal such as 7.071 can be easier to grasp as a quantity than 5√2. The counterpoint is that patterns (for example, the fixed side ratios of special right triangles) are much easier to see when answers are written in simplest radical form.
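The extract-the-largest-perfect-square-factor procedure described above is easy to automate; a small Python sketch (the function name is my own):

```python
def simplest_radical_form(n):
    """Write sqrt(n) as coeff*sqrt(rad) with rad square-free."""
    coeff, rad = 1, n
    f = 2
    while f * f <= rad:
        # Pull each square factor f*f out of the radicand, one at a time.
        while rad % (f * f) == 0:
            rad //= f * f
            coeff *= f
        f += 1
    return coeff, rad

print(simplest_radical_form(45))   # (3, 5)  i.e. 3*sqrt(5)
print(simplest_radical_form(245))  # (7, 5)  i.e. 7*sqrt(5)
print(simplest_radical_form(65))   # (1, 65) already in simplest form
```

The returned radicand is square-free, which is exactly the "no perfect-square factors" condition for simplest radical form.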
{"url":"https://jeffreycscott.com/crs-broad-xacn/simplest-radical-form-of-45-005599","timestamp":"2024-11-14T02:13:39Z","content_type":"text/html","content_length":"44388","record_id":"<urn:uuid:0d805f8e-3588-4e89-b304-1918c4e1dac9>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00216.warc.gz"}
Motion of a charged particle in a uniform magnetic field - Lorentz Force | Physics

Consider a charged particle of charge q and mass m that enters a region of uniform magnetic field $\vec{B}$ with its velocity $\vec{v}$ perpendicular to the field. As a result, the charged particle moves in a circular orbit, as shown in Figure 3.50. The Lorentz force on the charged particle is given by

$\vec{F} = q\,\vec{v}\times\vec{B}$

Since the Lorentz force alone acts on the particle, the magnitude of the net force on the particle is

$F = qvB$

This Lorentz force acts as the centripetal force for the particle to execute circular motion. Therefore,

$qvB = \dfrac{mv^{2}}{r}$

The radius of the circular path is

$r = \dfrac{mv}{qB} = \dfrac{p}{qB}$   (3.56)

where p = mv is the magnitude of the linear momentum of the particle. Let T be the time taken by the particle to finish one complete circular motion; then

$T = \dfrac{2\pi r}{v}$   (3.57)

Hence, substituting (3.56) in (3.57), we get

$T = \dfrac{2\pi m}{qB}$   (3.58)

Equation (3.58) is called the cyclotron period. The reciprocal of the time period is the frequency f, which is

$f = \dfrac{1}{T} = \dfrac{qB}{2\pi m}$   (3.59)

In terms of angular frequency ω,

$\omega = 2\pi f = \dfrac{qB}{m}$   (3.60)

Equations (3.59) and (3.60) are called the cyclotron frequency or gyro-frequency. From equations (3.58), (3.59) and (3.60), we infer that the time period and frequency depend only on the charge-to-mass ratio (specific charge), not on the velocity or the radius of the circular path.

If a charged particle moves in a region of uniform magnetic field such that its velocity is not perpendicular to the magnetic field, then the velocity of the particle splits into two components: one parallel to the field and the other perpendicular to it. The component of velocity parallel to the field remains unchanged, while the component perpendicular to the field keeps changing direction due to the Lorentz force. Hence the path of the particle is not a circle; it is a helix around the field lines, as shown in Figure 3.51. As an example, the helical path of an electron moving in a magnetic field is shown in Figure 3.52. Inside the particle detector called a cloud chamber, the path is made visible by the condensation of water droplets.
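The relations in equations (3.56)–(3.60) are straightforward to evaluate numerically. A short Python sketch for an electron (the field and speed values here are illustrative choices, not from the text):

```python
import math

q = 1.60e-19   # C, magnitude of the electron charge
m = 9.11e-31   # kg, electron mass
B = 0.500      # T, uniform magnetic field
v = 1.0e7      # m/s, speed perpendicular to B (illustrative value)

r = m * v / (q * B)              # radius of the circular path, eq. (3.56)
T = 2.0 * math.pi * m / (q * B)  # cyclotron period, eq. (3.58)
f = 1.0 / T                      # cyclotron frequency, eq. (3.59)
w = q * B / m                    # angular frequency, eq. (3.60)

# Note that T, f and w depend only on q/m and B -- not on v or r.
print(r, T, f, w)
```

Doubling v doubles r but leaves T, f and ω unchanged, which is the property that makes the cyclotron work.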
EXAMPLE 3.22

An electron moving perpendicular to a uniform magnetic field of 0.500 T undergoes circular motion of radius 2.50 mm. What is the speed of the electron?

Charge of an electron q = -1.60 × 10^-19 C, so |q| = 1.60 × 10^-19 C
Magnitude of the magnetic field B = 0.500 T
Mass of the electron m = 9.11 × 10^-31 kg
Radius of the orbit r = 2.50 mm = 2.50 × 10^-3 m

Velocity of the electron: v = |q| r B / m

v = 2.195 × 10^8 m s^-1

EXAMPLE 3.23

A proton moves in a uniform magnetic field of strength 0.500 T directed along the x-axis. At the initial time t = 0 s, the proton has a velocity with components both along and perpendicular to the field. Find (a) the acceleration of the proton at the initial time, and (b) whether the path is circular or helical; if helical, calculate the radius of the helical trajectory and also the pitch of the helix. (Note: the pitch of the helix is the distance travelled along the helix axis per revolution.)

The pitch of the helix is the distance travelled along the x-axis in a time T, which is P = v_x T, where T = 2πm/(qB) is the cyclotron period from equation (3.58). The proton experiences appreciable acceleration in the magnetic field; here the pitch of the helix works out to be almost six times greater than the radius of the helix.

EXAMPLE 3.24

Two singly ionized isotopes of uranium, ²³⁵₉₂U and ²³⁸₉₂U (isotopes have the same atomic number but different mass numbers), are sent with velocity 1.00 × 10^5 m s^-1 normally into a magnetic field of strength 0.500 T. Compute the distance between the two isotopes after they complete a semicircle. Also compute the time taken by each isotope to complete one semicircular path. (Given masses of the isotopes: m235 = 3.90 × 10^-25 kg and m238 = 3.95 × 10^-25 kg)

Since the isotopes are singly ionized, they carry equal charge, equal in magnitude to the charge of an electron: q = 1.6 × 10^-19 C. The masses of ²³⁵₉₂U and ²³⁸₉₂U are 3.90 × 10^-25 kg and 3.95 × 10^-25 kg respectively. Magnetic field applied, B = 0.500 T.
The velocity of each ion is 1.00 × 10^5 m s^-1. Then:

(a) The radius of the path of ²³⁵₉₂U is r235 = m235 v / (qB) ≈ 48.8 cm, so the diameter of its semicircle is d235 = 2 r235 ≈ 97.5 cm. Similarly, the diameter of the semicircle due to ²³⁸₉₂U is d238 = 2 r238 ≈ 98.8 cm. Therefore the separation distance between the isotopes is Δd = d238 − d235 ≈ 1.2 cm.

(b) The time taken by each isotope to complete one semicircular path is half the cyclotron period, t = T/2 = πm/(qB), giving t235 ≈ 1.53 × 10^-5 s and t238 ≈ 1.55 × 10^-5 s.

Note that even though the difference between the masses of the two isotopes is very small, this arrangement converts that small mass difference into an easily measurable separation distance. This arrangement is known as a mass spectrometer. A mass spectrometer is used in many areas of science, especially in medicine, space science, and geology. For example, in medicine anaesthesiologists use it to measure respiratory gases, and biologists use it to determine reaction mechanisms in photosynthesis.
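The numbers in Example 3.24 can be reproduced directly from r = mv/(qB) and T = 2πm/(qB); a small Python check (the helper name is my own):

```python
import math

q = 1.6e-19      # C, magnitude of the charge of a singly ionized atom
B = 0.500        # T, applied magnetic field
v = 1.00e5       # m/s, ion speed, perpendicular to B
m235 = 3.90e-25  # kg, mass of the U-235 ion
m238 = 3.95e-25  # kg, mass of the U-238 ion

def semicircle(m):
    """Return (diameter, half-period) for an ion of mass m: r = m v / (q B)."""
    r = m * v / (q * B)
    return 2.0 * r, math.pi * m / (q * B)

d235, t235 = semicircle(m235)
d238, t238 = semicircle(m238)
print(d238 - d235)   # separation after a semicircle, ~0.0125 m (~1.2 cm)
print(t235, t238)    # ~1.53e-5 s and ~1.55e-5 s
```

Because the separation grows linearly with the mass difference, heavier detectors can be built simply by letting the ions travel a full semicircle before collection.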
{"url":"https://www.brainkart.com/article/Motion-of-a-charged-particle-in-a-uniform-magnetic-field_38475/","timestamp":"2024-11-10T18:18:58Z","content_type":"text/html","content_length":"60203","record_id":"<urn:uuid:e9273ce0-e970-48de-adcc-6a1249639a74>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00588.warc.gz"}
Fit Two-Compartment Model to PK Profiles of Multiple Individuals

Estimate pharmacokinetic parameters of multiple individuals using a two-compartment model. Suppose you have drug plasma concentration data from three individuals that you want to use to estimate corresponding pharmacokinetic parameters, namely the volumes of the central and peripheral compartments (Central, Peripheral), the clearance rate (Cl_Central), and the intercompartmental clearance (Q12). Assume the drug concentration versus time profile follows the biexponential decline $C(t) = Ae^{-at} + Be^{-bt}$, where C(t) is the drug concentration at time t, and a and b are the slopes of the corresponding exponential declines.

The synthetic data set contains drug plasma concentration data measured in both central and peripheral compartments. The data was generated using a two-compartment model with an infusion dose and first-order elimination. These parameters were used for each individual:

               Central  Peripheral  Q12   Cl_Central
Individual 1    1.90      0.68      0.24     0.57
Individual 2    2.10      6.05      0.36     0.95
Individual 3    1.70      4.21      0.46     0.95

The data is stored as a table with variables ID, Time, CentralConc, and PeripheralConc. It represents the time course of plasma concentrations measured at eight different time points for both central and peripheral compartments after an infusion dose. Convert the data set to a groupedData object, which is the required data format for the fitting function sbiofit used later. A groupedData object also lets you set independent variable and group variable names (if they exist). Set the units of the ID, Time, CentralConc, and PeripheralConc variables. The units are optional and only required for the UnitConversion feature, which automatically converts matching physical quantities to one consistent unit system.
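A useful property of the biexponential decline $C(t) = Ae^{-at} + Be^{-bt}$ is that at late times the slower exponential dominates, so the terminal log-slope estimates −b. A small Python sketch with illustrative parameter values of my own (not taken from the data set):

```python
import math

# Biexponential decline C(t) = A*exp(-a*t) + B*exp(-b*t). At late times the
# faster term (slope a) has died away, so the terminal log-slope gives -b.
A, a, B, b = 5.0, 1.2, 2.0, 0.15   # illustrative values, not fitted ones

def conc(t):
    return A * math.exp(-a * t) + B * math.exp(-b * t)

t1, t2 = 20.0, 30.0   # late time points where exp(-a*t) is negligible
slope = (math.log(conc(t2)) - math.log(conc(t1))) / (t2 - t1)
print(-slope)   # recovers b to high accuracy
```

This "curve peeling" idea is the classical manual route to initial estimates before a full nonlinear fit of the compartmental model.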
gData = groupedData(data);
gData.Properties.VariableUnits = {'','hour','milligram/liter','milligram/liter'};

ans = struct with fields:
                Description: ''
                   UserData: []
             DimensionNames: {'Row' 'Variables'}
              VariableNames: {'ID' 'Time' 'CentralConc' 'PeripheralConc'}
              VariableTypes: ["double" "double" "double" "double"]
       VariableDescriptions: {}
              VariableUnits: {'' 'hour' 'milligram/liter' 'milligram/liter'}
         VariableContinuity: []
                   RowNames: {}
           CustomProperties: [1x1 matlab.tabular.CustomProperties]
          GroupVariableName: 'ID'
    IndependentVariableName: 'Time'

Create a trellis plot that shows the PK profiles of the three individuals.

Use the built-in PK library to construct a two-compartment model with infusion dosing and first-order elimination, where the elimination rate depends on the clearance and the volume of the central compartment. Use the configset object to turn on unit conversion.

pkmd = PKModelDesign;
pkc1 = addCompartment(pkmd,'Central');
pkc1.DosingType = 'Infusion';
pkc1.EliminationType = 'linear-clearance';
pkc1.HasResponseVariable = true;
pkc2 = addCompartment(pkmd,'Peripheral');
model = construct(pkmd);
configset = getconfigset(model);
configset.CompileOptions.UnitConversion = true;

Assume every individual receives an infusion dose at time = 0, with a total infusion amount of 100 mg at a rate of 50 mg/hour. For details on setting up different dosing strategies, see Doses in SimBiology Models.

dose = sbiodose('dose','TargetName','Drug_Central');
dose.StartTime = 0;
dose.Amount = 100;
dose.Rate = 50;
dose.AmountUnits = 'milligram';
dose.TimeUnits = 'hour';
dose.RateUnits = 'milligram/hour';

The data contains measured plasma concentrations in the central and peripheral compartments. Map these variables to the appropriate model species, which are Drug_Central and Drug_Peripheral.
responseMap = {'Drug_Central = CentralConc','Drug_Peripheral = PeripheralConc'};

The parameters to estimate in this model are the volumes of the central and peripheral compartments (Central and Peripheral), the intercompartmental clearance Q12, and the clearance rate Cl_Central. In this case, specify a log transform for Central and Peripheral since they are constrained to be positive. The estimatedInfo object lets you specify parameter transforms, initial values, and parameter bounds.

paramsToEstimate = {'log(Central)','log(Peripheral)','Q12','Cl_Central'};
estimatedParam = estimatedInfo(paramsToEstimate,'InitialValue',[1 1 1 1]);

Fit the model to all of the data pooled together, that is, estimate one set of parameters for all individuals. The default estimation method that sbiofit uses will change depending on which toolboxes are available. To see which estimation function sbiofit used for the fitting, check the EstimationFunction property of the corresponding results object.

pooledFit = sbiofit(model,gData,responseMap,estimatedParam,dose,'Pooled',true)

pooledFit =

  OptimResults with properties:

                       ExitFlag: 3
                         Output: [1x1 struct]
                      GroupName: []
                           Beta: [4x3 table]
             ParameterEstimates: [4x3 table]
                              J: [24x4x2 double]
                           COVB: [4x4 double]
               CovarianceMatrix: [4x4 double]
                              R: [24x2 double]
                            MSE: 6.6220
                            SSE: 291.3688
                        Weights: []
                  LogLikelihood: -111.3904
                            AIC: 230.7808
                            BIC: 238.2656
                            DFE: 44
                 DependentFiles: {1x3 cell}
                           Data: [24x4 groupedData]
        EstimatedParameterNames: {'Central' 'Peripheral' 'Q12' 'Cl_Central'}
                 ErrorModelInfo: [1x3 table]
             EstimationFunction: 'lsqnonlin'

Plot the fitted results versus the original data. Although three separate plots were generated, the data was fitted using the same set of parameters (that is, all three individuals had the same fitted line).

Estimate one set of parameters for each individual and see if there is any improvement in the parameter estimates.
In this example, since there are three individuals, three sets of parameters are estimated.

unpooledFit = sbiofit(model,gData,responseMap,estimatedParam,dose,'Pooled',false);

Plot the fitted results versus the original data. Each individual was fitted differently (that is, each fitted line is unique to each individual), and each line appears to fit well to the individual data.

Display the fitted results of the first individual. The MSE is lower than that of the pooled fit. This is also true for the other two individuals.

ans =

  OptimResults with properties:

                       ExitFlag: 3
                         Output: [1x1 struct]
                      GroupName: 1
                           Beta: [4x3 table]
             ParameterEstimates: [4x3 table]
                              J: [8x4x2 double]
                           COVB: [4x4 double]
               CovarianceMatrix: [4x4 double]
                              R: [8x2 double]
                            MSE: 2.1380
                            SSE: 25.6559
                        Weights: []
                  LogLikelihood: -26.4805
                            AIC: 60.9610
                            BIC: 64.0514
                            DFE: 12
                 DependentFiles: {1x3 cell}
                           Data: [8x4 groupedData]
        EstimatedParameterNames: {'Central' 'Peripheral' 'Q12' 'Cl_Central'}
                 ErrorModelInfo: [1x3 table]
             EstimationFunction: 'lsqnonlin'

Generate a plot of the residuals over time to compare the pooled and unpooled fit results. The figure indicates the unpooled fit residuals are smaller than those of the pooled fit, as expected. In addition to comparing residuals, other more rigorous criteria can be used to compare the fitted results.

t = [gData.Time;gData.Time];
res_pooled = vertcat(pooledFit.R);
res_pooled = res_pooled(:);
res_unpooled = vertcat(unpooledFit.R);
res_unpooled = res_unpooled(:);
hold on
refl = refline(0,0); % A reference line representing a zero residual
title('Residuals versus Time');

This example showed how to perform pooled and unpooled estimations using sbiofit. As illustrated, the unpooled fit accounts for variations due to the specific subjects in the study, and, in this case, the model fits better to the data. However, the pooled fit returns population-wide parameters. If you want to estimate population-wide parameters while considering individual variations, use nonlinear mixed-effects estimation (sbiofitmixed).
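Readers without SimBiology can still reproduce the forward model directly: the two-compartment infusion model above is just a pair of linear ODEs for the drug amounts in the central and peripheral compartments. The sketch below is an illustrative Python implementation, not part of the MATLAB example; the RK4 step size, simulation horizon, and mass-balance bookkeeping are my own choices, and the parameters are Individual 1's values from the table.

```python
# Sketch of the two-compartment infusion model with first-order elimination:
#   dAc/dt = infusion(t) - (Cl/Vc)*Ac - (Q/Vc)*Ac + (Q/Vp)*Ap
#   dAp/dt = (Q/Vc)*Ac - (Q/Vp)*Ap
# Concentrations are amounts divided by compartment volumes.

def two_compartment(Vc, Vp, Q, Cl, dose=100.0, rate=50.0, t_end=24.0, dt=0.001):
    """Simulate drug amounts (mg); return (times, central_conc, peripheral_conc,
    total mass accounted for = central + peripheral + eliminated)."""
    def deriv(t, Ac, Ap):
        infusion = rate if t < dose / rate else 0.0  # 100 mg at 50 mg/h -> 2 h
        dAc = infusion - (Cl / Vc) * Ac - (Q / Vc) * Ac + (Q / Vp) * Ap
        dAp = (Q / Vc) * Ac - (Q / Vp) * Ap
        return dAc, dAp

    t, Ac, Ap, eliminated = 0.0, 0.0, 0.0, 0.0
    times, cc, cp = [], [], []
    while t < t_end:
        # classical RK4 step for the linear ODE system
        k1 = deriv(t, Ac, Ap)
        k2 = deriv(t + dt/2, Ac + dt/2*k1[0], Ap + dt/2*k1[1])
        k3 = deriv(t + dt/2, Ac + dt/2*k2[0], Ap + dt/2*k2[1])
        k4 = deriv(t + dt, Ac + dt*k3[0], Ap + dt*k3[1])
        Ac += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        Ap += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        eliminated += dt * (Cl / Vc) * Ac  # first-order elimination from central
        t += dt
        times.append(t)
        cc.append(Ac / Vc)
        cp.append(Ap / Vp)
    return times, cc, cp, Ac + Ap + eliminated

times, cc, cp, total = two_compartment(Vc=1.90, Vp=0.68, Q=0.24, Cl=0.57)
```

A useful sanity check on such a simulation is mass balance: the drug remaining in both compartments plus the cumulative eliminated amount should approximately equal the 100 mg infused.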
College Physics chapters 1-17

32 Medical Applications of Nuclear Physics

• Define nuclear fission.
• Discuss how fission fuel reacts and describe what it produces.
• Describe controlled and uncontrolled chain reactions.

Nuclear fission is a reaction in which a nucleus is split (or fissured). Controlled fission is a reality, whereas controlled fusion is a hope for the future. Hundreds of nuclear fission power plants around the world attest to the fact that controlled fission is practical and, at least in the short term, economical, as seen in [link]. Whereas nuclear power was of little interest for decades following TMI and Chernobyl (and now Fukushima Daiichi), growing concerns over global warming have brought nuclear power back on the table as a viable energy alternative. By the end of 2009, there were 442 reactors operating in 30 countries, providing 15% of the world's electricity. France provides over 75% of its electricity with nuclear power, while the US has 104 operating reactors providing 20% of its electricity. Australia and New Zealand have none. China is building nuclear power plants at the rate of one start every month.

The people living near this nuclear power plant have no measurable exposure to radiation that is traceable to the plant. About 16% of the world's electrical power is generated by controlled nuclear fission in such plants. The cooling towers are the most prominent features but are not unique to nuclear power. The reactor is in the small domed building to the left of the towers. (credit:

Fission is the opposite of fusion and releases energy only when heavy nuclei are split. As noted in Fusion, energy is released if the products of a nuclear reaction have a greater binding energy per nucleon (BE/A) than the parent nuclei.
[link] shows that BE/A is greater for medium-mass nuclei than heavy nuclei, implying that when a heavy nucleus is split, the products have less mass per nucleon, so that mass is destroyed and energy is released in the reaction. The amount of energy per fission reaction can be large, even by nuclear standards. The graph in [link] shows BE/A to be about 7.6 MeV/nucleon for the heaviest nuclei (A about 240), while BE/A is about 8.6 MeV/nucleon for nuclei having A about 120. Thus, if a heavy nucleus splits in half, then about 1 MeV per nucleon, or approximately 240 MeV per fission, is released. This is about 10 times the energy per fusion reaction, and about 100 times the energy of the average α, β, or γ decay.

Calculating Energy Released by Fission

Calculate the energy released in the following spontaneous fission reaction:

    238U → 95Sr + 140Xe + 3n,

given the atomic masses to be m(238U) = 238.050784 u, m(95Sr) = 94.919388 u, m(140Xe) = 139.921610 u, and m(n) = 1.008665 u.

As always, the energy released is equal to the mass destroyed times c², so we must find the difference in mass between the parent 238U and the fission products.

The products have a total mass of

    m_products = 94.919388 u + 139.921610 u + 3(1.008665 u) = 237.866993 u.
The mass lost is the mass of 238U minus m_products, or

    Δm = 238.050784 u − 237.866993 u = 0.183791 u,

so the energy released is

    E = (Δm)c² = (0.183791 u)(931.5 MeV/c²/u)c² = 171.2 MeV.

A number of important things arise in this example. The 171-MeV energy released is large, but a little less than the earlier estimated 240 MeV. This is because this fission reaction produces neutrons and does not split the nucleus into two equal parts. Fission of a given nuclide, such as 238U, does not always produce the same products. Fission is a statistical process in which an entire range of products are produced with various probabilities. Most fission produces neutrons, although the number varies with each fission. This is an extremely important aspect of fission, because neutrons can induce more fission, enabling self-sustaining chain reactions.

Spontaneous fission can occur, but this is usually not the most common decay mode for a given nuclide. For example, 238U can spontaneously fission, but it decays mostly by α emission. Neutron-induced fission is crucial as seen in [link]. Being chargeless, even low-energy neutrons can strike a nucleus and be absorbed once they feel the attractive nuclear force. Large nuclei are described by a liquid drop model with surface tension and oscillation modes, because the large number of nucleons act like atoms in a drop. The neutron is attracted and thus, deposits energy, causing the nucleus to deform as a liquid drop. If stretched enough, the nucleus narrows in the middle.
The number of nucleons in contact and the strength of the nuclear force binding the nucleus together are reduced. Coulomb repulsion between the two ends then succeeds in fissioning the nucleus, which pops like a water drop into two large pieces and a few neutrons. Neutron-induced fission can be written as

    n + AX → FF1 + FF2 + xn,

where FF1 and FF2 are the two daughter nuclei, called fission fragments, and x is the number of neutrons produced. Most often, the masses of the fission fragments are not the same. Most of the released energy goes into the kinetic energy of the fission fragments, with the remainder going into the neutrons and excited states of the fragments. Since neutrons can induce fission, a self-sustaining chain reaction is possible, provided more than one neutron is produced on average; that is, if x > 1 in n + AX → FF1 + FF2 + xn. This can also be seen in [link].

An example of a typical neutron-induced fission reaction is

    n + 235U → 142Ba + 91Kr + 3n.

Note that in this equation, the total charge remains the same (is conserved): 92 + 0 = 56 + 36. Also, as far as whole numbers are concerned, the mass is constant: 1 + 235 = 142 + 91 + 3. This is not true when we consider the masses out to 6 or 7 significant places, as in the previous example.

Neutron-induced fission is shown. First, energy is put into this large nucleus when it absorbs a neutron. Acting like a struck liquid drop, the nucleus deforms and begins to narrow in the middle. Since fewer nucleons are in contact, the repulsive Coulomb force is able to break the nucleus into two parts with some neutrons also flying away.
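The mass-defect arithmetic in the worked fission example above can be checked numerically. The following is a Python sketch for illustration only, using the atomic masses and the 931.5 MeV per unified mass unit conversion quoted in the text:

```python
# Verify the worked example: energy released in the spontaneous fission
#   238U -> 95Sr + 140Xe + 3n
# using the atomic masses from the text (in unified mass units, u).
U238 = 238.050784
Sr95 = 94.919388
Xe140 = 139.921610
n_mass = 1.008665
U_TO_MEV = 931.5  # energy equivalent of 1 u, in MeV

m_products = Sr95 + Xe140 + 3 * n_mass   # total product mass, ~237.866993 u
delta_m = U238 - m_products              # mass destroyed, ~0.183791 u
E = delta_m * U_TO_MEV                   # energy released, ~171.2 MeV

print(f"mass destroyed = {delta_m:.6f} u, energy released = {E:.1f} MeV")
```

The same three-line pattern (sum the product masses, subtract from the parent mass, convert to MeV) applies to every mass-defect problem in the exercises.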
A chain reaction can produce self-sustained fission if each fission produces enough neutrons to induce at least one more fission. This depends on several factors, including how many neutrons are produced in an average fission and how easy it is to make a particular type of nuclide fission.

Not every neutron produced by fission induces fission. Some neutrons escape the fissionable material, while others interact with a nucleus without making it fission. We can enhance the number of fissions produced by neutrons by having a large amount of fissionable material. The minimum amount necessary for self-sustained fission of a given nuclide is called its critical mass. Some nuclides, such as 239Pu, produce more neutrons per fission than others, such as 235U. Additionally, some nuclides are easier to make fission than others. In particular, 235U and 239Pu are easier to fission than the much more abundant 238U. Both factors affect critical mass, which is smallest for 239Pu.

The reason 235U and 239Pu are easier to fission than 238U is that the nuclear force is more attractive for an even number of neutrons in a nucleus than for an odd number. Consider that 235U has 143 neutrons, and 239Pu has 145 neutrons, whereas 238U has 146.
When a neutron encounters a nucleus with an odd number of neutrons, the nuclear force is more attractive, because the additional neutron will make the number even. About 2 MeV more energy is deposited in the resulting nucleus than would be the case if the number of neutrons was already even. This extra energy produces greater deformation, making fission more likely. Thus, 235U and 239Pu are superior fission fuels. The isotope 235U is only 0.72% of natural uranium, while 238U is 99.27%, and 239Pu does not exist in nature. Australia has the largest deposits of uranium in the world, standing at 28% of the total. This is followed by Kazakhstan and Canada. The US has only 3% of global reserves.

Most fission reactors utilize 235U, which is separated from 238U at some expense. This is called enrichment. The most common separation method is gaseous diffusion of uranium hexafluoride (UF6) through membranes. Since 235U has less mass than 238U, its UF6 molecules have higher average velocity at the same temperature and diffuse faster.

Another interesting characteristic of 235U is that it preferentially absorbs very slow moving neutrons (with energies a fraction of an eV), whereas fission reactions produce fast neutrons with energies on the order of an MeV. To make a self-sustained fission reactor with 235U, it is thus necessary to slow down ("thermalize") the neutrons.
Water is very effective, since neutrons collide with protons in water molecules and lose energy. [link] shows a schematic of a reactor design, called the pressurized water reactor.

A pressurized water reactor is cleverly designed to control the fission of large amounts of 235U, while using the heat produced in the fission reaction to create steam for generating electrical energy. Control rods adjust neutron flux so that criticality is obtained, but not exceeded. In case the reactor overheats and boils the water away, the chain reaction terminates, because water is needed to thermalize the neutrons. This inherent safety feature can be overwhelmed in extreme circumstances.

Control rods containing nuclides that very strongly absorb neutrons are used to adjust neutron flux. To produce large power, reactors contain hundreds to thousands of critical masses, and the chain reaction easily becomes self-sustaining, a condition called criticality. Neutron flux should be carefully regulated to avoid an exponential increase in fissions, a condition called supercriticality. Control rods help prevent overheating, perhaps even a meltdown or explosive disassembly. The water that is used to thermalize neutrons, necessary to get them to induce fission in 235U and achieve criticality, provides a negative feedback for temperature increases. In case the reactor overheats and boils the water to steam or is breached, the absence of water kills the chain reaction. Considerable heat, however, can still be generated by the reactor's radioactive fission products. Other safety features, thus, need to be incorporated in the event of a loss of coolant accident, including auxiliary cooling water and pumps.
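The distinction between criticality and supercriticality can be illustrated with a toy generation-by-generation model. This is a deliberately crude Python sketch, not a reactor-physics calculation: the multiplication factor k is a hypothetical stand-in for the average number of neutrons per fission that go on to induce another fission, and the starting population is arbitrary.

```python
# Toy chain-reaction model: N_{g+1} = k * N_g, where k is the effective
# number of fission-inducing neutrons produced per fission.
#   k < 1 -> subcritical   (reaction dies out)
#   k = 1 -> critical      (self-sustaining, steady)
#   k > 1 -> supercritical (exponential growth in fissions)
def neutron_population(k, n0=1000.0, generations=50):
    """Return the neutron population after the given number of generations."""
    n = n0
    for _ in range(generations):
        n *= k
    return n

sub = neutron_population(0.95)   # decays toward zero
crit = neutron_population(1.00)  # stays constant
sup = neutron_population(1.05)   # grows exponentially
```

Control rods, in this picture, are a way of nudging k back toward exactly 1 when it drifts above it.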
Calculating Energy from a Kilogram of Fissionable Fuel

Calculate the amount of energy produced by the fission of 1.00 kg of 235U, given the average fission reaction of 235U produces 200 MeV.

The total energy produced is the number of 235U atoms times the given energy per 235U fission. We should therefore find the number of 235U atoms in 1.00 kg.

The number of 235U atoms in 1.00 kg is Avogadro's number times the number of moles. One mole of 235U has a mass of 235.04 g; thus, there are (1000 g)/(235.04 g/mol) = 4.25 mol. The number of 235U atoms is therefore

    (4.25 mol)(6.02 × 10^23 235U/mol) = 2.56 × 10^24 235U.

So the total energy released is

    E = (2.56 × 10^24 235U)(200 MeV / 235U)(1.60 × 10^−13 J/MeV) = 8.21 × 10^13 J.

This is another impressively large amount of energy, equivalent to about 14,000 barrels of crude oil or 600,000 gallons of gasoline. But, it is only one-fourth the energy produced by the fusion of a kilogram mixture of deuterium and tritium as seen in [link].
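This per-kilogram calculation can also be checked directly; the Python sketch below is illustrative only and uses the same rounded constants as the text, so it lands near 8.2 × 10^13 J, agreeing with the worked answer to rounding.

```python
# Check the worked example: energy from fissioning 1.00 kg of 235U.
AVOGADRO = 6.02e23           # atoms/mol (rounded, as in the text)
MOLAR_MASS = 235.04          # g/mol for 235U
E_PER_FISSION_MEV = 200.0    # average energy per fission
MEV_TO_J = 1.60e-13          # joules per MeV (rounded, as in the text)

moles = 1000.0 / MOLAR_MASS                        # ~4.25 mol in 1.00 kg
atoms = moles * AVOGADRO                           # ~2.56e24 atoms
E_joules = atoms * E_PER_FISSION_MEV * MEV_TO_J    # ~8.2e13 J

print(f"{atoms:.2e} atoms release {E_joules:.2e} J")
```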
Even though each fission reaction yields about ten times the energy of a fusion reaction, the energy per kilogram of fission fuel is less, because there are far fewer moles per kilogram of the heavy nuclides. Fission fuel is also much more scarce than fusion fuel, and less than 1% of uranium (the 235U) is readily usable.

One nuclide already mentioned is 239Pu, which has a 24,120-y half-life and does not exist in nature. Plutonium-239 is manufactured from 238U in reactors, and it provides an opportunity to utilize the other 99% of natural uranium as an energy source. The following reaction sequence, called breeding, produces 239Pu. Breeding begins with neutron capture by 238U:

    238U + n → 239U + γ.

Uranium-239 then β⁻ decays:

    239U → 239Np + β⁻ + ν̄e   (t1/2 = 23 min).

Neptunium-239 also β⁻ decays:

    239Np → 239Pu + β⁻ + ν̄e   (t1/2 = 2.4 d).

Plutonium-239 builds up in reactor fuel at a rate that depends on the probability of neutron capture by 238U (all reactor fuel contains more 238U than 235U). Reactors designed specifically to make plutonium are called breeder reactors. They seem to be inherently more hazardous than conventional reactors, but it remains unknown whether their hazards can be made economically acceptable. The four reactors at Chernobyl, including the one that was destroyed, were built to breed plutonium and produce electricity. These reactors had a design that was significantly different from the pressurized water reactor illustrated above.
Plutonium-239 has advantages over 235U as a reactor fuel: it produces more neutrons per fission on average, and it is easier for a thermal neutron to cause it to fission. It is also chemically different from uranium, so it is inherently easier to separate from uranium ore. This means 239Pu has a particularly small critical mass, an advantage for nuclear weapons.

PhET Explorations: Nuclear Fission

Start a chain reaction, or introduce non-radioactive isotopes to prevent one. Control energy production in a nuclear reactor!

Section Summary

• Nuclear fission is a reaction in which a nucleus is split.
• Fission releases energy when heavy nuclei are split into medium-mass nuclei.
• Self-sustained fission is possible, because neutron-induced fission also produces neutrons that can induce other fissions, n + AX → FF1 + FF2 + xn, where FF1 and FF2 are the two daughter nuclei, or fission fragments, and x is the number of neutrons produced.
• A minimum mass, called the critical mass, should be present to achieve criticality.
• More than a critical mass can produce supercriticality.
• The production of new or different isotopes (especially 239Pu) by nuclear transformation is called breeding, and reactors designed for this purpose are called breeder reactors.

Conceptual Questions

Explain why the fission of heavy nuclei releases energy. Similarly, why is it that energy input is required to fission light nuclei?

Explain, in terms of conservation of momentum and energy, why collisions of neutrons with protons will thermalize neutrons better than collisions with oxygen.

The ruins of the Chernobyl reactor are enclosed in a huge concrete structure built around it after the accident. Some rain penetrates the building in winter, and radioactivity from the building increases.
What does this imply is happening inside?

Since the uranium or plutonium nucleus fissions into several fission fragments whose mass distribution covers a wide range of pieces, would you expect more residual radioactivity from fission than fusion? Explain.

The core of a nuclear reactor generates a large amount of thermal energy from the decay of fission products, even when the power-producing fission chain reaction is turned off. Would this residual heat be greatest after the reactor has run for a long time or a short time? What if the reactor has been shut down for months?

How can a nuclear reactor contain many critical masses and not go supercritical? What methods are used to control the fission in the reactor?

Why can heavy nuclei with odd numbers of neutrons be induced to fission with thermal neutrons, whereas those with even numbers of neutrons require more energy input to induce fission?

Why is a conventional fission nuclear reactor not able to explode as a bomb?

Problem Exercises

(a) Calculate the energy released in the neutron-induced fission (similar to the spontaneous fission in [link])

    n + 238U → 96Sr + 140Xe + 3n,

given m(96Sr) = 95.921750 u and m(140Xe) = 139.92164 u.

(b) This result is about 6 MeV greater than the result for spontaneous fission. Why?

(c) Confirm that the total number of nucleons and total charge are conserved in this reaction.

(a) 177.1 MeV

(b) Because the gain of an external neutron yields about 6 MeV, which is the average BE/A for heavy nuclei.

(c) A: 1 + 238 = 96 + 140 + 1 + 1 + 1; Z: 92 = 38 + 54; efn: 0 = 0

(a) Calculate the energy released in the neutron-induced fission reaction given m(92Kr) = 91.926269 u and

(b) Confirm that the total number of nucleons and total charge are conserved in this reaction.
(a) Calculate the energy released in the neutron-induced fission reaction given m(96Sr) = 95.921750 u and m(140Ba) = 139.910581 u.

(b) Confirm that the total number of nucleons and total charge are conserved in this reaction.

(a) 180.6 MeV

(b) A: 1 + 239 = 96 + 140 + 1 + 1 + 1 + 1; Z: 94 = 38 + 56; efn: 0 = 0

Confirm that each of the reactions listed for plutonium breeding just following [link] conserves the total number of nucleons, the total charge, and electron family number.

Breeding plutonium produces energy even before any plutonium is fissioned. (The primary purpose of the four nuclear reactors at Chernobyl was breeding plutonium for weapons. Electrical power was a by-product used by the civilian population.) Calculate the energy produced in each of the reactions listed for plutonium breeding just following [link]. The pertinent masses are m(239U) = 239.054289 u, m(239Np) = 239.052932 u, and m(239Pu) = 239.052157 u.

    238U + n → 239U + γ           4.81 MeV
    239U → 239Np + β⁻ + ν̄e       0.753 MeV
    239Np → 239Pu + β⁻ + ν̄e      0.211 MeV

The naturally occurring radioactive isotope 232Th does not make good fission fuel, because it has an even number of neutrons; however, it can be bred into a suitable fuel (much as 238U is bred into 239Pu).

(a) What are Z and N for 232Th?
(b) Write the reaction equation for neutron capture by 232Th and identify the nuclide AX produced in the reaction.

(c) The product nucleus β⁻ decays, as does its daughter. Write the decay equations for each, and identify the final nucleus.

(d) Confirm that the final nucleus has an odd number of neutrons, making it a better fission fuel.

(e) Look up the half-life of the final nucleus to see if it lives long enough to be a useful fuel.

The electrical power output of a large nuclear reactor facility is 900 MW. It has a 35.0% efficiency in converting nuclear power to electrical power.

(a) What is the thermal nuclear power output in megawatts?

(b) How many 235U nuclei fission each second, assuming the average fission produces 200 MeV?

(c) What mass of 235U is fissioned in one year of full-power operation?

(a) 2.57 × 10^3 MW

(b) 8.03 × 10^19 fissions/s

(c) 991 kg

A large power reactor that has been in operation for some months is turned off, but residual activity in the core still produces 150 MW of power. If the average energy per decay of the fission products is 1.00 MeV, what is the core activity in curies?

Glossary

breeder reactors: reactors that are designed specifically to make plutonium

breeding: reaction process that produces 239Pu

criticality: condition in which a chain reaction easily becomes self-sustaining

critical mass: minimum amount necessary for self-sustained fission of a given nuclide

fission fragments: daughter nuclei produced by fission

liquid drop model: a model of the nucleus (used only to understand some of its features) in which nucleons in a nucleus act like atoms in a drop

nuclear fission: reaction in which a nucleus splits

neutron-induced fission: fission that is initiated after the absorption of a neutron

supercriticality: an exponential increase in fissions
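The 900-MW reactor exercise above reduces to three unit conversions, and its quoted answers can be reproduced numerically. This Python sketch is illustrative only; the constants are the rounded values used elsewhere in the chapter, and the seconds-per-year figure is an approximation.

```python
# Check the reactor problem: 900 MW electrical output at 35.0% efficiency.
SECONDS_PER_YEAR = 3.16e7            # approximate
MEV_TO_J = 1.60e-13                  # joules per MeV
E_PER_FISSION_J = 200.0 * MEV_TO_J   # 200 MeV per fission, in joules
AVOGADRO = 6.02e23                   # atoms/mol
MOLAR_MASS = 235.04                  # g/mol for 235U

# (a) thermal power = electrical power / efficiency
P_thermal = 900e6 / 0.350                       # ~2.57e9 W

# (b) fissions per second = thermal power / energy per fission
fissions_per_s = P_thermal / E_PER_FISSION_J    # ~8.03e19 /s

# (c) mass fissioned per year of full-power operation
fissions_per_yr = fissions_per_s * SECONDS_PER_YEAR
mass_kg = fissions_per_yr / AVOGADRO * MOLAR_MASS / 1000.0   # ~991 kg
```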
Research Methods in Psychology – 2nd Canadian Edition

Chapter 12: Descriptive Statistics

1. Write out simple descriptive statistics in American Psychological Association (APA) style.
2. Interpret and create simple APA-style graphs—including bar graphs, line graphs, and scatterplots.
3. Interpret and create simple APA-style tables—including tables of group or condition means and correlation matrixes.

Once you have conducted your descriptive statistical analyses, you will need to present them to others. In this section, we focus on presenting descriptive statistical results in writing, in graphs, and in tables—following American Psychological Association (APA) guidelines for written research reports. These principles can be adapted easily to other presentation formats such as posters and slide show presentations.

Presenting Descriptive Statistics in Writing

When you have a small number of results to report, it is often most efficient to write them out. There are a few important APA style guidelines here. First, statistical results are always presented in the form of numerals rather than words and are usually rounded to two decimal places (e.g., "2.00" rather than "two" or "2"). They can be presented either in the narrative description of the results or parenthetically—much like reference citations. Here are some examples:

The mean age of the participants was 22.43 years with a standard deviation of 2.34.

Among the low self-esteem participants, those in a negative mood expressed stronger intentions to have unprotected sex (M = 4.05, SD = 2.32) than those in a positive mood (M = 2.15, SD = 2.27).

The treatment group had a mean of 23.40 (SD = 9.33), while the control group had a mean of 20.87 (SD = 8.45).

The test-retest correlation was .96.

There was a moderate negative correlation between the alphabetical position of respondents' last names and their response time (r = −.27).
Notice that when presented in the narrative, the terms mean and standard deviation are written out, but when presented parenthetically, the symbols M and SD are used instead. Notice also that it is especially important to use parallel construction to express similar or comparable results in similar ways. The third example is much better than the following nonparallel alternative:

The treatment group had a mean of 23.40 (SD = 9.33), while 20.87 was the mean of the control group, which had a standard deviation of 8.45.

Presenting Descriptive Statistics in Graphs

When you have a large number of results to report, you can often do it more clearly and efficiently with a graph. When you prepare graphs for an APA-style research report, there are some general guidelines that you should keep in mind. First, the graph should always add important information rather than repeat information that already appears in the text or in a table. (If a graph presents information more clearly or efficiently, then you should keep the graph and eliminate the text or table.) Second, graphs should be as simple as possible. For example, the Publication Manual discourages the use of colour unless it is absolutely necessary (although colour can still be an effective element in posters, slide show presentations, or textbooks). Third, graphs should be interpretable on their own. A reader should be able to understand the basic result based only on the graph and its caption and should not have to refer to the text for an explanation.

There are also several more technical guidelines for graphs that include the following:

• Layout
  □ The graph should be slightly wider than it is tall.
  □ The independent variable should be plotted on the x-axis and the dependent variable on the y-axis.
  □ Values should increase from left to right on the x-axis and from bottom to top on the y-axis.
• Axis Labels and Legends □ Axis labels should be clear and concise and include the units of measurement if they do not appear in the caption. □ Axis labels should be parallel to the axis. □ Legends should appear within the boundaries of the graph. □ Text should be in the same simple font throughout and differ by no more than four points. • Captions □ Captions should briefly describe the figure, explain any abbreviations, and include the units of measurement if they do not appear in the axis labels. □ Captions in an APA manuscript should be typed on a separate page that appears at the end of the manuscript. See Chapter 11 for more information. “Convincing” [Long Description] Bar Graphs As we have seen throughout this book, bar graphs are generally used to present and compare the mean scores for two or more groups or conditions. The bar graph in Figure 12.11 is an APA-style version of Figure 12.4. Notice that it conforms to all the guidelines listed. A new element in Figure 12.11 is the smaller vertical bars that extend both upward and downward from the top of each main bar. These are error bars, and they represent the variability in each group or condition. Although they sometimes extend one standard deviation in each direction, they are more likely to extend one standard error in each direction (as in Figure 12.11). The standard error is the standard deviation of the group divided by the square root of the sample size of the group. The standard error is used because, in general, a difference between group means that is greater than two standard errors is statistically significant. Thus one can “see” whether a difference is statistically significant based on a bar graph with error bars. 
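The error-bar rule of thumb is easy to check directly. Here is a brief Python sketch (using made-up ratings, not data from the studies discussed in this chapter) that computes each group's mean, standard deviation, and standard error, and applies the guideline that a difference of more than about two standard errors is statistically significant:

```python
import math

def describe(scores):
    """Return (mean, sample standard deviation, standard error) for a group."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample standard deviation (n - 1 in the denominator).
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    se = sd / math.sqrt(n)  # standard error = SD / sqrt(n)
    return mean, sd, se

# Hypothetical severity ratings for two conditions (illustrative only).
education = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]
exposure = [2, 3, 2, 4, 3, 2, 3, 3, 2, 3]

m1, sd1, se1 = describe(education)
m2, sd2, se2 = describe(exposure)

# Rule of thumb from the text: a mean difference larger than about
# two standard errors is statistically significant.
difference = m1 - m2
two_se = 2 * max(se1, se2)
print(f"M1 = {m1:.2f} (SD = {sd1:.2f}), M2 = {m2:.2f} (SD = {sd2:.2f})")
print("significant" if difference > two_se else "not significant")
```

These are the same quantities plotted as bars and error bars in the figures that follow.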
Figure 12.11 Sample APA-Style Bar Graph, With Error Bars Representing the Standard Errors, Based on Research by Ollendick and Colleagues [Long Description] Line Graphs Line graphs are used to present correlations between quantitative variables when the independent variable has, or is organized into, a relatively small number of distinct levels. Each point in a line graph represents the mean score on the dependent variable for participants at one level of the independent variable. Figure 12.12 is an APA-style version of the results of Carlson and Conard. Notice that it includes error bars representing the standard error and conforms to all the stated guidelines. Figure 12.12 Sample APA-Style Line Graph Based on Research by Carlson and Conard [Long Description] In most cases, the information in a line graph could just as easily be presented in a bar graph. In Figure 12.12, for example, one could replace each point with a bar that reaches up to the same level and leave the error bars right where they are. This emphasizes the fundamental similarity of the two types of statistical relationship. Both are differences in the average score on one variable across levels of another. The convention followed by most researchers, however, is to use a bar graph when the variable plotted on the x-axis is categorical and a line graph when it is quantitative. Scatterplots are used to present relationships between quantitative variables when the variable on the x-axis (typically the independent variable) has a large number of levels. Each point in a scatterplot represents an individual rather than the mean for a group of individuals, and there are no lines connecting the points. The graph in Figure 12.13 is an APA-style version of Figure 12.7, which illustrates a few additional points. 
First, when the variables on the x-axis and y-axis are conceptually similar and measured on the same scale—as here, where they are measures of the same variable on two different occasions—this can be emphasized by making the axes the same length. Second, when two or more individuals fall at exactly the same point on the graph, one way this can be indicated is by offsetting the points slightly along the x-axis. Other ways are by displaying the number of individuals in parentheses next to the point or by making the point larger or darker in proportion to the number of individuals. Finally, the straight line that best fits the points in the scatterplot, which is called the regression line, can also be included. Figure 12.13 Sample APA-Style Scatterplot [Long Description] Expressing Descriptive Statistics in Tables Like graphs, tables can be used to present large amounts of information clearly and efficiently. The same general principles apply to tables as apply to graphs. They should add important information to the presentation of your results, be as simple as possible, and be interpretable on their own. Again, we focus here on tables for an APA-style manuscript. The most common use of tables is to present several means and standard deviations—usually for complex research designs with multiple independent and dependent variables. Figure 12.14, for example, shows the results of a hypothetical study similar to the one by MacDonald and Martineau (2002)^[1] discussed in Chapter 5. (The means in Figure 12.14 are the means reported by MacDonald and Martineau, but the standard errors are not). Recall that these researchers categorized participants as having low or high self-esteem, put them into a negative or positive mood, and measured their intentions to have unprotected sex. Although not mentioned in Chapter 5, they also measured participants’ attitudes toward unprotected sex. 
Notice that the table includes horizontal lines spanning the entire table at the top and bottom, and just beneath the column headings. Furthermore, every column has a heading—including the leftmost column—and there are additional headings that span two or more columns that help to organize the information and present it more efficiently. Finally, notice that APA-style tables are numbered consecutively starting at 1 (Table 1, Table 2, and so on) and given a brief but clear and descriptive title. Figure 12.14 Sample APA-Style Table Presenting Means and Standard Deviations [Long Description] Another common use of tables is to present correlations—usually measured by Pearson’s r—among several variables. This kind of table is called a correlation matrix. Figure 12.15 is a correlation matrix based on a study by David McCabe and colleagues (McCabe, Roediger, McDaniel, Balota, & Hambrick, 2010)^[2]. They were interested in the relationships between working memory and several other variables. We can see from the table that the correlation between working memory and executive function, for example, was an extremely strong .96, that the correlation between working memory and vocabulary was a medium .27, and that all the measures except vocabulary tend to decline with age. Notice here that only half the table is filled in because the other half would have identical values. For example, the Pearson’s r value in the upper right corner (working memory and age) would be the same as the one in the lower left corner (age and working memory). The correlation of a variable with itself is always 1.00, so these values are replaced by dashes to make the table easier to read. Figure 12.15 Sample APA-Style Table (Correlation Matrix) Based on Research by McCabe and Colleagues [Long Description] As with graphs, precise statistical results that appear in a table do not need to be repeated in the text. 
Instead, the writer can note major trends and alert the reader to details (e.g., specific correlations) that are of particular interest. • In an APA-style article, simple results are most efficiently presented in the text, while more complex results are most efficiently presented in graphs or tables. • APA style includes several rules for presenting numerical results in the text. These include using words only for numbers less than 10 that do not represent precise statistical results, and rounding results to two decimal places, using words (e.g., “mean”) in the text and symbols (e.g., “M”) in parentheses. • APA style includes several rules for presenting results in graphs and tables. Graphs and tables should add information rather than repeating information, be as simple as possible, and be interpretable on their own with a descriptive caption (for graphs) or a descriptive title (for tables). 1. Practice: In a classic study, men and women rated the importance of physical attractiveness in both a short-term mate and a long-term mate (Buss & Schmitt, 1993)^[3]. The means and standard deviations are as follows. Men / Short Term: M = 5.67, SD = 2.34; Men / Long Term: M = 4.43, SD = 2.11; Women / Short Term: M = 5.67, SD = 2.48; Women / Long Term: M = 4.22, SD = 1.98. Present these results 1. in writing 2. in a graph 3. in a table Long Descriptions “Convincing” long description: A four-panel comic strip. 
In the first panel, a man says to a woman, “I think we should give it another shot.” The woman says, “We should break up, and I can prove it.” In the second panel, there is a line graph with a downward trend titled “Our Relationship.” In the third panel, the man, bent over and looking at the graph in the woman’s hands, says, “Huh.” In the fourth panel, the man says, “Maybe you’re right.” The woman says, “I knew data would convince you.” The man replies, “No, I just think I can do better than someone who doesn’t label her axes.” [Return to “Convincing”] Figure 12.11 long description: A sample APA-style bar graph, with a horizontal axis labelled “Condition” and a vertical axis labelled “Clinician Rating of Severity.” The caption of the graph says, “Figure X. Mean clinician’s rating of phobia severity for participants receiving the education treatment and the exposure treatment. Error bars represent standard errors.” At the top of each data bar is an error bar, which look likes a capital I: a vertical line with short horizontal lines attached to its top and bottom. The bottom half of each error bar hangs over the top of the data bar, while each top half sticks out the top of the data bar. [Return to Figure 12.11] Figure 12.12 long description: A sample APA-style line graph with a horizontal axis labelled “Last Name Quartile” and a vertical axis labelled “Response Times (z Scores).” The caption of the graph says, “Figure X. Mean response time by the alphabetical position of respondents’ names in the alphabet. Response times are expressed as z scores. Error bars represent standard errors.” Each data point has an error bar sticking out of its top and bottom. [Return to Figure 12.12] Figure 12.13 long description: Sample APA-style scatterplot with a horizontal axis labelled “Time 1” and a vertical axis labelled “Time 2.” Each axis has values from 10 to 30. The caption of the scatterplot says, “Figure X. 
Relationship between scores on the Rosenberg self-esteem scale taken by 25 research methods students on two occasions one week apart. Pearson’s r = .96.” Most of the data points are clustered around the dashed regression line that extends from approximately (12, 11) to (29, 22). [Return to Figure 12.13]

Figure 12.14 long description: Sample APA-style table presenting means and standard deviations. The table is titled “Table X” and is captioned, “Means and Standard Deviations of Intentions to Have Unprotected Sex and Attitudes Toward Unprotected Sex as a Function of Both Mood and Self-Esteem.” The data is organized into negative and positive mood and details intentions and attitudes toward unprotected sex.

Negative mood:
• Intentions
□ High—Mean, 2.46
□ High—Standard Deviation, 1.97
□ Low—Mean, 4.05
□ Low—Standard Deviation, 2.32
• Attitudes
□ High—Mean, 1.65
□ High—Standard Deviation, 2.23
□ Low—Mean, 1.95
□ Low—Standard Deviation, 2.01

Positive mood:
• Intentions
□ High—Mean, 2.45
□ High—Standard Deviation, 2.00
□ Low—Mean, 2.15
□ Low—Standard Deviation, 2.27
• Attitudes
□ High—Mean, 1.82
□ High—Standard Deviation, 2.32
□ Low—Mean, 1.23
□ Low—Standard Deviation, 1.75

Figure 12.15 long description: Sample APA-style correlation matrix, titled “Table X: Correlations Between Five Cognitive Variables and Age.” The five cognitive variables are:

1. Working memory
2. Executive function
3. Processing speed
4. Vocabulary
5. Episodic memory

The data is as such:

Table X: Correlations Between Five Cognitive Variables and Age
Measure                 1      2      3      4      5
1. Working memory       —
2. Executive function   .96    —
3. Processing speed     .78    .78    —
4. Vocabulary           .27    .45    .08    —
5. Episodic memory      .73    .75    .52    .38    —
6. Age                  −.59   −.56   −.82   .22    −.41

Media Attributions

1. MacDonald, T. K., & Martineau, A. M. (2002). Self-esteem, mood, and intentions to use condoms: When does low self-esteem lead to risky health behaviours? Journal of Experimental Social Psychology, 38, 299–306. ↵
2. McCabe, D. P., Roediger, H.
L., McDaniel, M. A., Balota, D. A., & Hambrick, D. Z. (2010). The relationship between working memory capacity and executive functioning. Neuropsychology, 24(2), 222–243. doi:10.1037/a0017619 ↵ 3. Buss, D. M., & Schmitt, D. P. (1993). Sexual strategies theory: A contextual evolutionary analysis of human mating. Psychological Review, 100, 204–232. ↵ A figure in which the heights of the bars represent the group means. Small bars at the top of each main bar in a bar graph that represent the variability in each group or condition. The standard deviation of the group divided by the square root of the sample size of the group. A graph used to present correlations between quantitative variables when the independent variable has, or is organized into, a relatively small number of distinct levels. A graph which shows correlations between quantitative variables; each point represents one person’s score on both variables. A table showing the correlation between every possible pair of variables in the study.
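As a concrete companion to the correlation-matrix discussion in this chapter, here is a short Python sketch (with invented scores, not the McCabe et al. data) that computes Pearson's r for every pair of variables and prints an APA-style lower-triangular matrix with dashes on the diagonal:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy scores for three variables measured on the same five people.
data = {
    "Working memory": [10, 12, 9, 14, 11],
    "Processing speed": [30, 34, 28, 40, 33],
    "Age": [60, 45, 70, 40, 50],
}

names = list(data)
rows = []
for i, ni in enumerate(names):
    row = [f"{i + 1}. {ni}"]
    for j in range(len(names)):
        if j < i:
            # Lower triangle: report r to two decimals, APA style (no leading 0).
            r = pearson_r(data[ni], data[names[j]])
            row.append(f"{r:.2f}".replace("0.", ".", 1))
        elif j == i:
            row.append("—")   # diagonal: correlation of a variable with itself
        else:
            row.append("")    # upper triangle left blank
    rows.append(row)

for row in rows:
    print("  ".join(f"{cell:<20}" for cell in row))
```

Only half the matrix is printed, for the reason given above: the upper triangle would simply mirror the lower one.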
Calculating residual spatial autocorrelation

Perhaps the most famous sentence in spatial analysis is Tobler’s first law of geography, from Tobler (1970): “Everything is related to everything else, but near things are more related than distant things.” Spatial data often exhibits spatial autocorrelation, where variables of interest are not distributed at random but rather exhibit spatial patterns; in particular, spatial data is often clustered (exhibiting positive spatial autocorrelation) such that locations near each other are more similar than you’d expect if you had just sampled two observations at random.

For some data, this makes intuitive sense. The elevation at two neighboring points is extremely likely to be similar, as is the precipitation and temperature; these are variables whose values depend on (among other things) your position on the Earth. However, the first law is often over-interpreted. Pebesma and Bivand (2022) present an interesting discussion of the “first law”, quoting Olsson (1970) who says:

[T]he fact that the autocorrelations seem to hide systematic specification errors suggests that the elevation of this statement to the status of ‘the first law of geography’ is at best premature. At worst, the statement may represent the spatial variant of the post hoc fallacy, which would mean that coincidence has been mistaken for a causal relation.

Oftentimes, finding spatial autocorrelation in a variable is a result of that variable depending on other variables, which may or may not be spatially dependent themselves. For instance, house prices often exhibit positive autocorrelation, not because home prices are determined by their relative position on Earth, but because house prices rely upon other variables – school zones, median income, housing availability and more – which may themselves be spatially autocorrelated.
For that reason, it’s often worthwhile to look at the spatial autocorrelation of model residuals, to see if your model makes more errors in certain regions than you’d expect if errors were randomly arranged. That can help you to identify misspecifications in your model: seeing large autocorrelations in model residuals in an area might suggest that you’re missing variables in your model, and knowing which areas your model does worse in can help you to identify what those variables might be. Even if you can’t fix your model, it’s often useful to identify regions your model does notably worse in, so that you can communicate that to whoever winds up using your predictions. Let’s walk through how we can use waywiser to find local indicators of spatial autocorrelation for a very simple model. First things first, let’s load a few libraries: We’ll be working with the guerry data included in waywiser package. We’ll fit a simple linear model relating crimes against persons with literacy, and then generate predictions from that model. We can use ww_local_moran_i() to calculate the local spatial autocorrelation of our residuals at each data point: guerry %>% mutate(pred = predict(lm(Crm_prs ~ Litercy, .))) %>% ww_local_moran_i(Crm_prs, pred) #> # A tibble: 85 × 3 #> .metric .estimator .estimate #> <chr> <chr> <dbl> #> 1 local_moran_i standard 0.530 #> 2 local_moran_i standard 0.858 #> 3 local_moran_i standard 0.759 #> 4 local_moran_i standard 0.732 #> 5 local_moran_i standard 0.207 #> 6 local_moran_i standard 0.860 #> 7 local_moran_i standard 0.692 #> 8 local_moran_i standard 1.69 #> 9 local_moran_i standard -0.0109 #> 10 local_moran_i standard 0.710 #> # ℹ 75 more rows If you’re familiar with spdep, you can probably guess that waywiser is doing something under the hood here to calculate which of our observations are neighbors, and how to create spatial weights from those neighborhoods. 
And that guess would be right – waywiser is making use of two functions, ww_build_neighbors() and ww_build_weights(), in order to automatically calculate spatial weights for calculating metrics:

ww_build_neighbors(guerry)
#> Neighbour list object:
#> Number of regions: 85
#> Number of nonzero links: 420
#> Percentage nonzero weights: 5.813149
#> Average number of links: 4.941176

ww_build_weights(guerry)
#> Characteristics of weights list object:
#> Neighbour list object:
#> Number of regions: 85
#> Number of nonzero links: 420
#> Percentage nonzero weights: 5.813149
#> Average number of links: 4.941176
#> Weights style: W
#> Weights constants summary:
#> n nn S0 S1 S2
#> W 85 7225 85 37.2761 347.6683

These functions aren’t always the best way to calculate spatial weights for your data, however. As a result, waywiser also lets you specify your own weights directly:

weights <- guerry %>%
  sf::st_geometry() %>%
  sf::st_centroid() %>%
  spdep::dnearneigh(0, 97000) %>%
  spdep::nb2listw()

#> Characteristics of weights list object:
#> Neighbour list object:
#> Number of regions: 85
#> Number of nonzero links: 314
#> Percentage nonzero weights: 4.346021
#> Average number of links: 3.694118
#> Weights style: W
#> Weights constants summary:
#> n nn S0 S1 S2
#> W 85 7225 85 51.86738 348.7071

guerry %>%
  mutate(pred = predict(lm(Crm_prs ~ Litercy, .))) %>%
  ww_local_moran_i(Crm_prs, pred, weights)
#> # A tibble: 85 × 3
#> .metric .estimator .estimate
#> <chr> <chr> <dbl>
#> 1 local_moran_i standard 0.530
#> 2 local_moran_i standard 0.794
#> 3 local_moran_i standard 0.646
#> 4 local_moran_i standard 0.687
#> 5 local_moran_i standard 0.207
#> 6 local_moran_i standard 1.49
#> 7 local_moran_i standard 0.692
#> 8 local_moran_i standard 1.69
#> 9 local_moran_i standard -0.000610
#> 10 local_moran_i standard 0.859
#> # ℹ 75 more rows

Or as a function, which lets you use custom weights with other tidymodels functions like fit_resamples():

weights_function <- function(data) {
  data %>%
    sf::st_geometry() %>%
    sf::st_centroid() %>%
    spdep::dnearneigh(0, 97000) %>%
    spdep::nb2listw()
}
guerry %>%
  mutate(pred = predict(lm(Crm_prs ~ Litercy, .))) %>%
  ww_local_moran_i(Crm_prs, pred, weights_function)
#> # A tibble: 85 × 3
#> .metric .estimator .estimate
#> <chr> <chr> <dbl>
#> 1 local_moran_i standard 0.530
#> 2 local_moran_i standard 0.794
#> 3 local_moran_i standard 0.646
#> 4 local_moran_i standard 0.687
#> 5 local_moran_i standard 0.207
#> 6 local_moran_i standard 1.49
#> 7 local_moran_i standard 0.692
#> 8 local_moran_i standard 1.69
#> 9 local_moran_i standard -0.000610
#> 10 local_moran_i standard 0.859
#> # ℹ 75 more rows

Providing custom weights also lets us use ww_local_moran_i_vec() to add a column to our original data frame with our statistic, which makes plotting using our original geometries easier:

weights <- ww_build_weights(guerry)

guerry %>%
  mutate(
    pred = predict(lm(Crm_prs ~ Litercy, .)),
    .estimate = ww_local_moran_i_vec(Crm_prs, pred, weights)
  ) %>%
  sf::st_as_sf() %>%
  ggplot(aes(fill = .estimate)) +
  geom_sf() +
  scale_fill_gradient2(
    "Local Moran",
    low = "#018571", mid = "white", high = "#A6611A"
  )

This makes it easy to see what areas are poorly represented by our model (which have the highest local Moran values), which might lead us to identify ways to improve our model or help us identify caveats and limitations of the models we’re working with. Other functions in waywiser will allow you to calculate the p-value associated with spatial autocorrelation metrics.
You can calculate these alongside the autocorrelation metrics themselves using a yardstick metric set:

moran <- yardstick::metric_set(
  ww_global_moran_i,
  ww_global_moran_pvalue
)

guerry %>%
  mutate(pred = predict(lm(Crm_prs ~ Litercy, .))) %>%
  moran(Crm_prs, pred)
#> # A tibble: 2 × 3
#> .metric .estimator .estimate
#> <chr> <chr> <dbl>
#> 1 global_moran_i standard 4.12e- 1
#> 2 global_moran_pvalue standard 7.23e-10

These functions can also be used on their own to help qualitatively identify regions of concern, which may be poorly represented by your model:

guerry %>%
  mutate(
    pred = predict(lm(Crm_prs ~ Litercy, .)),
    .estimate = ww_local_moran_pvalue_vec(Crm_prs, pred, weights)
  ) %>%
  sf::st_as_sf() %>%
  ggplot(aes(fill = .estimate < 0.01)) +
  geom_sf() +
  scale_fill_discrete("Local Moran p-value < 0.01?") +
  theme(legend.position = "bottom")

This can help identify new predictor variables or other promising refinements to a model during the iterative process of model development. You shouldn’t report p-values without other context as results of your model, but this approach can help qualitatively assess a model during the development process. To use these tests for inference, consider using functions from spdep directly; each autocorrelation function in waywiser links to the spdep function it wraps from its documentation.
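For readers who want to see what the statistic is doing under all this machinery, global Moran's I is straightforward to compute from first principles. The sketch below is deliberately plain Python rather than R (it is not part of waywiser), uses a tiny made-up neighbour structure, and applies row-standardised weights — the same "W" weighting style shown in the spdep summaries in this vignette:

```python
def morans_i(values, neighbors):
    """Global Moran's I with row-standardised ('W' style) weights.

    values:    list of observations x_i
    neighbors: dict mapping each index to the indices of its neighbours
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]

    num = 0.0
    s0 = 0.0  # sum of all weights
    for i, nbrs in neighbors.items():
        if not nbrs:
            continue
        w = 1.0 / len(nbrs)  # row-standardised: each row of weights sums to 1
        for j in nbrs:
            num += w * dev[i] * dev[j]
            s0 += w
    denom = sum(d * d for d in dev)
    return (n / s0) * (num / denom)

# Five areas on a line, each neighbouring the adjacent ones (toy example).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

clustered = [1, 2, 3, 4, 5]    # smoothly increasing -> positive I
alternating = [1, 5, 1, 5, 1]  # checkerboard pattern -> negative I

print(morans_i(clustered, neighbors) > 0)    # True
print(morans_i(alternating, neighbors) < 0)  # True
```

Positive values indicate clustering of similar residuals (the situation of interest above), negative values indicate a checkerboard-like pattern, and values near zero indicate no spatial structure.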
Analysis of arbitrarily oriented thin wire antenna arrays over imperfect ground planes

This work considers the analysis of antennas and arrays of thin wires of arbitrary orientation above imperfectly conducting ground planes. Emphasis is placed on the development of fast and accurate techniques for computing the characteristics of antenna systems. An important problem is the evaluation of certain semi-infinite integrals encountered in the exact Sommerfeld solution. The time required to compute these integrals is reduced by applying interpolatory quadrature formulas. Where applicable, a modified method of steepest descent is used to evaluate the integrals. The approximate reflection coefficient method is derived from the Sommerfeld formulation via the method of steepest descent, and its accuracy relative to the Sommerfeld method is discussed. Finally, formulas convenient for the optimization of various performance indices are presented. Typical indices that have been optimized, both with and without constraints, are directivity, power gain, quality factor, and main beam radiation efficiency.

Interim Report, Syracuse Univ.
Pub Date: December 1975

□ Antenna Arrays;
□ Wire;
□ Integral Equations;
□ Interpolation;
□ Optimization;
□ Quadratures;
□ Electronics and Electrical Engineering
Mechanical properties of metal

The mechanical properties of a metal determine how the metal can be used and what performance can be expected of it. They describe how a material responds to an external load: how it twists, elongates, or compresses, and how it breaks, as a function of applied load, temperature, time, and other conditions. Mechanical properties are expressed in terms of stress and strain, and of elastic and plastic deformation. In the textile industries their importance is multiplied; a detailed explanation follows.

What is toughness?
Toughness is the amount of energy that a material can absorb before fracturing.

What is load?
Load is the external force applied to a body; it is denoted P and measured in newtons (N) or kilograms-force (kgf).

What is stress?
Stress is the internal resisting force that a body sets up against an externally applied force. When external forces or loads act upon a body, internal forces are set up at every section of the body to resist them; this resisting force per unit area is known as stress:

stress = P/A

where P = force or load and A = cross-sectional area.

In the MKS (metre-kilogram-second) system the unit of stress is kgf/cm^2; in SI units it is N/mm^2 or N/m^2 (the pascal).

What is the MKS system?
The MKS system is a metric system in which the metre, kilogram, and second are the fundamental units. The pascal, the unit of pressure in this system, is equivalent to one newton per square metre.

What is the CGS system?
The CGS system is a coherent system of units based on the centimetre for distance, the gram for mass, and the second for time.

What is a strain?
When a system of forces or loads acts upon a body, it undergoes some deformation.
Deformation per unit length is known as unit strain, or simply strain:

strain = Δl / L

where Δl = change in length and L = original length.

Different forms of stress

What is tensile stress?
When a body is subjected to two equal and opposite axial pulls, the stress induced at any section of the body is known as tensile stress:

tensile stress ft = P/A

where P = axial load and A = cross-sectional area.

Under a tensile load there is an increase in the length of the body.

What is tensile strain?
The ratio of the increase in length to the original length is known as tensile strain:

tensile strain = Δl / L

where Δl = increase in length and L = original length.

What is compressive stress?
When a body is subjected to two equal and opposite axial pushes P, the stress induced at any section of the body is known as compressive stress:

compressive stress fc = P/A

where P = axial load and A = cross-sectional area.

Under a compressive load there is a decrease in the length of the body.

What is compressive strain?
The ratio of the decrease in length to the original length is called compressive strain.

What is shearing stress?
When a body is subjected to two equal and opposite forces acting tangentially across the resisting section, the body tends to shear off along that section. The stress induced is called shearing stress, and the corresponding strain is known as shear strain:

fs = tangential force / resisting area = P/A

Consider a body consisting of two plates connected by a rivet. An increasing tangential force tends to shear off the rivet, so the shear stress in this case is given by

fs = P / (πd²/4)

where P = load, d = diameter of the rivet, and πd²/4 = cross-sectional area of the rivet.

Metal mechanical properties

The mechanical properties of a metal are those associated with the ability of the material to resist mechanical forces and loads.

What is strength?
Strength is the ability of a material to resist externally applied forces.

What is stiffness?
Stiffness is the ability of a material to resist deformation under stress.

What is elasticity?
Elasticity is the property of a material to regain its original shape after deformation once the external forces are removed.

What is plasticity?
Plasticity is the property of a material that causes it to retain deformation produced under load permanently.

What is ductility?
Ductility is the property of a metal enabling it to be drawn into wires by the application of a tensile force. A ductile material must be both strong and plastic. Mild steel, copper, aluminium, etc. are ductile materials.

What is brittleness?
Brittleness is the property of a material opposite to ductility: the material breaks with little permanent deformation. Brittle materials subjected to tensile loads break without any visible elongation. Cast iron is a brittle material.

What is malleability?
Malleability is a special case of ductility that permits a material to be rolled or hammered into thin sheets. A malleable material should be plastic, though it need not be strong. Malleable materials commonly used in engineering practice are lead, soft steel, and aluminium.

What is elongation?
When a material is pulled in a testing machine to determine its tensile strength, stretch takes place before the bar fractures. Elongation is the amount of this stretch, generally expressed as a percentage (%) of the original length.

What is hardness?
Hardness is the ability of a metal to resist indentation, scratching, and abrasion; machinability and the ease of cutting a material also depend on its hardness.

What is the stress-strain curve?
Stress-strain curve diagram

O→A: proportional limit
A→B: elastic region
B→C: plastic region
C and D: yield points (the material gives up resistance)
C = upper yield point
D = lower yield point

Proportional limit
From point O to A the curve is a straight line, which shows that stress is proportional to strain. Beyond point A the curve deviates from the straight line. Hooke's law therefore holds up to A, which is known as the proportional limit.

Elastic limit
If the load is increased beyond point A up to point B, the material still regains its original shape and size when the load is removed; the material behaves elastically up to B, which is known as the elastic limit.

Yield point
If the material is stressed beyond point B, the plastic stage is reached: on removal of the load, the material can no longer recover its original size and shape. Beyond point B the strain increases at a faster rate with any increase in stress until point C is reached. At this point the material yields, showing an appreciable increase in strain without an increase in stress. In mild steel a small drop in load occurs after yielding, so there are two yield points: the upper yield point (C) and the lower yield point (D).

Ultimate stress
Beyond point D the specimen regains some strength, and higher values of stress are required to produce further strain than those at C and D. The stress goes on increasing until point E is reached, accompanied by a gradual, uniform reduction of the cross-sectional area. At E the specimen attains its maximum stress, known as the ultimate stress.

Working stress
In design, the stress is kept lower than the maximum or ultimate stress; this reduced stress is known as the working stress.

The factor of safety (F.O.S.)
The factor of safety is the ratio of the maximum (ultimate) stress to the working stress:

F.O.S. = maximum stress / working stress
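The definitions above reduce to simple arithmetic. The following Python sketch (with invented example numbers, not values from this article) computes tensile stress and strain for a bar, shear stress on a rivet, and the working stress for a given factor of safety:

```python
import math

def tensile_stress(load_n, area_mm2):
    """Tensile (or compressive) stress f = P / A, in N/mm^2."""
    return load_n / area_mm2

def strain(delta_l, original_l):
    """Strain = change in length / original length (dimensionless)."""
    return delta_l / original_l

def rivet_shear_stress(load_n, diameter_mm):
    """Shear stress on a rivet: fs = P / (pi * d^2 / 4)."""
    area = math.pi * diameter_mm ** 2 / 4  # cross-sectional area of the rivet
    return load_n / area

def working_stress(ultimate, factor_of_safety):
    """Working stress = ultimate stress / factor of safety."""
    return ultimate / factor_of_safety

# Assumed example values (for illustration only):
ft = tensile_stress(50_000, 400)      # 50 kN on a 400 mm^2 bar -> 125 N/mm^2
e = strain(0.5, 2000)                 # 0.5 mm stretch of a 2 m bar -> 0.00025
fs = rivet_shear_stress(10_000, 20)   # 10 kN on a 20 mm diameter rivet
sigma_w = working_stress(400.0, 4.0)  # ultimate 400 N/mm^2, F.O.S. of 4
print(f"ft = {ft} N/mm^2, strain = {e}, fs = {fs:.2f} N/mm^2, working = {sigma_w}")
```

Note that the shear calculation assumes the rivet is in single shear; a rivet in double shear resists the load across two sections.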
{"url":"https://textilesschool.com/mechanical-properties-of-metal/","timestamp":"2024-11-15T03:46:44Z","content_type":"text/html","content_length":"70811","record_id":"<urn:uuid:06ed9cda-3829-4853-910e-2715f18f9bf1>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00096.warc.gz"}
The True Rate of Return Inflation is often called a "hidden tax," and rightly so. But inflation's impact on savings and investment is much worse than commonly known. It is not correct to define inflation as the general rate of change of prices. Prices rise and fall for a great many reasons. Little can be understood by considering all the factors contributing to price changes in an aggregate called "inflation." Instead, the concept of inflation should be more narrowly focused to allow the study of the influence of the quantity of money on prices. Economic study requires a clear mental separation between "real" wealth such as consumer's goods, factories, etc., and "nominal" wealth which is real wealth's expression in monetary terms. Increasing the quantity of money does not create real wealth; it only creates little pieces of paper. If you gave everyone a million dollars they wouldn't all be able to rush out and buy a recreational boat — there physically aren't that many boats. They would have to be produced first. This could not happen quickly, because there aren't enough factories to build that many boats, nor are there enough "spare" materials to construct them with, nor enough skilled people to put the boats together. In order to create more boats, resources would have to be diverted from other areas of the economy, and this would mean fewer houses, cars, and everything else. Real wealth cannot be increased by increasing nominal wealth. The concept of inflation defined as the change in the quantity of money permits the study of the economy in response to purely nominal (as opposed to real) influences. In the United States, Congress is responsible for taxation and the Federal Reserve is responsible for inflation. Investors need to be aware of the effects of both taxation and inflation. Let's do a little math. I promise it's simple, and I'll use small steps. 
First, a few definitions:

• r: the nominal rate of return on an investment
• t: the rate of taxation on investment earnings
• i: the rate of inflation
• m[0]: the original amount of money invested
• m[1]: money after one investment period (year)

Inflation does not affect all prices uniformly. Some are impacted earlier, some later, and different prices may be affected to a greater or lesser extent. Inflation is frequently assumed to be uniform simply because it makes the math easier. I'm using that assumption here.

Let's create a formula for m[1]. The investment earnings are (m[0]×r). The taxes on those earnings are (m[0]×r×t). After one investment period, you'll have your original principal, plus earnings, less taxes:

• m[1] = m[0] + m[0]×r - m[0]×r×t
• m[1] = m[0] × (1 + r - r×t)
• m[1] = m[0] × (1 + r × (1 - t))
• m[1] = m[0](1+r(1-t))

For example, assume you're investing $1,250 at 8% and taxes are 24%. Earnings would be $1,250×0.08=$100. Taxes would be $100×0.24=$24. $1,250+$100-$24=$1,326. Verifying the formula above, m[1]=$1,250(1+0.08(1-0.24))=$1,250×1.0608=$1,326.

Inflation will reduce the purchasing power of your principal. In order to keep up with inflation, your money needs to grow at the same rate. Expressed symbolically:

• e: equivalent purchasing power after one period
• e = m[0] + m[0]×i
• e = m[0](1+i)

Using the example of $1,250 principal from above, and assuming a 4% inflation rate, after one year you need to have $1,250(1+0.04)=$1,300 to maintain your purchasing power.

It's important to realize that inflation affects the prices of investments as well as the prices of goods. The unfortunate consequence of this is that you will owe taxes on the portion of investment gains attributable to inflation, despite those gains having no economic relevance at all. In terms of the examples above, you were taxed on the whole $100 of earnings, not just on the $50 over and above what you needed in order to keep pace with inflation.
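A quick numeric check of the two formulas above, using the article's own numbers:

```python
# m1 = m0(1 + r(1 - t)) and e = m0(1 + i), with $1,250 at 8%, 24% tax, 4% inflation.
m0, r, t, i = 1250.0, 0.08, 0.24, 0.04

m1 = m0 * (1 + r * (1 - t))  # balance after taxes on earnings
e = m0 * (1 + i)             # balance needed just to keep purchasing power

print(round(m1, 2))  # 1326.0
print(round(e, 2))   # 1300.0
```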
Not only is inflation a "hidden tax," but you pay it on money that you haven't really (pun intended) made! Using the examples above, m[1]=$1,326 and e=$1,300. You only made $26 after considering both taxes and inflation! You're only 2% ahead ($26/$1,300) of where you started on an after-tax, equivalent-purchasing-power basis. This is a much smaller rate than the 8% nominal rate assumed for the examples.

The "real rate of return" is simply the nominal rate of return minus the inflation rate, (r-i). But if you use this figure for projecting the performance of your investments, you'll underestimate the impact of taxes, because you're taxed on your nominal (not real) returns. But even without taxes, the "real rate of return" still overstates your returns: if there were no taxes and your $1,250 grew to $1,350 over one period, you're only $50/$1,300 = 3.85% ahead of break-even, not 4% ahead!

Your "true rate of return" — a definition I'm stipulating, not a recognized term of art — is (m[1]-e)÷e, e.g. ($1,326-$1,300)÷$1,300=2%.

Let's express the "true rate of return" in terms of the original variables:

• (m[1]-e)÷e
• [m[0](1+r(1-t))-m[0](1+i)] ÷ m[0](1+i)
• m[0][(1+r(1-t))-(1+i)] ÷ m[0](1+i)
• [(1+r(1-t))-(1+i)] ÷ (1+i)
• (1+r(1-t))÷(1+i) - 1

Verifying the formula, (1+0.08(1-0.24))÷(1+0.04) - 1 = (1+0.08(0.76))÷1.04 - 1 = 0.02, or 2%.

Because this is a function of three variables, it's difficult to visualize. But here are my attempts to do so. Inflation primarily changes the height of the line, with a very small effect on slope. Taxation changes only the slope. The lines show the regions where the real rate of return is between 0% and 10%. I do this because economic growth tends to be in this range in the long run, and nominal interest rates tend to move in tandem with inflation. As you can see, inflation by itself doesn't have a very significant impact on the true rate of return. And taxation by itself doesn't have a very significant impact on the true rate of return.
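The stipulated "true rate of return" formula can be checked against the running example (8% nominal, 24% tax, 4% inflation should give 2%):

```python
# True rate of return as stipulated above: (1 + r(1 - t)) / (1 + i) - 1.

def true_rate_of_return(r, t, i):
    """After-tax, inflation-adjusted return over one period."""
    return (1 + r * (1 - t)) / (1 + i) - 1

print(round(true_rate_of_return(0.08, 0.24, 0.04), 4))  # 0.02
```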
But the combination of the two is very potent — it becomes possible to lose money even though the real rate of return is positive!

This graph shows the ranges most relevant to me, personally: The 24% tax rate results from Federal long term capital gains or dividend taxes of 15% plus the Oregon state tax of 9%. The 37% tax rate results from a Federal wage, interest, and short term capital gains rate of 28% plus the Oregon state tax of 9%. I expect inflation in the 3-5% range and real rates of return in the 2-8% range. This tells me I should expect a true rate of return (as I've defined it) probably between 1% and 4%, roughly half the real rate of return.

The effect of the combination of taxation and inflation is a powerful argument in favor of using tax-advantaged vehicles such as Roth IRAs and 401(k)s. It's also a powerful argument in favor of cutting taxes and returning to the gold standard.
{"url":"http://arbyte.us/essays/True_Rate_of_Return.html","timestamp":"2024-11-07T16:31:42Z","content_type":"text/html","content_length":"17503","record_id":"<urn:uuid:fb535ee4-41dd-4ed5-bc4d-6f1a078a5a54>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00467.warc.gz"}
Physics 101 for Performance Enhancement

It’s 8 a.m. on a Monday. You grab your protein shake and coffee. You and 100 other students are crammed into a lecture hall like sardines for Intro to Physics. You studied your ass off. You memorized equations. You got straight A’s because, if you were like me, you were headed to medical school, not a collegiate weight room.

I forgot much of what I learned in my intro classes by the time I got to my first internship. Sure, I could spew out formulas and argue the importance of the force-velocity curve, but I didn’t truly understand its application when it came to developing athletes. I never took a Physics for Performance class.

My goal with this article is to break down important concepts in physics you should know as a coach and their application to coaching and programming. Many new coaches, and even ones who have skin in the game, don’t truly understand physics and its importance. Hopefully, after reading this, you will view sports slightly differently.

Performance enhancement is both a science and an art. The sciences are physics, biology, biomechanics, kinesiology, kinetics, kinematics, etc. The art is applying all of those in your programming and knowing how to coach it. I have found that if you truly understand the science and how to apply it, you’ll have a better understanding of sports and, most of all, performance enhancement. You owe it to your athletes to understand the fundamentals. So let’s get started.

Mass and Acceleration

Mass is the amount of substance (also known as matter) an object possesses. It is also a measurement of an object’s resistance to acceleration when force is applied. Acceleration is the rate at which an object changes its velocity.

• Equation for acceleration: Acceleration (a) = the change in velocity (Δv) / the change in time (Δt)
• Abbreviated equation: a = Δv / Δt

Why are we talking about mass and acceleration?
The root of everything we do in and out of the weight room is to increase force production, the rate at which we produce force, and the ability to repeat this ability.

Force is any interaction that, when unopposed, will change motion. When anything moves, there is force involved. Force must be applied in order to move. Without force, there can be no change in motion; therefore any change in motion requires the application of force. Within the body, force comes from muscular contraction.

• Equation for force: Force (F) = mass (m) x acceleration (a)
• Abbreviated equation: F = ma

The acceleration of an object is proportional to the total force applied to it and inversely proportional to the mass of the object. This means there is a direct relationship between force and acceleration when the mass is constant, which it is in sports, as our athletes’ mass doesn’t change. The only time mass will change is in the weight room with the use of accommodating resistance.

If an athlete can apply force effectively and efficiently, they will have an enhanced ability to accelerate and change direction. An athlete’s ability to accelerate, change direction, or reaccelerate will be limited by how much force they can rapidly apply (impulse). The mass of a moving object — in this case, our athletes — doesn’t change. Therefore, to increase force production, we need to increase acceleration. This is why our athletes’ intent and effort should be to move as fast as possible. Always.

Think of any movement we do in the weight room. Would your athletes produce more force if they moved slower or faster? Well, according to what we’ve discussed so far, the greater the acceleration, the greater the force output.

Velocity

Velocity is the rate at which an object changes position, or the displacement of an object over an amount of time.
Velocity is a vector quantity, meaning it’s described by both magnitude and direction — in this case, speed and direction.

• Equation for Velocity: Velocity (v) = displacement (s) / change in time (Δt)
• Abbreviated equation: v = s/Δt

If you want to increase an athlete’s velocity, you either increase the time over which force is applied or the amount of force produced in a given time. When it comes to sports, we want to increase the amount of force produced in the least amount of time (impulse).

Displacement and distance are different. Displacement is a vector quantity: it’s your overall change in position, simply the gap between where you started and where you finished. If you start and finish in the same spot, there’s been no displacement. Distance, on the other hand, is a scalar quantity and tells us how much ground has been covered.

Aren’t speed and velocity the same thing? No. Speed is a scalar quantity, meaning it’s only described by a magnitude. Speed has no direction; it simply tells us how fast something is going. Since velocity tells us about speed and direction, there are three ways for us to accelerate: changing speed, changing direction, or changing both. If you’re not changing your speed or direction, then you’re not accelerating.

Impulse and Momentum

I’ve mentioned impulse a few times, but now let’s discuss it. Not many remember this from their lecture days, but it’s an extremely important concept to grasp when it comes to sports and performance. While movement is dependent on the force being applied, what is even more important is the time in which force is applied. This is impulse.

Sports are physics in motion, and changing velocities is what most sports are based on. The athlete who can move fast and change velocities quickly will have an advantage over the rest and will be more likely to have success than ones who can’t. Impulse is essentially responsible for the change in velocity.
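The distance/displacement and speed/velocity distinctions can be made concrete with a hypothetical out-and-back shuttle run:

```python
# A 10 m out-and-back shuttle completed in 5 s (hypothetical numbers).
out, back, t = 10.0, -10.0, 5.0

distance = abs(out) + abs(back)  # ground covered: 20 m (scalar)
displacement = out + back        # net change in position: 0 m (vector, 1-D)

avg_speed = distance / t         # 4.0 m/s
avg_velocity = displacement / t  # 0.0 m/s: start and finish coincide

print(distance, displacement, avg_speed, avg_velocity)  # 20.0 0.0 4.0 0.0
```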
• Equation for Impulse: Impulse (J) = force (F) x change in time (Δt)
• Abbreviated equation: J = F(Δt)

How can we produce as much force as possible in the least amount of time? This is the underlying principle in most training programs for performance.

Why did I mention momentum and impulse together? Momentum is mass in motion. All objects have mass, and if an object is moving, it has momentum.

• Equation for Momentum: Momentum (p) = mass (m) x velocity (v)
• Abbreviated equation: p = mv

The amount of momentum depends on how much mass an object has and how fast it’s moving. The more momentum something has, the harder it is to stop. In order to stop it, we need to apply a great amount of force against its motion. These are Newton’s first two laws: objects like to stay doing what they are doing, and if we want to move them, we need to apply a big enough force — or we need to create an impulse big enough to alter their course and change their momentum. In other words: the impulse an athlete experiences or produces = the change in the athlete’s momentum.

As velocities increase during a movement, the time available to produce force decreases. Our goal is to get our athletes to produce the most force in the shortest amount of time. This is rate of force development (RFD), and increasing it should be the goal of just about every training program. Which leads us to power.

Power is the rate at which work is done. Work is force applied over a displacement.

• Equation for work: Work (W) = force (F) x change in displacement (Δs)
• Abbreviated equation: W = F(Δs)
• Equation for power: Power (P) = work (W) / time (t) OR Power (P) = force (F) x change in displacement (Δs) / time (t)
• Abbreviated equations: P = W/t or P = F(Δs/t)

Remember the velocity equation (v = s/Δt)? Therefore, the equation for power can be written as: Power = force x velocity, or P = Fv.
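The impulse-momentum relationship and P = Fv can be checked with hypothetical numbers (a 90 kg athlete pushed from rest to 4 m/s):

```python
# Impulse J = F·Δt equals the change in momentum m·Δv; power P = F·v.
m = 90.0            # kg, athlete mass (hypothetical)
v0, v1 = 0.0, 4.0   # m/s, before and after the push

delta_p = m * (v1 - v0)  # change in momentum: 360.0 kg·m/s
dt = 0.3                 # s, time over which force is applied
F = delta_p / dt         # force needed to deliver that impulse: 1200.0 N
print(F)                 # halving dt would double the required force (higher RFD)

power = F * v1           # instantaneous power at v1
print(power)             # 4800.0 W
```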
When it comes to what we do in training, force is the weight of the object and velocity is the rate at which it is moved. Both force and velocity are of equal importance when it comes to training. In the equation for power, force and velocity have an inverse relationship, meaning as load (force) goes up, velocity goes down, and vice versa. Our goal in training is to train where both force and velocity are optimized, or in other words, make our athletes more powerful.

Hopefully this was a nice refresher for some of you, and for others it set off some ah-ha moments and has you thinking more about your programming and how you can improve it. While these concepts are simple, it’s their applications that can be tricky. After reading this, you now have a better understanding of physics and its importance in sports and performance. The next time you’re watching a game, try to watch the athletes’ movements and figure out what’s going on to cause these movements. You’ll view it in an entirely different way.

Header image courtesy of Atthidej Nimmanhaemin © 123rf.com

1 Comment

Thanks for discussing impulse. It's the only term on there I haven't heard. It's a concept I've thought of a lot. I thought of it as the shutter speed on a camera. Glad I know the actual name now! Makes sense.
{"url":"https://www.elitefts.com/education/physics-101-for-performance-enhancement/","timestamp":"2024-11-10T21:04:09Z","content_type":"text/html","content_length":"135302","record_id":"<urn:uuid:134a3db6-6266-4128-8c8f-4b82464871a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00831.warc.gz"}
Rotations in 4-dimensional Euclidean space

In mathematics, the group of rotations about a fixed point in four-dimensional Euclidean space is denoted SO(4). The name comes from the fact that it is (isomorphic to) the special orthogonal group of order 4.

In this article rotation means rotational displacement. For the sake of uniqueness, rotation angles are assumed to be in the segment [0, π] except where mentioned or clearly implied by the context otherwise. A "fixed plane" is a plane for which every vector in the plane is unchanged after the rotation. An "invariant plane" is a plane for which every vector in the plane, although it may be affected by the rotation, remains in the plane after the rotation.

Geometry of 4D rotations

Four-dimensional rotations are of two types: simple rotations and double rotations.

Simple rotations

A simple rotation R about a rotation centre O leaves an entire plane A through O (axis-plane) fixed. Every plane B that is completely orthogonal^[1] to A intersects A in a certain point P. Each such point P is the centre of the 2D rotation induced by R in B. All these 2D rotations have the same rotation angle α. Half-lines from O in the axis-plane A are not displaced; half-lines from O orthogonal to A are displaced through α; all other half-lines are displaced through an angle less than α.

Double rotations

For each rotation R of 4-space (fixing the origin), there is at least one pair of orthogonal 2-planes A and B each of which is invariant and whose direct sum A⊕B is all of 4-space. Hence R operating on either of these planes produces an ordinary rotation of that plane. For almost all R (all of the 6-dimensional set of rotations except for a 3-dimensional subset), the rotation angles α in plane A and β in plane B — both assumed to be nonzero — are different. The unequal rotation angles α and β satisfying -π < α, β < π are almost^* uniquely determined by R.
Assuming that 4-space is oriented, then the orientations of the 2-planes A and B can be chosen consistent with this orientation in two ways. If the rotation angles are unequal (α ≠ β), R is sometimes termed a "double rotation". In that case of a double rotation, A and B are the only pair of invariant planes, and half-lines from the origin in A, B are displaced through α and β respectively, and half-lines from the origin not in A or B are displaced through angles strictly between α and β.

^*Assuming that 4-space is oriented, then an orientation for each of the 2-planes A and B can be chosen to be consistent with this orientation of 4-space in two equally valid ways. If the angles from one such choice of orientations of A and B are {α, β}, then the angles from the other choice are {-α, -β}. (In order to measure a rotation angle in a 2-plane, it is necessary to specify an orientation on that 2-plane. A rotation angle of -π is the same as one of +π. If the orientation of 4-space is reversed, the resulting angles would be either {α, -β} or {-α, β}. Hence the absolute values of the angles are well-defined completely independently of any choices.)

Isoclinic rotations

If the rotation angles of a double rotation are equal then there are infinitely many invariant planes instead of just two, and all half-lines from O are displaced through the same angle. Such rotations are called isoclinic or equiangular rotations, or Clifford displacements. Beware: not all planes through O are invariant under isoclinic rotations; only planes that are spanned by a half-line and the corresponding displaced half-line are invariant.

Assuming that a fixed orientation has been chosen for 4-dimensional space, isoclinic 4D rotations may be put into two categories.
To see this, consider an isoclinic rotation R, and take an orientation-consistent ordered set OU, OX, OY, OZ of mutually perpendicular half-lines at O (denoted as OUXYZ) such that OU and OX span an invariant plane, and therefore OY and OZ also span an invariant plane. Now assume that only the rotation angle α is specified. Then there are in general four isoclinic rotations in planes OUX and OYZ with rotation angle α, depending on the rotation senses in OUX and OYZ.

We make the convention that the rotation senses from OU to OX and from OY to OZ are reckoned positive. Then we have the four rotations R1 = (+α, +α), R2 = (-α, -α), R3 = (+α, -α) and R4 = (-α, +α). R1 and R2 are each other's inverses; so are R3 and R4. As long as α lies between 0 and π, these four rotations will be distinct.

Isoclinic rotations with like signs are denoted as left-isoclinic; those with opposite signs as right-isoclinic. Left- (right-) isoclinic rotations are represented by left- (right-) multiplication by unit quaternions; see the paragraph "Relation to quaternions" below.

The four rotations are pairwise different except if α = 0 or α = π. The angle α = 0 corresponds to the identity rotation; α = π corresponds to the central inversion, given by the negative of the identity matrix. These two elements of SO(4) are the only ones that are simultaneously left- and right-isoclinic.

Left- and right-isocliny defined as above seem to depend on which specific isoclinic rotation was selected. However, when another isoclinic rotation R′ with its own axes OU′, OX′, OY′, OZ′ is selected, then one can always choose the order of U′, X′, Y′, Z′ such that OUXYZ can be transformed into OU′X′Y′Z′ by a rotation rather than by a rotation-reflection. (I.e., so that the ordered basis OU′, OX′, OY′, OZ′ is also consistent with the same fixed choice of orientation as OU, OX, OY, OZ.)
Therefore, once one has selected an orientation (i.e., a system OUXYZ of axes that is universally denoted as right-handed), one can determine the left or right character of a specific isoclinic rotation.

Group structure of SO(4)

SO(4) is a noncommutative compact 6-dimensional Lie group.

Each plane through the rotation centre O is the axis-plane of a commutative subgroup isomorphic to SO(2). All these subgroups are mutually conjugate in SO(4). Each pair of completely orthogonal planes through O is the pair of invariant planes of a commutative subgroup of SO(4) isomorphic to SO(2) × SO(2). These groups are maximal tori of SO(4), which are all mutually conjugate in SO(4). See also Clifford torus.

All left-isoclinic rotations form a noncommutative subgroup S^3[L] of SO(4), which is isomorphic to the multiplicative group S^3 of unit quaternions. All right-isoclinic rotations likewise form a subgroup S^3[R] of SO(4) isomorphic to S^3. Both S^3[L] and S^3[R] are maximal subgroups of SO(4).

Each left-isoclinic rotation commutes with each right-isoclinic rotation. This implies that there exists a direct product S^3[L] × S^3[R] with normal subgroups S^3[L] and S^3[R]; both of the corresponding factor groups are isomorphic to the other factor of the direct product, i.e. isomorphic to S^3. (This is not SO(4) or a subgroup of it, because S^3[L] and S^3[R] are not disjoint: the identity I and the central inversion -I each belong to both S^3[L] and S^3[R].)

Each 4D rotation A is in two ways the product of left- and right-isoclinic rotations A[L] and A[R]. A[L] and A[R] are together determined up to the central inversion, i.e. when both A[L] and A[R] are multiplied by the central inversion their product is A again. This implies that S^3[L] × S^3[R] is the universal covering group of SO(4) — its unique double cover — and that S^3[L] and S^3[R] are normal subgroups of SO(4).
The identity rotation I and the central inversion -I form a group C[2] of order 2, which is the centre of SO(4) and of both S^3[L] and S^3[R]. The centre of a group is a normal subgroup of that group. The factor group of C[2] in SO(4) is isomorphic to SO(3) × SO(3). The factor groups of S^3[L] by C[2] and of S^3[R] by C[2] are each isomorphic to SO(3). Similarly, the factor groups of SO(4) by S^3[L] and of SO(4) by S^3[R] are each isomorphic to SO(3).

The topology of SO(4) is the same as that of the Lie group SO(3) × Spin(3) = SO(3) × SU(2), namely the topology of P^3 × S^3. However, it is noteworthy that, as a Lie group, SO(4) is not a direct product of Lie groups, and so it is not isomorphic to SO(3) × Spin(3) = SO(3) × SU(2).

Special property of SO(4) among rotation groups in general

The odd-dimensional rotation groups do not contain the central inversion and are simple groups. The even-dimensional rotation groups do contain the central inversion −I and have the group C[2] = {I, −I} as their centre. From SO(6) onwards they are almost-simple in the sense that the factor groups of their centres are simple groups.

SO(4) is different: there is no conjugation by any element of SO(4) that transforms left- and right-isoclinic rotations into each other. Reflections transform a left-isoclinic rotation into a right-isoclinic one by conjugation, and vice versa. This implies that under the group O(4) of all isometries with fixed point O the subgroups S^3[L] and S^3[R] are mutually conjugate and so are not normal subgroups of O(4). The 5D rotation group SO(5) and all higher rotation groups contain subgroups isomorphic to O(4). Like SO(4), all even-dimensional rotation groups contain isoclinic rotations. But unlike SO(4), in SO(6) and all higher even-dimensional rotation groups any pair of isoclinic rotations through the same angle is conjugate. The sets of all isoclinic rotations are not even subgroups of SO(2N), let alone normal subgroups.
Algebra of 4D rotations

SO(4) is commonly identified with the group of orientation-preserving isometric linear mappings of a 4D vector space with inner product over the real numbers onto itself. With respect to an orthonormal basis in such a space, SO(4) is represented as the group of real 4th-order orthogonal matrices with determinant +1.

Isoclinic decomposition

A 4D rotation given by its matrix is decomposed into a left-isoclinic and a right-isoclinic rotation as follows: Let A be its matrix with respect to an arbitrary orthonormal basis. Calculate from this the so-called associate matrix M. The matrix M has rank one and is of unit Euclidean norm as a 16D vector if and only if A is indeed a 4D rotation matrix. In this case there exist reals a, b, c, d; p, q, r, s such that M is the outer product of (a, b, c, d) and (p, q, r, s) and a² + b² + c² + d² = p² + q² + r² + s² = 1. There are exactly two such sets of a, b, c, d; p, q, r, s. They are each other's opposites. The rotation matrix then equals

    ( a -b -c -d )   ( p -q -r -s )
    ( b  a -d  c )   ( q  p  s -r )
    ( c  d  a -b ) × ( r -s  p  q )
    ( d -c  b  a )   ( s  r -q  p )

This formula is due to Van Elfrinkhof (1897). The first factor in this decomposition represents a left-isoclinic rotation, the second factor a right-isoclinic rotation. The factors are determined up to the negative 4th-order identity matrix, i.e. the central inversion.

Relation to quaternions

A point in 4-dimensional space with Cartesian coordinates (u, x, y, z) may be represented by a quaternion P = u + xi + yj + zk. A left-isoclinic rotation is represented by left-multiplication by a unit quaternion Q[L] = a + bi + cj + dk. In matrix-vector language this is the first factor above acting on the column vector (u, x, y, z). Likewise, a right-isoclinic rotation is represented by right-multiplication by a unit quaternion Q[R] = p + qi + rj + sk, whose matrix-vector form is the second factor above. In the preceding section (Isoclinic decomposition) it is shown how a general 4D rotation is split into left- and right-isoclinic factors. In quaternion language Van Elfrinkhof's formula reads

    u′ + x′i + y′j + z′k = (a + bi + cj + dk)(u + xi + yj + zk)(p + qi + rj + sk),

or in symbolic form P′ = Q[L]PQ[R]. According to the German mathematician Felix Klein this formula was already known to Cayley in 1854. Quaternion multiplication is associative.
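A numerical sketch of the two isoclinic factors. The matrices below are the standard left- and right-multiplication matrices for unit quaternions acting on (u, x, y, z) (a reconstruction consistent with the conventions above, not copied from the article); the check confirms that any left-isoclinic rotation commutes with any right-isoclinic one:

```python
import numpy as np

def left_isoclinic(a, b, c, d):
    # Matrix of left multiplication by the unit quaternion a + bi + cj + dk.
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

def right_isoclinic(p, q, r, s):
    # Matrix of right multiplication by the unit quaternion p + qi + rj + sk.
    return np.array([[p, -q, -r, -s],
                     [q,  p,  s, -r],
                     [r, -s,  p,  q],
                     [s,  r, -q,  p]])

# Two arbitrary unit quaternions.
ML = left_isoclinic(*(np.array([1.0, 2.0, 3.0, 4.0]) / np.sqrt(30.0)))
MR = right_isoclinic(*(np.array([2.0, -1.0, 0.5, 1.0]) / 2.5))

print(np.allclose(ML @ MR, MR @ ML))            # True: left and right commute
print(np.allclose(ML.T @ ML, np.eye(4)))        # True: each factor is orthogonal
print(np.isclose(np.linalg.det(ML @ MR), 1.0))  # True: the product lies in SO(4)
```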
Therefore P′ = (Q[L]P)Q[R] = Q[L](PQ[R]), which shows that left-isoclinic and right-isoclinic rotations commute.

The eigenvalues of 4D rotation matrices

The four eigenvalues of a 4D rotation matrix generally occur as two conjugate pairs of complex numbers of unit magnitude. If an eigenvalue is real, it must be unity, since a rotation leaves the magnitude of a vector unchanged. The conjugate of that eigenvalue is also unity, yielding a pair of eigenvectors which define a fixed plane, and so the rotation is simple. In quaternion notation, a proper (i.e., non-inverting) rotation in SO(4) is a proper simple rotation if and only if the real parts of the unit quaternions Q[L] and Q[R] are equal in magnitude and have the same sign (*). If they are both zero, all eigenvalues of the rotation are unity, and the rotation is the null rotation. If the real parts of Q[L] and Q[R] are not equal then all eigenvalues are complex, and the rotation is a double rotation.

(*) Example of opposite signs: the central inversion; in the quaternion representation the real parts are +1 and -1, and the central inversion cannot be accomplished by a single simple rotation.

The Euler–Rodrigues formula for 3D rotations

Our ordinary 3D space is conveniently treated as the subspace with coordinate system OXYZ of the 4D space with coordinate system OUXYZ. Its rotation group SO(3) is identified with the subgroup of SO(4) consisting of the matrices that leave the first coordinate u fixed, i.e. the block-diagonal matrices diag(1, R) with R a 3×3 rotation matrix. In Van Elfrinkhof's formula in the preceding subsection this restriction to three dimensions leads to p = a, q = -b, r = -c, s = -d, or in quaternion representation: Q[R] = Q[L]′ = Q[L]^-1. The 3D rotation matrix then becomes

    ( a² + b² - c² - d²   2(bc - ad)          2(bd + ac)        )
    ( 2(bc + ad)          a² - b² + c² - d²   2(cd - ab)        )
    ( 2(bd - ac)          2(cd + ab)          a² - b² - c² + d² )

which is the representation of the 3D rotation by its Euler–Rodrigues parameters: a, b, c, d. The corresponding quaternion formula P′ = QPQ^-1, where Q = Q[L], or, in expanded form:

    x′i + y′j + z′k = (a + bi + cj + dk)(xi + yj + zk)(a - bi - cj - dk)

is known as the Hamilton–Cayley formula.

Hopf Coordinates

Rotations in 3D space are made mathematically much more tractable by the use of spherical coordinates.
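The Euler–Rodrigues representation can be sketched and sanity-checked numerically; the matrix entries below follow the standard formula, which should agree with the restriction described in the text:

```python
import numpy as np

def euler_rodrigues(a, b, c, d):
    # 3D rotation matrix from Euler–Rodrigues parameters (a, b, c, d),
    # where a² + b² + c² + d² = 1 (a unit quaternion).
    return np.array([
        [a*a + b*b - c*c - d*d, 2*(b*c - a*d),         2*(b*d + a*c)],
        [2*(b*c + a*d),         a*a - b*b + c*c - d*d, 2*(c*d - a*b)],
        [2*(b*d - a*c),         2*(c*d + a*b),         a*a - b*b - c*c + d*d],
    ])

# Quarter turn about the z axis: Q = cos(45°) + k·sin(45°), so b = c = 0.
a, d = np.cos(np.pi / 4), np.sin(np.pi / 4)
Rz = euler_rodrigues(a, 0.0, 0.0, d)

print(np.allclose(Rz @ [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # True: x axis -> y axis
print(np.allclose(Rz.T @ Rz, np.eye(3)))                   # True: orthogonal
```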
Any rotation in 3D will be characterized by a fixed axis of rotation and an invariant plane perpendicular to that axis. Without loss of generality, we can take the xy plane as the invariant plane and the z axis as the fixed axis. Since radial distances are not affected by rotation, we can characterize a rotation by its effect on the unit sphere (2-sphere) using spherical coordinates referred to the fixed axis and invariant plane:

    x = sin θ cos φ, y = sin θ sin φ, z = cos θ

It can be seen that since x² + y² + z² = 1, the points lie on the 2-sphere. A point at (θ[0], φ[0]) rotated by an angle φ about the z axis is specified simply by (θ[0], φ[0] + φ).

While hyperspherical coordinates are also useful in dealing with 4D rotations, an even more useful coordinate system for 4D is provided by Hopf coordinates,^[2] which are a set of three angular coordinates specifying a position on the 3-sphere. For example:

    u = cos ξ[1] sin η, z = sin ξ[1] sin η, x = cos ξ[2] cos η, y = sin ξ[2] cos η

It can be seen that since u² + x² + y² + z² = 1, the points lie on the 3-sphere.

In 4D space, every rotation about the origin has two invariant planes which are completely orthogonal to each other and intersect at the origin, and are rotated by two independent angles ξ[1] and ξ[2]. Without loss of generality, we can choose, respectively, the uz and the xy plane as these invariant planes. A rotation in 4D of a point (ξ[10], η[0], ξ[20]) through angles ξ[1] and ξ[2] is then simply expressed in Hopf coordinates as (ξ[10] + ξ[1], η[0], ξ[20] + ξ[2]).

Visualization of 4D rotations

Every rotation in 3D space has an invariant axis-line which is unchanged by the rotation. The rotation is completely specified by specifying the axis of rotation and the angle of rotation about that axis. Without loss of generality, this axis may be chosen as the z-axis of a Cartesian coordinate system, allowing a simpler visualization of the rotation.

In 3D space, the spherical coordinates (θ, φ) may be seen as a parametric expression of the 2-sphere. For fixed θ they describe circles on the 2-sphere which are perpendicular to the z axis, and these circles may be viewed as "trajectories" of a point on the sphere.
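A sketch of a double rotation in Hopf coordinates. The parametrization below is a reconstruction assuming the invariant uz and xy planes named in the text; the checks confirm that the rotated point stays on the 3-sphere and on its torus:

```python
import numpy as np

def hopf_point(xi1, eta, xi2):
    # A point on the unit 3-sphere: (u, z) spans one invariant plane, (x, y) the other.
    u, z = np.cos(xi1) * np.sin(eta), np.sin(xi1) * np.sin(eta)
    x, y = np.cos(xi2) * np.cos(eta), np.sin(xi2) * np.cos(eta)
    return np.array([u, x, y, z])

xi1, eta, xi2 = 0.3, np.pi / 4, 1.1  # eta = π/4 is the Clifford torus
alpha, beta = 0.7, -0.4              # rotation angles in the two invariant planes

p0 = hopf_point(xi1, eta, xi2)
p1 = hopf_point(xi1 + alpha, eta, xi2 + beta)  # the double rotation shifts ξ1, ξ2 only

print(np.isclose(np.linalg.norm(p0), 1.0), np.isclose(np.linalg.norm(p1), 1.0))
# eta is untouched, so the point stays on its torus u² + z² = sin²(eta):
print(np.isclose(p1[0]**2 + p1[3]**2, np.sin(eta)**2))  # True
```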
A point on the sphere, under a rotation about the z axis, will follow a trajectory as the angle φ varies. The trajectory may be viewed as a rotation parametric in time, where the angle of rotation is linear in time: φ = ωt, with ω being an "angular velocity".

Analogous to the 3D case, every rotation in 4D space has at least two invariant axis-planes which are left invariant by the rotation and are completely orthogonal (i.e. they intersect at a point). The rotation is completely specified by specifying the axis planes and the angles of rotation about them. Without loss of generality, these axis planes may be chosen to be the uz and xy planes of a Cartesian coordinate system, allowing a simpler visualization of the rotation.

In 4D space, the Hopf angles (ξ[1], η, ξ[2]) parameterize the 3-sphere. For fixed η they describe a torus parameterized by ξ[1] and ξ[2], with η = π/4 being the special case of the Clifford torus in the xy and uz planes. These tori are not the usual tori found in 3D-space. While they are still 2D surfaces, they are embedded in the 3-sphere. The 3-sphere can be stereographically projected onto the whole Euclidean 3D-space, and these tori are then seen as the usual tori of revolution. It can be seen that a point specified by (ξ[10], η[0], ξ[20]) undergoing a rotation with the uz and xy planes invariant will remain on the torus specified by η[0].^[3]

The trajectory of a point can be written as a function of time as (ξ[10] + ω[1]t, η[0], ξ[20] + ω[2]t) and stereographically projected onto its associated torus, as in the figures below.^[4] In these figures, the initial point is taken to have η[0] = π/4, i.e. on the Clifford torus. In Fig. 1, two simple rotation trajectories are shown in black, while a left and a right isoclinic trajectory are shown in red and blue respectively. In Fig. 2 and Fig. 3, general double rotations with two different ratios of the angular velocities ω[1] and ω[2] are shown.

Generating 4D Rotation Matrices

Four-dimensional rotations can be derived from the Rodrigues rotation formula and the Cayley formula. Let A be a 4×4 skew-symmetric matrix.
The skew-symmetric matrix A can be uniquely decomposed as A = A₁ + A₂ into two skew-symmetric matrices A₁ and A₂ satisfying the properties A₁A₂ = A₂A₁ = 0, A₁³ = −θ₁²A₁ and A₂³ = −θ₂²A₂, where ±θ₁i and ±θ₂i are the eigenvalues of A. Then, the 4D rotation matrices can be obtained from the skew-symmetric matrices A₁ and A₂ by the Rodrigues rotation formula and the Cayley formula. Erdoğdu, M., Özdemir, M. (2015). "Generating Four Dimensional Rotation Matrices". Let A be a given nonzero skew-symmetric matrix with the set of eigenvalues {±θ₁i, ±θ₂i}. Then A can be decomposed as A = A₁ + A₂, where A₁ and A₂ are skew-symmetric matrices satisfying the properties A₁A₂ = A₂A₁ = 0, A₁³ = −θ₁²A₁ and A₂³ = −θ₂²A₂. Moreover, the skew-symmetric matrices A₁ and A₂ are uniquely determined by A. The factor obtained from A₁ is a rotation matrix in 4D space, generated by the Rodrigues rotation formula, and the factor obtained from A₂ is a rotation matrix in 4D space, generated by the Cayley rotation formula; the eigenvalues of the two factors are determined by θ₁ and θ₂, respectively. The generated rotation matrix can be classified with respect to the values θ₁ and θ₂ as follows: I. If θ₁ = 0 or θ₂ = 0, then the formulas generate simple rotations; II. If θ₁ and θ₂ are nonzero and θ₁ ≠ θ₂, then the formulas generate double rotations; III. If θ₁ and θ₂ are nonzero and θ₁ = θ₂, then the formulas generate isoclinic rotations. See also • L. van Elfrinkhof: Eene eigenschap van de orthogonale substitutie van de vierde orde. Handelingen van het 6e Nederlandsch Natuurkundig en Geneeskundig Congres, Delft, 1897. • Felix Klein: Elementary Mathematics from an Advanced Standpoint: Arithmetic, Algebra, Analysis. Translated by E.R. Hedrick and C.A. Noble. The Macmillan Company, New York, 1932. • Henry Parker Manning: Geometry of four dimensions. The Macmillan Company, 1914. Republished unaltered and unabridged by Dover Publications in 1954. In this monograph four-dimensional geometry is developed from first principles in a synthetic axiomatic way. Manning's work can be considered as a direct extension of the works of Euclid and Hilbert to four dimensions. • J. H. Conway and D. A. Smith: On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry. A. K. Peters, 2003.
• Arthur Stafford Hathaway (1902) Quaternion Space, Transactions of the American Mathematical Society 3(1):46–59. • Johan E. Mebius, A matrix-based proof of the quaternion representation theorem for four-dimensional rotations, arXiv General Mathematics 2005. • Johan E. Mebius, Derivation of the Euler–Rodrigues formula for three-dimensional rotations from the general formula for four-dimensional rotations, arXiv General Mathematics 2007. • P.H. Schoute: Mehrdimensionale Geometrie. Leipzig: G.J. Göschensche Verlagshandlung. Volume 1 (Sammlung Schubert XXXV): Die linearen Räume, 1902. Volume 2 (Sammlung Schubert XXXVI): Die Polytope. • Irving Stringham (1901) On the geometry of planes in a parabolic space of four dimensions, Transactions of the American Mathematical Society 2:183–214. • Melek Erdoğdu, Mustafa Özdemir, Generating Four Dimensional Rotation Matrices, https://www.researchgate.net/publication/283007638_Generating_Four_Dimensional_Rotation_Matrices, 2015. This article is issued from Wikipedia - version of the 10/6/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
System and method for group elevator scheduling based on submodular optimization Systems and methods for controlling a movement of cars of an elevator system. A processor determines for each car an individual waiting time of each hall call. It determines, for each pair of hall calls assigned to a car, a pairwise delay over the individual waiting time of each hall call in the pair caused by a joint assignment of the car to the pair of hall calls. It approximates a cumulative waiting time of an assignment of the cars to the hall calls as a sum of individual waiting times for each hall call with the assigned car and a sum of all pairwise delays determined between all pairs of hall calls assigned to the same car. It determines the assignment of the cars using a greedy optimization algorithm that greedily assigns hall calls to the cars to minimize the approximated cumulative waiting time, and controls the movement of the cars. This application is related to U.S. Pat. No. 7,484,597 entitled "System and Method for Scheduling Elevator Cars Using Branch-and-Bound," dated Feb. 3, 2009 by Nikovski et al., and U.S. Pat. No. 7,546,905 entitled "System and method for scheduling elevator cars using pairwise delay minimization," dated Jun. 16, 2009 by Nikovski et al. The present disclosure relates generally to scheduling elevator cars, and more particularly to scheduling methods and systems that operate according to a continuous reassignment policy until the actual time of pick up. Scheduling elevator cars is a practical optimization problem for banks of elevators in buildings. The most common instance of this problem deals with assigning elevator cars to passengers as they arrive at the bank of elevators and request service by means of one of two buttons (up or down).
The object is to assign arriving passengers to cars so as to optimize one or more performance criteria such as waiting time, total transfer time, percentage of people waiting longer than a specific threshold, or fairness of service. An important consideration is the assignment policy used by the scheduler. One possible assignment policy is when each assignment is made at the time of the hall call of the arriving passenger, and the assignment is not changed until the passenger is served. This is called an immediate policy. On the other hand, the system can continuously reassign hall calls to different cars if this improves the schedule. This is called a reassignment policy. While the reassignment policy increases the computational complexity of scheduling, the additional degrees of freedom can be exploited to achieve major improvements of the average waiting time (AWT). In practice, it is assumed that passenger dissatisfaction grows supra-linearly as a function of the waiting time. When minimizing objective functions, one penalizes long waits much more strongly than short waits, which helps to reduce excessively long waits. However, due to the high economic and social impact of transportation efficiency in buildings, many scheduling algorithms have been proposed to reduce the AWT of elevator passengers. Yet, there are several significant obstacles to achieving the shortest possible AWT in a given building. The main obstacle is the high combinatorial complexity of the scheduling problem. In a building that has an elevator bank with C cars, if N passengers must be assigned to these cars, there are C^N possible assignments, each of which results in a different AWT value for the passengers. Even for moderate passenger and car numbers, finding the optimal assignment by means of exhaustive enumeration of all assignments is computationally very difficult, with exponential complexity O(C^N).
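A quick back-of-the-envelope computation illustrates the O(C^N) growth of the assignment space (the car and passenger counts below are arbitrary examples, not taken from the disclosure):

```python
# Number of ways to assign N passengers to C cars: every passenger
# independently picks one of C cars, giving C**N candidate schedules.
def num_assignments(C, N):
    return C ** N

small = num_assignments(4, 5)    # 4 cars,  5 passengers
medium = num_assignments(6, 10)  # 6 cars, 10 passengers
large = num_assignments(8, 20)   # 8 cars, 20 passengers
print(small, medium, large)      # 1024 60466176 1152921504606846976
```

Even the middle case already rules out exhaustive enumeration within the sub-second response times an elevator controller requires.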
Such a solution is not feasible, given the fast reaction times required by the elevator control system. Multiple heuristic and approximate algorithms have been proposed to deal with this huge combinatorial complexity, but most of them have major shortcomings that result in substantial suboptimality, as described below. One of the earliest scheduling algorithms that is still used in a large number of deployed elevator installations is the nearest car algorithm, where each passenger is assigned to the nearest car that is approaching that passenger. This method is computationally very easy, and has computational complexity of only O(CN). However, because every passenger is assigned to a car without any consideration for the other passengers assigned to the same car, it completely ignores the delays that picking up some passengers would cause to the pick-up time and wait of other passengers. As a result, its AWT is very far from optimal. Moreover, it often results in bunching, where the elevator cars are distributed very unevenly around the building, and are poorly positioned to respond to new calls. Another class of scheduling methods operates in the so-called immediate assignment mode, where a new passenger is assigned to a car immediately after service is requested by the passenger, and this assignment is never reconsidered. However, the need to commit as early as possible to a particular assignment of passengers to cars (at the time of the initial call for service) deprives the scheduler of the possibility of revising the assignments if the situation changes before the assigned car reaches the passenger. That the situation will change is almost certain, and there are multiple reasons for that. The main reason is the arrival of future new passengers, whose arrival was not known at the time when the original assignment was made.
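A minimal sketch of the nearest-car rule described above (the floor numbers and the plain-distance metric are illustrative; deployed versions also weigh the car's direction of travel):

```python
# Hedged sketch of the nearest-car heuristic: each hall call is assigned to
# the closest car, with no regard for the delays this imposes on the other
# passengers already assigned to that car. This toy version uses plain floor
# distance only, ignoring travel direction.
def nearest_car(car_floors, hall_call_floors):
    assignment = {}
    for call, floor in hall_call_floors.items():
        assignment[call] = min(car_floors,
                               key=lambda car: abs(car_floors[car] - floor))
    return assignment

cars = {"A": 1, "B": 10}            # current floor of each car
calls = {"p1": 3, "p2": 9, "p3": 2}  # floor of each hall call
print(nearest_car(cars, calls))      # p1 and p3 go to car A, p2 to car B
```

The per-call independence is exactly what makes this rule O(CN) and also what makes it myopic: assigning p1 and p3 to the same car delays one of them, and the rule never accounts for that.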
When such new arrivals occur, it is often advantageous to reconsider the initial assignments, and sometimes change the entire schedule of the elevator bank. Another reason why the situation might change is that the execution of the current schedule often does not proceed as planned, for example when passengers hold the doors open longer than usual, or an entire group of passengers has initiated the service call and needs much more time to enter or exit the car. Constantly reconsidering all the assignments of all outstanding service calls to the available cars, called reassignment mode, usually results in much shorter AWT than when immediate assignment is used. As noted, optimally solving the scheduling problem in reassignment mode has exponential complexity, and exhaustive enumeration of all possible solutions is not feasible. A general-purpose method for eliminating many of the possible solutions in combinatorial optimization problems is the branch-and-bound method. Nikovski et al., U.S. Pat. No. 7,484,597, System and Method for Scheduling Elevator Cars Using Branch-and-Bound, describe how this method can be applied to the group elevator scheduling problem in re-assignment mode. Although this method can be much faster than full exhaustive enumeration, its worst-case complexity is still exponential in the number of cars and calls. Therefore, a need exists in the technical art for a combinatorial optimization method with favorable complexity (not exponential, but low-order polynomial) that outperforms known sub-optimal solutions, such as the nearest-car and immediate-assignment algorithms. The present disclosure relates to systems and methods for scheduling elevator cars that operate according to a continuous reassignment policy until the actual time of pick up. The embodiments of the present disclosure are based on controlling a movement of a plurality of elevator cars of the elevator system.
The elevator system accepts the plurality of hall calls requesting service of the plurality of elevator cars to different floors of a building. We realized through experimentation that we needed to solve the combinatorial optimization problem of assigning N passengers to C elevator cars (C^N possible assignments) in the shortest time (<1 s). However, we quickly learned that traditional computation methods incorporated an exhaustive search taking a very long time to compute, which made such solutions impractical in practice. For example, we explored the Branch-and-Bound and Mixed Integer Programming (MIP) methods, and learned that both are problematic because their worst-case complexity is exponential in the number of elevator cars and hall calls. What we realized through experimentation is that we needed a combinatorial optimization method with favorable complexity (not exponential, but low-order polynomial) that outperforms known sub-optimal solutions, such as the nearest-car and immediate-assignment algorithms. We realized further that if the total waiting time for passengers were a submodular function, we could obtain a fast and close-to-optimal solution to group elevator scheduling by employing a greedy optimization algorithm. From our experimentation, we realized that greedy optimization can produce a reasonable solution in a reasonable time if the optimized cost function has a specific structure, e.g., quadratic and submodular. In those cases, greedy optimization has guaranteed performance. In contrast, the cost function of the total waiting time for passengers is neither quadratic nor submodular. In other words, without our realization that the cost function can be given a quadratic, submodular structure, a greedy algorithm would not have been effective for optimizing the AWT, since the AWT is demonstrably a non-submodular function.
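Submodularity is the diminishing-returns property: for any sets S ⊆ T and any element e ∉ T, f(S ∪ {e}) − f(S) ≥ f(T ∪ {e}) − f(T). The generic check below is illustrative only (neither function is the disclosure's cost function); it confirms that a simple coverage function is submodular, while a quadratic set-size function is not:

```python
from itertools import chain, combinations

def is_submodular(f, ground):
    # Exhaustively verify diminishing returns over all S <= T and e not in T.
    subsets = list(chain.from_iterable(
        combinations(ground, r) for r in range(len(ground) + 1)))
    for S in subsets:
        for T in subsets:
            if set(S) <= set(T):
                for e in ground - set(T):
                    if f(set(S) | {e}) - f(set(S)) < f(set(T) | {e}) - f(set(T)):
                        return False
    return True

# Coverage: each element "covers" some targets; f counts covered targets.
covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
coverage = lambda S: len(set().union(*(covers[e] for e in S)) if S else set())
square = lambda S: len(S) ** 2  # grows supra-linearly: not submodular

print(is_submodular(coverage, {1, 2, 3}))  # True
print(is_submodular(square, {1, 2, 3}))    # False
```

The `square` function is a rough stand-in for the supra-linear growth of waiting-time penalties mentioned above; adding a passenger to a crowded car hurts more than adding one to an empty car, which is why the raw AWT objective fails this test.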
Specifically, conventional greedy optimization methods can produce very suboptimal results when applied to arbitrary objective functions. According to our realization, the cost function should be based on the total waiting time for passengers. According to embodiments of the present disclosure, the systems and methods are based on a greedy optimization algorithm of complexity O(CN^2), that is, linear in the number of cars C and quadratic in the number of passengers N. We discovered that the success of this greedy optimization algorithm depends critically on the property of submodularity of the objective function. In the current context of optimizing the average waiting time (AWT), this property is approximately equivalent to the property that when a group of passengers is picked up by the same elevator car, their cumulative waiting time is larger than the sum of their individual waiting times had they been picked up by multiple separate cars starting from the same location, one car per passenger. This property, unfortunately, is not strictly always true for waiting times of passengers in elevator banks, due to the intricate interplay between their positions in the pick-up schedule of the car. In order to ensure the submodular property of the objective function, the first step in at least one method is to construct an approximation of the cumulative AWT of a group of passengers that does possess the submodularity property. To this end, we use the sum of pairwise delays (SPD). Pairwise delay minimization converts the optimization problem from a general combinatorial optimization problem without any structure in the objective function into a Quadratic Binary Optimization (QBO) problem whose objective function has a very specific (quadratic) structure that can be leveraged computationally. Moreover, the objective function of the QBO problem is submodular, which allows the application of fast greedy optimization methods.
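To make the two-step construction concrete, the sketch below implements the SPD objective and the greedy assignment on synthetic waiting times and pairwise delays (all numbers are invented for illustration; real values would come from the motion simulator described later). For clarity the sketch recomputes the objective from scratch at every trial; computing marginal increments instead yields the O(CN^2) complexity cited above.

```python
# w1[i][c]: individual waiting time of hall call i if car c serves it alone.
# w2[(i, j)][c]: extra pairwise delay when calls i < j are both assigned to c.
def spd_cost(assign, w1, w2):
    # Sum-of-pairwise-delays (SPD) approximation of the cumulative waiting time.
    cost = sum(w1[i][c] for i, c in assign.items())
    for (i, j), delays in w2.items():
        if i in assign and j in assign and assign[i] == assign[j]:
            cost += delays[assign[i]]
    return cost

def greedy_assign(calls, cars, w1, w2):
    # At each step, commit the (call, car) pair that increases the SPD least.
    assign = {}
    while len(assign) < len(calls):
        best = None
        for i in calls:
            if i in assign:
                continue
            for c in cars:
                cost = spd_cost({**assign, i: c}, w1, w2)
                if best is None or cost < best[0]:
                    best = (cost, i, c)
        assign[best[1]] = best[2]
    return assign

calls, cars = [0, 1, 2], ["A", "B"]
w1 = {0: {"A": 5, "B": 20}, 1: {"A": 6, "B": 8}, 2: {"A": 25, "B": 4}}
w2 = {(0, 1): {"A": 10, "B": 3}, (0, 2): {"A": 2, "B": 9}, (1, 2): {"A": 7, "B": 6}}
plan = greedy_assign(calls, cars, w1, w2)
print(plan, spd_cost(plan, w1, w2))  # call 0 -> A, calls 1 and 2 -> B, SPD = 23
```

Each outer-loop iteration evaluates every unassigned call against every car, which is where the N passes of C·N trials each come from.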
We note that there is no reason to apply the greedy algorithm to minimize the pairwise approximation unless one knows that the pairwise approximation is a submodular function. In fact, without that knowledge of submodularity, which is not obvious, there is no reason to combine the greedy algorithm with our specific disclosed pairwise approximation. The second step of the algorithm is to optimize the SPD by greedily assigning passengers to cars in a manner that minimizes the SPD at every step. The algorithm starts with an empty set of assignments. At every step, a new passenger is added to the set of assignments, until all passengers are assigned. This results in exactly N steps, one for every passenger. During a given step, all remaining unassigned passengers are considered in turn, and are tentatively added to each car, again in turn. For every combination of a passenger and a car, we compute the SPD of all passengers assigned so far plus the new passenger being assigned at the current step. The passenger/car combination that increases the SPD the least is chosen, the passenger is assigned to this car and removed from the list of unassigned passengers, and the algorithm proceeds with the next step. According to an embodiment of the present disclosure, a system for controlling a movement of a plurality of elevator cars of an elevator system is provided. The system includes at least one input interface for accepting a plurality of hall calls requesting service of the plurality of elevator cars to different floors of a building. A processor in communication with the input interface is configured to determine, for each elevator car, an individual waiting time of accommodating each hall call, if the hall call is the only hall call assigned to the elevator car.
Determine, for each pair of hall calls assigned for each elevator car, a pairwise delay over the individual waiting time of each hall call in the pair caused by a joint assignment of the elevator car to accommodate the pair of the hall calls. Approximate a cumulative waiting time of an assignment of the plurality of elevator cars to accommodate the plurality of hall calls as a sum of individual waiting times for accommodating each hall call with the assigned elevator car and a sum of all pairwise delays determined between all pairs of hall calls assigned to the same elevator car. Determine the assignment of the plurality of elevator cars using a greedy optimization algorithm that greedily assigns the plurality of hall calls to the plurality of elevator cars to minimize the approximated cumulative waiting time. Finally, use a controller for controlling the movement of the plurality of elevator cars according to the assignment. According to another embodiment of the present disclosure, a method for scheduling elevator cars of an elevator system is provided. The method includes using at least one input interface for accepting a plurality of hall calls requesting the plurality of elevator cars to different floors of a building. Determining independently, using a processor in communication with the input interface, for each elevator car, an independent waiting time of accommodating each hall call, if the hall call is the only hall call assigned to the elevator car. Determining, for each pair of hall calls assigned for each elevator car, a pairwise delay over the individual waiting time of each hall call in the pair caused by a joint assignment of the elevator car to accommodate the pair of the hall calls.
Approximating a cumulative waiting time of an assignment of the plurality of elevator cars to accommodate the plurality of hall calls as a sum of individual waiting times for accommodating each hall call with the assigned elevator car, and a sum of all pairwise delays determined for the assigned elevator car between all pairs of hall calls assigned to the same elevator car. Determining the assignment of the plurality of elevator cars using a greedy optimization algorithm that greedily assigns the plurality of hall calls to the plurality of elevator cars to minimize the approximated cumulative waiting time. Finally, using a controller for controlling the movement of the plurality of elevator cars according to the assignment. According to another embodiment of the present disclosure, a non-transitory computer readable storage medium having embodied thereon a program executable by a computer for performing a method is provided. The method is for scheduling cars of an elevator system, the elevator system including a plurality of cars, and a plurality of hall calls. The method includes using at least one input interface for accepting a plurality of hall calls requesting the plurality of elevator cars to different floors of a building. Determining independently, using a processor in communication with the input interface, for each elevator car, an independent waiting time of accommodating each hall call, if the hall call is the only hall call assigned to the elevator car. Determining, for each pair of hall calls assigned for each elevator car, a pairwise delay over the individual waiting time of each hall call in the pair caused by a joint assignment of the elevator car to accommodate the pair of the hall calls.
Approximating a cumulative waiting time of an assignment of the plurality of elevator cars to accommodate the plurality of hall calls as a sum of individual waiting times for accommodating each hall call with the assigned elevator car, and a sum of all pairwise delays determined for the assigned elevator car between all pairs of hall calls assigned to the same elevator car. Determining the assignment of the plurality of elevator cars using a greedy optimization algorithm that greedily assigns the plurality of hall calls to the plurality of elevator cars to minimize the approximated cumulative waiting time. Finally, using a controller for controlling the movement of the plurality of elevator cars according to the assignment. The presently disclosed embodiments will be further explained with reference to the attached drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments. FIG. 1A is a block diagram illustrating a method for controlling a movement of a plurality of elevator cars of an elevator system, according to an embodiment of the present disclosure; FIG. 1B is a schematic illustrating the method of FIG. 1A, for scheduling hall calls from passengers regarding controlling movement of the plurality of elevator cars of an elevator system, according to embodiments of the present disclosure; FIG. 1C is a flow diagram of the method of FIG. 1A, for controlling a movement of a plurality of elevator cars of an elevator system, according to embodiments of the present disclosure; FIG. 1D is a schematic illustrating the computation of unary and pairwise terms of step 120 of FIG. 1C, according to embodiments of the present disclosure; FIG. 
2 shows the search tree of the greedy optimization algorithm for a reassignment problem with N=3 passengers (1, 2, and 3) and two cars (A and B), where each level of the tree corresponds to one assignment step of the algorithm, according to embodiments of the present disclosure; and FIG. 3 is a block diagram illustrating the method of FIG. 1A, which can be implemented using an alternate computer or processor, according to embodiments of the present disclosure. While the above-identified drawings set forth presently disclosed embodiments, other embodiments are also contemplated, as noted in the discussion. This disclosure presents illustrative embodiments by way of representation and not limitation. Numerous other modifications and embodiments can be devised by those skilled in the art which fall within the scope and spirit of the principles of the presently disclosed embodiments. The following description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims. Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements. Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed, but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function's termination can correspond to a return of the function to the calling function or the main function. Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine readable medium. A processor(s) may perform the necessary tasks. The present disclosure relates to systems and methods for scheduling elevator cars that operate according to a continuous reassignment policy until an actual time of pick up.
In particular, controlling a movement of a plurality of elevator cars of the elevator system, wherein the elevator system accepts the plurality of hall calls requesting service of the plurality of elevator cars to different floors of a building. We realized through experimentation that we needed to solve this combinatorial optimization problem of assigning N passengers to C elevator cars (C^N possible assignments) in the shortest time (<1 s). During our experimentation, we quickly learned that traditional computation methods did not work, because they incorporated an exhaustive search taking a very long time to compute, i.e. computationally long wait times that made such solutions impractical when put to use. For example, we explored the Branch-and-Bound and Mixed Integer Programming (MIP) methods, and learned that both are problematic because their worst-case complexity is exponential in the number of elevator cars and hall calls. What we realized is that we needed a combinatorial optimization method with favorable complexity (not exponential, but low-order polynomial) that outperforms known sub-optimal solutions such as the nearest-car and immediate-assignment algorithms, among other things. We further realized that greedy optimization can produce a reasonable solution in a reasonable time, if the optimized cost function has a specific structure, e.g., quadratic and submodular. In those cases, greedy optimization has guaranteed performance. However, the conventional cost function of the total waiting time for passengers is neither quadratic nor submodular. Thus, without realizing the optimized cost function structure (i.e. quadratic and submodular), the greedy algorithm is not going to be effective for optimizing the AWT, which is demonstrably a non-submodular function. In particular, conventional greedy optimization methods can produce very suboptimal results when operating on general objective functions.
According to the present disclosure, the cost function should be related to the total waiting time for passengers. The systems and methods of the present disclosure are based on a greedy optimization algorithm of complexity O(CN^2), that is, linear in the number of cars C and quadratic in the number of passengers N. We discovered that the success of this greedy optimization algorithm depends critically on the property of submodularity of the objective function. In the current context of optimizing the average waiting time (AWT), this property is approximately equivalent to the property that when a group of passengers is picked up by the same elevator car, their cumulative waiting time is larger than the sum of their individual waiting times had they been picked up by multiple separate cars starting from the same location, one car per passenger. This property, unfortunately, is not strictly always true for waiting times of passengers in elevator banks, due to the intricate interplay between their positions in the pick-up schedule of the car. In order to ensure the submodular property of the objective function, the first step in our method is to construct an approximation of the cumulative AWT of a group of passengers that does possess the submodularity property. To this end, we use the sum of pairwise delays (SPD). Using pairwise delay minimization converts the optimization problem from a general combinatorial optimization problem without any structure in the objective function into a Quadratic Binary Optimization (QBO) problem whose objective function has a very specific (quadratic) structure that can be leveraged computationally. Moreover, the objective function of the QBO problem is submodular, which allows the application of fast greedy optimization methods. We note that, without knowledge of its submodularity, there is no reason to apply the greedy algorithm to minimize the pairwise approximation, or even to consider such a combination.
In fact, without having the knowledge of submodularity, there is no reason to put together the greedy algorithm and our specific disclosed pairwise approximation. The second step of the algorithm is to optimize the SPD by greedily assigning passengers to cars in a manner that minimizes the SPD at every step. The algorithm starts with an empty set of assignments. At every step, a new passenger is added to the set of assignments, until all passengers are assigned. This results in exactly N steps, one for every passenger. During a given step, all remaining unassigned passengers are considered in turn, and are tentatively added to each car, again in turn. For every combination of a passenger and a car, we compute the SPD of all passengers assigned so far plus the new passenger being assigned at the current step. The passenger/car combination that increases the SPD the least is chosen, the passenger is assigned to this car and removed from the list of unassigned passengers, and the algorithm proceeds with the next step. Some embodiments of the present disclosure include using an input interface for accepting a plurality of hall calls requesting service of a plurality of elevator cars to different floors of a building. A processor is configured to determine, for each elevator car, an individual waiting time of accommodating each hall call, if the hall call is the only hall call assigned to the elevator car. Along with determining, for each pair of hall calls assigned for each elevator car, a pairwise delay over the individual waiting time of each hall call in the pair caused by a joint assignment of the elevator car to accommodate the pair of the hall calls.
Followed by approximating a cumulative waiting time of an assignment of the plurality of elevator cars to accommodate the plurality of hall calls, as a sum of individual waiting times for accommodating each hall call with the assigned elevator car, and a sum of all pairwise delays determined between all pairs of hall calls assigned to the same elevator car. Then, determine the assignment of the plurality of elevator cars using a greedy optimization algorithm that greedily assigns the plurality of hall calls to the plurality of elevator cars to minimize the approximated cumulative waiting time. Finally, use a controller for controlling the movement of the plurality of elevator cars according to the assignment. FIG. 1A is a block diagram of a method for controlling a movement of a plurality of elevator cars of an elevator system, according to an embodiment of the present disclosure. The method 100 includes step 110 of using at least one input interface for accepting a plurality of hall calls requesting the plurality of elevator cars to different floors of a building. Such an interface is located at every elevator landing in the building. It is contemplated that the plurality of hall calls may be accepted through a variety of interface types. The most typical user interface includes an up button and a down button, which are used by the passenger to request transportation in the respective direction. A more novel user interface, known as a destination panel, includes buttons for all possible destination floors. A combination between the two types of interfaces is also possible, for example a full destination panel at the lobby of the building and a simpler two-button interface at other floors. Step 115 of FIG. 1A includes determining independently, using a processor 112 in communication with the input interface, for each elevator car, an independent waiting time of accommodating each hall call, if the hall call is the only hall call assigned to the elevator car.
In practice, this determining is done by means of an internal simulator that simulates the path of the car if it has to pick up this passenger only. Note that any outstanding car calls placed by passengers already in the car must be included in the simulation. For example, if the car is at the 3^rd floor, and the hall call is at the 8^th floor, but there is an outstanding car call to the 5^th floor, the stop at the 5^th floor must be simulated, too. Step 120 of FIG. 1A includes determining, for each pair of hall calls assigned for each elevator car, a pairwise delay over the individual waiting time of each hall call in the pair caused by a joint assignment of the elevator car to accommodate the pair of the hall calls. In practice, this determining is done by means of an internal simulator that simulates the path of the car if it has to pick up these two passengers only, according to the preferred pick-up order for this elevator bank. The usual method of deciding the pick-up order is the group collective policy, where the car picks up all passengers in front of it in its current direction of motion, going in the same direction, then reverses direction to pick up all passengers going in the opposite direction, and finally reverses direction again to pick up passengers going in its original direction, but behind its starting floor. Other pick-up sequencing policies are also possible, and can be simulated similarly. After the simulation is complete, the increases in the waiting times of the two passengers are computed with respect to their individual waiting times determined in step 115. The sum of these increases is the pairwise delay between the two hall calls. Note that in all cases, only one of these increases is greater than zero, and the other one is always zero. Step 125 of FIG. 
1A includes approximating a cumulative waiting time of an assignment of the plurality of elevator cars to accommodate the plurality of hall calls as a sum of individual waiting times w[i]^c for accommodating each hall call i with its assigned elevator car c, and a sum of all pairwise delays w[ij]^c between all pairs of hall calls (i,j) assigned to the same elevator car c, according to the formula $Q(x)=\sum_{i=1}^{N}\sum_{c=1}^{C} w_i^c x_i^c+\sum_{i=1}^{N}\sum_{j=i+1}^{N}\sum_{c=1}^{C} w_{ij}^c x_i^c x_j^c,$ where the indicator variable x[i]^c=1 if hall call i is assigned to car c, and x[i]^c=0 otherwise. Step 130 of FIG. 1A includes determining the assignment of the plurality of elevator cars using a greedy optimization algorithm that greedily assigns the plurality of hall calls to the plurality of elevator cars to minimize the approximated cumulative waiting time. As noted above, the greedy optimization algorithm has a complexity O(CN^2), that is, linear in the number of cars C and quadratic in the number of passengers N. The success of this greedy optimization algorithm depends critically on the property of submodularity of the objective function. Further, when optimizing average waiting time (AWT), this property is approximately equivalent to the property that when a group of passengers is picked up by the same elevator car, their cumulative waiting time is larger than the sum of their individual waiting times, if they had been picked up by multiple separate cars starting from the same location, one car per passenger. Step 135 of FIG. 1A includes using a controller for controlling the movement of the plurality of elevator cars according to the assignment. Once the schedule has been determined, each car starts serving the hall calls assigned to it, according to the accepted sequence discussed above. FIG. 1B is a schematic illustrating the method of FIG. 
1A, for scheduling elevator cars 101-102 in a group elevator system 111 in a building having multiple floors 103, according to embodiments of the present disclosure. The controller keeps track of all requests for service, in the form of a table of hall calls 155. When a hall call is served by a car that picks up the passenger or passengers that initiated the call, that hall call is removed from the table 155. When a new hall call 150 is registered, it is added to the table 155. At regular intervals, or at specific events, such as new hall calls or starts/stops of cars at floors, the scheduler 160 is executed. It produces a schedule 170 in the form of a full assignment of all outstanding hall calls to cars. The controller 180 continuously executes the current schedule 170, according to the preferred service policy, for example the group collective service policy, also known as the selective collective principle, or some other policy (see Tanaka, Uraguchi, and Araki, Dynamic optimization of the operation of single-car elevator systems with destination hall call registration: Part I. Formulation and simulations, European Journal of Operational Research 167.2 (2005), pp. 550-573). FIG. 1C is a flow diagram of the method of FIG. 1A, for controlling a movement of a plurality of elevator cars of an elevator system, according to embodiments of the present disclosure. 
The controller collects information about hall calls 109 from the up/down button interfaces 112 located at each elevator landing, and car calls 109 from button panel interfaces 112 located in each car; determines (step 120) the independent waiting time of accommodating each hall call, and the pairwise delays between pairs of passengers; constructs (step 125) an approximate cumulative waiting time 126 in the form of a quadratic Boolean function; determines (step 130) the assignment of the plurality of elevator cars using a greedy optimization algorithm by executing N assignment steps, such that at each step the marginal increase in waiting time 131 among all passengers that are not assigned yet is minimized 132; and executes the current schedule according to the current assignment, until the next reassignment step. Finally, step 135 includes using a controller for controlling the movement of the plurality of elevator cars according to the assignment. FIG. 1D is a schematic illustrating the computation of unary and pairwise terms of step 120 of FIG. 1C, according to embodiments of the present disclosure. In FIG. 1D, the individual waiting times (unary terms) and mutual delays (pairwise terms) are computed for two passengers 141 and 142 to be picked up by the same car 140 (C1, currently at the 5^th floor, and moving up). In FIG. 1D-a, the car will reverse its direction and quickly pick up 143 the first passenger from the floor below. The resulting waiting time will be the unary term 131 for the first passenger 141. This time can be computed by means of forward simulation of what the car will do when picking up that passenger. In FIG. 1D-b, through a similar simulation, it will be determined that the car will continue its direction of motion and pick up 144 the second passenger 142. The resulting waiting time of that passenger 142 will be his unary term 132. Finally, in FIG. 
1D-c, it is determined that if the car 140 is to pick up both passengers 141 and 142, following the principle of group collective control, it will first pick up 144 the second passenger 142, as in FIG. 1D-b, and only then will it reverse direction to come down 145 and pick up the first passenger 141. As a result, the first passenger 141 will wait much longer for service than if he was picked up alone, as in FIG. 1D-a. This difference in the waiting time of passenger 141 between cases 1D-c and 1D-a is the delay the second passenger 142 would cause to passenger 141, and is equal to the pairwise delay term 133 between these two passengers. Note that for each pair of passengers, only one of them is delaying the other, but not vice versa; in FIG. 1D, passenger 142 is delaying passenger 141, but passenger 142 is not delayed by passenger 141, specifically for this car 140 in its current direction of motion. For a different car, the delay might be different. FIG. 2 shows the search tree of the greedy optimization algorithm for a reassignment problem with N=3 passengers (1, 2, and 3) and two cars (A and B), where each level of the tree corresponds to one assignment step of the algorithm. The root node of the tree 200 corresponds to the initial stage of the algorithm, when no assignments have been made yet. Six tentative assignments 210 are made of the three unassigned calls to the two available cars. For each of them, the marginal increase in waiting time 131 is computed and the minimum 132 is determined. In this example, this minimum 132 is achieved for the pair 220 of call 2 and car B (2B). Passenger 2 is assigned to car B, all other tree nodes at this level are ignored, and the children nodes of node 2B (220) are expanded at the next level, with the remaining four possible assignments 225 between the remaining unassigned passengers 1 and 3 and the two available cars A and B. 
The marginal increase in waiting time 131 is computed for each of these tentative assignments, keeping in mind that passenger 2 has already been assigned to car B, so any other passenger assigned to car B would either delay or be delayed by passenger 2, according to the respective pairwise term 133. At this level, the assignment 1B 230 is determined to increase the waiting time the least, so it is chosen among the four 225. At the next, final stage 235, the only remaining passenger 3 is assigned analogously 240 to car A, because it results in a lower marginal increase in waiting time than the alternative, 3B. The algorithm has two main stages: the first one is the computation of the approximation of AWT based on the sum of pairwise delays (SPD), and the second one is the greedy optimization algorithm for optimizing the SPD in N steps, where N is the number of passengers still waiting to be picked up at the time of reassignment. Stage 1: Quadratic Boolean Approximation During the first stage, two sets of coefficients w[i]^c and w[ij]^c are computed to construct a quadratic Boolean approximation of the cumulative waiting time of the entire set of passengers currently waiting for service at the time when reassignment is performed, in the form: $Q(x)=\sum_{i=1}^{N}\sum_{c=1}^{C} w_i^c x_i^c+\sum_{i=1}^{N}\sum_{j=i+1}^{N}\sum_{c=1}^{C} w_{ij}^c x_i^c x_j^c, \qquad(1)$ where x[i]^c is an indicator variable which takes on a value of 1 when passenger i is assigned to car c, and 0 otherwise. All N·C indicator variables can be collected in a decision vector x=[x[1]^1, x[2]^1, . . . , x[N]^1, x[1]^2, x[2]^2, . . . , x[N]^2, . . . , x[1]^C, x[2]^C, . . . , x[N]^C]. The procedure is detailed in U.S. Pat. No. 7,546,905, Nikovski et al., System and method for scheduling elevator cars using pairwise delay minimization, incorporated herein by reference in its entirety. The procedure is repeated below using a slightly different notation. 
Let H be the set of N passengers {h[1], h[2], . . . , h[N]} still waiting. A single passenger h[i] is described by the tuple (t[i], o[i], d[i]), where t[i] is the arrival time, o[i] is the arrival floor, and d[i] is the indicated direction of movement, or the desired destination floor, if known. A full assignment of the N passengers to the C cars in a bank would be a partition of H into C subsets H[c], such that H=H[1]∪H[2]∪ . . . ∪H[C], and H[i]∩H[j]=Ø if i≠j. Let also W[c](h|A), where h is a passenger and A is a set of passengers, denote the expected waiting time of passenger h if assigned to car c (as it is in its current position), and also all passengers in the set A are assigned to the same car c, too. Note that this waiting time reflects all constraints that already exist for car c, including stops requested by passengers who are already inside the car, and have indicated their destination floor by pressing one of the buttons on the destination panel inside the car. The expected waiting time W[c](h|A) can be computed relatively easily by performing a forward simulation of the path of car c until the time it picks up passenger h, while also stopping to unload passengers already in it, or picking up other passengers in the set A that need to be picked up before passenger h. Such a simulation supposes that a specific predetermined order of servicing hall and car calls will be followed by the schedule execution system of the elevator bank. The usual order adopted by most actual elevator systems, commonly called the group collective policy, is to service all car and hall calls in the current direction of motion of the car, then reverse its direction, and repeat the procedure in alternating directions indefinitely. However, in practice, any order can be followed, as long as it is known in advance, fixed, and can be simulated in software. 
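Such a forward simulation under the group collective policy can be sketched in a few lines of code. The following is a minimal illustrative model, not the disclosed simulator: it assumes one time unit per floor travelled, instantaneous stops, ignores the destinations of newly boarded passengers, and represents a hall call as a (floor, direction) pair with direction +1 for up and -1 for down.

```python
def simulate_waiting_time(car_floor, car_dir, car_calls, hall_calls, target):
    """Return the time until `target` is picked up under the group
    collective policy: serve all demand ahead in the current direction
    of motion, then reverse.

    car_floor  -- current floor of the car
    car_dir    -- +1 (moving up) or -1 (moving down)
    car_calls  -- floors requested by passengers already inside the car
    hall_calls -- iterable of (floor, direction) hall calls to serve,
                  which must include `target`
    target     -- the (floor, direction) hall call whose wait we measure

    Simplifications: one time unit per floor, stops take no time, and
    newly boarded passengers add no new car calls.
    """
    assert target in hall_calls
    time, floor, direction = 0, car_floor, car_dir
    cars_pending = set(car_calls)
    halls_pending = set(hall_calls)
    while True:
        # Serve any car call at this floor, and any hall call here
        # that matches the current direction of motion.
        cars_pending.discard(floor)
        if (floor, direction) in halls_pending:
            halls_pending.discard((floor, direction))
            if (floor, direction) == target:
                return time
        # Continue while there is demand strictly ahead; else reverse.
        ahead = any((f - floor) * direction > 0 for f in cars_pending)
        ahead = ahead or any((f - floor) * direction > 0
                             for f, _ in halls_pending)
        if ahead:
            floor += direction
            time += 1
        else:
            direction = -direction
```

With the car of FIG. 1D (floor 5, moving up) and up-going passengers at floors 4 and 7, this toy model reproduces the behavior described above: the lower passenger alone is picked up in 1 time unit, but waits 5 units when both passengers are served by the same car, while the upper passenger's wait is unchanged.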
Then, the coefficients in the quadratic Boolean approximation shown in Equation 1 can be computed as follows: w[i]^c=W[c](h[i]|{h[i]}) (2) w[ij]^c=[W[c](h[i]|{h[i], h[j]})−W[c](h[i]|{h[i]})]+[W[c](h[j]|{h[i], h[j]})−W[c](h[j]|{h[j]})]. (3) Per Equation 2, the linear coefficient w[i]^c is simply the expected waiting time of passenger h[i], if that passenger is picked up by car c, and no other passenger is picked up by that car. In order to compute the N·C linear coefficients w[i]^c, a total of N·C forward simulations must be performed, from the current position of each car to the floor of each passenger. These simulations are very simple and usually very fast. Per Equation 3, the bilinear coefficient w[ij]^c is equal to the mutual delay between passengers h[i] and h[j]. For a specific pick-up order of passengers by car c, only one of the two passengers is causing a delay to the other. (The passenger who will be picked up first causes a delay for the passenger who will be picked up second.) In order to compute this delay, we compute the two differences W[c](h[i]|{h[i], h[j]})−W[c](h[i]|{h[i]}) and W[c](h[j]|{h[i], h[j]})−W[c](h[j]|{h[j]}), only one of which is zero. In practice, the two values W[c](h[i]|{h[i], h[j]}) and W[c](h[j]|{h[i], h[j]}) can be computed by means of only one forward simulation, where car c picks up both passengers h[i] and h[j], and their respective waiting times are calculated. The other two values, W[c](h[i]|{h[i]}) and W[c](h[j]|{h[j]}), have been computed during the calculation of the linear coefficients, and could be stored during that step for reuse. In total, the computation of the N(N−1)C/2 bilinear coefficients requires an equal number of forward simulations, one for every car and every pair of passengers. These simulations are also relatively simple and very fast. 
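Given a waiting-time oracle W[c](h|A), Equations 2 and 3 translate directly into code. The sketch below is illustrative: `W` stands for whatever forward simulator the system provides (passed in here as a callable), and the dictionary-based storage is one possible layout, not the disclosed one.

```python
def pairwise_coefficients(cars, passengers, W):
    """Compute the linear terms w_i^c (Equation 2) and the bilinear
    terms w_ij^c (Equation 3) of the quadratic approximation.

    W(c, h, A) -- expected waiting time of passenger h when car c
                  serves h together with the passenger set A
                  (a forward simulation in practice).
    Returns (w1, w2): dictionaries keyed by (i, c) and (i, j, c), i < j.
    """
    w1, w2 = {}, {}
    for c in cars:
        # Equation 2: waiting time of h_i as the only call of car c.
        for i, hi in enumerate(passengers):
            w1[(i, c)] = W(c, hi, {hi})
        # Equation 3: mutual delay; only one of the two bracketed
        # differences is nonzero for a given pick-up order.
        for i, hi in enumerate(passengers):
            for j in range(i + 1, len(passengers)):
                hj = passengers[j]
                both = {hi, hj}
                w2[(i, j, c)] = ((W(c, hi, both) - w1[(i, c)])
                                 + (W(c, hj, both) - w1[(j, c)]))
    return w1, w2
```

For N passengers and C cars this performs the N·C unary and N(N−1)C/2 pairwise evaluations described above; as noted in the text, each pairwise entry could reuse a single two-passenger simulation and the stored unary values.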
Stage 2: Greedy Optimization of Approximated Waiting Time After the coefficients of the approximation Q(x) have been computed, the optimal assignment x* must be computed as x*=argmin[x]Q(x), subject to the constraints that the decision variables are Boolean (i.e., they assume values of only 0 or 1), and exactly one decision variable is equal to 1 for a given passenger (Σ[c=1]^C x[i]^c=1). Equation 1 can be recognized as a quadratic expression in the decision variables x[i]^c, which, along with the requirement for these variables to assume Boolean values, turns the minimization task into a Quadratic Boolean Optimization (QBO) problem. Even though the problem possesses a certain mathematical structure, it is well known that it is NP-complete, that is, finding its truly optimal solution will still have exponential complexity O(C^N) using any known optimization algorithm. However, the particular version of the problem we are considering has an additional property. Let us define the set function ƒ(S)=−Q(x), whose argument is one of the possible subsets of all assignments (h[i]→c) of passengers to cars (including incomplete assignments where not all passengers are assigned), such that x[i]^c=1 if (h[i]→c)∈S, and x[i]^c=0, otherwise. Then, it can be proven that the function ƒ(S) is submodular, that is, for two sets of assignments S[1]⊆S[2], and an element (assignment) s∉S[2], it is true that ƒ(S[1]∪{s})−ƒ(S[1])≥ƒ(S[2]∪{s})−ƒ(S[2]). This is true because the function ƒ is the negative of the function Q, and adding a passenger to a larger set of passengers already assigned to the same car would result in a larger increase in cumulative waiting time in comparison to the case when the same passenger is added to a smaller set of passengers. The latter is true because more mutual delays exist in a larger group of passengers than in a smaller one. Because the function ƒ is the negative of the function Q, maximizing ƒ is equivalent to minimizing Q, which is our goal. 
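The submodularity of ƒ(S)=−Q(x) can be checked numerically on a small instance. The coefficients below are hypothetical, chosen only for the check; it enumerates every nested pair S1⊆S2 and verifies the diminishing-returns inequality, which holds here because all pairwise delays are nonnegative.

```python
from itertools import combinations

# Toy instance: three passengers (0, 1, 2) and one car 'A'.
# All coefficients are hypothetical, for illustration only.
w1 = {(0, 'A'): 2, (1, 'A'): 3, (2, 'A'): 4}           # unary terms w_i^c
w2 = {(0, 'A', 1): 0}                                   # placeholder, replaced below
w2 = {(0, 1, 'A'): 1, (0, 2, 'A'): 2, (1, 2, 'A'): 1}   # pairwise delays w_ij^c

def Q(S):
    """Approximated cumulative waiting time (Equation 1) of a partial
    assignment S, given as a set of (passenger, car) pairs."""
    total = sum(w1[(i, c)] for i, c in S)
    for (i, c), (j, c2) in combinations(sorted(S), 2):
        if c == c2:
            total += w2.get((i, j, c), 0)
    return total

def f(S):
    # Negated objective: maximizing f is equivalent to minimizing Q.
    return -Q(S)

ground = {(0, 'A'), (1, 'A'), (2, 'A')}
subsets = [set(s) for r in range(4) for s in combinations(sorted(ground), r)]

# Diminishing returns: f(S1 ∪ {s}) − f(S1) >= f(S2 ∪ {s}) − f(S2)
# for every S1 ⊆ S2 and every assignment s outside S2.
submodular = all(
    f(S1 | {s}) - f(S1) >= f(S2 | {s}) - f(S2)
    for S1 in subsets for S2 in subsets if S1 <= S2
    for s in ground - S2)
print(submodular)  # True, since all pairwise delays are nonnegative
```

The marginal gain of adding an assignment s to a set S is the negated unary term minus the delays against everyone already on the same car, so a larger S can only make the gain smaller, exactly as argued in the text.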
But, when a function is submodular, a type of greedy optimization algorithm can be applied to maximize it, with provable performance guarantees, Nemhauser, G. L.; Wolsey, L. A.; and Fisher, M. L. 1978. An analysis of the approximations for maximizing submodular set functions, Mathematical Programming, pp. 265-294. That is, the algorithm has a very favorable computational complexity (O(N^2C)), while the minimal value it returns is guaranteed to be within double the true minimum. (In practice, the suboptimality is often much smaller.) The algorithm has exactly N steps, and at every step, one passenger is assigned to a car. The algorithm starts with an empty set of assignments. At every step n, n=1, . . . , N, all remaining N−n+1 unassigned passengers in the set H[n]^− are tentatively assigned to one of the C cars. If passenger h[i] is being tentatively assigned to car c, the increase ΔQ(i,c) of the total waiting time of all passengers, including the new passenger h[i], can be defined and computed as $\Delta Q(i,c)=Q(\ldots,x_i^c=1,\ldots)-Q(\ldots,x_i^c=0,\ldots)=w_i^c+\sum_{j=i+1}^{N} w_{ij}^c x_j^c+\sum_{j=1}^{i-1} w_{ji}^c x_j^c,$ that is, we tentatively set the value of x[i]^c to 1, and compute the increase in Q(x), while keeping all other assignments as they currently are. This increase will include the time w[i]^c it would take the car c to pick up passenger h[i] if it had no other passengers to pick up, plus the marginal increase of waiting times for all passengers already assigned to the same car, caused by the new passenger, or the marginal delay those passengers would cause to the new passenger, if only they and the new passenger are being transported by the same car. Clearly, the waiting times of passengers assigned to other cars would not be affected by this tentative assignment. Then, the algorithm selects the assignment with the smallest increase in cumulative waiting time: (i, c)=argmin[i,c] ΔQ(i,c). 
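The greedy loop itself is short. A sketch, assuming the coefficients w_i^c and w_{ij}^c (stored for i < j) have already been computed; the function name and data layout are illustrative, not the disclosed implementation:

```python
def greedy_assign(n_passengers, cars, w1, w2):
    """Greedy minimization of the quadratic approximation Q(x).

    w1[(i, c)]    -- individual waiting time of passenger i on car c
    w2[(i, j, c)] -- pairwise delay of passengers i < j on car c

    Runs N steps; each step commits the (passenger, car) pair with the
    smallest marginal increase ΔQ(i, c). Complexity O(C·N²).
    """
    assigned = {}                       # passenger -> car
    unassigned = set(range(n_passengers))
    while unassigned:
        best = None
        for i in unassigned:
            for c in cars:
                # ΔQ(i, c): unary term plus pairwise delays against
                # every passenger already assigned to the same car.
                delta = w1[(i, c)]
                for j, cj in assigned.items():
                    if cj == c:
                        delta += w2.get((min(i, j), max(i, j), c), 0)
                if best is None or delta < best[0]:
                    best = (delta, i, c)
        _, i, c = best
        assigned[i] = c                 # greedy: never revised later
        unassigned.discard(i)
    return assigned
```

With suitably chosen hypothetical coefficients for three passengers and two cars, the loop reproduces the assignment pattern of FIG. 2: the cheapest single pickup is committed first, then each later step pays the pairwise delays against passengers already on the chosen car.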
After the last, N-th step of the algorithm, a full assignment of passengers to cars has been constructed. FIG. 2 shows the search tree of this algorithm, for the case when 3 passengers, 1, 2, and 3, need to be assigned to 2 cars, A and B. The algorithm starts with an empty set of assignments 200. During the first step, all three passengers are yet to be assigned. The six possible assignments 1A, 1B, 2A, 2B, 3A, and 3B are tentatively tested. Since no passengers have been assigned previously, the increases in marginal waiting times are simply the respective times it would take for the car to pick up the passenger alone. It turns out that passenger 2 will be picked up by car B the fastest, so this is the assignment 220 chosen at this step. Note that this is a greedy assignment, and will never be revised later. During the second step, the two remaining passengers, 1 and 3, are tentatively assigned to the two cars A and B. If they are assigned to car B, which has already been determined to transport passenger 2, the mutual delay between the new passenger and passenger 2 needs to be added to the increase in cumulative waiting time, too. In this example, it is concluded 230 that it is actually less expensive to assign passenger 1 to car B, too, even if the mutual delay between passengers 1 and 2 must be added, rather than assign this passenger to car A, or assign passenger 3 to either car. This could be because passengers 2 and 1 are unusually close to car B. And, in the last step, the remaining passenger 3 is assigned 240 to car A, because it would result in lower waiting time than if the passenger was assigned to car B. This assignment is quite logical, because by this stage, car B is already scheduled to pick up passengers 1 and 2, and if it were to also pick up passenger 3, the entire system would have to incur the mutual delay between each pair of passengers, while car A would be idle; clearly, this is not likely to be the optimal solution. 
Note that the order of assignment of passengers to cars is not fixed, and that is the main difference of the proposed algorithm with respect to the immediate assignment method, where the order of assignment is fixed and identical to the chronological order of arrival of passengers. The computational cost of the proposed method is slightly higher: at each one of the N steps, on the order of N remaining passengers are tentatively assigned to the C cars, for a computational complexity of O(N^2C). This is one polynomial degree higher than the complexity of the immediate assignment method (O(NC)), but still very much within the computational power of most modern microcontrollers. FIG. 3 is a block diagram illustrating the method of FIG. 1A, that can be implemented using an alternate computer or processor, according to embodiments of the present disclosure. The computer 311 includes a processor 340, computer readable memory 312, storage 358 and user interface 349 with display 352 and keyboard 351, which are connected through bus 356. For example, the user interface 349 in communication with the processor 340 and the computer readable memory 312, acquires and stores the data (i.e., data relating to controlling movement of the elevator cars or elevator systems, elevator system operational historical data, elevator system optimization related data related to assigning hall calls to elevator cars of a similar elevator system), in the computer readable memory 312 upon receiving an input from a surface, such as a keyboard surface, of the user interface 357 by a user. Contemplated is that the memory 312 can store instructions that are executable by the processor, historical data, and any data that can be utilized by the methods and systems of the present disclosure. The processor 340 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. 
The processor 340 can be connected through a bus 356 to one or more input and output devices. The memory 312 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems. Still referring to FIG. 3, a storage device 358 can be adapted to store supplementary data and/or software modules used by the processor. For example, the storage device 358 can store historical data and other related data such as manuals for the devices of the elevator system or similar types of elevator systems, wherein the devices can include sensing devices capable of obtaining data as mentioned above regarding the present disclosure. Additionally, or alternatively, the storage device 358 can store historical data similar to the data. The storage device 358 can include a hard drive, an optical drive, a thumb-drive, an array of drives, or any combinations thereof. The system can be linked through the bus 356 optionally to a display interface (not shown) adapted to connect the system to a display device (not shown), wherein the display device can include a computer monitor, camera, television, projector, or mobile device, among others. The computer 311 can include a power source 354; depending upon the application, the power source 354 may optionally be located outside of the computer 311. Linked through bus 356 can be a user input interface 357 adapted to connect to a display device 348, wherein the display device 348 can include a computer monitor, camera, television, projector, or mobile device, among others. A printer interface 359 can also be connected through bus 356 and adapted to connect to a printing device 332, wherein the printing device 332 can include a liquid inkjet printer, solid ink printer, large-scale commercial printer, thermal printer, UV printer, or dye-sublimation printer, among others. 
A network interface controller (NIC) 334 is adapted to connect through the bus 356 to a network 336, wherein measuring data or other data, among other things, can be rendered on a third party display device, third party imaging device, and/or third party printing device outside of the computer. Still referring to FIG. 3, the data or other data, among other things, can be transmitted over a communication channel of the network 336, and/or stored within the storage system 358 for storage and/or further processing. Further, the measuring data or other data may be received wirelessly or hard wired from a receiver 346 (or external receiver 338) or transmitted via a transmitter 347 (or external transmitter 339) wirelessly or hard wired; the receiver 346 and transmitter 347 are both connected through the bus 356. The computer 311 may be connected via an input interface 308 to external sensing devices 344 and external input/output devices 341. The computer 311 may be connected to other external computers 342. An output interface 309 may be used to output the processed data from the processor 340. According to aspects of the present disclosure, the greedy optimization algorithm is an algorithmic paradigm that determines at an initial step, an assignment of an unassigned first hall call, based on a locally optimal choice determined at a time of the initial step, then proceeds to the next step or the next successive unassigned hall call. 
According to aspects of the present disclosure, the locally optimal choice identifies at each step, a combination of an unassigned hall call from all the remaining unassigned hall calls and an elevator car from the plurality of elevator cars, that results in a least increase in a waiting time for all assigned hall calls including the first hall call, while considering all previous assigned hall calls, then the combination is accepted and the hall call is assigned without further consideration and removed from all the remaining unassigned hall calls. According to aspects of the present disclosure, the greedy optimization algorithm starts with an empty set of assignments of hall calls needing to be assigned, such that at every step, including an initial step which starts with a first hall call, a hall call is added to the set of assignments of hall calls needing to be assigned, so as to result in a total of N steps, until all unassigned hall calls are assigned. Wherein each step in the total of N steps includes a hall call for every passenger to be moved between floors of the building, such that during each step, an unassigned hall call from a passenger is considered sequentially in time, and is initially added to an elevator car of the plurality of elevator cars successively. Wherein for every combination of the hall call and the elevator car, the greedy optimization algorithm computes a cumulative waiting time of all hall calls assigned at that moment in time, plus the first hall call assigned at the initial step. Wherein the hall call and the elevator car combination having a least increase in the cumulative waiting time for all assigned hall calls including the first hall call of the initial step, the combination is accepted, and the hall call is assigned and removed from all the remaining unassigned hall calls, then continues to the next step or the next successive unassigned hall call. 
According to aspects of the present disclosure, wherein the greedy optimization algorithm is based on optimizing the approximated cumulative waiting time in N steps, where N is a number of unassigned hall calls from passengers waiting to be assigned at a time at each step, such that two sets of coefficients are computed to construct a quadratic Boolean approximation of the cumulative waiting time of all the plurality of hall calls from passengers currently waiting to be assigned at the time of that step. Wherein the first set of coefficients of the two sets of coefficients includes w[i]^c, such that w[i]^c is a linear coefficient that is an expected waiting time of a hall call from a passenger h[i], if the hall call is picked up by an elevator car c, and no other hall call is picked up by that elevator car. According to aspects of the present disclosure, wherein the second set of coefficients of the two sets of coefficients includes w[ij]^c, such that w[ij]^c is a bilinear coefficient that is equal to the pairwise delay between hall calls from passengers h[i] and h[j], based on a specific pick-up order of the two hall calls by the elevator car c, such that only one of the two hall calls is causing a delay to the other, and the hall call that is picked-up first causes a delay for the hall call that is picked-up second. According to aspects of the present disclosure, wherein the cumulative waiting time is determined according to $Q(x)=\sum_{i=1}^{N}\sum_{c=1}^{C} w_i^c x_i^c+\sum_{i=1}^{N}\sum_{j=i+1}^{N}\sum_{c=1}^{C} w_{ij}^c x_i^c x_j^c,$ where x[i]^c is an indicator variable which takes on a value of 1 when the hall call from the passenger i is assigned to the elevator car c, and 0 otherwise, and that all N·C indicator variables are collected in a decision vector x=[x[1]^1, x[2]^1, . . . , x[N]^1, x[1]^2, x[2]^2, . . . , x[N]^2, . . . , x[1]^C, x[2]^C, . . . , x[N]^C]. 
According to aspects of the present disclosure, wherein the greedy optimization algorithm has a complexity O(CN^2), that is, linear in a number of elevator cars C and quadratic in a number of hall calls by passengers N, and relies on a property of submodularity of an objective function. Wherein for each step, all remaining unassigned hall calls by passengers are considered sequentially in time, and are tentatively added to each elevator car successively, and for every combination of a hall call and an elevator car, the SPD of all hall calls assigned so far, plus the new hall call assigned at the current step, is then calculated; for the combination of the hall call and the elevator car that increases the SPD the least, the hall call is assigned to that elevator car, removed from all the remaining unassigned hall calls, and the algorithm then continues to the next step. According to aspects of the present disclosure, historical elevator system data originates from a user, and is stored in a memory in communication with the processor. The historical elevator system data can be data relating to elevator systems including similar elevator systems of the present disclosure, as well as instructional data. According to aspects of the present disclosure, further comprising: initiating implementation of the method by accepting the hall call data received by the input interface via a user input provided on a surface of at least one user input interface in communication with the processor and received by the processor. According to aspects of the present disclosure, further comprising: using a user input provided on a surface of at least one user input interface both in communication with the processor and received by the processor via the input interface for controlling movement of the plurality of elevators based upon an abnormal event. 
The abnormal event can include an event that disrupts operation of the plurality of elevators causing an unsafe environment to cargo and passengers on the elevators. The above-described embodiments of the present disclosure can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. Use of ordinal terms such as “first,” “second,” in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term). Although the present disclosure has been described with reference to certain preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the present disclosure. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the present disclosure. 1. 
A system for controlling a movement of a plurality of elevator cars of an elevator system, comprising: at least one input interface for accepting a plurality of hall calls requesting service of the plurality of elevator cars to different floors of a building; a processor in communication with the input interface is configured to determine, for each elevator car, an individual waiting time of accommodating each hall call, if the hall call is the only hall call assigned to the elevator car; determine, for each pair of hall calls assigned to each elevator car, a pairwise delay over the individual waiting time of each hall call in the pair caused by a joint assignment of the elevator car to accommodate the pair of the hall calls; approximate a cumulative waiting time of an assignment of the plurality of elevator cars to accommodate the plurality of hall calls as a sum of individual waiting times for accommodating each hall call with the assigned elevator car, and a sum of all pairwise delays determined between all pairs of hall calls assigned to the same elevator car; and determine the assignment of the plurality of elevator cars using a greedy optimization algorithm that greedily assigns the plurality of hall calls to the plurality of elevator cars to minimize the approximated cumulative waiting time; and a controller for controlling the movement of the plurality of elevator cars according to the assignment. 2. The system of claim 1, wherein the greedy optimization algorithm is an algorithmic paradigm that determines at an initial step, an assignment of an unassigned first hall call, based on a locally optimal choice determined at a time of the initial step, then proceeds to the next step or the next successive unassigned hall call. 3. 
The system of claim 2, wherein the locally optimal choice identifies at each step, a combination of an unassigned hall call from all the remaining unassigned hall calls and an elevator car from the plurality of elevator cars, that results in a least increase in a waiting time for all assigned hall calls including the first hall call, while considering all previous assigned hall calls, then the combination is accepted and the hall call is assigned without further consideration and removed from all the remaining unassigned hall calls. 4. The system of claim 1, wherein the greedy optimization algorithm starts with an empty set of assignments of hall calls needing to be assigned, such that at every step including an initial step which starts with a first hall call, is added to the set of assignments of hall calls needing to be assigned, so as to result in a total of N steps, until all unassigned hall calls are assigned. 5. The system of claim 4, wherein each step in the total of N steps includes a hall call for every passenger to be moved between floors of the building, such that during each step, an unassigned hall call from a passenger is considered sequentially in time, and is initially added to an elevator car of the plurality of elevator cars successively. 6. The system of claim 5, wherein for every combination of the hall call and the elevator car, the greedy optimization algorithm computes a cumulative waiting time of all hall calls assigned at that moment in time, plus the first hall call assigned at the initial step. 7. The system of claim 6, wherein the hall call and the elevator car combination having a least increase in the cumulative waiting time for all assigned hall calls including the first hall call of the initial step, the combination is accepted, and the hall call is assigned and removed from all the remaining unassigned hall calls, then continues to the next step or the next successive unassigned hall call. 8. 
The system of claim 1, wherein the greedy optimization algorithm is based on optimizing the approximated cumulative waiting time in N steps, where N is a number of unassigned hall calls from passengers waiting to be assigned at a time at each step, such that two sets of coefficients are computed to construct a quadratic Boolean approximation of the cumulative waiting time of all the plurality of hall calls from passengers currently waiting to be assigned at the time of that step. 9. The system of claim 8, wherein the first set of coefficients of the two sets of coefficients includes wic, such that wic is a linear coefficient that is an expected waiting time of a hall call from a passenger hi, if the hall call is picked by an elevator car c, and no other hall call is picked up by that elevator car. 10. The system of claim 1, wherein the second set of coefficients of the two sets of coefficients includes wijc, such that wijc is a bilinear coefficient that is equal to the pairwise delay between hall calls from passengers hi and hj, based on a specific pick-up order of the two hall calls by the elevator car c, such that only one of the two hall calls is causing a delay to the other hall call, and the hall call that is picked-up first causes a delay for the hall call that is picked-up second. 11. 
A method for scheduling elevator cars of an elevator system, comprising: using at least one input interface for accepting a plurality of hall calls requesting the plurality of elevator cars to different floors of a building; determining independently, using a processor in communication with the input interface, for each elevator car, an independent waiting time of accommodating each hall call, if the hall call is the only hall call assigned to the elevator car; determining, for each pair of hall calls assigned for each elevator car, a pairwise delay over the individual waiting time of each hall call in the pair caused by a joint assignment of the elevator car to accommodate the pair of the hall calls; approximating a cumulative waiting time of an assignment of the plurality of elevator cars to accommodate the plurality of hall calls as a sum of individual waiting times for accommodating each hall call with the assigned elevator car and a sum of all pairwise delays determined for the assigned elevator car between all pairs of hall calls assigned to the same elevator car; determining the assignment of the plurality of elevator cars using a greedy optimization algorithm that greedily assigns the plurality of hall calls to the plurality of elevator cars to minimize the approximated cumulative waiting time; and using a controller for controlling the movement of the plurality of elevator cars according to the assignment. 12. The method of claim 11, wherein the greedy optimization algorithm is an algorithmic paradigm that determines at an initial step, an assignment of an unassigned first hall call, based on a locally optimal choice determined at a time of the initial step, then proceeds to the next step or the next successive unassigned hall call. 13. 
The method of claim 11, wherein the cumulative waiting time is determined according to $Q(x) \triangleq \sum_{i=1}^{N} \sum_{c=1}^{C} w_i^c x_i^c + \sum_{i=1}^{N} \sum_{j=i+1}^{N} \sum_{c=1}^{C} w_{ij}^c x_i^c x_j^c$, where $x_i^c$ is an indicator variable which takes on a value of 1 when the hall call from the passenger $i$ is assigned to the elevator car $c$, and 0 otherwise, and all $N \cdot C$ indicator variables are collected in a decision vector $x = [x_1^1, x_2^1, \ldots, x_N^1, x_1^2, x_2^2, \ldots, x_N^2, \ldots, x_1^C, x_2^C, \ldots, x_N^C]$. 14. The method of claim 11, wherein the greedy optimization algorithm has complexity O(CN^2), that is, linear in a number of elevator cars C and quadratic in a number of hall calls by passengers N, and leverages a property of submodularity of an objective function. 15. The method of claim 14, wherein for each step, all remaining unassigned hall calls by passengers are considered sequentially in time, and are initially added to each elevator car successively, and for every combination of a hall call and an elevator car, the SPD of all hall calls assigned, plus the new hall call assigned at the initial step, is then calculated, and for the combination of the hall call and the elevator car that increases the SPD least, the hall call is assigned to that elevator car and removed from all the remaining unassigned hall calls, and the method then continues to the next step. 16.
A non-transitory computer readable storage medium embodied thereon a program executable by a computer for performing a method, the method for scheduling elevator cars of an elevator system, the elevator system including a plurality of elevator cars, and a plurality of hall calls, comprising: using at least one input interface for accepting a plurality of hall calls requesting the plurality of elevator cars to different floors of a building; determining independently, using a processor in communication with the input interface, for each elevator car, an independent waiting time of accommodating each hall call, if the hall call is the only hall call assigned to the elevator car; determining, for each pair of hall calls assigned for each elevator car, a pairwise delay over the individual waiting time of each hall call in the pair caused by a joint assignment of the elevator car to accommodate the pair of the hall calls; approximating a cumulative waiting time of an assignment of the plurality of elevator cars to accommodate the plurality of hall calls as a sum of individual waiting times for accommodating each hall call with the assigned elevator car and a sum of all pairwise delays determined for the assigned elevator car between all pairs of hall calls assigned to the same elevator car; determining the assignment of the plurality of elevator cars using a greedy optimization algorithm that greedily assigns the plurality of hall calls to the plurality of elevator cars to minimize the approximated cumulative waiting time; and using a controller for controlling the movement of the plurality of elevator cars according to the assignment. 17.
The method of claim 16, wherein the greedy optimization algorithm starts with an empty set of assignments of hall calls needing to be assigned, such that at every step including an initial step which starts with a first hall call, is added to the set of assignments of hall calls needing to be assigned, so as to result in a total of N steps, until all unassigned hall calls are assigned. 18. The method of claim 17, wherein each step in the total of N steps includes a hall call for every passenger to be moved between floors of the building, such that during each step, an unassigned hall call from a passenger is considered sequentially in time, and is initially added to an elevator car of the plurality of elevator cars successively. 19. The method of claim 18, wherein for every combination of the hall call and the elevator car, the greedy optimization algorithm computes a cumulative waiting time of all hall calls assigned at that moment in time, plus the first hall call assigned at the initial step. 20. The method of claim 19, wherein the hall call and the elevator car combination having a least increase in the cumulative waiting time for all assigned hall calls including the first hall call of the initial step, the combination is accepted, and the hall call is assigned and removed from all the remaining unassigned hall calls, then continues to the next step or the next successive unassigned hall call. Referenced Cited U.S. Patent Documents 7484597 February 3, 2009 Nikovski et al. 7546905 June 16, 2009 Nikovski et al. 
9834405 December 5, 2017 Nikovski
20030221915 December 4, 2003 Brand
20070221454 September 27, 2007 Nikovski
20070221455 September 27, 2007 Nikovski
20120197868 August 2, 2012 Fauser
20120267201 October 25, 2012 Brand
20160130112 May 12, 2016 Nikovski
20160289042 October 6, 2016 Fang
20160289043 October 6, 2016 Fang
20170369275 December 28, 2017 Saraswat
Foreign Patent Documents
WO 2014198302 December 2014 WO
Patent History
Patent number: 10118796
Filed: Mar 3, 2017
Date of Patent: Nov 6, 2018
Patent Publication Number: 20180251335
Assignee: Mitsubishi Electric Research Laboratories, Inc. (Cambridge, MA)
Inventors: Daniel Nikolaev Nikovski (Brookline, MA), Arvind U Raghunathan (Brookline, MA), Srikumar Ramalingam (Salt Lake City, UT)
Primary Examiner: David Warren
Application Number: 15/448,644
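For readers who want a concrete picture of the claimed scheme, the quadratic objective of claim 13 and the greedy procedure of claims 14-15 can be sketched roughly as follows. This is an illustrative sketch only, not the patented implementation; the coefficient arrays `w1` (individual waiting times) and `w2` (pairwise delays), and all values used with them, are hypothetical.

```python
# Illustrative sketch of the greedy hall-call assignment of claims 13-15.
# Not the patented implementation; all coefficient values are hypothetical.
# w1[i][c]   : expected waiting time of hall call i if car c serves it alone.
# w2[i][j][c]: pairwise delay when calls i and j (i < j) share car c.

def greedy_assign(w1, w2):
    n_calls, n_cars = len(w1), len(w1[0])
    assigned = {}                         # hall call index -> elevator car index
    per_car = [[] for _ in range(n_cars)]

    def marginal_cost(i, c):
        # Increase in the quadratic objective Q(x) from adding call i to car c:
        # its individual cost plus pairwise delays with calls already on car c.
        cost = w1[i][c]
        for j in per_car[c]:
            a, b = (i, j) if i < j else (j, i)
            cost += w2[a][b][c]
        return cost

    for _ in range(n_calls):              # one assignment per step, N steps total
        best = None
        for i in range(n_calls):
            if i in assigned:
                continue
            for c in range(n_cars):
                cost = marginal_cost(i, c)
                if best is None or cost < best[0]:
                    best = (cost, i, c)   # least increase in waiting time wins
        _, i, c = best
        assigned[i] = c
        per_car[c].append(i)
    return assigned
```

`greedy_assign` returns a dict mapping each hall call index to the car it was assigned to; calls with large mutual delays tend to be spread across different cars.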
What Is The Inverse Of The Function F(X) = 2x + 1? In mathematics, the inverse of a function is the opposite of the original function. It is the function that, when applied to the output of the original function, results in the original input. To understand the inverse of a function, it is important to understand the concept of functions. Understanding the Inverse Function A function is a mathematical expression that takes an input and produces an output. The input is the independent variable, and the output is the dependent variable. It is usually written as a formula, where the output is written in terms of the input. The inverse of a function is the opposite of the original function. It is the function that, when applied to the output of the original function, results in the original input. In other words, the inverse of a function is the function that undoes what the original function did. For example, if the original function is f(x) = 2x + 1, then the inverse of the function is the function that, when applied to the output of the original function, results in the input of the original function. Calculating the Inverse of F(x) = 2x + 1. The inverse of the function f(x) = 2x + 1 can be calculated by writing y = 2x + 1 and solving for x. To do this, we first need to isolate x on one side of the equation by subtracting 1 from both sides. This results in the equation 2x = y - 1. We then divide both sides of the equation by 2 to solve for x. This results in the equation x = (y - 1)/2. Therefore, the inverse of the function f(x) = 2x + 1 is f⁻¹(x) = (x - 1)/2. Inverse functions are an important concept in mathematics. Understanding inverse functions allows us to understand how different mathematical operations can be undone. In this article, we looked at the inverse of the function f(x) = 2x + 1 and how to calculate it. The inverse of a function f(x) is an important concept in mathematics and is used in many fields from algebra to calculus.
In this article, we will look at the inverse of the function f(x) = 2x + 1. A function is a mathematical relationship between two sets of numbers, such as x and y. For example, the function f(x) = 2x + 1 would take an input value x and produce a result y which is calculated by plugging x into the function equation. The inverse of this function takes the output value y and calculates the corresponding input value x. The inverse of f(x) = 2x + 1 can be found by extracting the x from the equation by rearranging the terms and solving for x. This will give us the equation x = (y – 1) / 2. This equation can then be used to calculate the inverse for any given y value. To illustrate this concept, let’s say we wanted to find the inverse of the function f(x) = 2x + 1 where the output value is y = 5. Substituting y = 5 into the inverse equation x = (y – 1) / 2 would give us x = 2. This means that the input value corresponding to the output value of 5 is 2. The inverse of a function is an important concept in mathematics and has applications in many fields. The inverse of f(x) = 2x + 1 can be found using the equation x = (y – 1) / 2. This equation can be used to calculate the input value corresponding to any given output value.
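The algebra above can be checked numerically. In this short sketch (the function names are ours, not the article's), `f` is the original function and `f_inverse` applies the derived formula x = (y - 1) / 2:

```python
def f(x):
    # The original function: f(x) = 2x + 1.
    return 2 * x + 1

def f_inverse(y):
    # Its inverse, obtained by solving y = 2x + 1 for x.
    return (y - 1) / 2
```

Applying `f_inverse` to 5 gives 2, matching the worked example above, and composing the two functions in either order returns the starting value.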
Independent Samples T-test tutorial Sample Data: San Francisco Airport Customer Satisfaction data (2018) 1. When do we use the Independent Samples T-test? When we want to test whether there is a significant difference in the mean value of the target variable between two groups, we use the Independent Samples T-test. In this case, the numeric variable has one column and the group variable has one column which has two values. If there is no relationship between the two groups, they can be considered independent. In this case, use the Independent Samples T-test. For example, if you want to compare a newly developed drug with an existing drug, you can say that the two groups are independent because they have nothing to do with each other. 2. Find the “Statistics” section under the black banner at the top and click “Independent Samples T-test”. 3. Choose either your own file or a sample to run the Independent Samples T-test. Let’s use the San Francisco Airport Satisfaction Data in 2018 (SFO 2018). 4. If you are done selecting your data file, press the “Select” button. 5. Here we have three steps to run the Independent Samples T-test successfully. First, select your Test Variable. Here, you have to choose the variable that you want to compare. For the “SFO 2018” example, you can choose “satisfaction”. 6. Then, choose the Grouping Variable that distinguishes the two groups that you want to compare. For the “SFO 2018” example, you can select “gender” since it distinguishes between male and female. 7. Click the “Run” button. 8. Check the result. As you can see, since the p-value is greater than 0.1, we fail to reject the null hypothesis, which says there is no significant difference between males and females in the means of overall satisfaction. Therefore, we can interpret that there is no significant difference between males and females in the means of satisfaction at the 90% confidence level. 9. Check the Means plot.
Here, you can also easily check that there is no significant difference between the means of overall satisfaction for males and females.
Requirements of the Independent Samples T-test
There are a total of three assumptions to be considered to run an independent samples t-test.
• Assumption 1: The data should all be continuous figures with equal intervals. Both interval-scale and ratio-scale data are possible.
• Assumption 2: The two groups must be independent of each other. Independent means that the members or components that make up the two groups to compare should not be related to each other.
• Assumption 3: The figures in the data shall have normality.
1. Null and alternative hypotheses
The null hypothesis for an independent samples t-test is: H0: the population means of the two groups are the same (i.e. µ1 = µ2)
And the alternative hypothesis for an independent samples t-test is: HA: the population means of the two groups are not the same (i.e. µ1 ≠ µ2)
2. Calculating p-value & Interpretation
By running the Number Analytics Independent Samples t-test, it will calculate a p-value, which is the probability of your sample group means being at least as different as they are, if the population means were equal. If the probability is sufficiently small (usually p < .05), you can conclude that it is unlikely that the two group means are equal in the population, and you can accept the alternative hypothesis and reject the null hypothesis. Alternatively, you will reject the alternative hypothesis and fail to reject the null hypothesis if the probability is larger (usually p > .05). Note that you cannot accept the null hypothesis. Remember that it is not the sample means, but the population means that the hypothesis tests are referring to.
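The decision rule above can also be illustrated in code. The sketch below computes the pooled-variance independent-samples t statistic by hand using only the standard library; the sample values used with it are made up and are not taken from the SFO survey.

```python
import math

def t_statistic(a, b):
    # Pooled-variance independent-samples t statistic.
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    # Sample variances (divisor n - 1).
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    # Pooled variance combines both groups, weighted by degrees of freedom.
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))
```

The statistic is then compared against the t distribution with na + nb − 2 degrees of freedom to obtain the p-value; libraries such as SciPy (`scipy.stats.ttest_ind`) perform the whole test in one call.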
The Elements of Drawing, John Ruskin’s teaching collection at Oxford R. 4: Study of catenary curves. A. Macdonald. And now we must go back to our outlines. I drew the sides of the shields in figures 6 and 7 by my eye only; yet I know that they obey some certain law, else they would not be beautiful; but I do not know the law, nor is it necessary that I should. We must be able to draw rightly at pleasure; and to obey by instinct laws unknown to us, else we are no draughtsmen. But we must begin by recognising that such laws exist. So we will examine definitely, the aspect of the curve which we shall have to draw by instinct most frequently, the catenary. Draw the semicircle A. B. C. with its diameter, at least the size of half a dinner-plate, (Fig. 10, R. 4,) and take any fine metal chain, a common steel one will do, small in the link; and adjust it over the semicircle as at a. b. c., so that you may measure off a piece of it of the same length as the semicircle: (to draw your semicircle with the edge of a large bowl, and stretch the chain all round the edge, and then take half that length, is a short rough way.) Then set your drawing-board as upright as it will stand; pin your paper on it so that the diameter A. B., may be quite level; and pin your semicircle length of chain at its ends, to the points A. and B., so that it may hang down between them. It will hang in the line A. B. D. Trace a pencil line delicately beside it, not disturbing the chain, draw over the pencil with your brush; and you have the first simple relation of the catenary to the circle. Next, with the length of the diameter A. B. in Fig. 10, for a radius, draw the semicircle C. D. Figure 11, R. 4, and here in margin divide the whole semicircle into six equal parts by the points E. F., &c. and then fasten one end of your semicircle of chain to the point A. and the other successively to E. F. B. G. H., and draw all the curves it falls into.
These will show you the kinds of curve which a rope of given length would fall into from the yard of a vessel, sloped at different angles: of course you might have an infinite number of curves by taking different points in the circle, but these five are enough at present. Lastly,—From the two points A. B. in Fig. 10, hang first the semicircle-length of chain, then three-quarters round the circle of chain, and then the whole circle’s length of chain; and on each of these lengths of chain, fasten in the middle any very light pendant: two or three glass beads in a bunch will do, enough to stretch it a little, yet not to pull it nearly straight; and you will get three curves as in R. 5, which will give you a general idea of the look of the catenary under tension. Now, I believe that the curves by which I have limited the shields in Figs. 8 and 9, are the halves of catenaries under very slight tension, but I am not sure; all I know is, that they are good curves obeying some subtle law. Now, my first object in the course of exercises, which I shall request you to go through, will be to make your eye sensitive to the character of subtle curves of this kind, and to enable your hand to trace them with easy precision. In the engraving of the woolly rush, R. 226, you may not at first perceive that the curves are subtle at all. But the difference between this entirely well-done piece of work and a vulgar botanical drawing, depends primarily on the draughtsman’s fine sense of truth in curvature: and when you see the outline alone, R. 276, you will probably recognize, even now, the value of this quality; but it would be vain for you to attempt to follow lines of this degree of refinement at first; and the exercises through which I shall lead you up to them will not, I hope, be uninteresting. 
The simplest elements of curvilinear design are, of course, to be found in good writing, and in the modes of ornamentation derived from it, and you cannot possibly learn to draw good curves more quickly than by attentively copying a few pieces of illuminator’s penmanship.
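As a side note for present-day readers, in modern notation (which Ruskin of course does not use) the hanging-chain curve he describes is the catenary y = a·cosh(x/a). The sketch below recovers the parameter a for a chain of given length hung between two points at the same height, using the standard arc-length relation L = 2a·sinh(h/a) for half-span h; the bracketing interval is an arbitrary assumption that covers ordinary cases.

```python
import math

def catenary_parameter(span, length, iterations=100):
    # Solve 2*a*sinh(h/a) = length for a by bisection, where h = span/2.
    # Assumes the chain is longer than the span, and not so extremely long
    # that the root falls outside the bracket chosen below.
    h = span / 2
    lo, hi = span / 100, span * 1e6   # arc length decreases as a grows
    for _ in range(iterations):
        a = (lo + hi) / 2
        if 2 * a * math.sinh(h / a) > length:
            lo = a   # chain hangs too deep for this a: flatten by increasing a
        else:
            hi = a
    return (lo + hi) / 2
```

For a span of 2 units and a chain of length 2·sinh(1) ≈ 2.35 units, the solver returns a ≈ 1, i.e. the curve y = cosh(x).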
Mathematics in the Primary School (Subjects in the Primary School Series) The Author Richard R.Skemp is internationally recognized as a leading authority in mathematical education, and has lectured on this subject at 45 universities in seventeen countries, and at many conferences and in-service courses for teachers. Himself a former teacher, he has worked closely with both primary and secondary teachers while developing a theory of intelligent learning which relates closely to classroom needs. His previous books in this field include The Psychology of Learning Mathematics (1971), Intelligence, Learning, and Action (1979), and Structured Activities for Primary Mathematics (1989). Subjects in the Primary School Series editor: Professor John Eggleston English in the Primary School Tricia Evans Geography in the Primary School John Bale Science in the Primary School Yvonne Garson MATHEMATICS IN THE PRIMARY SCHOOL RICHARD R.SKEMP Emeritus Professor University of Warwick First published in 1989 by Routledge 11 New Fetter Lane, London EC4P 4EE This edition published in the Taylor & Francis e-Library, 2002. RoutledgeFalmer is an imprint of the Taylor & Francis Group © 1989 Richard R.Skemp All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers. British Library Cataloguing in Publication Data Skemp, Richard R. (Richard Rowland), 1919– Mathematics in the primary school.— (Subjects in the primary school). 1. Primary schools. Curriculum subjects: Mathematics. Teaching I. Title II.
Series 372.7′ 3044 ISBN 0-203-40389-4 Master e-book ISBN ISBN 0-203-71213-7 (Adobe eReader Format) ISBN 0-415-02519-2 (Print Edition) Prologue: Relational understanding and instrumental understanding. Faux amis. Devil’s advocate. The case for relational mathematics. A theoretical formulation. PART A 1 Why is mathematics still a problem subject for so many? Evidence from surveys: a vote of no confidence. The need for a new perspective. Mathematics as a mental tool, and amplifier of human intelligence. School mathematics and mathematics in the adult world: a false contrast. Summary. Suggested activities for readers. Intelligence and understanding Habit learning vs intelligent learning. Goal-directed action. Cognitive maps, knowledge structures, mental models, and schemas. Intelligent learning. What is understanding, and why does it matter? Habit learning and teacher-dependence vs understanding and confidence. What are theories, and why do we need them? Summary. Suggested activities for readers. The formation of mathematical concepts The special demands of mathematics. The abstract nature of mathematics. The necessity of conceptual learning. Abstraction and the process of concept formation. Successive abstraction: primary and secondary concepts. Everyday concepts and mathematical concepts. Two ways of communicating concepts. Conceptual analysis as a prerequisite for teaching mathematics: concept maps. Summary. Suggested activities for readers. 4 The construction of mathematical knowledge Schema construction: the three modes of building and testing. Mode 1: the importance of structured practical activities. Mode 2: the value of cooperative learning. Mode 3: creativity in the learning of mathematics. Schemas and long-term learning. Implications for teaching. Schemas and the enjoyment of learning. Summary. Suggested activities for readers.
Different ways of understanding the same symbol. Symbolic understanding. A resonance model. How can we help? A revised formulation. Summary. Suggested activities for readers. PART B 6 Making a start 109 How to use teaching as a learning experience for ourselves, as well as for our children. Observe and listen, reflect, discuss. Some observations which have revealed children’s thinking. Fourteen activities for classroom use: Missing stairs; Stepping stones; Capture; The handkerchief game; Number targets; ‘My share is…’; Slippery Slope; Taking; Doubles and halves rummy; Make a set: make others which match; Sets under our hands; The rectangular numbers game; Alias prime; How are these related? Summary. The contents and structure of primary mathematics 157 Knowledge, plans, and skills. Activities for Contents developing skills. Cards on the table. The place of written work. Process and content. Can ‘learning how’ lead to ‘understanding why’? Calculators and computers. Some criteria for a curriculum. The place of projects and investigations. Summary. Suggested activities for readers. 8 Management for intelligent learning 178 A teacher’s dual authority: authority of position, and authority of knowledge. Obedience and cooperation: their different effects on the quality of learning. The resulting problem, for children and teachers. The authority of the subject. Management of the learning situation. Management for ‘motivation’. Summary. Suggested activities for readers. Emotional influences on learning Emotions and survival. Pleasure and unpleasure, fear and relief. Emotions in relation to competence: confidence and frustration, security and anxiety. Learning as a frontier activity. Managing children’s risks in learning. Mixed emotions: confidence as a learner. The pleasure of exploration. Summary. Suggested activities for readers. Continuing professional development 207 Learning while teaching. The need for professional companionship. Some rewards of intelligent teaching. 
Summary. Notes and references Most of the contents of this book have been presented orally over a number of years at in-service courses and conferences for teachers in England and Wales, Germany, Canada, and the United States. The discussions which took place on these occasions have been of great value to me in developing the relations between theory and classroom practice, and I am most grateful to all the teachers, advisers, and lecturers who took part in these; and also to those who initiated, planned, and organised the arrangements which brought us together and helped our work go smoothly. The classroom activities in Chapters 6 and 7 are taken from a much larger collection developed in the Primary Mathematics Project. This was funded jointly over a period of seven years by the Nuffield Foundation and the Leverhulme Trust, and I am most grateful for their support. The prologue ‘Relational understanding and instrumental understanding’ first appeared as an article in the journal Mathematics Teaching. Much of Chapter 9, as well as a smaller amount elsewhere in the present book, first appeared in The Psychology of Learning Mathematics, Expanded American Edition, published by Lawrence Erlbaum Associates, New Jersey. My cordial thanks to the editors concerned for agreeing to the inclusion of this material. Prologue: Relational understanding and instrumental understanding This prologue first appeared in 1976, as an article in the journal Mathematics Teaching.1 It has since been reproduced three times, and read and discussed in three continents. During this time the terms ‘relational understanding’ and ‘instrumental understanding’ have become part of the language of mathematics education. It is hoped that present readers too will find these ideas a helpful starting point for their own thinking. Throughout the rest of the book, ‘understanding’ by itself means relational understanding.
It will be seen that relational understanding corresponds to intelligent learning, and instrumental understanding to habit learning.

Faux amis

Faux amis is a term used by the French to describe words which are the same, or very alike, in two languages, but whose meanings are different. For example:

French word — Meaning in English
histoire — story, not history
librairie — bookshop, not library
chef — head of an organization, not only chief cook
agrément — pleasure or amusement, not agreement
docteur — doctor (higher degree), not medical practitioner
médecin — medical practitioner, not medicine
parent — relations in general, including parents

One gets faux amis between English as spoken in different parts of the world. An Englishman asking in America for a biscuit would be given what we call a scone. To get what we call a biscuit, he would have to ask for a cookie. And between English as used in mathematics and in everyday life there are such faux amis as field, group, ring, ideal. A person who is unaware that the word he is using is a faux ami can make inconvenient mistakes. We expect history to be true, but not a story. We take books without paying from a library, but not from a bookshop; and so on. But in the foregoing examples there are cues which might put one on guard: difference of language, or of country or of context. If, however, the word is used in the same language, country and context, with two meanings whose difference is as basic as the difference between (say) the meanings of ‘history’ and ‘histoire’ (which is a difference between fact and fiction) one may expect serious confusion. Two such words can be identified in the context of mathematics; and it is the alternative meaning attached to each of these words, by a large following, which in my belief is at the root of many of the difficulties in mathematics education today. One of the words is ‘understanding’.
It was brought to my attention some years ago by Stieg Mellin-Olsen, of Bergen University, that there are in current use two meanings of this word. These he distinguishes by calling them ‘relational understanding’ and ‘instrumental understanding’. By the former is meant what I, and probably most readers of this article, have always meant by understanding: knowing both what to do and why. Instrumental understanding I would until recently not have regarded as understanding at all. It is what I have in the past described as ‘rules without reasons’, without realizing that for many pupils and their teachers the possession of such a rule, and ability to use it, is what they mean by ‘understanding’. Suppose that a teacher reminds a class that the area of a rectangle is given by A=L×B. A pupil who has been away says he does not understand, so the teacher gives him an explanation along these lines: ‘The formula tells you that to get the area of a rectangle, you multiply the length by the breadth.’ ‘Oh, I see,’ says the child, and gets on with the exercise. If we were now to say to him (in effect), ‘You may think you understand, but you don’t really,’ he would not agree. ‘Of course I do. Look, I’ve got all these answers right.’ Nor would he be pleased at our devaluing of his achievement. And within his meaning of the word, he does understand. We can all think of examples of this kind: ‘borrowing’ in subtraction, ‘turn it upside down and multiply’ for division by a fraction, ‘take it over to the other side and change the sign’ are obvious ones; but once the concept has been formed, other examples of instrumental explanations can be identified in abundance in many widely used texts. Here are two from a text used by a former direct-grant grammar school, now independent, with a high academic standard. Multiplication of fractions.
To multiply a fraction by a fraction, multiply the two numerators together to make the numerator of the product, and the two denominators to make its denominator. The multiplication sign × is generally used instead of the word ‘of’. Circles. The circumference of a circle (that is its perimeter, or the length of its boundary) is found by measurement to be a little more than three times the length of its diameter. In any circle the circumference is approximately 3.1416 times the diameter, which is roughly 3 1/7 times the diameter. Neither of these figures is exact, as the exact number cannot be expressed either as a fraction or a decimal. The number is represented by the Greek letter π (pi). Circumference = πd or 2πr; Area = πr². The reader is urged to try for himself this exercise of looking for and identifying examples of instrumental explanations, both in texts and in the classroom. This will have three benefits: (i) for persons like the writer, and most readers of this article, it may provide evidence of what otherwise they would not realize: how widespread is the instrumental approach; (ii) it will help, by repeated examples, to consolidate the two contrasting concepts; (iii) it is a good preparation for trying to formulate the difference in general terms. If it is accepted that these two categories are both well filled, by those pupils and teachers whose goals are respectively relational and instrumental understanding (by the pupil), two questions arise. First, does this matter? And second, is one kind of understanding better than the other? For years I have taken for granted the answers to both these questions: briefly, ‘Yes; relational.’ But the existence of a large body of experienced teachers and of a large number of texts belonging to the opposite camp has forced me to think more about why I hold this view. In the process of changing the judgement from an intuitive to a reflective one, I think I have learnt something useful.
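As a contrast to instrumental explanations of this kind, here is a relational reading of one of the rules mentioned earlier, ‘turn it upside down and multiply’. The working is my own illustration, not part of the quoted text:

```latex
% x answers (a/b) divided by (c/d) exactly when x times (c/d) gives a/b.
\[
x \times \frac{c}{d} = \frac{a}{b}
\quad\Longrightarrow\quad
x \times \frac{c}{d} \times \frac{d}{c} = \frac{a}{b} \times \frac{d}{c}
\quad\Longrightarrow\quad
x = \frac{a}{b} \times \frac{d}{c}.
\]
% The inverted fraction d/c appears because multiplying by it undoes
% multiplication by c/d: the rule is a consequence, not an arbitrary recipe.
```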
The two questions are not entirely separate, but in the present section I shall concentrate as far as possible on the first: does it matter? The problem here is that of a mis-match, which arises automatically in any faux amis situation, and does not depend on whether A’s or B’s meaning ‘is the right one’. Let us imagine, if we can, that school A sends a team to play school B at a game called ‘football’, but that neither team knows that there are two kinds (called ‘association’ and ‘rugby’). School A plays soccer and has never heard of rugger, and vice versa for B. Each team will rapidly decide that the others are crazy, or a lot of foul players. Team A in particular will think that team B players use a mis-shapen ball, and commit one foul after another. Unless the two sides stop and talk about what game they think they are playing at, the game will break up in disorder and the two teams will never want to meet again. Though it may be hard to imagine such a situation arising on the football field, this is not a far-fetched analogy for what goes on in many mathematics lessons, even now. There is this important difference, that one side at least cannot refuse to play. The encounter is compulsory, on five days a week, for about thirty-six weeks a year, over ten years or more of a child’s life. Leaving aside for the moment whether one kind of understanding is better than the other, there are two kinds of mathematical mis-match which can occur.

1 Pupils whose goal is to understand instrumentally, taught by a teacher who wants them to understand relationally.
2 The other way about.

The first of these will cause fewer problems short-term to the pupils, though it will be frustrating for the teacher. The pupils just ‘won’t want to know’ all the careful groundwork he gives in preparation for whatever is to be learnt next, nor his careful explanations. All they want is some kind of rule for getting the answer.
As soon as this is reached, they latch on to it and ignore the rest. If the teacher asks a question that does not quite fit the rule, of course they will get it wrong. For the following example I have to thank Mr Peter Burney, at that time a student at Coventry College of Education on teaching practice. While teaching area he became suspicious that the children did not really understand what they were doing. So he asked them: ‘What is the area of a field 20 cms by 15 yards?’ The reply was: ‘300 square centimetres.’ He asked: ‘Why not 300 square yards?’ Answer: ‘Because area is always in square centimetres.’ To prevent errors like the above the pupils need another rule (or, of course, relational understanding), that both dimensions must be in the same unit. This anticipates one of the arguments which I shall use against instrumental understanding, that it usually involves a multiplicity of rules rather than fewer principles of more general application. There is of course always the chance that a few of the pupils will catch on to what the teacher is trying to do. If only for the sake of these, the effort should be maintained. But by many, probably a majority, attempts to convince them that being able to use the rule is not enough will not be well received. ‘Well is the enemy of better’, and if pupils can get the right answer by the kind of thinking they are used to, they will not take kindly to suggestions that they should try for something beyond this. The other mis-match, in which pupils are trying to understand relationally but the teaching makes this impossible, can be a more damaging one. An instance which stays in my memory is that of a neighbour’s child, then seven years old. He was a very bright little boy, with an I.Q. of 140. At the age of five he could read The Times, but at seven he regularly cried over his mathematics homework.
His misfortune was that he was trying to understand relationally teaching which could not be understood in this way. My evidence for this belief is that when I taught him relationally myself, with the help of Unifix, he caught on quickly and with real pleasure. A less obvious mis-match is that which may occur between teacher and text. Suppose that we have a teacher whose conception of understanding is instrumental, who for one reason or other is using a text which aims for relational understanding by the pupil. It will take more than this to change his teaching style. I was in a school which was using my own text,2 and noticed (they were at Chapter 1 of Book 1) that some of the pupils were writing answers like: ‘the set of {flowers}’ When I mentioned this to the teacher (he was head of mathematics) he asked the class to pay attention to him and said: ‘Some of you are not writing your answers properly. Look at the example in the book, at the beginning of the exercise, and be sure you write your answers like that.’ Much of what is being taught under the description of ‘modern mathematics’ is being taught and learnt just as instrumentally as were the syllabi which have been replaced. This is predictable from the difficulty we normally experience in accommodating (restructuring) our existing schemas.3 To the extent that this is so, the innovations have probably done more harm than good, by introducing a mis-match between the teacher and the aims implicit in the new content. For the purpose of introducing ideas such as sets, mappings and variables is the help which, rightly used, they can give to relational understanding. If pupils are still being taught instrumentally, then a ‘traditional’ syllabus will probably benefit them more. 
They will at least acquire proficiency in a number of mathematical techniques which will be of use to them in other subjects, and the lack of which has recently been the subject of complaints by teachers of science, employers and others. Near the beginning of this chapter I said that two major faux amis could be identified in the context of mathematics. The second one is even more troublesome; it is the word ‘mathematics’ itself. For we are not talking about better and worse teaching of the same kind of mathematics. It is easy to think this, just as our imaginary soccer players who do not know that their opponents are playing a different game might think that the other side pick up the ball and run with it because they cannot kick properly, especially with such a misshapen ball—in which case they might kindly offer them a better ball and some lessons in dribbling. It has taken me some time to realize that this is not the case. I used to think that maths teachers were all teaching the same subject, some doing it better than others. I now believe that there are two effectively different subjects being taught under the same name, ‘mathematics’. If this is true, then this difference matters beyond any of the differences in syllabi which are so widely debated. So I would like to try to emphasize the point with the help of another analogy. Imagine that two groups of children are taught music as a pencil-and-paper subject. They are all shown the five-line stave, with the curly ‘treble’ sign at the beginning; and taught that marks on the lines are called E, G, B, D, F. Marks between the lines are called F, A, C, E. They learn that a mark with an open oval is called a minim, and is worth two marks with blacked-in ovals which are called crotchets, or four with blacked-in ovals and a tail which are called quavers, and so on—musical multiplication tables if you like.
For one group of children, all their learning is of this kind and nothing beyond. If they have a music lesson a day, five days a week in school terms, and are told that it is important, these children could in time probably learn to write out the marks for simple melodies such as those for ‘God Save the Queen’ and ‘Auld Lang Syne’, and to solve simple problems such as ‘What time is this in?’ and ‘What key?’, and even ‘Transpose this melody from C major to G major.’ They would find it boring, and the rules to be memorized would be so numerous that problems like ‘Write a simple accompaniment for this melody’ would be too difficult for most. They would give up the subject as soon as possible, and remember it with dislike. The other group is taught to associate certain sounds with the marks on paper. For the first few years these are audible sounds, which they make themselves on simple instruments. After a time they can still imagine the sounds whenever they see or write the marks on paper. Associated with every linear sequence of marks is a melody, and with every vertical set a harmony. The keys C major and G major have an audible relationship, and a similar relationship can be found between certain other pairs of keys. And so on. Much less memory work is involved, and what has to be remembered is largely in the form of related wholes (such as melodies) which their minds easily retain. Exercises such as were mentioned earlier (‘Write a simple accompaniment’) would be within the ability of most. These children would also find their learning intrinsically pleasurable, and many would continue it voluntarily, even after O-level or C.S.E. For the present purpose I have invented two non-existent kinds of ‘music lesson’, both pencil-and-paper exercises (in the second case, after the first year or two).
But the difference between these imaginary activities is no greater than that between two activities which actually go under the name of mathematics. (We can make the analogy closer, if we imagine that the first group of children were initially taught sounds for the notes in a rather half-hearted way, but that the associations were too ill-formed and unorganized to last.) The above analogy is, clearly, heavily biased in favour of relational mathematics. This reflects my own viewpoint. To call it a viewpoint, however, implies that I no longer regard it as a self-evident truth which requires no justification: which it can hardly be if many experienced teachers continue to teach instrumental mathematics. The next step is to try to argue the merits of both points of view as clearly and fairly as possible; and especially of the point of view opposite to one’s own. This is why the next section is called ‘Devil’s advocate’.

Devil’s advocate

Given that so many teachers teach instrumental mathematics, might this be because it does have certain advantages? I have been able to think of three advantages (as distinct from situational reasons for teaching this way, which will be discussed later).

1 Within its own context, instrumental mathematics is usually easier to understand; sometimes much easier. Some topics, such as multiplying two negative numbers together, or dividing by a fractional number, are difficult to understand relationally. ‘Minus times minus equals plus’, and ‘To divide by a fraction you turn it upside down and multiply’ are easily remembered rules. If what is wanted is a page of right answers, instrumental mathematics can provide this more quickly and easily.

2 So the rewards are more immediate, and more apparent.
It is nice to get a page of right answers, and we must not under-rate the importance of the feeling of success which pupils get from this.4 Recently I visited a school where some of the children described themselves as ‘thickos’. Their teachers used the term too. These children need success to restore their self-confidence, and it can be argued that they can achieve this more quickly and easily in instrumental mathematics than in relational.

3 Just because less knowledge is involved, one can often get the right answer more quickly and reliably by instrumental thinking than relational. This difference is so marked that even relational mathematicians often use instrumental thinking.5 This is a point of much theoretical interest, which I hope to discuss more fully on a future occasion.

The above may well not do full justice to instrumental mathematics. I shall be glad to know of any further advantages which it may have.

The case for relational mathematics

There are four advantages (at least) in relational mathematics.

1 It is more adaptable to new tasks. Recently I was trying to help a boy who had learnt to multiply two decimal fractions together by dropping the decimal point, multiplying as for whole numbers, and re-inserting the decimal point to give the same total number of digits after the decimal point as there were before. This is a handy method if you know why it works. Through no fault of his own, this child did not; and, not unreasonably, applied it also to division of decimals. By this method 4.8 ÷ 0.6 came to 0.08. The same pupil had also learnt that if you know two angles of a triangle, you can find the third by adding the two given angles together and subtracting from 180°. He got ten questions right this way (his teacher believed in plenty of practice), and went on to use the same method for finding the exterior angles. So he got the next five answers wrong.
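The boy’s extrapolation can be checked relationally. The following working, my own illustration rather than part of the original account, shows why the digit-counting rule holds for multiplication of decimals but fails for division:

```latex
% Multiplication: the powers of ten multiply, so digit counts add.
\[
0.3 \times 0.02 \;=\; \frac{3}{10} \times \frac{2}{100} \;=\; \frac{6}{1000} \;=\; 0.006
\qquad (1 + 2 = 3 \text{ digits after the point, as the rule predicts}).
\]
% Division: the powers of ten divide, so digit counts subtract rather than add.
\[
4.8 \div 0.6 \;=\; \frac{48}{10} \div \frac{6}{10} \;=\; \frac{48}{6} \;=\; 8,
\quad \text{not } 0.08.
\]
```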
I do not think he was being stupid in either of these cases. He was simply extrapolating from what he already knew. But relational understanding, by knowing not only what method worked but why, would have enabled him to relate the method to new problems. Instrumental mathematics necessitates memorizing which problems a method works for and which not, and also learning a different method for each new class of problems. So the first advantage of relational mathematics leads to:

2 It is easier to remember. There is a seeming paradox here, in that it is certainly harder to learn. It is certainly easier for pupils to learn that ‘area of a triangle = ½ base × height’ than to learn why this is so. But they then have to learn separate rules for triangles, rectangles, parallelograms, trapeziums; whereas relational understanding consists partly in seeing all of these in relation to the area of a rectangle. It is still desirable to know the separate rules; one does not want to have to derive them afresh every time.6 But knowing also how they are inter-related enables one to remember them as parts of a connected whole, which is easier. There is more to learn—the connections as well as the separate rules—but the result, once learnt, is more lasting. So there is less re-learning to do, and long-term the time taken may well be less altogether. Teaching for relational understanding may also involve more actual content. Earlier, an instrumental explanation was quoted leading to the statement ‘Circumference = πd’. For relational understanding of this, the idea of a proportion would have to be taught first (among others), and this would make it a much longer job than simply teaching the rules as given. But proportionality has such a wide range of other applications that it is worth teaching on these grounds also. In relational mathematics this happens rather often.
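The connected whole described here can be set out briefly. The summary below is my own sketch of how the separate area rules hang together once each is seen in relation to the rectangle:

```latex
\[
\begin{aligned}
\text{Rectangle:}      \quad & A = b \times h \\
\text{Parallelogram:}  \quad & A = b \times h
  && \text{(shearing a rectangle leaves its area unchanged)} \\
\text{Triangle:}       \quad & A = \tfrac{1}{2}\, b h
  && \text{(two congruent triangles make a parallelogram)} \\
\text{Trapezium:}      \quad & A = \tfrac{1}{2}(a + b)\, h
  && \text{(two congruent trapezia make a parallelogram of base } a + b\text{)}
\end{aligned}
\]
```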
Ideas required for understanding a particular topic turn out to be basic for understanding many other topics too. Sets, mappings and equivalence are such ideas. Unfortunately their potential benefits are often lost by teaching them as separate topics, rather than as fundamental concepts by which whole areas of mathematics can be inter-related.

3 Relational knowledge can be effective as a goal in itself. This is an empiric fact, based on evidence from controlled experiments using non-mathematical material. The need for external rewards and punishments is greatly reduced, making what is often called the ‘motivational’ side of a teacher’s job much easier. This is related to:

4 Relational schemas are organic in quality. This is the best way I have been able to formulate a quality by which they seem to act as agents of their own growth. The connection with 3 is that if people get satisfaction from relational understanding, they may not only try to understand relationally new material which is put before them, but also actively seek out new material and explore new areas, very much like a tree extending its roots or an animal exploring new territory in search of nourishment. To develop this idea beyond the level of an analogy is beyond the scope of the present paper, but it is too important to leave out.

If the above is anything like a fair presentation of the arguments for each side, it would appear that while a case might exist for instrumental mathematics short-term and with a limited context, long-term and in the context of a child’s whole education it does not. So why are so many children taught only instrumental mathematics throughout their school careers? Unless we can answer this, there is little hope of improving the situation. An individual teacher might make a reasoned choice to teach for instrumental understanding on one or more of the following grounds.
1 That relational understanding would take too long to achieve, and to be able to use a particular technique is all that these pupils are likely to need.
2 That relational understanding of a particular topic is too difficult, but the pupils still need to learn the topic for examination reasons.
3 That a skill is needed for use in another subject (e.g. science) before it can be understood relationally with schemas currently available to the pupils.
4 That he is a junior teacher in a school where all the other mathematics teaching is instrumental.

All of these imply, as does the phrase ‘make a reasoned choice’, that he is able to consider the alternative goals of instrumental and relational understanding on their merits and in relation to a particular situation. To make an informed choice of this kind implies awareness of the distinction, and relational understanding of the mathematics itself. So nothing else but relational understanding can be adequate for a teacher. One has to face the fact that this is absent in many who teach mathematics; perhaps even a majority. Situational factors which contribute to the difficulty include:

1 The backwash effect of examinations. In view of the importance of examinations for future employment, one can hardly blame pupils if success in these is one of their major aims. The way pupils work cannot but be influenced by the goal for which they are working, which is to answer correctly a sufficient number of questions.7

2 Over-burdened syllabi. Part of the trouble here is the high concentration of the information content of mathematics. A mathematical statement may condense into a single line as much as in another subject might take over one or two paragraphs. By mathematicians accustomed to handling such concentrated ideas, this is often overlooked (which may be why most mathematics lecturers go too fast). Non-mathematicians do not realize it at all.
Whatever the reason, almost all syllabi would be much better if much reduced in amount so that there would be time to teach them better. 3 Difficulty of assessment of whether a person understands relationally or instrumentally. From the marks he makes on paper, it is very hard to make valid inference about the mental processes by which a pupil has been led to make them; hence the difficulty of sound examining in mathematics. In a teaching situation, talking with the pupil is almost certainly the best way to find out; but in a class of over thirty, it may be difficult to find the time. 4 The great psychological difficulty for teachers of accommodating (re-structuring) their existing and longstanding schemas, even for the minority who know the need to, want to do so, and have time for study. From a recent article discussing the practical, intellectual and cultural value of mathematics education (and I have no doubt that he means relational mathematics!) by Sir Hermann Bondi, I take these three paragraphs. (In the original, they are not consecutive.) So far my glowing tribute to mathematics has left out a vital point: the rejection of mathematics by so many, a rejection that in not a few cases turns to abject fright. The negative attitude to mathematics, unhappily so common, even among otherwise highly-educated people is surely the greatest measure of our failure and a real danger to our society. This is perhaps the clearest indication that something is wrong, and indeed very wrong, with the situation. 
It is not hard to blame education for at least a share of the responsibility; it is harder to pinpoint the blame, and even more difficult to suggest new remedies.8

If for ‘blame’ we may substitute ‘cause’, there can be small doubt that the widespread failure to teach relational mathematics—a failure to be found in primary, secondary and further education, and in ‘modern’ as well as ‘traditional’ courses—can be identified as a major cause. To suggest new remedies is indeed difficult, but it may be hoped that diagnosis is one good step towards a cure. Another step will be offered in the next section.

A theoretical formulation

There is nothing so powerful for directing one’s actions in a complex situation, and for co-ordinating one’s own efforts with those of others, as a good theory. All good teachers build up their own stores of empirical knowledge, and have abstracted from these some general principles on which they rely for guidance. But while their knowledge remains in this form it is largely still at the intuitive level within individuals, and cannot be communicated, both for this reason and because there is no shared conceptual structure (schema) in terms of which it can be formulated. Were this possible, individual efforts could be integrated into a unified body of knowledge which would be available for use by newcomers to the profession. At present most teachers have to learn from their own mistakes. For some time my own comprehension of the difference between the two kinds of learning which lead respectively to relational and instrumental mathematics remained at the intuitive level, though I was personally convinced that the difference was one of great importance, and this view was shared by most of those with whom I discussed it. Awareness of the need for an explicit formulation was forced on me in the course of two parallel research projects; and insight came, quite suddenly, during a recent conference.
Once seen it appears quite simple, and one wonders why I did not think of it before. But there are two kinds of simplicity: that of naïvety; and that which, by penetrating beyond superficial differences, brings simplicity by unifying. It is the second kind which a good theory has to offer, and this is harder to achieve. A concrete example is necessary to begin with. When I went to stay in a certain town for the first time, I quickly learnt several particular routes. I learnt to get between where I was staying and the office of the colleague with whom I was working; between where I was staying and the university refectory where I ate; between my friend’s office and the refectory; and two or three others. In brief, I learnt a limited number of fixed plans by which I could get from particular starting locations to particular goal locations. As soon as I had some free time I began to explore the town. Now I was not wanting to get anywhere specific, but to learn my way around, and in the process to see what I might come upon that was of interest. At this stage my goal was a different one: to construct in my mind a cognitive map of the town. These two activities are quite different. Nevertheless they are, to an outside observer, difficult to distinguish. Anyone seeing me walk from A to B would have great difficulty in knowing (without asking me) which of the two I was engaged in. But the most important thing about an activity is its goal. In one case my goal was to get to B, which is a physical location. In the other it was to enlarge or consolidate my mental map of the town, which is a state of knowledge. A person with a set of fixed plans can find his way from a certain set of starting points to a certain set of goals. The characteristic of a plan is that it tells him what to do at each choice point: turn right out of the door, go straight on past the church, and so on.
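The contrast between fixed plans and a mental map can also be put in computational terms. The sketch below is my own illustration, with invented place names and routes: the fixed plans are a lookup table that only answers the journeys it was explicitly given, while the mental map stores only local connections and can derive a route between any two known places on demand.

```python
from collections import deque

# Fixed plans: complete routes, one per learnt (start, goal) pair.
# Any journey not in this table simply cannot be made.
fixed_plans = {
    ("lodgings", "office"): ["lodgings", "square", "office"],
    ("lodgings", "refectory"): ["lodgings", "bridge", "refectory"],
}

# Mental map: only local knowledge of which places adjoin which.
mental_map = {
    "lodgings": ["square", "bridge"],
    "square": ["lodgings", "office"],
    "bridge": ["lodgings", "refectory"],
    "office": ["square"],
    "refectory": ["bridge"],
}

def plan_route(start, goal):
    """Derive a route from the mental map by breadth-first search."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in mental_map[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# The journey from the office to the refectory was never learnt as a plan...
assert ("office", "refectory") not in fixed_plans
# ...but the map yields one at once.
print(plan_route("office", "refectory"))
```

The journey from the office to the refectory, never stored as a plan, is produced by the map-based search without any fresh outside guidance, which is the point of the analogy.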
But if at any stage he makes a mistake, he will be lost; and he will stay lost if he is not able to retrace his steps and get back on the right path. In contrast, a person with a mental map of the town has something from which he can produce, when needed, an almost infinite number of plans by which he can guide his steps from any starting point to any finishing point, provided only that both can be imagined on his mental map. And if he does take a wrong turn, he will still know where he is, and thereby be able to correct his mistake without getting lost; even perhaps to learn from it. The analogy between the foregoing and the learning of mathematics is close. The kind of learning which leads to instrumental mathematics consists of the learning of an increasing number of fixed plans, by which pupils can find their way from particular starting points (the data) to required finishing points (the answers to the questions). The plan tells them what to do at each choice point, as in the concrete example. And as in the concrete example, what has to be done next is determined purely by the local situation. (‘When you see the post office, turn left.’ ‘When you have cleared brackets, collect like terms.’) There is no awareness of the overall relationship between successive stages, and the final goal. And in both cases, the learner is dependent on outside guidance for learning each new ‘way to get there’. In contrast, learning relational mathematics consists of building up a conceptual structure (schema) from which its possessor can (in principle) produce an unlimited number of plans for getting from any starting point within his schema to any finishing point. (I say ‘in principle’ because of course some of these paths will be much harder to construct than others.) This kind of learning is different in several ways from instrumental learning. 1 The means become independent of particular ends to be reached thereby.
2 Building up a schema within a given area of knowledge becomes an intrinsically satisfying goal in itself.
3 The more complete a pupil’s schema, the greater his feeling of confidence in his own ability to find new ways of ‘getting there’ without outside help.
4 But a schema is never complete. As our schemas enlarge, so our awareness of possibilities is thereby enlarged. Thus the process often becomes self-continuing, and (by virtue of 3) self-rewarding.

Taking again for a moment the role of devil’s advocate, it is fair to ask whether we are indeed talking about two subjects, relational mathematics and instrumental mathematics, or just two ways of thinking about the same subject matter. Using my earlier analogy, the two processes described might be regarded as two different ways of knowing about the same town; in which case the distinction made between relational and instrumental understanding would be valid, but not that between instrumental and relational mathematics. But what constitutes mathematics is not the subject matter, but a particular kind of knowledge about it. The subject matter of relational and instrumental mathematics may be the same: cars travelling at uniform speeds between two towns, towers whose heights are to be found, bodies falling freely under gravity, etc. etc. But the two kinds of knowledge are so different that I think that there is a strong case for regarding them as different kinds of mathematics. If this distinction is accepted, then the word ‘mathematics’ is for many children indeed a false friend, as they find to their cost.

Part A

Why is mathematics still a problem subject for so many?

Evidence from surveys: a vote of no confidence

In the United Kingdom, ever since the early 1960s, there have been intensive efforts to improve mathematical education in our schools, by many intelligent, hard-working, and well-funded persons.
Nevertheless, in the late 1970s, the state of mathematical education in our schools was still such as to give cause for concern at governmental level. This led to the setting up of a governmental Committee of Enquiry, whose meetings continued over a period of three years. The first stage of this inquiry consisted of interviews with people chosen to represent a stratified sample of the population. One of the most striking things about these interviews is those which did not take place.

Both direct and indirect approaches were tried, the word ‘mathematics’ was replaced by ‘arithmetic’ or ‘everyday use of numbers’ but it was clear that the reason for people’s refusal to be interviewed was simply that the subject was mathematics…This apparently widespread perception amongst adults of mathematics as a daunting subject pervaded a great deal of the sample selection; half of the people approached as being appropriate for inclusion in the sample refused to take part.1

Such had been the result, for them, of ten years of so-called mathematical education while at school. As the same report rightly observes:

Most important of all is the need to have sufficient confidence to make effective use of whatever skill and understanding is possessed whether this be little or much.2

Professor John Eggleston has commented,

Yet aspiration of this kind is not new…a major task is to break the patterns of error by both pupils and teachers that characterize much of the work in the ‘new’ as well as in ‘traditional’ mathematics, and which are so destructive of confidence in both teachers and taught.3

Many of the mathematical errors have been described in great detail in a report of more than 900 pages from the Assessment of Performance Unit of the Department of Education and Science.4 This report makes interesting, though not encouraging, reading.
For example, 31 per cent—almost one third—of eleven-year-olds tested did not know in which of the following numbers the 7 stands for 7 tens: 107, 71, 7, 710. This and other factual surveys (e.g. that of the CSMS group at Chelsea College, University of London)5 are valuable in bringing the problem to our attention forcibly and in detail. But what they do not tell us is why so many children still make mistakes of this kind after five years of schooling.

Readers in other countries must decide for themselves to what extent a similar situation exists there. Such information as I have suggests that the problem is not confined to the United Kingdom. A distinguished American professor of mathematics, Dr Hassler Whitney, who is Past President of the International Commission on Mathematical Instruction and who has written and spoken extensively in recent years on the teaching of mathematics to young children, has written:

For several decades we have been seeing increasing failure in school mathematics education, in spite of intensive efforts in many directions to improve matters. It should be very clear that we are missing something fundamental about the schooling process. But we do not even seem to be sincerely interested in this; we push for ‘excellence’ without regard for causes of failure or side effects of interventions; we try to cure symptoms in place of finding the underlying disease, and we focus on the passing of tests instead of meaningful goals.6

So where, after more than twenty years of attempted reforms, are we still going wrong? If we cannot find at least a partial answer to this question, there is no reason to expect that future efforts will be any more successful than past. One thing must by now be clear, that changes in syllabus are not in themselves sufficient.
It would be reasonable also to suspect that the causes are fundamental in the ways children learn mathematics, and therefore in whatever are the factors which affect this, including (but not only) the ways in which they are taught. I don’t think that there is a single answer to the problem, nor that any one person knows all the answers. As my own contribution to progress, I have three answers to offer. The present book centres on two of these: how children learn, and the teaching situation. These are the most fundamental, and are moreover the aspects which teachers are likely to be in a position to remedy, individually and collectively. They also offer a starting point from which we may hope to find some of the other answers.

The need for a new perspective

It seems clear that the problems of mathematical education cannot be solved from within mathematics itself. A wider perspective is needed, which I myself reached only by a long and roundabout path. I hope that this book will help readers to get there more quickly and directly; and although ‘there is no royal road’,7 those who make the journey will find it an interesting and rewarding one, quite apart from the professional importance of the goal. As a preliminary, a few lines about my own journey are necessary.

The English poet John Betjeman once said that the best way to appreciate London is to take a journey round the world, starting in London and finishing in London. My own journey has been a mental journey, taking more than thirty years. It began in, and has returned to, the mathematics classroom. But on the way it led through the areas of developmental psychology, motivation, human emotions, cybernetics, evolution, and human intelligence. Eventually it led me to reformulate my own conception of human intelligence, at which point I found myself, unexpectedly, back at mathematics again.
This journey was largely a quest for solutions of two problems, one professional and one theoretical. The professional problem arose out of my job as a teacher, trying to teach mathematics and physics to children from 11 upwards. Over a period of five years at this, I became increasingly aware that I wasn’t being as successful as I wished. Some pupils did well, but others seemed to have a blockage for mathematics. It was not lack of intelligence or hard work, on their side or mine. So we had a problem: a problem not confined to myself as a teacher, nor to these particular children as pupils. This was in the late 1940s: since then, awareness of this problem has become widespread.

An outcome of this was that I became increasingly interested in psychology, went back to university, and took a psychology degree. Problems of learning and teaching are psychological problems, so it was reasonable to expect that by studying psychology I would find answers to my professional problems as a teacher. Unfortunately I didn’t. Learning theory at that time was dominated by behaviourism; theories of intelligence were dominated by psychometrics. Neither of these theories (or groups of theories) was of any help in solving my professional problems as a teacher. This I had come to realize long before the end of my degree course, during which I was still teaching part-time to support myself. This kept me close to the problems of the classroom.

So now I also had a theoretical problem, namely that of finding a theory appropriate to the learning of mathematics. It turned out to be a do-it-yourself job, which is partly why it took so long. The other reason is conveyed by the well-known joke about the person who asked the way to (let us call it) Exville, and was told ‘If I wanted to get to Exville, I wouldn’t start from here.’ I too was starting from the wrong beginnings.
Behaviourist models are helpful in understanding those forms of learning which we have in common with the laboratory rat and pigeon; and it has to be admitted that for too many children, the word mathematics has become a conditioned anxiety stimulus. But the learning of mathematics with understanding exemplifies the kind of learning in which humans most differ from the lower animals: so for this we need a different kind of theoretical model.

Likewise, the psychometric models of intelligence on which, as a mature student, I learnt to pass my exams, were not such as could be applied to the learning process. Measurement may tell us ‘how much’ intelligence a person has, but it does not tell us what it is they have this amount of. The use of a noun here tends to mislead, unless it is expanded. It is helpful to compare this with our use of the word ‘memory’. When we say that someone has a good memory, we mean that this person is well able to take in information, organize it, store it, and retrieve from his large memory store just what he needs at any particular time. We are talking about a cluster of mental abilities which, collectively, are very useful. If we continue along this line of thinking, the next question which arises is, what are these abilities which collectively comprise the functioning of human intelligence? If we can answer some of these questions, we shall be on the way to relating intelligence and learning.

Mathematics as a mental tool, and amplifier of human intelligence

All the time I was working on the psychology of learning mathematics, I was also working on the psychology of intelligent learning. Initially I did not realize this.
But gradually I came to perceive mathematics as a particularly clear and concentrated example of the activity of human intelligence, and to feel a need to generalize my thinking about the learning of mathematics into a theory for intelligent learning which would be applicable to all subjects: and for teaching which would help this to take place. For it became ever more clear that mathematics was not the only subject which was badly taught and ill understood: it just showed up more sharply in mathematics.

The desire became intensified in 1973, when I moved from the University of Manchester to that of Warwick, and from a Psychology to an Education department. Over the next five years I continued to work on this, and the outcome has been nothing less than a new model for intelligence itself.8 This is an ambitious undertaking. But the earlier models, based on I.Q. and its measurement, have been with us now for about seventy years; and while they may have developed much expertise in the measurement of intelligence, they tell us little or nothing about how it functions, why it is a good thing to have, and how to make the best use of whatever intelligence we possess. Until we turn our thoughts from measurement to function, the most important questions about intelligence will remain not only unanswered, but barely even asked.

Like the traveller returning to London, I now see mathematics in a new perspective: and it is this which the present book will try to communicate. Within this perspective, mathematics may now be seen, first, as a particularly powerful and concentrated example of the functioning of human intelligence; and second, as one of the most powerful and adaptable mental tools which the intelligence of man has made for its own use, collectively over the centuries. There is a close analogy between this and the use of our hands to make physical tools.
We can do quite a lot with our bare hands, directly on the physical world. But we also use our hands to make a variety of tools—screwdrivers, cranes, lathes—and these greatly amplify the abilities of our hands. This is an indirect activity, but long-term it is exceedingly powerful. Likewise, mathematics is a way of using our minds which greatly increases the power of our thinking. Hence its importance in today’s world of rapidly advancing science, high technology and commerce.

If this view is correct, then it is predictable that children—or, indeed, learners of any age—will not succeed in learning mathematics unless they are taught in ways which enable them to bring their intelligence, rather than rote learning, into use for their learning of mathematics. This was not, and still is not, likely to happen as long as for the majority of educational psychologists, and those who listen to them, intelligence is so closely linked with I.Q. that the two are almost synonymous.

We thus return to the question: what are the activities which collectively make up the functioning of intelligence? We need at least some of the answers to this question before we can begin to devise learning situations which evoke these activities—i.e., which evoke intelligent learning. So the emphasis of this book is partly theoretical, offering a new perspective for our thinking about human intelligence; and partly applied, showing how this can be embodied in classroom methods and materials.

I nearly wrote ‘…partly theoretical and partly practical’, which would have been to contradict myself. For many years I have been saying ‘There’s nothing so practical as a good theory’, even before I knew that I was paraphrasing something Dewey had written in 1929.9 But the power of theory is only potential, and it often takes years of work after a theory has been developed to put it successfully to use for the benefit of ourselves and of society.
When this too has been done, the results can be spectacular, as we know from the marvels now on offer in the realm of commercial technology and modern medicine. These are at the level of the physical environment, and the physical workings of our bodies. We need power of a similar kind at a mental level, in our use of our intelligence for learning and action. The application of this power in our classrooms is still in its pioneering days, but enough progress has been made for teachers to begin putting it to work. This is the endeavour in which I am inviting readers to take part.

School mathematics and mathematics in the adult world: a false contrast

If we consider the uses of reading by adults and children, we find continuity. As adults we read for information, for entertainment, and to expand our mental horizons, among other reasons. We read to know ‘how to do it’, in a great variety of matters according to our needs and interests: cooking, gardening, household ‘do-it-yourself’, how to use our new word-processor. Sometimes we read simply to know: about life before man, about the solar system, about the lives of great men and women. This knowledge is not of any obvious practical value, but it enlarges and enriches our minds, and this we find sufficient reason in itself. And we read simply for entertainment: adventures, romances, detective stories, science fiction, or whatever our tastes may be. There is plenty of good reading for children too in all of these areas, and they enjoy and benefit from it as much as we do. These books are to be found both in and out of school. There is continuity between these two environments, and between the reasons why children learn to read at school and why it will be both useful to them, and an enrichment to their lives, when they are grown up.

Regrettably, this is not the case with mathematics, as most children experience it in school.
Here they learn (incorrectly) at best that mathematics is for getting ticks or other signs of approval from teachers, for satisfying expectations of adults, for passing exams: or, in Erlwanger’s memorable words, as ‘a set of rules for making arcane marks on paper’;10 and at worst for avoiding reproof or other punishment, for avoiding being made to feel stupid, or to appear stupid in front of their fellows. These are not the reasons why mathematics is of such importance in the adult world. Let us take a closer look at what these are.

Earlier, I suggested that mathematics could be seen as a particularly powerful and concentrated example of the functioning of human intelligence; and also as one of the most powerful and adaptable mental tools which the intelligence of man has made for its own use, collectively over the centuries. In the latter aspect, it acts as an amplifier of our intelligence. Here are some examples.

When we travel by air on holiday or business, we may be out of sight of land for much of the time, either above cloud or over the ocean. The navigator, by his understanding of the mathematics of navigation, brings us exactly to our destination. If we should be blown off course, or diverted to a different airport, from the same knowledge base he would work out a revised course. Both in making and using these plans he relies on sophisticated equipment, largely electronic. The design of this equipment, and its use, are based on theory which is formulated largely in mathematical terms. The maps which the navigator uses are based on the concept of proportionality, and so also are the scale drawings which were used in the design and construction of the aircraft itself. The extra power which mathematics gives to our thinking is indispensable in all these cases, and likewise in so many other cases that it is hard to see how the science and technology of today could exist or function without it.
The concepts and language of mathematics are also of social and economic importance, in that they provide a basis for the co-operation, and exchange of goods and services, on which our present technological cultures depend. Even at a descriptive level, it is often only by the use of the measurement function of mathematics that we can give descriptions of physical objects and events which are exact enough for everyone’s contribution to fit together. Manufacturers couldn’t make nuts to fit bolts, shoes to fit feet, tyres to fit wheels, without this use of mathematics. Still less so, when describing invisible quantities such as the resistance of a wire, the impedance of a coil, the capacity of a condenser, by which the components of an electronic apparatus are fitted together.

Many of these objects are bought and sold across national boundaries, which involves exchange of currencies. This too is based on the mathematics of proportionality, as well as simpler operations such as addition (to the account of the seller) and subtraction (from the account of the buyer). Without international agreement about the mathematics involved, commerce on this scale would be impossible.

These two uses of mathematics are particular cases of two of the major uses of human intelligence in general: as a mental tool, and for co-operation. These uses are also deeply rooted in our nature, since homo sapiens has evolved, and has reached its dominant position on this planet, as a tool-using species which succeeds in the competition for survival by co-operation with each other.

Nor is utility the only reason for learning mathematics. It has an aesthetic quality, and it also exemplifies an aspect of human creativity, by which our mental horizons are expanded in ways which complement those already described in our discussion of reading.
If the reasons why mathematics is so important in the adult world are of these three kinds, then surely we should try to provide experiences of the same kinds for children whereby they can experience the power and enjoyment of mathematics in their present realities, rather than assure them ‘that this will be useful to you when you grow up.’ One of the main purposes of the present book is to suggest how this can be done.

Summary

1 After more than twenty years of effort to improve mathematical education, there appears to have been little improvement. The problem is widespread. Unless we can identify at least some of the reasons why mathematics is still a problem subject for many, there is no reason to suppose that future efforts will be more successful than those of the past.
2 The problem will not be solved from a viewpoint which is confined to mathematics itself, and relies on changes of content alone. A wider perspective is needed.
3 Mathematics may be seen as a particularly powerful and concentrated example of the functioning of human intelligence. Also as a powerful and adaptable mental tool, and amplifier of human intelligence. If this view is accepted, it follows that learners of any age will not succeed at mathematics unless they are taught in ways which enable them to bring their intelligence, rather than rote learning, into use for their learning of mathematics. To do this we need to understand more about how intelligence functions. Measures of intelligence do not tell us this.
4 We also need to teach mathematics in ways which have continuity between school and the outside world. This is already the case with reading, but with mathematics there is a false contrast.

Suggested activities for readers

1 With a group of colleagues, or any other group for which it seems appropriate, try the following. Ask if they would be willing to take part in a simple experiment: those who prefer may abstain.
Ask them to take their minds back to their time at school, and think which was the subject which for them gave most problems: which they didn’t like, or found hard to understand, or which made them afraid that they would be made to look stupid in front of their classmates. After that, ask for a show of hands to find out which were the problem subjects. History? Geography? English? French? German? Mathematics? Science? Drama…? The group may like to discuss the results of this mini-survey.

2 Look back to the section starting on p. 27: ‘School mathematics and mathematics in the adult world: a false contrast’. Make your own list of ways in which mathematics increases our powers of thinking and action in the adult world. These may be grouped under three headings: the physical environment (e.g. science and technology); social, including commercial; and creativity. I have not yet given examples in the third group, but shall do so later. It is mentioned here only because some of your own examples may come into this category. Note that the groups are not mutually exclusive: some examples come into two, or even three, of these groups.

Intelligence and understanding

Habit learning vs intelligent learning

This chapter is about the power which comes with understanding, and the ways in which we can use our intelligence to gain this power. In Chapter 1, intelligence was described as a cluster of mental abilities which, together, are very useful. Here we shall begin to examine in greater detail what these abilities are. The first of these is an ability to learn in a special way, which can most clearly be distinguished by contrasting it with habit learning.

Suppose that we need to learn someone’s telephone number. This is a matter of rote memorizing. Suppose now that to pass some examination, perhaps one necessary to qualify as a secretary, we were required to memorize several long lists of telephone numbers.
This task would arouse no interest, and give us no pleasure. The work would be regarded as boring and difficult, and we would only do it because we had to. One of the things which makes this task difficult is the fact that once we have memorized one telephone number, this knowledge is of little use in helping to memorize the next. The more we try to learn, the greater the memory load, and the harder the task becomes.

In contrast, suppose that for some other reason we were asked to learn the following sequence of numbers:

4 7 10 13 16 19 22…(a hundred of these in all)

This has a pattern, and once we had seen the pattern, we would not try to memorize all the numbers. We would learn just the first number, four, and the pattern: the numbers increase by three each time. This provides a generating structure from which the whole sequence can be derived. In so doing, we would be using our minds in a different way. We would be using intelligent learning.

Also in contrast to habit learning, knowledge of this second kind is highly adaptable. For example, if asked what would be the hundredth number, or the ninety-ninth, we could work this out from the pattern. And we would not need to restrict ourselves to a hundred numbers. Make the list twice as long, and the memory load is no greater. Make it as long as we choose—the task of constructing the later numbers in the sequence becomes a little harder, but not greatly so. This is the kind of extra power we get from intelligent learning.

Habit learning has been extensively studied by the behaviourist school of psychologists, mainly with animals. A well-known textbook example is provided by the Skinner box. A hungry rat is put in a cage, in which there is a bar sticking out horizontally from one side. In the course of its movements around the cage, the rat happens to press the bar, and a morsel of food is released into the cage.
Eating the food reduces hunger, and each time this happens, the association between the stimulus situation (being in the cage, hungry) and bar-pressing is reinforced. Gradually this builds up into a habit. In this kind of learning, certain actions are reinforced as a result of their outcomes, so learning follows action. And what is learnt is action: the cognitive element is small. Rote learning, as in the telephone numbers example, is verbal habit learning. Once learnt, habits tend to be very persistent: they have low adaptability. If our telephone number is changed, we can’t erase the old number from our minds the way we do from our desk pad. The old number persists, and gets in the way of the new one.

In contrast, the main feature of intelligent learning is adaptability. By this, I mean that for a given goal, we can find a variety of different ways of achieving it to suit a variety of different situations. If we are thirsty and want a drink of water, at home we probably go to the kitchen and turn on the tap. In a café, we ask a waitress. In camp, we find a clear stream, or a spring. As a child when on holiday, I went to the yard and worked a pump handle. Several winters ago when our pipes froze, our breakfast coffee was made with melted snow. Different environments, different plans of action, all directed towards the same goal: relief of thirst.
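The ‘generating structure’ of the number sequence discussed above can be made concrete in a few lines of code. This is my own sketch, not the book’s; the function name and its parameters are illustrative. Instead of memorizing a hundred separate numbers, we store only the first number and the rule, and derive any term on demand.

```python
# A generating structure for the sequence 4, 7, 10, 13, 16, 19, 22, ...:
# we keep just the first number (4) and the rule (add 3 each time),
# and work out any term of the sequence whenever we need it.

def nth_term(n, first=4, step=3):
    """Return the n-th number of the sequence (n counted from 1)."""
    return first + (n - 1) * step

print(nth_term(1))    # 4, the only number actually memorized
print(nth_term(100))  # 301: the hundredth number, derived from the pattern
print(nth_term(200))  # 601: doubling the list adds nothing to the memory load
```

Habit learning would store a hundred separate facts; the pattern stores two, and reconstructs the rest, which is why the memory load does not grow with the length of the list.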
What is more, we can devise several plans, and choose the best, before putting this plan into action. Intelligent learning often precedes action. And action is used not only for achieving goals, but for testing hypotheses. Each plan is based on knowledge of the environment; and building up this knowledge is a major function of intelligence. Action is not a response to an external stimulus, but directed towards whatever goal an individual has in his own mind. The cognitive element is great, and as a result, so is the variety of available plans. Knowledge gives adaptability. One of the greatest mistakes of the behaviourists was to think that habit learning is the only kind. We must not make the opposite mistake, of insisting that intelligent learning is the only kind. We need both. Habit learning, with rote learning as a special case, can be useful and necessary even in mathematics. It is by habit learning that children learn the spoken words for the number-concepts one, two, three, four, five, …; and the written symbols 1, 2, 3, 4, 5,…which mean the same. It is by habit learning that we know that p= 3.14159…. p can be derived from a particular kind of series, but this is not the best way for everyday use. What matters is to use the right kind for each particular requirement, and the right combination for the subject overall. Mathematics requires mainly intelligent learning. Spelling English words needs a different combination. Consider these words: Intelligence and understanding COW, NOW, VOW, and BOW (respectfully) BOW (and arrow) BOUGH (on a tree) BOUGHT TROUGH ENOUGH In the top line, spelling and pronunciation are regular. Reading down the right-hand column, however, we have different pronunciations with the same spelling, and different spellings with the same pronunciation. 
So we need to use intelligent learning when spelling and pronunciation are regular, and habit learning (in this case we would call it rote memorizing) when spelling and pronunciation are irregular. Because English spelling contains much irregularity, this area of learning requires a greater proportion of habit learning. Mathematics is a highly regular subject, so learning maths requires a high proportion of intelligent learning. My own estimate is about 95 per cent intelligent learning, 5 per cent habit learning.

This is, unfortunately, not how it is learnt by a large number of children, for whom learning mathematics consists of memorizing a collection of rules without reasons. Unfortunately, it is all too easy for this to happen, since if the result is correct it is hard to tell whether this is based on rote-learning or comprehension. A rule gives quick results, and an able and willing child can memorize so many rules that the lack of understanding does not show immediately. But the time comes when this approach fails, for two reasons. First, what has been learnt in this way is (as has already been pointed out) of no help in subsequent learning. So as the mathematical content increases, the amount to be memorized becomes an impossible burden on the memory. And second, these rules only work for a limited range of problems. They cannot be adapted by the learner to related problems based on the same mathematical ideas, since with habit learning these ideas are absent. So the child comes to grief in his mathematical progress, with loss of confidence and self-esteem.

We can save the children we teach from this fate if we know how. Intelligent teaching involves knowing which kind of learning to get children to use for different kinds of task, and how to get them to use it.
We are much better at the latter in the case of habit learning, which may be one of the reasons children have less difficulty in learning to spell than to do mathematics.

Goal-directed action

As has been emphasized, there is a sharp contrast between the actions which result from using our intelligence, and those which are stimulus-determined and controlled by habit or instinct. In the former case we can choose our own aims, purposes, goals—these all mean much the same, but mostly I shall use the word 'goals' since it is more general—and our intelligence helps us to achieve these by a variety of plans according to different circumstances. To the relation between these three, goal-directed action, plans of action for achieving our goals, and intelligence, we now direct our attention.

This emphasis on action as being goal-directed, rather than stimulus-determined, allows us to borrow from cybernetics the important idea of a director system. This is a kind of apparatus, physical or mental, which enables us to direct our actions so as to achieve our chosen goal in a variety of circumstances. The features of a director system by which it is able to do this are described in detail elsewhere.1 For our present purposes, one of the most important of these is a plan of action which determines what we do, at each stage from starting point to the time when our goal is achieved.

Here is one which I tried to use, a while ago. My starting point was a school in which I had been working for the first time. I had been piloted there by following someone else's car, but we were now going our separate ways. So my friend told me how to get to highway A40: once there, I knew my way home. This was the plan of action I was given.

LEFT, RIGHT, HALF-LEFT, STRAIGHT ON, LEFT, RIGHT

It is easy to guess what happened. I was soon quite lost, and it took me a long time to get home that day. A plan of this kind is closely tied to action.
And if one of the actions is wrong, it is likely to throw out all which follow. In this case, once one gets off the right path, one does not know how to get back on. In mathematics, a single mistake makes all the subsequent working wrong. One probably does not even know where one went wrong. The cognitive element is low. This is the kind of plan of action which is provided by habit learning, and a director system which has only these available is of low adaptability.

Cognitive maps, knowledge structures, mental models, and schemas

For my next visit, I made sure that I had a street map with me. Figure 2.1 gives a simplified extract from it.

[Figure 2.1]

This has a number of advantages. First, I could find out where I had gone wrong. Figure 2.2 shows the plan I was trying to use, but now not in isolation but in relation to the road network.

[Figure 2.2]

From this it became clear that I had not counted the left turn at the school gate as one of the left turns to be taken: so at what should have been my first right turn, I took another left, and so was heading in the opposite direction to the correct one. But with this map, even if I did go wrong, all I had to do was to find where I was by looking at a street sign, locate this on the map, and I could then make a new plan of action to take me where I wanted.

The advantage goes beyond this. Given any starting location, and any goal location, from the map one can make a plan to take one from the one to the other, provided only that they are both on the map. One could, for example, make a plan to take one to a post office, then a newsagent, then a coffee shop, and finally back onto the road home. The cognitive element is high, and so also is the adaptability of one's actions. This example generalizes nicely into a number of key concepts for intelligent learning.
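The way a single stored map supports many plans can be sketched in code. Below, a toy road network (the location names echo the text but are invented for illustration; this is not the author's actual map) is stored once as a graph, and a breadth-first search constructs a route, a plan, between any two locations on demand.

```python
from collections import deque

# A toy 'cognitive map': each location lists its directly reachable
# neighbours. One stored structure serves every (start, goal) pair.
road_map = {
    "school": ["junction_a"],
    "junction_a": ["school", "junction_b", "post_office"],
    "junction_b": ["junction_a", "coffee_shop", "a40"],
    "post_office": ["junction_a", "newsagent"],
    "newsagent": ["post_office", "coffee_shop"],
    "coffee_shop": ["junction_b", "newsagent"],
    "a40": ["junction_b"],
}

def make_plan(graph, start, goal):
    """Breadth-first search: derive a route (plan) from the map when needed."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal not on the map: the 'lost' case

# One map, many derivable plans:
print(make_plan(road_map, "school", "a40"))
print(make_plan(road_map, "newsagent", "school"))
```

The contrast with the memorized LEFT, RIGHT, ... list is the point: the list is one fixed plan, while the graph lets a new plan be constructed for any starting point and any goal.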
Using, as a transitional metaphor, Tolman's useful idea of a cognitive map, we can see it as a particular case of a knowledge structure. A street map represents knowledge at a concrete level, about things which we can directly see and touch. A more general term, used by psychologists to include more abstract kinds of knowledge, is schema. A model is a simplified representation of something else, and we can have mental models as well as physical. We have mental models of the neighbourhood around our home, of the place where we work, and other regions, which we use in the same way as that described in the example. We do this so habitually that it takes a situation where we are without one to bring home to us the importance of these mental models. The street map on paper provided a quick way of acquiring a mental model of a new district. Someone who could not read a map, i.e. who could not make a mental model from the marks on paper, would not have been able to use it to make plans for action. This kind of mental model is a cognitive map. Those of a more abstract kind will be discussed further in the section about theories.

The meanings of the terms cognitive map, knowledge structure, mental model, theory, and schema are very alike. The term schema is the most general, and includes all the others; but the other terms are also useful. It is sometimes easier to think at the less abstract level of examples such as the street map, and the term 'cognitive map' allows us to do this while keeping in mind the more general idea. The other three terms enable us to centre on the uses of these particular kinds of schema.

Intelligent learning

If these schemas (including cognitive maps, knowledge structures, mental models) are so important for achieving our goals, how do we acquire them? We are not born with them. The instinctual behaviours which are genetically inherited are closely tied to action, and are even less adaptable than habits. Schemas have to be acquired by learning.
So now we need to build on to our model of what is happening in our minds something to help us grasp what goes on at two levels, that of action and also that of learning.

[Figure 2.3]

For this we need two director systems, which I have called delta-one and delta-two: D stands for any kind of director system, including instincts and habits, with which we are not here concerned; and Greek Δ (delta) for those which are intelligent. Delta-one is a director system whose function is to enable us to achieve our goals in relation to the physical environment. Delta-two is a second-order director system which acts on delta-one. Its function is to take delta-one to states in which delta-one can do its job more successfully. This is one of the meanings of learning—increasing our ability to do what we want: in more technical language, our ability to achieve our goals. It does this in a way which is indirect, but much more adaptable, as we have seen.

It is also much more economical. In the street-map example, we can quantify this. If in Figure 2.1 we count each road junction, and each stretch of road between two junctions, as a single location, the number of these is 49. From this it can be calculated that the number of plans required to go from any one of these as starting position to any other location as goal is 49 × 48 = 2352. To remember all of these plans separately would impose an almost impossible burden on the memory. But it is learning of this kind which children are trying to achieve when they are taught in ways which do not help them to build structured knowledge. In contrast, each and every one of these plans can be constructed, if and when needed, from the unified knowledge structure represented by the map.
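The figure of 2352 is simply the count of ordered (start, goal) pairs: 49 choices of starting location times 48 remaining choices of goal. A quick check in Python:

```python
# Each separately memorized plan serves exactly one ordered pair of
# distinct (start, goal) locations, so with 49 locations the number of
# separate plans is 49 * 48.
n_locations = 49
n_plans = n_locations * (n_locations - 1)  # ordered pairs, start != goal
print(n_plans)  # 2352
```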
It is learning of this kind with which intelligence is particularly concerned; and though its function is one step removed from action, it makes action more likely to succeed because each plan is constructed for each particular task. Intelligence thus contributes to adaptability in two ways.

1 By the construction of schemas—not just one, but a large number, for all the different kinds of job that delta-one does.

2 By constructing from these schemas particular plans appropriate to different initial states and goal states.

These plans can then form the basis of goal-directed action as already outlined. This way of using our minds is not only more efficient: it is more pleasurable for the learner. And witnessing this enjoyment is one of the rewards one experiences when teaching children to use their intelligence for learning mathematics.

What is understanding, and why does it matter?

In 1971 I wrote 'What is understanding, and by what means can we help to bring it about? We certainly think we know whether we understand something or not; and most of us have a fairly deep-rooted belief that it matters. But just what happens when we understand, that does not happen when we don't, most of us have no idea.'2 And I went on to suggest that until we did, we would not be in a good position to bring about understanding in others. I wrote from experience, for it was as a teacher of mathematics in school that my interest in these problems first arose. Later in the same publication I offered an answer to this question: 'To understand something means to assimilate it into an appropriate schema.'3

In the light of the previous section, we can take this thinking two steps further. We can now explain the greater power of action which is made available by understanding, and we can also relate this to the feelings which accompany non-understanding and understanding.
[Figure 2.4]

Our cognitive maps, or schemas, are, as we have seen, the sources from which delta-two constructs plans of action for use by delta-one. But suppose now that we encounter an experience which we cannot connect to any of our existing schemas (Figure 2.4). Delta-two cannot make any plan which includes point P (that is to say, what this point represents). If, as in the original example, this represents a road map, then we are literally lost here. If it represents a cognitive map or schema, we are mentally 'lost'. The metaphor is a close one: we do not know what to do in order to achieve any goal at all. And this, in general terms, is our state of mind when confronted with some object, experience, situation, or idea which we do not understand.

The achievement of understanding, as represented in Figure 2.5, makes connections with an existing schema.

[Figure 2.5]

We are now able to cope with the new situation, since our delta-two can once more make plans if and as needed. Metaphorically, and in some cases literally, we know where we are, and so we can find a way to where we want to arrive. This change of mental state gives us a degree of control over the situation which we did not have before, and is signalled emotionally by a change from insecurity to confidence.

Habit learning and teacher-dependence vs understanding and confidence

For habit learning, we are largely dependent on outside events. The rat is given the food if it presses the lever. In schools, we provide these events by awarding symbolic rewards or punishments such as ticks and crosses. In mathematics, we also have to explain to the children how to get these ticks and avoid the crosses. Not having the necessary mathematical schemas, they cannot find out for themselves. In the case of intelligent learning, the achievement of understanding is itself rewarding, as may be seen from children's faces.
I believe that it is part of our job as teachers to provide learning situations by which children can achieve understanding largely by their own endeavours. Exposition by a teacher still has its importance; but it has a different function, that of helping children's efforts to understand for themselves. Learners who have experienced the satisfaction which results from understanding are likely to make these efforts.

The map analogy will stand us in good stead once again. When away from home, we may need to ask for help in finding our way. The question 'How do I get to...?' usually evokes one kind of help, directions of the 'what to do' kind. As with the 'left, right, ...' example, anything longer than a short list is hard to remember, and if one goes astray one is dependent on further help. But if one asks 'Please could you show me where I am on this map?', the resulting information puts one in the position of being more able to help oneself thereafter. The latter, of course, usually depends on the enquirer having already acquired a map, or being given one.

When on holiday last summer, I asked at a visitors' information centre how to get to our hotel. In response I was given a little map on which I was shown 'This is where you are now. Here is your hotel. So (drawing on the map) you turn right out of the car park, over the bridge,...'. As well as enabling us to find our hotel, this combination enabled us to find our way around for the rest of our stay. It would be hard to improve on this as a demonstration of how to help those in unfamiliar territory. A cognitive map of a more abstract kind takes longer to communicate, but the principle remains entirely the same.

Habit learning and intelligent learning thus develop two different kinds of learner-teacher relationship and two states of mind. Habit learning keeps the learner dependent on being told what to do in every new situation, with little confidence in his own ability to cope if left on his own.
Intelligent learning develops the learner's confidence in his own ability to deal with any situation which can be understood in relation to his existing knowledge, and encourages perception of the teacher as someone who can help him to increase this knowledge, and thereby his power of understanding.

What are theories, and why do we need them?

Even at an everyday level, much of what we do depends on our having mental models of our environment. Suppose that we want a drink of water. If there is a jug of water in sight, there may seem to be no problem. But even in this simple case, we need to be aware of properties beyond those immediately visible, such as pourability. If there is no water in sight, then we need to have a mental model of the building we are in. At home, this will include a kitchen with a cold water tap. If we are working away from home, the model will be a different one, within which is represented a cloakroom, and within the cloakroom washbasins with cold water taps. This model of the building may usefully also include a room where the secretaries work, another room where we can use a photocopier, and beyond this other useful locations such as a car park and somewhere to eat. Without such a model, we cannot direct our steps to places where we can satisfy our needs. So even at the simplest level, and although we learn and use them with hardly a thought, these mental models are indispensable to us. They take us beyond dependence on what we can see and hear at any present time.

Slightly more advanced are models at a common-sense level. These further increase our power to understand, predict, and control events in the physical world. They relate for the most part to objects and events which are accessible to our senses, and we use them still in much the same ways as mankind has done before us for thousands of years. Wood floats, so it is good for making boats.
Iron sinks, so nobody would for a moment consider making an iron boat unless he had a more powerful mental model: in this case, the principle of Archimedes. This operates beyond a common-sense level: it is an example of a theory.

The distinction between a theory and a common-sense mental model is one of degree rather than of kind: namely, its degree of abstractness and generality. When as children we have our first bicycles, we may acquire a common-sense model in which turning the pedals makes the back wheel go round, and this pushes us forward. If we begin to see how the chain-drive converts the slower rotations of the pedals into faster rotations of the rear wheel, our mental model is changing in the direction of a theory. If when we are older we relate this to a general model of velocity ratios, depending on the ratio of the numbers of the cogs in the two gear wheels, we now have a theory, albeit a fairly simple one. Further development of this theory enables us to understand how a multi-speed gear works. The engineer who designed this gear used the same theory, which also relates velocity ratios with mechanical advantage—how hard we have to push on the pedals, and so which gears are best for downhill and level riding, and which for moderate or steep hills. We may also know that a bicycle wheel by itself tends to keep upright so long as it is rolling along; but much more advanced knowledge is required before this can be explained in terms of a theory.

Theories are both more powerful and more general than common-sense models. In the natural sciences, some of this generality often comes from their use of mathematical models, which are themselves notable for their generality. The mathematical model for velocity ratio and mechanical advantage can be used to design gearboxes for motor cars, clocks and watches (of the old kind), mixing machines for the kitchen, winches, and lathes.
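The velocity-ratio theory mentioned here reduces to simple arithmetic on the two cog counts. A short Python sketch (the tooth numbers are invented, illustrative values, not taken from the text) shows the trade-off the author describes: a higher ratio turns the wheel faster per pedal stroke, but at lower mechanical advantage, so it suits level riding rather than hills.

```python
def velocity_ratio(front_cogs, rear_cogs):
    """Rear-wheel revolutions per pedal revolution, from the two cog counts."""
    return front_cogs / rear_cogs

# Illustrative gears: a 48-tooth chainwheel driving different rear
# sprockets, as on a multi-speed bicycle. Mechanical advantage at the
# pedals varies inversely with the velocity ratio, so the low gears
# (small ratio) are the ones for steep hills.
for rear in (12, 16, 24):
    vr = velocity_ratio(48, rear)
    print(f"48:{rear} -> wheel turns {vr:.1f}x per pedal turn")
```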
The mathematical theory of spinning tops and flywheels can be used for gyro-compasses and autopilots.

Another feature of a theory is that it sometimes predicts events which are contrary to common sense. The iron ship which floats is one example. If we hold a rotating bicycle wheel by the hub, and try to change the plane of rotation, the wheel not only resists our efforts but tries to twist in quite an unexpected direction. Someone who understands the mathematical theory of spinning tops and gyroscopes can predict this. As a result their thinking is, in this context, more powerful than that of most of us.

An essential feature of a theory is that it helps us to understand the invisible causes which lie beyond the visible effects. And the more remote these hidden causes are from what is accessible to our senses, the more we need a theory before we can deal adequately with the situation. When children fall and bruise themselves we do not normally need a theory to deal with the situation. Common sense is enough. But when a little boy is very ill after a fall from which a normal child would take no harm, then we need to know about the possibility that he has haemophilia. The explanation of this requires a mental model far beyond common sense. It is based on theories of genetics, bio-chemistry, and physiology. A person who intervenes in the workings of the human body needs a mental model beyond common sense, if he is to do more good than harm. (Medieval doctors often did more harm than good.) And if as teachers we intervene in the mental processes of a growing child, which are even more inaccessible to our senses, the same applies.

Summary

Intelligence has already been described as a cluster of related mental abilities, which together are very useful. Among these is the ability to learn in a way which is qualitatively different from habit learning.
Intelligent learning consists, not in the memorizing of a collection of rules, but in the building up of knowledge structures from which a great variety of plans of action can be derived as and when required. Constructing these plans from existing knowledge is another function of intelligence. This is a much more economical form of learning, since the number of plans which can be derived from the same knowledge structure is enormously greater than the number of rules which can be memorized separately. Intelligent learning is more adaptable, since plans can be constructed to meet circumstances for which a rule has not yet been devised. It is also more powerful, since plans are individually made to fit the given situation, and are thus likely to be more effective.

The learning of most subjects requires a combination of intelligent learning and habit learning. The proportion varies between subjects. For mathematics, the proportion may be estimated as 95 per cent intelligent learning, 5 per cent habit learning.

7 The general term 'schema' includes knowledge structures, and also cognitive maps and mental models.

8 In the present theory of intelligence, understanding is conceived as relating new experiences or ideas to an existing schema. Until this has been achieved, we are unable to plan how to achieve our goals in any situation which involves these experiences or ideas. We feel lost, and unable to cope. Understanding extends our powers of adaptation to the new situation: so we are correct in our intuitive feeling that understanding is important to us.

9 Habit learning develops a learner's dependence on a teacher to continue the supply of rules for each new kind of situation. Intelligent learning develops a learner's confidence in his own abilities to cope with new situations, and perception of his teacher as someone who can help him to increase his own understanding.
10 Theories are mental models which are more abstract and general than common-sense ones. They increase our power to understand the invisible causes behind visible events. If, as teachers, we intervene in the mental processes of a growing child without an appropriate theory, we may do more harm than good.

Suggested activities for readers

1 Reflect on the way you yourself learnt mathematics at school. How much of this did you learn by the use of your intelligence, i.e. with understanding, and how much consisted of memorizing rules? Were you encouraged to ask questions if there was something you did not understand?

2 Choose one or more other subjects which you have learnt, not necessarily academic. Within the same subject area, identify some parts for which habit learning would be the most effective kind, and others for which intelligent learning should be used. Compare the results of your own analysis with the ways in which you were taught.

The formation of mathematical concepts

The special demands of mathematics

The mathematician Henri Poincaré, in his essay 'Mathematical creation', wrote:

A first fact should surprise us, or rather would surprise us if we were not so used to it. How does it happen that there are people who do not understand mathematics?1

It is interesting to find this question asked nearly a hundred years ago, by a famous mathematician rather than a mathematics educator. With a different emphasis, it is the same question as we began with, and it is a good question. People do not seem to have this difficulty in understanding history, or geography, or in learning to speak a foreign language. If they make the necessary efforts at any of these, there is every likelihood that they will succeed. But people can work very hard at mathematics with little or no understanding at the end of it: and 'people' here for the most part means children, since a majority of adults have given it up as a bad job.
The minority who have succeeded in understanding mathematics find it hard to understand the problems of those who have failed. So what is different about mathematics? Does it require special aptitudes which are found in only a few? Research evidence suggests that this is not the case, except perhaps for high-flying specialist mathematicians.2 In its early stages, mathematical thinking is not essentially different from some of the ways in which we use our intelligence in everyday life. But for many children, this continuity between mathematics and the everyday use of their intelligence is never established. And if it is not, then from the very beginning mathematics is something apart. This is another aspect of the false distinction between school mathematics and the mathematics of the world outside school which was discussed in Chapter 1. So this is part of the answer.

Following on from this, in Chapter 2 the nature of intelligence itself was considered in more detail, and it was suggested that if children do not use intelligent learning rather than habit learning for their mathematics from the very beginning, then long-term they are likely to fail. However, though mathematics does not need different mental abilities from those which characterize intelligence, the nature of the subject requires that these abilities are used in special ways. It also calls for teaching which makes these special ways possible. What these ways are, and what it is about mathematics which demands them, will be explored in this chapter and the next.

The abstract nature of mathematics

Mathematics is much more abstract than any of the other subjects which children are taught at the same age, and this leads to special difficulties of communication. It is instructive to experience this difficulty at one's own level of thinking. Here, by way of illustration, are three pairs of statements.

1 (a) My pocket knife will sink if I drop it in the river. A piece of wood will float.
  (b) Trains for London leave Coventry at 6 and 36 minutes past each hour.

Both of these are understandable by an average seven-year-old child. The second is a little harder than the first.

2 (a) Iron sinks because its density is greater than that of water. A hot air balloon rises because the density of hot air is less than that of cold air.
  (b) The reason why high voltage electricity is used for powering electric trains is that a smaller current is needed to give the same power, and this results in smaller losses in transmission between the power station and the locomotive.

A majority of readers are likely to understand the first of these statements, and also the second in general terms. A fuller understanding of the second requires a little knowledge of electrical theory. It may well be that a little time for thought is required, also. They are not as easy as the first two.

3 (a) The amount of information, being the negative logarithm of a quantity which we may consider as a probability, is essentially a negative entropy.3
  (b) Let V be a finite dimensional vector space over the field F. The number of elements in a basis of V is the dimension of V.4

Those who can understand the last pair of statements (they are from advanced texts on cybernetics and geometry, respectively) are few. Clearly these statements increase in difficulty, from the first pair to the third. But where does the difficulty lie? The importance of this question is that the difficulty which the reader has just experienced is the same kind of difficulty which so many children encounter in trying to understand mathematics at the levels presented to them. It has something to do with the statements being progressively more abstract: but just what do we mean by 'more abstract'? We know what we mean by 'more heavy' or 'more expensive', but do we know what is 'more abstract'? Indeed, what do we mean by 'abstract'?
Even if we think we know at an intuitive level, can we put this knowledge clearly into words? We need to have an explicit and analytic understanding of these ideas before we can put them to work in our teaching, and thereby begin to make it easier for our pupils to learn with understanding.

The necessity of conceptual learning

As a beginning, I suggest that we should be more surprised than we are that we can recognize the same person on different occasions. The light may be different, we may see them at a different angle, they may be paler if it is cold, their expression may be different. The retinal image is different on every occasion. But beyond these differences we perceive something in common, and it is this which enables us to know that it is the same person. In the same kind of way, we can recognize the same person's voice, whatever words they are speaking.

This exemplifies, in everyday experience, an ability which is a key feature at all levels of learning. The present experiences from which (sometimes) we learn become part of our past, and will never again be encountered in exactly the same form. But the situations in which we need to apply what we have learned lie in the future as it becomes our present; or as, by anticipation, we bring the future into our present thinking and planning. From this it follows that if our mental models are to be of any use to us, they must represent, not singletons from among the infinite variety of actual events, but common properties of past experiences which we are able to recognize on future occasions.

Abstraction and the process of concept formation

A mental representation of these common properties is how, for many years now, I have described a concept; and for this process of concept formation I use the term 'abstraction'. Concepts represent, not isolated experiences, but regularities abstracted from these.
It is only because, and to the extent that, our environment is orderly and not capricious that learning of any kind is possible. A major feature of intelligent learning is the discovery of these regularities, and the organizing of them into conceptual structures which are themselves orderly. In this chapter, we shall consider the first of these processes: the formation of concepts, and in particular of mathematical concepts. In the next chapter we shall examine how these are built up into conceptual structures.

To begin with, let us consider how we might try to communicate the meaning of a simple everyday word to a person who has been blind from birth, but who has recently been given sight by a corneal graft. Such a person is entering into what is for him a totally new field of experience, so his situation is not unlike that of a child learning mathematics for the first time. He asks 'What does "red" mean?' Intuitively, we would know that a verbal definition such as 'Red is the colour we experience from light in the region of 0.6 microns' would be useless to him. Instead we might point and say, 'These are red socks. This is a red diary. That tulip is red.' And so on. In this way, though we could not tell him, we could arrange for him to find out the meaning for himself.

Let us now verbalize our intuitive awareness of the right approach, in this particular example, in the light of the description of the process of concept formation which has just been given. The purpose of doing so will be that already stated: to achieve an explicit and analytic understanding, which can be generalized to other appropriate situations, and which will help us to distinguish which are appropriate situations and which are not.

The meaning of a word is the concept associated with that word. This concept is not the word itself, so let us distinguish between them by using 'red' for the word and red (without quotes) for the concept.
(Italic type face, though not essential for this distinction, is useful to distinguish particular concepts under discussion.) The meaning of 'red', i.e. the concept red, is an awareness of something in common between the experiences he has just had. (Not between the objects themselves, but between his own visual experiences in looking at them.) Without these or similar experiences, it would not have been possible for him to form the concept. By arranging for him to have these experiences, we make it possible for him to form the concept. Possible, and in this case probable: but not certain. Concept formation has to happen in the learner's own mind, and we cannot do it for him. What we can do, as teachers, is greatly to help along the natural learning processes, if we know enough about these.

On further reflection we can see that in this example, two kinds of learning are happening simultaneously, conceptual and associative. As was said earlier, the meaning of the word 'red' is its attached concept. From the visual set of experiences, our subject abstracts the concept red. From the words we use while indicating these, 'red socks', 'red diary', 'that tulip is red', he abstracts what these descriptions have in common: the word 'red'. And by associative learning, he attaches the concept to the word.

Successive abstraction: primary and secondary concepts

Not all concepts can be conveyed in this way. Suppose that our subject has asked for the meaning of 'colour'. Our intuition would in this case tell us that it would be no good trying to communicate this by pointing. Instead, we would use words, and say something like: 'green, yellow, red...these are all colours.' By again analysing our intuition, we can arrive at two new and important ideas relative to concepts.

Primary and secondary concepts

Figure 3.1 illustrates the nature of primary concepts.

[Figure 3.1]
Primary concepts are those which are derived directly from sensory experiences. In the above figure, green, yellow, red and triangle, circle, oblong are primary concepts. Other examples of primary concepts are hot, cold, heavy, smooth, sweet, lavender (the smell).

Figure 3.2 illustrates the difference between primary and secondary concepts. Colour is a secondary concept, which is formed when we realize what the concepts green, yellow, red etc. have in common. When we have this concept, we can also recognize that blue is a colour, and square is not. Secondary concepts thus depend on other concepts, and can only be formed if the person already has these concepts, which may be primary concepts, or themselves other secondary concepts. We see from Figure 3.3 that attribute is also a secondary concept, in this case derived from other secondary concepts.

Lower- and higher-order concepts

When we centre our attention on the common property of a set of objects (which may be mental objects), and ignore for the time being their individual differences, we are ‘pulling out’, or abstracting, this common property. The result of this mental activity is what we call a concept. However, formation of the concept attribute involves more stages of abstraction than the concepts colour, shape. So now we have a meaning for ‘more abstract’: attribute is a more abstract concept than colour and shape, and similarly colour is a more abstract concept than green, yellow, red. These concepts thus form a hierarchy, in which the terms ‘higher-order’ and ‘lower-order’ describe the same relationships as ‘more abstract’ and ‘less abstract’. These also mean further away from, or nearer to, direct sensory experience.
Everyday concepts and mathematical concepts

We have seen that all intelligent learning involves abstracting from a number of examples something which they have in common, and that the mental object which results is what we mean by a concept. However, between different subjects there are differences in the availability of examples. Let us take two contrasting areas of knowledge: butterflies and invisible exports.

Examples of concepts such as butterfly, and of particular varieties such as peacock, painted lady, are readily available in the outside world (at the right time of year), and children often become remarkably knowledgeable in areas of this kind with little or no teaching beyond, say, a book from which to get the names for attaching to the concepts. The concept butterfly is itself a primary concept, and children are likely to learn this first, and subsequently to differentiate between varieties, as shown in Figure 3.4.

But if someone asks ‘What is an invisible export?’, the examples from which this concept may be abstracted are not visible in the environment. This is a high-order secondary concept, for which the contributors are other concepts, such as the export of material goods, money which is not coin or note but a matter of accountancy, insurance, tourism, banking. These are also very abstract, being themselves dependent on other lower-order concepts. Figure 3.5 gives some of the lower-order concepts required for just export of material goods.

These two contrasting examples of schemas are at opposite ends of a continuum. Those at the ‘butterfly’ end of the continuum contain only primary concepts and concepts of a low order of abstraction, and their interdependence is not great. Those at the ‘invisible export’ end of the continuum contain few or no primary concepts, and many concepts of a high order of abstraction. The interdependence between the individual concepts is much greater in subjects of the second kind.
In the butterfly schema, the concept Painted Lady does not depend on the concept Red Admiral, and each of these could be replaced by another variety without affecting one’s overall understanding of butterflies. And small though it is as here shown, this schema is sufficient for forming the concept butterfly, so that if someone having this schema asks ‘What is a fritillary?’ they will understand when told that it is a kind of butterfly.

But in the invisible export schema, there is hardly a single concept which is not necessary for understanding of those above it in the hierarchy. And yet Figure 3.5 is incomplete: it still shows only one of the hierarchies involved. Those required for the concepts banking, insurance, tourism, are also needed for the concept invisible export, and each of these similarly involves concepts of increasing levels of abstraction.

These two examples illustrate one of the major differences between mathematics and most of the other subjects which children learn at primary school; and indeed at secondary school too, the main exceptions being the highly mathematicized science subjects. For convenience we may call these subjects of the first kind (whose schemas are like the butterfly schema) and of the second kind (whose schemas are like the invisible exports schema). Mathematics is a subject of the second kind.

We can understand the geography of Mexico without knowing anything about the geography of France. But understanding of the concept place value, often but mistakenly regarded as elementary, depends on all of the following concepts:

the natural numbers
order
counting
unit objects
sets of objects
sets as single entities
sets of sets
numerals, and the distinction between these and numbers
numeration
bases for numeration

And not only these, since all the above are themselves secondary concepts, some being of quite a high order of abstraction.
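One way to make the dependency structure of such a list concrete is to record it as data and trace back through it. The sketch below is a Python illustration, not part of the author’s analysis; the particular prerequisite links, and all the names, are my own simplification of the hierarchy just listed.

```python
# A toy fragment of the place-value hierarchy, recorded as a map from
# each concept to its contributory (prerequisite) concepts.
# (Illustrative only: the links chosen here are my own simplification.)
prerequisites = {
    "counting":    ["the natural numbers", "order"],
    "numeration":  ["counting", "numerals"],
    "place value": ["numeration", "bases for numeration", "sets of sets"],
}

def missing_prerequisites(concept, known, prereqs=prerequisites):
    """Trace back through the hierarchy and report every concept the
    learner still lacks before `concept` can be formed."""
    gaps = []
    for c in prereqs.get(concept, []):
        if c not in known:
            gaps.append(c)
        gaps.extend(missing_prerequisites(c, known, prereqs))
    return gaps

# A learner who has the lowest-level concepts but nothing above them:
known = {"the natural numbers", "order", "counting", "numerals"}
print(missing_prerequisites("place value", known))
# -> ['numeration', 'bases for numeration', 'sets of sets']
```

Tracing back in this way mirrors the text’s point: a difficulty with the top concept is diagnosed by finding which contributory concepts, possibly several levels down, are absent.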
To show all of them, with their interrelationships, would require quite a large diagram.

Another important difference between these two kinds of subject is in the location and availability of the examples from which the new concepts need to be abstracted. For subjects of the first kind, the examples exist in the outside world; and if they are available there, a person may be able to form the concepts unaided. But for the second kind of subject, the examples are themselves concepts, and these are not physical but mental objects. The process of abstraction involves becoming aware of something in common among a number of experiences, and if a learner does not have available in his own mind the concepts which provide these experiences, clearly he cannot form a new higher-order concept from them.

In both cases, however, if the examples, whether physical or mental, are available, then as teachers we can greatly help a learner to form the new concept by grouping them together for him. In the case of primary concepts, if the objects are small we can bring them together physically; and if they are not easily moved, we can arrange a mental grouping by pointing. In the case of secondary concepts, we can similarly bring the ideas together in a learner’s mind by bringing together their attached symbols. In the case of our imaginary man who has just received his sight and is at the beginning of conceptualizing his visual experiences, we can say ‘Red, blue, green, yellow…these are all colours.’

Two ways of communicating concepts

Grouping examples

So far, we have been concentrating on the process of abstraction, which is fundamental to the process of concept formation. Arising directly from this is one of the ways in which we can help a person to form a new concept, by grouping suitable examples together for him.
Explanations and definitions

There is another way of communicating a concept which can also be very useful if the conditions are right. Suppose that we are asked ‘What is cyan?’ and we reply ‘It’s a colour: a pale blue-green.’ This is a reasonable reply on the assumption that the person asking already has the concepts colour, blue, and green, and that he also interprets the juxtaposition blue-green as meaning a colour in between. In most cases these assumptions are valid. In the case of our imaginary subject who is entering this field of experience for the first time, we would first have to ensure that he had the contributory concepts blue and green, which could only be done by the method of grouping examples.

For a mathematical example, suppose that we are asked: ‘What is a trapezium?’ We reply: ‘It is a mathematical shape: a four-sided figure in which there is one pair of parallel sides.’ Again, this is likely to convey the concept provided that the questioner has the concepts mathematical, shape, side, four, parallel. If asked by a five-year-old child, we would not assume these, and would be likely to use the examples method.

This second method, which we may call explanation, does not involve new abstraction. It uses the concepts which the hearer already has, and helps him to construct a new one by combining and relating these: so it can only convey new concepts of the same order as those which the hearer already has, or of lower order.

A definition is a concise and exact explanation, which also allows us to distinguish clearly between examples and non-examples. For these reasons (which are good reasons), mathematicians like definitions. But definitions have the faults of their merits: they are not always the best way to convey a new concept for the first time.
Both definitions and explanations have the additional advantage that the newly constructed concept has, by its method of construction, ready-made connections with an appropriate schema.

For convenience we may call these two ways of communicating a concept, provided that we bear in mind that new concepts cannot be communicated directly. Every learner has to construct these anew in his own mind. But as teachers we can greatly help this process along: as indeed we must, if children are to acquire in about ten years concepts which it has taken some of the best minds of mankind centuries to construct.

Which of these two ways is the right one to use will depend on where this new concept is in relation to the learner’s existing schema. That is to say, whether the new concept is of the same or lower order than those in the learner’s currently available schema, or of higher order.

Most everyday examples are of the first kind. The new concept depends either on primary concepts, which the other person has in common with us due to our shared physical environment; or it depends on low-order secondary concepts arising out of our shared social environment, everyday reading, what we see on television and hear on the radio. A child asks us what is an ostrich. We explain that it is a large bird with long legs and small wings, so it can run fast but not fly. He may think that this is a silly kind of bird, but he now has a very good idea of what an ostrich is. This explanation succeeds because we were right in assuming, probably without thinking, that he already has the concepts bird, wings, legs, fly, run.

Because the method of explanation works so well in everyday life, we expect it to work also in our teaching. And so it does for most school subjects, certainly at primary school level, and for many secondary school subjects too. But this is not the case for mathematics.
Year after year, we require children to learn many new concepts which are of higher order than those which they have already acquired. So for our teaching of mathematics, we must carefully distinguish between learning situations of the first kind, for which the method of explanations is appropriate, and those of the second kind, for which it is not. And in the second case, we must systematically apply these two principles.

1 New higher-order concepts are to be communicated by carefully chosen examples.
2 We must make sure that the necessary lower-order concepts are available in the mind of the learner.

These may sound simple enough, but they have important consequences. From the first principle, it follows that:

(a) We must be clear what belongs to the concept and what doesn’t.

Figure 3.6 Parallel lines

From Figure 3.6, which has been copied from a well-known textbook, children learn a restricted and incorrect concept of parallel lines, that they are equally spaced and of the same lengths. So they may fail to recognize lines as parallel where this is not the case. Recently I met a ten-year-old child who said that in Figure 3.7 overleaf, the upper three were triangles, but not the lower three. All the examples of triangles which she had been shown were equilateral.

Figure 3.7 Triangles

(b) We must ourselves be aware of the distinction between closely-related but different concepts.

If we are not, we may confuse children at the outset. For example, subtraction is often called ‘taking away’ in primary schools. The early examples of subtraction which children meet usually do involve taking away. Later, they meet questions like ‘Jean is nine years old, Philip is seven. How much older is Jean than Philip?’ When told that these are also take-aways, naturally they are confused, since the teacher is passing on his own confusion.
Nothing here is being taken away, which in the present example would not be possible. This is a comparison. Taking away is one of the contributory concepts for subtraction, comparison is another. Subtraction is a higher-order concept, which a person has when he realizes that these two ideas have something in common.

Another common confusion is that between a number and a numeral. A numeral is the name of a number, so this is like confusing a person’s name with the person himself. A child who was shown a numeral and asked ‘Can you write a larger number?’ simply wrote the same numeral again, in larger writing. We may not make this particular mistake ourselves: but there are many who tell older children to ‘Convert this number to a binary number.’ What they mean is ‘Write this number in binary notation.’ Binary and decimal are different notations, not different kinds of number, and when writing a number in a different notation we do not change either the number itself or any of its properties.

Conceptual analysis as a prerequisite for teaching mathematics: concept maps

The second principle is even more far-reaching in its consequences. It means that we must, before teaching a new idea, ‘take it to pieces’, i.e. analyse it to see what are the contributory concepts. And to make sure that pupils have these contributory concepts, we must analyse these pieces, and continue thus right back either to their beginning in primary concepts, or to secondary concepts which we are sure that the children have. For it is at the beginning that much of the trouble lies. In some pupils, important foundation concepts are never formed, so that for them mathematics never is an intelligent or an intelligible activity.

Figure 3.8 Concept map for subtraction

This conceptual analysis is the first major step in the application of psychology to the teaching of mathematics. Teachers must first analyse the concepts so that pupils can re-synthesize them in their own minds.
This is a huge job, and it is too much to expect busy teachers in classrooms (face-to-face teachers) to find time for this. But they are entitled to expect that it has been done by textbook writers, who may be regarded as indirect teachers; and it is important to be able to recognize whether or not this essential first step has been adequately done.

The result of a conceptual analysis of this kind can usefully be represented in a concept map. Figure 3.8 gives an example. Concept maps (as the term is used here) differ from ordinary maps in two ways. They are directional, with arrows showing which concepts are prerequisite for which others. And where several arrows point to the same higher-order concept, the latter is incomplete until it embodies all of these earlier concepts.

Concept maps are, however, like ordinary maps in that they allow us to make our own plans for arriving at new concepts. A writer may suggest an order, but there is no unique correct order. There are several good orders—and many which will not get the learner there at all! So concept maps are also useful diagnostically, since difficulties in learning a certain concept may well have their causes further back, in that other necessary concepts have not been acquired.

Another result of using a concept map for the first time in one’s own teaching may not always be welcome, since (and I write from experience) one may find that one is teaching topics which one has never really understood. Fundamental changes may be needed in one’s own thinking, and this can be hard work. But there are two rewards, which encourage one to go on. Not only do our pupils learn better, and this is rewarding professionally; but we gain new insights into the nature of mathematics itself.

Summary

1 Mathematics at school level does not require special aptitudes in learners.
It is, however, much more abstract and hierarchic than most of the other subjects which children learn at the same age, and this makes special demands on teachers, including both face-to-face teachers and those who prepare books and other teaching materials.

2 Abstraction is a process by which we become aware of regularities in our experience, which we can recognize on future occasions. It is in this way that we are able to make use of our past experience to guide us in the present. Concepts are mental embodiments of these regularities.

3 Primary concepts are those which are abstracted from sensory experience; secondary concepts are abstracted from other concepts, which may be primary concepts or other secondary concepts. The more times this process is repeated, the more abstract and remote from sensory experience the concepts become. ‘Higher-order’ and ‘lower-order’ refer to greater and lesser degrees of abstraction.

4 New concepts cannot be communicated directly. Each learner has to construct them for himself, in his own mind. But a teacher can greatly help learners to do this, if he knows how. Helping in this way may for convenience be called ‘communicating a concept’, provided that we remember the indirect nature of this process.

5 In this sense, there are two ways of communicating a new concept. If the new concept is of the same order as those in the learner’s currently available schema, or of a lower order than these, the method of explanation is suitable. If, however, the new concept is of a higher order than those in the learner’s currently available schema, the method of giving carefully chosen examples must be used. The latter situation is particularly frequent in learning mathematics.

6 Since in mathematics these examples are themselves concepts, it is essential to make sure that these are available in the learner’s own mind. In order to plan for this, a conceptual analysis of the subject matter is essential.
The results of such an analysis can conveniently be represented in a concept map.

Suggested activities for readers

1 For this you need a box of attribute blocks. (These are widely available in primary schools and centres of teacher education.) Starting with the concept map developed in Figures 3.1, 3.2, 3.3, expand this into a concept map to illustrate the rest of the conceptual hierarchy which is embodied in the attribute blocks. Which concepts are secondary concepts, and which are primary? From your enlarged concept map choose further examples of the relationship more abstract than, and also lower-order than, higher-order than.

Make explicit the way in which the method of communicating concepts by giving examples has been used in the two sections of this chapter on abstraction and concept formation (pp. 52–60).

‘Thick’ and ‘thin’ really mean thicker than and thinner than. What examples would you use to communicate these more general concepts to a child? These are examples of order relationships. Heavier than is another example of an order relationship. Find some further examples, and draw a concept map to illustrate the formation of the concept order relationship.

If object A is thicker than object B, then object B is thinner than object A. The relationships thicker than and thinner than stand in a particular relationship to each other, which is called ‘the inverse of’. Another example: larger than is the inverse of smaller than. Find some more examples of the relationship inverse of, and draw a concept map which continues down to primary concepts.

On page 59 only one of the hierarchies needed for constructing the concept invisible export is shown. Complete those for banking, insurance, tourism. The advantage of doing a conceptual analysis using this example is that most of us know enough to make quite a good job of it, although the concept is highly abstract. This may or may not be the case for mathematics.
Refer back to the sequence of numbers in Chapter 2, p. 32. What concepts are prerequisite for seeing the pattern, and thereby using intelligent learning rather than habit learning?

Draw a concept map for a mathematical concept of your own choice. This should continue down to primary concepts. If several persons try this exercise with the same concept, comparing results may lead to some interesting discussion.

The construction of mathematical knowledge

Schema construction: the three modes of building and testing

We have seen that an essential feature of the present model of intelligence is schematic learning. In Chapter 3, we also saw that if what we learn at a given time is to be usable on future occasions, abstraction and conceptualization are also necessary. So our meaning of a schema has now expanded to mean a structure of conceptualized knowledge.

We have further noted that concepts and schemas cannot be communicated directly. Each individual has to construct them for himself, in his own mind. This is not too difficult with schemas in which all the concepts are of a low level of abstraction, such as the butterflies example. The more abstract the schema becomes, the greater the difficulty in constructing it, and thus the greater the need for help. Effective help of this kind becomes essential if we want children to acquire, in ten or fifteen years, knowledge which it has taken the best minds of mankind centuries to construct.

Fortunately, the right kind of teaching can greatly help the construction of mathematical schemas. Unfortunately, as we have seen, the wrong kind can put people off for life. The fact that this help is necessarily indirect makes a teacher’s task more sophisticated. In the present theory, constructing schemas is represented as a goal-directed activity, that of a second-order system which we have called delta-two, acting on a first-order system delta-one. (See Chapter 2, pp. 39–43).
Given that we cannot help in the most obvious and direct way, by taking over the job of a learner’s delta-two, how else can we help? Good teaching is not trying to replace the schema-constructing activity of the learner’s own delta-two, but providing as good learning situations as we can for this to do its own job. To do this, we need to know how delta-two sets about its task of constructing schemas in general, and mathematical schemas in particular. The latter can be more easily understood in the wider perspective provided by the inclusion of non-mathematical examples.

The term construction will hereafter be used to mean a combination of building and testing. These take place both at the delta-one level, of activity on the physical environment, and at the delta-two level, of building schemas whereby delta-one can act more effectively.

When we are constructing a wall, building consists of putting mortar on the wall, and then positioning another brick or building block. Having done this, we test for spacing, alignment, and verticality. Building a wall is a goal-directed activity, in which construction alternates between building it a little further, and then testing that the change to the partly-built wall is in the right direction, i.e. towards the way we want it to be when completed. Constructing a transistor radio likewise involves building and testing. Building consists mostly of choosing and connecting the right components, while the crucial part of the testing has to be deferred until the end, since it consists of finding out whether we now have an apparatus which will receive broadcasts.

To construct means putting together a structure, and in all cases, the structure is all-important. It is the difference between a pile of bricks and a wall, between a box of bits and a radio receiver; between knowledge, and a collection of unrelated facts and rules.
In the activities of delta-two when constructing schemas, we may distinguish three modes of building and three modes of testing. The first mode of building knowledge structures is direct experience, from which a mental model is built yielding testable predictions. The second is social: sharing knowledge, and discussion, are major features of academic life. In the third mode, our existing knowledge gives rise to new knowledge, e.g. by the extension of known patterns to new situations. These modes are more powerful when used in combination.

Figure 4.1

In conventional mathematics teaching, the main emphasis is on communication by the teacher of a method for doing a certain kind of task, after which the pupils practise using this method on further tasks of the same kind. Thus, only one at most of the six available means for intelligent learning is made available to the child, even assuming that the reasons behind these rules are also explained. It is no wonder that so many of them have problems. If we are to provide learning situations favourable for learners to construct their own schemas, these need to include methods and materials for bringing into use all six of these modes.

Mode 1: the importance of structured practical activities

The higher the building or other structure, the more important are its foundations, and this analogy applies with full force to the present discussion of knowledge structures. Schemas like the butterfly example may be compared to a bungalow, schemas like the invisible exports example to a skyscraper. In both cases, it is the primary concepts which correspond to the foundations. In the case of butterflies, these are available in the environment at the right time of year,¹ and this is also the case for mathematics in its earliest beginnings.
Children come to school having already acquired, without formal teaching, more mathematical knowledge than they are usually given credit for.² These beginnings are not sufficient to take the weight of the lofty abstract structures which we want them, with our help, to build. Nevertheless, they should be respected. It does not help if we ignore or brush aside what children have achieved already, and by implication teach them that this is of no value or relevance to mathematics as it is done in school. The help they need at this stage is to be found in activities which help them to consolidate and organize their informal knowledge, and to extend it in such a way that it begins to dovetail with the highly organized knowledge which is part of our social inheritance. When this happens, they are starting to build the kind of foundations which will be capable of supporting a knowledge structure of the skyscraper kind.

As a result of the foregoing analysis, it has been found possible to devise activities involving mode 1 schema building which not only help children to build structured mathematical knowledge, but to use this to make testable predictions. One of the early findings of the Primary Mathematics Project was that children take much pleasure from finding their predictions confirmed by events. And if this does not happen, they like to be in a position immediately to correct their own thinking rather than wait for a teacher to tell them where they went wrong. This allows them to experience mathematics, on a miniature scale, as increasing their power to predict and control their environment. It also gives them greater control in directing their own learning processes than does the kind of situation in which they depend mainly on their teacher to tell them whether or not they have answered correctly, and what they did wrong.
With activities of this kind, the event either does or does not happen as they predicted, which tells them whether their thinking was or was not correct. And by returning again to the physical materials for schema building, they can correct their mathematical model, and their use of it. Examples of this kind of activity will be given in Chapter 6, which may with advantage be read in parallel with the rest of the present chapter.

Mode 2: the value of co-operative learning³

Mode 2 learning is co-operative learning, by exchange of ideas and discussion. This can be combined well with mode 1 learning, and extended into more abstract areas of thinking. If activities which embody mathematical concepts are done in pairs or small groups, children will naturally talk about what they are doing. In such situations they will be talking about mathematics, as embodied in these materials and activities. This has a number of benefits.

First, they are putting their thoughts into words, in a mathematical situation. This is an important first step towards putting mathematics on paper, which is more difficult. Activities are also available which take the form of games, in which success depends largely on mathematical thinking. The rules for these games are largely mathematical, so whether a move is allowable or not depends on agreement about what is correct or incorrect mathematically. In this way, children correct each other’s mistakes in a way which is much less threatening than being told one is wrong by a teacher. Trying to justify, or disagree with, a move on mathematical grounds means explaining oneself clearly, and this requires one to get these ideas clear in one’s own mind. Simply speaking one’s thoughts aloud takes one a step in that direction.

The activities based on physical embodiments of mathematical ideas provide shared sensory experiences which ensure that there is common ground for children’s discussion.
Other activities, in which symbols such as number cards are used, are more abstract. In this case the shared experience is partly at a symbolic level, but more importantly at a mental level in the form of shared mathematical ideas and experiences. The benefits already described continue to apply, with equal or greater force. Activities of these kinds are also described in Chapter 6.

Mode 3: creativity in the learning of mathematics

Learning in the ways described makes it possible for the natural creativity of the child to come into action. This is a resource which we expect to be used in subjects such as art, dance, drama. The suggestion that it is an important resource in the learning of mathematics may, however, come as a surprise.

In the context of mathematics, creativity means mental creativity: using existing knowledge to create new knowledge. We create also in the physical world. Almost daily we encounter new inventions, new kinds of car, newly styled garments—consumer durables and ephemera of all kinds. Outwardly this appears as an activity of delta-one. But delta-one has to be provided with plans of action before physical creation takes place. Invention and design take place first in the mind, and on the drawing board: then in the physical world. Here there is an interaction between mental and physical creativity—between the activities of delta-two and of delta-one.

Mathematics is mostly a delta-two activity. But the schemas, the knowledge structures, of mathematics provide strong support for many of the inventions which take place in the fields of science and technology. The jet engine, one of the most conspicuous technical inventions of our time, began as several pages of mathematical equations in Whittle’s notebook. Mathematics, like the arts, has an aesthetic quality of its own which gives much pleasure to those who can experience it.
It is good if we can put children in the way of experiencing this pleasure. Since mathematics takes place in our minds, this makes mathematical creativity something personal; but communicating the results can result in a shared pleasure. Here is an example. (The activity the children were engaged in introduces the concept of multiplication, and is described in full detail in Chapter 6.) Five children were playing, and each was provided with a small oval card and some small objects such as shells, buttons, acorn cups.

Figure 4.2

To begin with, one child was asked to make, on his oval set card, a set of whatever number he chose. The others were then asked to make matching sets—that is, sets of the same number—on their own set cards. The children checked each other’s sets, and these were then brought together inside a large set loop. In the present example, the first child had made a set of 4 shells. So there were in the large set loop 5 sets, each of 4 objects, as shown in Figure 4.2. I said (pointing), ‘Now we have five sets, with four in each. How many altogether?’ Most of the children started to count, but after only a few seconds one little girl (this was a class of top infants, so her age was rising seven) surprised me by saying ‘Twenty’. Since I didn’t think she could have counted them all in so short a time, I asked ‘How did you answer so quickly?’ She replied (pointing), ‘Two fours, eight, and two from this four makes ten. Then there’s the same again—twenty.’ This incident gave me much pleasure, as it has to others to whom I have related it. It is an example of what mathematicians often call ‘an elegant solution’, and shows that we need not be high-powered mathematicians to experience and share this pleasure. Creativity of this kind is based on structured knowledge, not on ignorance.
So teaching of the kind which helps children to build up their knowledge structures does not interfere with their creativity: it provides resources. Amanda already knew a lot of addition relationships, so from these she was able to construct a multiplication relationship. The more we know, the greater the resources from which we can create new knowledge. Clearly it involves intelligent learning, not habit learning. But fluency is valuable, and must not be confused with habit learning. Amanda had her knowledge readily available, and was able to choose from the many addition facts she knew just those which were useful at that moment. By itself, it is often hard to distinguish fluency from habit learning. The criterion is adaptability of the uses to which the knowledge is put. A correct answer to ‘What is 4+4+2?’ could have been given by habit learning. But to select from all the possible combinations just that sum, and apply it to the present situation, evidenced intelligent learning combined with fluency. Extrapolation of a known pattern is another example of creativity. When a child can count 1, 2, 3, 4,…, he does not start from scratch when extending the pattern to 10, 20, 30, 40…and 100, 200, 300, 400,…. Communication of these new patterns will start him off, after which he may be able to continue them without further outside help. If so, he is using his creativity to extrapolate his knowledge of the counting pattern. He may at the same time be extrapolating his concept of number, though initially it is good for further support to be provided in the form of physical embodiments of the base ten system of numeration. In this way schema building by modes 1, 2, and 3 can be combined. Another example of using existing knowledge to create new knowledge may be found in the topic of multiplication. It is no small thing that once we know all the products from 1×1 to 9×9, we can find the product of any two numbers we choose.
If the processes of multiplying numbers of any size first by a single-digit number, and then by numbers of several digits, are learnt as mechanical rules, little understanding and no creativity are involved. Children are simply being taught to function mechanically like a calculator, and for getting the answer quickly and accurately it is much better for them to use calculators. But the mathematical principles which make these extrapolations valid are of great importance and generality. For example, multiplying 37 by 4 is possible because four thirty-sevens is equal to four thirties plus four sevens. This is the case whatever the numbers, a property of the natural number system which forms one of the foundations of algebra:

a×(p+q) = a×p + a×q

where a, p, q stand for any numbers. We do not expect children to make this extrapolation unaided. If they are helped to do so by good teaching, creativity is still being used, albeit in combination with other modes. And valuable meta-learning is taking place, about the nature of mathematics itself.

Schemas and long-term learning

Some of the reasons why schemas are important for intelligent learning have already been discussed, and may usefully be brought together at this stage.

(i) They make possible understanding, and thereby adaptability.
(ii) They provide a rich source of plans of action and techniques for a wide variety of applications.
(iii) Shared schemas have important social functions. They facilitate co-operation on the basis of shared understanding, and plans which fit together to achieve shared or compatible goals. The shared knowledge of any given profession is an important example of this.

This list can be extended, with special reference to long-term learning.

(iv) Learning is easier.
(v) Retention is better.
(vi) Future learning is also easier.

Learning is easier

We have already seen, in Chapter 2 (pp.
40–41), that when schematic learning is possible, it is much easier. This difference was investigated systematically in one of my early experiments over a four-week period, with two groups totalling over 60 schoolchildren. In this experiment, it was found that material learnt schematically was, when tested immediately after learning, remembered slightly more than twice as well as identical material learnt by rote.4

Retention is better

One day later, the schematically learnt material was remembered three times better, and four weeks later, seven times better. The difference in long-term recall was even more striking than in the short-term learning.

Future learning is also easier

The build-up of the schemas was spread over four one-hour periods, on four successive days. The new material presented each day was such that it could be understood in terms of the schema already acquired. When this had taken place, the expanded schema was available on the following day for the learning with understanding of that day’s new material. This principle is quite general. When learning takes place with understanding, new concepts are formed and connected with an appropriate schema. The schema itself thereby expands, and after a period of consolidation we are capable of assimilating yet more ideas which previously would have been beyond the reach of our understanding. So our schemas grow by this combined process of assimilating new experiences to themselves, and thereby expanding. The process is organic in quality. When a seed germinates, it first puts out roots downwards, and shoots with leaves upwards. These gather nourishment from the soil, and energy from the sun by photosynthesis. The combined result is transformed by the growing plant or tree into its own substance, including more roots, and more leaves.
The increased root and leaf system are able to take in more nourishment than before, and so the results of past growth facilitate future growth.

Implications for teaching

These are twofold.

(i) To enable children to learn with understanding, we must wherever possible ensure that the new concepts embodied in the learning materials we provide are such as can be assimilated to their existing schemas.
(ii) We must give particularly careful thought to the foundation schemas in every topic.

The first implication can be seen as a straightforward expansion of the principles of concept formation, already described in Chapter 3; and the conceptual analysis, with the resulting concept map, provides a foundation for this approach. This will tell us what children should know already, if they are to be in a position to learn a particular topic with understanding. Complementary to this, we need to know whether they do know it. For this, we need some means of assessment which allows us to go beyond checking children’s written work, in which it is often hard to distinguish between rote learning and learning with understanding. We need to be able to diagnose the state of development of children’s schemas. How activity methods can help us to do this will be discussed in Chapter 6. The second implication is particularly important for the following reasons. First, our schemas are highly selective. What can be assimilated to them is remembered so much more easily than what cannot that our schemas tend to be self-perpetuating. Ideas which do not fit our existing schemas are likely to be ignored, rejected, or forgotten. This is one reason why it is important to start pupils off with the right schemas. A second reason arises when we encounter new ideas which do not fit our existing schemas, but which we cannot ignore, reject, or forget. To understand these new ideas it is not enough to expand our present schemas.
It requires us to make more radical changes—to replace some of our concepts by different ones, to make different connections. This process I call re-construction, and it is much more difficult. It is also usually unwelcome. Our schemas are very useful to us. They are important parts of our mental equipment. So it is not really surprising that when we encounter situations that demand that we take them to pieces and build them differently, this is experienced as threatening, and we react with anxiety and hostility. There are plenty of historical examples of this. When Pythagoras discovered that the length of the hypotenuse of a right-angled triangle could not always be expressed as a rational number, he swore the members of his school to secrecy. In his book Men of Mathematics, Bell writes: ‘When negative numbers first appeared in experience, as in debits instead of credits, they, as numbers, were held in the same abhorrence as “unnatural monstrosities” as were later the “imaginary” numbers’.5 So we need to understand these defensive reactions in our pupils, for example when first they encounter fractions. These cannot be understood by assimilation to their existing schema, that of the counting numbers. As they understand the word, these are not numbers. They are not even told that they are being asked to expand their idea of number to include a new kind, still less are they systematically presented with all the concepts by which they can build up a schema which will be capable of assimilating the idea of a fractional number.6 We should also be on the look-out for defensive reactions in ourselves if, for example, we meet ideas which suggest radical changes in our concept of good teaching. So far as our pupils are concerned, we can help to save them from having to re-construct their schemas more often than is absolutely necessary by taking careful thought about the foundation schemas.
Here is an example of how this may be done. Multiplication is often taught to young children as repeated addition. For the natural numbers this causes no problem, and is probably the easiest for them to understand. But it causes problems later, for example when we ask them to learn how to multiply fractions. Here, the concept of repeated addition has no meaning. What is more, the children are seldom told that we have changed the meaning of the word ‘multiplication’. If, however, we teach children multiplication as the combination of two operations, this is only a little harder initially; and they now have a concept which can be expanded without re-structuring to include all the other kinds of multiplication they are likely to meet. This is how it was being introduced in the incident related on p. 79. (Please refer back to Figure 4.2 for what follows.) The first operation here is ‘Make a set of 4 objects’. The result is a set, which is now treated as an object on which the next operation is done: ‘Make a set of 5 such sets.’ The combination of these two operations is equivalent to the single operation ‘Make a set of 20 objects’, and this single operation is called the product of the other two operations. This basic schema for multiplication is no harder to understand than repeated addition when it is presented in the physical embodiment described here and in Chapter 6; but the combination of two operations can be expanded without reconstruction to apply to the multiplication of fractions, of matrices, and of functions in general.7

Schemas and the enjoyment of learning

During early experiments in schematic learning, a strong subjective impression was gained that children enjoyed learning in this way more than rote learning. To test this hypothesis, a group of primary school children were given material to learn which, though similar in the individual items of content, differed in the way this content was presented.
One set of material was selected from the original schematic learning experiment and similarly organized; the other set consisted of further material from the same source, but random in presentation so that learning by the progressive build-up of schemas was virtually impossible. At the end of this session, the experimenter said that she would be coming back in a week, and children could choose which kind of learning task they would like to do. They were asked to put a tick at the bottom of their answer sheets to indicate which kind they wanted to do. As expected, a large majority opted for another schematic learning task. There were, however, several children who ticked the other box, so on her next visit the experimenter asked them why they had made this choice. They replied that in the first set they had learnt, they had been able to see how all the separate things to be learnt fitted together. The second set looked like the first, so they thought there had to be a pattern, but they had not been able to find it. So they thought it would be interesting to see if they could find the pattern next time.8 Readers who are able to do with children some of the activities described in Chapter 6 will be able to judge for themselves whether these particular children find intelligent learning intrinsically enjoyable, independently of external rewards such as pleasing their teacher, stars against their names, and the like. Most children do, but this does vary between schools. In a few of the schools we9 have worked in, we found that the children were watching our faces more than the materials we were working with, and were clearly trying to find the answers which would please us, rather than those which made sense to them in terms of the activity. There are some schools in which the climate is not favourable to intelligent learning: this problem will be discussed further in
It is however good to report that the great majority of the children we have worked with clearly evidence a preference for intelligent learning. At the end of a session, we usually ask ‘What do you think of this activity?’ A frequent response from children we have been working with for the first time is ‘We like this better than Maths’! It has been suggested elsewhere10 that in the evolution of homo sapiens, natural selection for intelligence has played an important part. If this is the case, then there are biological grounds for thinking that enjoyment of intelligent learning is innate, and intrinsic in the activity itself. Since human young are also dependent on their parents, and other adults, for a longer proportion of their lives than in any other species, pleasing important adults is clearly important too. One of the tasks of adolescence, however, is to reduce this importance, that of relations with peers taking on even more importance than before. Since we have already seen that learning with understanding reduces teacher-dependence and increases personal confidence, we now have developmental and social reasons to add to those already discussed in favour of cooperative schematic learning. Summary 1 Knowledge structures (schemas) have to be constructed by every individual learner in his own mind. No one can do it directly for them. But good teaching can greatly help, and the more abstract and hierarchical are the knowledge structures (schemas) which are to be built, the more this help is needed. 2 The best help is not in trying to replace the schemaconstructing activity of the learner’s own delta-two, but by understanding how it sets about its task, to provide learning situations which are favourable to schema construction. 3 Constructing here means building and testing. We can distinguish three modes of building, and three corresponding modes of testing: see Figure 4.1, page 74. 
Briefly, these are:

           Mode 1       Mode 2          Mode 3
Building   experience   communication   creativity
Testing    prediction   discussion      internal consistency

These are more powerful when used in combination, so good learning situations are those which provide opportunities for using all of these, though not necessarily in the same activity.

4 Learning situations of this kind include:
(i) structured practical activities
(ii) co-operative learning in small groups of children
(iii) those which use children’s natural creativity.

5 When a number of parts are connected in the right way, the resulting whole may have important properties which would have been hard to predict from knowledge of the separate components. For this to happen, the right structure is essential.

6 Some of the properties of well-structured schemas are as follows.
(i) They make possible understanding, and thereby adaptability.
(ii) They provide a rich source of plans of action and techniques.
(iii) Shared schemas facilitate co-operation.
(iv) Learning is easier.
(v) Retention is better.
(vi) Future learning is also easier.
(vii) Intelligent learning is intrinsically pleasurable for most children, and does not depend on external rewards or punishments.

7 Because of the importance of children’s schemas for long-term learning, we need to try to ensure that at every stage, the new concepts to be learnt can be assimilated to children’s available schemas. This requires careful long-term planning.

8 Sometimes we encounter ideas which cannot be assimilated to an available schema, and re-construction of the schema is required before this can take place. This is often unwelcome and difficult. For this reason, and to minimize the need for reconstruction on future occasions, particular care is needed with the foundation concepts on which a schema is to be built.
9 If the conditions described in 7 and 8 are not met, learning with understanding comes to an end, and only rote learning is possible. For mathematics this is so inefficient that further progress of any kind is unlikely, and pupils give up.

Suggested activities for readers

1 See Chapter 2, pp. 32–33. In order to use intelligent learning for the sequence shown there, what schema did you need to have available? A concept map would be a good way to present your answer. You may want to draw two, one for this particular case, and a more general one which applies to other examples of the same kind. You could also show the expanded schema, after assimilation of this new material.

2 Early numbers, below 10, need only simple expansion of concepts which children already have. But expansion to numbers beyond 10, and their representation in place-value notation, require additional concepts to be assimilated to the number and notation schemas before the representation of larger numbers in place-value notation can be understood. One of these is the idea of a set. Next is the expansion of the idea of counting single objects to the idea that we can regard sets also as countable objects. Thus we collect a particular number of players into a team (which is a special kind of set), and we then collect a given number of these teams into a league (which is a set of such sets). Think about the ways in which this principle is applied when we write larger numbers in hundreds, tens, and units using headed columns. What other concept(s) is/are required for understanding place-value notation: that is, the representation of units, tens, hundreds, etc., without labelled columns? On the basis of your analysis, draw a concept map for place-value notation.

3 Devise a physical embodiment of:

(30+7)×4 = 30×4 + 7×4

Where, in the usual way of writing this calculation, is this ‘swept under the carpet’?
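The identity in activity 3, the distributive law a×(p+q) = a×p + a×q, is exactly what makes multi-digit multiplication valid. As a numerical check, here is a short Python sketch (an editor's illustration, not part of the original text; the function name is invented) that multiplies by decomposing a number into its place-value parts:

```python
# Multiply n by a using the distributive law, as in 37*4 = 30*4 + 7*4.
def split_multiply(n, a):
    """Multiply n by a via place-value decomposition of n."""
    total = 0
    place = 1
    while n > 0:
        digit = n % 10
        total += (digit * place) * a   # e.g. 7*4, then 30*4
        place *= 10
        n //= 10
    return total

assert split_multiply(37, 4) == 37 * 4 == 30 * 4 + 7 * 4

# The law holds for any numbers, not just these:
for a in range(1, 50):
    for n in range(1, 200):
        assert split_multiply(n, a) == n * a
```

The ‘carpet’ of activity 3 is visible here as the explicit sum of partial products, which the compact written algorithm hides in its layout.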
Understanding mathematical symbolism

The power of symbolism

The power of mathematics in enabling us to understand, predict, and sometimes to control events in the physical world lies in its conceptual structures—in everyday language, its organized networks of ideas. These ideas are purely mental objects: invisible, inaudible, and not easily accessible even to their possessors. Before we can communicate them, ideas must become attached to symbols. These have a dual status. Symbols are mental objects, about which and with which we can think. But they can also be physical objects—marks on paper, sounds—which can be seen or heard. These serve both as labels and as handles for communicating the concepts with which they are associated. Symbols act as an interface, in two ways: between our own thoughts and those of other people; and between those levels of our mind which are difficult of access, and those easily accessible. Though the power of mathematics lies in its knowledge structures, access to this power is dependent on its symbols. Hence the importance of understanding the symbolism of mathematics. Symbols help us in a number of other ways too. Here are ten: there may be others.

1 Communication.
2 Recording knowledge.
3 The formation of new concepts.
4 Making multiple classification straightforward.
5 Making possible reflective activity.
6 Explanations.
7 Helping to show structure.
8 Making routine manipulations automatic.
9 Recovering information and understanding.
10 Creative mental activity.

These are discussed at greater length elsewhere.1 Here I will expand on just two of them: those numbered 8 and 5.

Making routine manipulations automatic

Thinking is hard work, and the amount of information to which we can attend at one time is limited.
Once we have understood a mathematical technique, if this is one which is often used it is a great advantage to be able to do it on future occasions with a minimum of conscious thought, and without having to repeat the conceptual processes which were originally involved. Symbols enable us to do this, in two ways. First we detach them from their concepts, and manipulate them in the same ways as before, but for the time being independently of their related concepts. This greatly reduces the amount of information to be handled. It is like moving people’s names around on a seating plan, rather than asking the people themselves to move around a full-sized table. Second, we routinize these manipulations so that we can do them with a minimum of conscious thought. This is not only useful, but essential if progress is to be made, in the same way as writing and spelling words need to be routinized so that our conscious thinking can concentrate on the ideas we are trying to put on paper. This automatic performance of routine processes must, however, be clearly distinguished from the mechanical arithmetic, taught by drill-and-practice, which has in the past often taken the place of learning with understanding. A machine does not know what it is doing. A human being does, and at any time during the automatic performance of a routine process we can pause and re-establish the connections between symbols and concepts. It is essential that these connections are not lost. The power of mathematics is in its ideas, and the many benefits of symbols in helping us to access and manipulate these ideas will be lost unless we retain the ability to re-invest symbols with meaning. Re-attaching symbols to concepts restores access to the mathematical structures which justify the routines, indicates when they do not apply, and allows us to adapt our methods to new cases.
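The detachment of symbols from their concepts can be made quite literal in code. This sketch (an editor's illustration, not from the text) performs column addition purely on digit-symbols treated as characters: the routine shuffles marks and carries without ever forming the numbers themselves.

```python
# Column addition as pure symbol manipulation: the numerals are strings
# of digit-symbols, and the routine never converts them to whole numbers.
DIGITS = "0123456789"

def add_numerals(a, b):
    """Add two numerals column by column, treating digits as symbols."""
    width = max(len(a), len(b))
    a = a.rjust(width, "0")
    b = b.rjust(width, "0")
    result = []
    carry = 0
    for da, db in zip(reversed(a), reversed(b)):
        total = DIGITS.index(da) + DIGITS.index(db) + carry
        carry, column = divmod(total, 10)
        result.append(DIGITS[column])
    if carry:
        result.append("1")
    return "".join(reversed(result))

assert add_numerals("572", "49") == "621"
```

Pausing the routine at any column and asking what the carry stands for is exactly the re-attachment of symbol to concept described above.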
Reflective activity

By this I mean becoming conscious of our own thinking processes, and sometimes intervening in these, e.g. by devising new methods or improving the ones we have, or examining critically conclusions we have reached intuitively. Here is an example.

Figure 5.1

When they are first learning activities which involve moving forward along a number track by counting on a given number of spaces, children often include the starting square in their count. Thus, a child with a marker on square 3 above who threw a 4 with a die might point to 3, 4, 5, 6 and finish on 6. There are two ways in which a teacher might deal with this. The first is simply to tell the child that, when counting on, the starting square should not be included. This may change their faulty method, but it does not bring understanding. The second, which we have found effective, is to ask them where they would finish if they threw a 1. I have not yet found a child who thought that this meant they would stay still: they always respond correctly to this case by counting on one space. Working on from this (‘…and if you threw a 2?’), children correct their method by the exercise of their own reflective intelligence. They are starting from a case in which a correct answer is almost certain, and extrapolating it, each stage being consistent with what they have already built. This exemplifies what I mean by not trying to replace the learner’s delta-two, but helping it to do its own job better. In this example, part of the help takes the form of talking about the method used: relating it to verbal symbols. Another part lies in the powerful symbolism of the number tracks, which will later be developed into the even more powerful symbolism of the number line. And the right starting example was also very important in this particular case. This kind of teaching is much more sophisticated and professional than just telling them what to do.
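The faulty and the correct counting-on methods differ by a classic off-by-one, which can be modelled directly. This small Python sketch (an editor's illustration; the function names are invented) shows why a throw of 1 is such a good diagnostic case:

```python
# Model the two counting-on methods from the number-track example.
def count_on_correct(start, throw):
    """Count on 'throw' spaces, not including the starting square."""
    return start + throw

def count_on_faulty(start, throw):
    """The common error: the starting square is included in the count."""
    return start + throw - 1

# The child on square 3 who throws a 4:
assert count_on_correct(3, 4) == 7   # lands on 7
assert count_on_faulty(3, 4) == 6    # points to 3, 4, 5, 6

# The teacher's diagnostic case, a throw of 1: the faulty method
# would mean staying still, which no child believes.
assert count_on_faulty(3, 1) == 3    # stays on 3, clearly wrong
assert count_on_correct(3, 1) == 4   # moves one space
```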
Long-term, it is greatly more effective.

Symbol systems

Though the powers conferred by the use of symbols are great, we are so used to them that we tend to take them for granted. The task of acquiring this understanding is also considerable, and we easily overlook the achievement of children in learning to speak their mother tongue with considerable mastery by the age of five. But we cannot overlook the difficulties which many children have in learning to understand mathematical symbols. For the children, this means to assimilate them to an appropriate schema. For us as teachers, this means not only to do this, but also at a higher level, to understand what symbolism is, what it does, and how it does it; and what, in this case, constitutes assimilation to an appropriate schema. Symbols do not exist in isolation from each other. They have an organization of their own, by virtue of which they become more than a set of separate symbols. They form a symbol system. This consists of:

a set of symbols, corresponding to a set of concepts;
together with a set of relations between the symbols, corresponding to a set of relations between the concepts.

What we are trying to communicate are the conceptual structures. How we communicate these (or try to) is by writing or speaking the symbols. The first are what is most important. These form the deep structures of mathematics. But only the second can be transmitted and received. These form the surface structures. Even within our minds the surface structures are more accessible, as the term implies. And to other people they are the only ones which are accessible at all. But the surface structures and the deep structures do not necessarily correspond, and this causes problems.

Deep structures and surface structures

Here are some examples to illustrate the differences between a surface structure and a deep structure. What has this to do with mathematics?
At a surface level wet rags and cups of tea would seem to have little connection with mathematics. But at a deeper level, this distinction between surface structures and deep structures, and the relations between them, is of great importance when we start to think about the problems of communicating mathematics. For convenience let us shorten these terms to S for surface structure, D for deep structure. S is the level at which we write, talk, and even do some of our thinking. The trouble is that the structure of S may or may not correspond well with the structure of D. And to the extent that it does not, S is confusing D as well as supporting it. Let us look at some mathematical examples. We remember that a symbol system consists of:

(i) a set of symbols, e.g. 1 2 3… 1/2 2/3 3/4… a b c…
(ii) one or more relations on those symbols, e.g. order on paper (left/right, below/above); order in time, when spoken.

But since the essential nature of a symbol is that it represents something else—in this case a mathematical concept—we must add:

(iii) such that these relations between the symbols represent, in some way, relations between the concepts.

So we need to examine what ways these are, in mathematics. Here is a simple example. (Remember that ‘numeral’ refers to a symbol, ‘number’ refers to a mathematical concept.)

Symbols: 1 2 3…
Concepts: the natural numbers
Relations between symbols: is to the left of (on paper); before in time (spoken)
Relations between concepts: is less than

This is a very good correspondence. It is of a kind which mathematicians call an isomorphism. Place value provides another well-known example of a symbol system.
But taken with the earlier example, we find that we now have the same relationship between symbols, is immediately to the left of, symbolizing two different relations between the corresponding concepts: is one less than and is ten times greater than. We might take care of this at the cost of changing the symbols, or introducing new ones; e.g., commas between numerals in the first example. But what about these? 23 Here we have one meaning for the order relationship between 23 and 24, 24 and 25, etc., and another for the order relationship between the 2 and 3 of 23, etc. And how about these? 23 These three can all occur in the same mathematical utterance. This inconsistency of meaning is not just carelessness in choice of symbol systems; it is inescapable, because the available relations on paper or in speech are quite few: left/right, up/down, two dimensional arrays (e.g. matrices); big and small (e.g. R, r). What we can devise for the surface structure of our symbol system is inevitably much more limited than the enormous number and variety of relations between the mathematical concepts, which we are trying to represent by the symbol system. Looking more closely at place value, we find in it further subtleties. Here we have numbers greater than 9 represented by numerals of several digits. (Reminder: a digit is a singlefigure numeral, such as 0, 1, 2,…9.) Consider the symbol: 572. At the S level we have three digits in a simple order relationship. But at the D level it represents 96 Understanding mathematical symbolism (i) three numbers (ii) three powers of ten: These correspond to the three locations of the numerals, in order from right to left. (iii) three operations of multiplication: the number 5 multiplied by the number 102 (=100), the number 7 multiplied by the number 101 (=10), the number 2 multiplied by the number 100 (=1) (iv) addition of these three results. 
Of these four sets of ideas at D level, only the first is explicitly represented at S level by the numeral 572. The second is implied by the spatial relationships, not by any visible mark on the paper. And the third and fourth have no symbolic counterpart at all: they have to be deduced from the fact that the numeral has more than one digit. Once one begins this kind of analysis, it becomes evident there is a large and little-explored field.2 For our present purposes, it is enough if we can agree that the surface structure (of the symbol system) and the deep structure (of the mathematical concepts) can at best correspond reasonably well, in limited areas, and for the most part correspond rather badly.

Different ways of understanding the same symbol

The same symbol will be understood in different ways, according to which schema it is assimilated to. The word 'field' will be understood differently according to whether the schema to which it is assimilated is of agriculture, cricket, physics (e.g. electromagnetic field), general academic (mediaeval history, the plays of Shakespeare), or mathematical. If a person has all these schemas available (to varying degrees, unless he is unusually knowledgeable), what determines the schema selected? A simple explanation which is sufficient to start with is to say that the sensory input, usually a spoken word or a visual symbol, is attracted to whichever schema is most active at the time. For a person with only two of these schemas, there would be only two contenders. And for a person with only one schema, there would be only one way in which he could understand what he heard, or saw written. If a child has written this and we ask him to write a larger number, sometimes we get this response:

(illustrations not reproduced: the child writes the same numeral again, only in larger handwriting)

Such a child is understanding only at a surface level: at the level of marks on paper. At this level it is a perfectly good understanding.
He knows the meaning of 'larger', and if we were asking for larger writing, his response would be correct. But he hasn't taken the first step towards symbolic understanding, since the essence of a symbol is that it stands for something else as well as for itself. Our request was not for a larger numeral, but for a numeral representing a larger number. An older child would be likely to assimilate our request to his schema of numbers rather than of numerals: that is, to a deep structure, from which he would have a correct understanding of our meaning.

Symbolic understanding

If we apply our general conception of understanding (see Chapter 2, pp. 42–43) to the present case, we have this formulation as a starting point. Symbolic understanding is a mutual assimilation between a symbol system and an appropriate conceptual structure. Now we are concerned not with the assimilation of concepts to schemas, of small entities to large ones, but with the mutual assimilation of two schemas, of two entities which are comparable in size and each of which has a structure of its own. When something like this happens, there is the possibility that one organization may tend to dominate the other. When the organizations are businesses, or nations, or political groups, the power struggles are often prolonged and destructive. They may also be sadly unnecessary, since a co-operative partnership could often have been for the benefit of both. The power of mathematics is in the ideas. In the right partnership, symbols help us to make use of this power by helping us to make fuller use of these ideas. In the wrong relationship, a weak or barely existent conceptual structure is dominated by its symbol system, and mathematics becomes no more than the manipulation of symbols. Sadly, this is the way it is for too many children. So how can we help children to build up an increasing variety of meanings for the same symbols?
How can we prevent them from becoming progressively more insecure in their ability to cope with the increasing number, complexity, and abstractness of the mathematical relations they are expected to learn?

A resonance model

We need a model to support our thinking in this difficult and abstract area. The one I offer is based on the phenomenon of resonance. This is one which is widespread in the physical sciences, and many readers will already be familiar with it. For those who are not, the following experiment is a good introduction.3 You need a piano (of the traditional kind, with strings!); and it is worth taking a little trouble to obtain access to one, since some of the effects are quite striking. First lift the lid. Choose a note which you can sing comfortably, and slowly press down the key for that note so that the damper is lifted, but the string is not struck by the hammer. Now sing the note into the piano, stop, and listen. You will find that although you have not touched or struck the string, it is vibrating audibly. Repeat, with different notes. Now try this again with the loud pedal raised. In this case you will find that the string corresponding to the note sung responds most strongly, but others sound also. These are the strings whose frequencies are related to that of the note sung. (For a more detailed explanation, see the reference already cited.) Finally, raise all the dampers as before, and sing a vowel sound: say AAH. Repeat with others, lowering the dampers in between. Try other sounds, including short ones: e.g. a as in cat, i as in hit. In all cases the strings will resonate in combination to give back the sound of your voice with surprising accuracy. The reason for this is that a vowel sound is not a pure frequency, but a combination of related frequencies. So the piano strings which resonate are those with this same combination.
Other examples of resonance are widespread throughout science and technology. For example, radio and television receivers contain tuned electrical circuits which respond selectively, and are thereby able to choose from the many electromagnetic waves reaching our antennae those which carry the broadcast we choose to receive. The starting point for our present model is to suppose that conceptualized memories are stored within structures which are selectively sensitive to different patterns in the same way as the tuned circuits described above. Sensory input which matches one of these wave patterns causes resonance in the corresponding tuned structure, or possibly several structures together, and thereby sets up the particular pattern of a certain concept or schema. We all have many of these tuned structures corresponding to our many available schemas, and sensory input is interpreted in terms of whichever one of these resonates with what is coming in. What is more, different structures may be thus activated by the same input in different people, and at different times in the same person. Different interpretations will then result, as described on pp. 97–8. A related idea is put forward by Tall,4 who has suggested that a schema can act as an attractor for incoming information. He took this idea from the mathematical theory of dynamic systems; but if we combine it with the resonance model, we can offer an explanation of how this attraction might take place. Sensory input will be structured, interpreted, and understood in terms of whichever resonant structure it activates. In some cases, more than one resonant structure may be activated simultaneously, and we can turn our attention at will to one or the other. In our television sets, the sound input is attracted by one set of tuned circuits, and this signal is amplified and fed to the loudspeaker; while the vision input is attracted by a different tuned structure, which is used to control the screen image. 
We normally combine these into one audio-visual experience, but can if we wish attend more to one or the other. There are, however, cases in which one schema captures all the input. This capture effect is well known to radio engineers, who have put it to good use: for example, in radio and television circuits for automatic frequency control (AFC). Figure 5.2 illustrates the process of communicating a mathematical idea. Note that in the diagram each point represents not a single concept but a schema, in the same way as a dot on an airline map can represent a whole city—London, Atlanta, Rome.

Figure 5.2 The path which must be taken by a communication if it is to be understood conceptually.

How can we help?

How can this theoretical model help our thinking, and what practical help does this suggest for our teaching? Since communication is by the utterance of symbols, all communication, whether oral or written, first goes into a symbol system in S. To be understood mathematically, it must be attracted to an appropriate conceptual structure in D, in order that the input is interpreted in terms of the relationships within the conceptual structure, rather than those of the symbol system. (For example, 572 must be interpreted, not as three single-digit numbers, a five, a seven, and a two, but as a single number formed by the sum 5 hundreds plus 7 tens plus 2 units.) This requires:

(i) that D is a stronger attractor than S. If it is not, S will capture the input, or most of it;
(ii) that the connections between the symbol system and conceptual structure are strong enough for the input to go easily from the first to the second.

How, in our teaching, can we help these two requirements to be fulfilled in children's learning? S has a built-in advantage: all communicated input has to go there first. And for D there is a point of no return.
In the years' long learning process, if the deep conceptual structures are not formed early on, they have little chance to develop as attractors. And for too many children, D is effectively not there: it is either absent, or too weak to attract the input away from S. In these cases, all input will be assimilated to S: the effort to find some kind of structure is strong. So S will build up at the expense of D. But this guarantees problems, for we have seen that the symbol system is inconsistent. Learning at this level may be easy short-term, but it becomes impossibly difficult long-term. This reveals a built-in advantage for D, since in contrast the conceptual structures of mathematics are particularly coherent and internally consistent. If these once do get well established, input to S will evoke more extensive and meaningful resonances in D than in S, and D will attract much of the input. Long-term, what is learnt in this way is much easier to acquire and retain. So an important part of the answer is that already given in Chapter 3: by a careful analysis of the mathematical concepts, we must sequence the learning materials in such a way that the new materials which children encounter can always be assimilated conceptually. The model also underlines the importance of structural practical activities, using physical embodiments of mathematical concepts and operations. The structure of these physical events matches the mathematical structures at the D level much more closely than it does the symbolic representations in S. For example, a rod of 5 cubes put next to a rod of 3 cubes embodies a visual comparison between the two numbers, in which the number 5 can be seen to be 2 greater than the number 3. So we here have a situation in which the sensory input goes directly to the conceptual structure.
By using first a do-and-say approach, followed by recording, the symbols then become linked with the conceptual structure after the latter has been established. In the all-important early years, we should stay with spoken language much longer. The connections between thoughts and spoken words are initially much stronger than those between thoughts and written words, or thoughts and mathematical symbols. Spoken words are also much quicker and easier to produce. So especially in these early years we need to resist pressures to have 'something to show' in the form of pages of written work. And we can see an additional value in mathematical discussion, in its use of the spoken word. It is also helpful to allow children to use transitional, informal notations as bridges to the formal, highly condensed notations of established mathematics. By allowing children to express thoughts in their own ways to begin with, we are using symbols already well attached to their conceptual structure. These ways will probably be lengthy, ambiguous and different between individuals. By experiencing these disadvantages, and by discussion, children may be led gradually to the use of conventional notation in such a way that they experience its convenience and power.

A revised formulation

In the light of the foregoing discussion, I offer the following revised formulation of symbolic understanding as a goal for our teaching. Symbolic understanding is a mutual assimilation between a symbol system and a conceptual structure, dominated by the conceptual structure. Symbols are excellent servants, but bad masters, because by themselves they do not understand what they are doing.

Summary

1 The power of mathematics is in the ideas; but access to these ideas, and the ability to communicate them, depends on mathematical symbolism.
2 It is also by the use of symbols that we achieve voluntary and rational control of our own thinking.5
3 A symbol system consists of: a set of symbols, corresponding to a set of concepts; together with a set of relations between the symbols, corresponding to a set of relations between the concepts.
4 Symbol systems are surface structures in our minds; conceptual structures are deep structures.
5 Doing mathematics involves both levels: the manipulation of deep mathematical concepts, using symbols as combined handles and labels. But for many children (and adults) these concepts are not there. So they learn to manipulate empty symbols, handles with nothing attached, labels without contents.

Short-term, the surface structures may build up more easily since symbolic communications go there first. If the conceptual structures are weak or non-existent, the surface structures continue to build up at their expense, and a point of no return may be reached in which the input has no chance of being assimilated to a conceptual structure. Learning at a surface level may be easier short-term, but it becomes impossibly difficult long-term because of its lack of internal consistency. In contrast, the conceptual structures of mathematics are particularly coherent and internally consistent, so long-term these are much easier to learn and retain. The problem which so many have with mathematical symbols arises partly from the laconic, condensed, and often implicit nature of the symbols themselves; but largely also from the absence or weakness of the deep mathematical schemas which give the symbols their meaning. Like referred pain, the location of the trouble is not where it is experienced. The remedy likewise lies mainly elsewhere, namely in the building up of the conceptual structures.
So it is important for us as teachers to use methods which help children to build up their conceptual structures right at the beginning, and continuously thereafter. These ways include (a) sequencing new material schematically; (b) using structured practical activities; (c) beginning with a do-and-say approach, followed by written work only when the connections between thoughts and verbal symbols are well established.

Suggested activities for readers

1 Multiply 25 by 24. (a) If you know a short-cut, use it. (b) If not, first do it by your usual pencil-and-paper method, and then (c) use the hint in note 6 for this chapter to find the short-cut. (d) Consider the relative advantages of the short-cut and the regular method. (e) Think how you would explain to someone else why the short-cut gives the correct answer.
The purpose of the foregoing is not so much to think about methods of multiplication as to illustrate two levels of mental activity. One is the delta-one level, in which we are centring consciousness on a task to be done. The other is the delta-two level, in which our consciousness is directed towards the methods themselves, devising new ones, comparing them in terms of their relative merits, and also testing their validity by mode 3: consistency with established mathematical knowledge. The first level includes routine processes, and also intuition, by which we arrive at new ideas or methods without knowing how we got there. The second level is that of reflective intelligence. You are invited now to reflect on your own two levels of mental activity while doing (a) to (e) above.
2 Do, on paper, one or more arithmetical calculations of kinds with which you are familiar. Subtraction with 'borrowing' or decomposition is a good example to start with. Reflect on the relationship between the rules of procedure and the written symbolism on the one hand, and the mathematical concepts and relationships on the other.
To what extent do you think that these are good relationships, in which the symbolism supports and gives good access to the meaning, and to what extent does it do otherwise?

Part B Making a start

How to use teaching as a learning experience for ourselves, as well as for our children

The model of intelligent learning which was offered in Part I is applicable to all ages. It may usefully be applied to our own learning of any subject for which the appropriate mix includes a substantial component of intelligent learning relative to habit learning. (Reminder: the building up of a collection of useful routines is not the same as habit learning, but a valuable component of intelligent learning.) This description undoubtedly fits the two learning tasks facing most readers—not to mention many mathematics teachers who are not readers! The first task is ourselves to acquire, if we do not have it already, a well-structured understanding of the foundation schemas of mathematics—those which children need to build during their early years of schooling. Surprising though this may seem, those who have a clear and reflective understanding of elementary mathematics are in a fortunate minority. Most people know what to do, but not why. And elementary does not necessarily mean simple, as I found when I turned my attention from mathematics at secondary school level and beyond to mathematics as taught in primary schools. The importance of conceptual analysis, leading to concept maps on which to base a well-structured collection of learning activities, has already been emphasized. It was this, for primary mathematics, which first opened my eyes to the conceptual complexity of many of the topics in primary mathematics which most of us use—and teach—intuitively.
The second task is to understand the processes of intelligent learning both at a theoretical level, and also in their practical application to the teaching of mathematics in our own classrooms, both day-to-day and long-term. To aim for anything less is, in my view, to be less than professional in one's approach. These goals will take more time to achieve than is available during most pre-service courses for teachers. But during this time it is possible to make a substantial beginning; and, more important, to find out how to continue learning 'on the job'. And one does not have to travel far along this road to find that the journey can be both interesting and rewarding. As a preliminary to applying the present model to these tasks, it will be useful to review the three modes of schema building and testing described in Chapter 4. These are reproduced below.

Figure 6.1 (table not reproduced: the three modes of schema building and testing, from Chapter 4)

Experience has shown that a very good approach is a combination of modes 1 and 2, used in the ways described below. These may, of course, be adapted to the particular situation of the present reader. They are offered not as rules to be followed, but as a valuable combination of ways for building professional knowledge and skills. At the end of this chapter are fourteen learning activities for children. They are offered here with an additional emphasis. Each of these activities embodies both a mathematical concept, and also one or more aspects of the theory. So by doing these with a group of children, both children and their teacher benefit. The children benefit by this approach to their learning of mathematics; and the teacher also has an opportunity to learn about the theory of intelligent learning by seeing it in action. Theoretical knowledge acquired in this way relates closely to classroom experience and to the needs of the classroom.
It brings with it a bonus, since not only do the children benefit from this approach to mathematics, but it provides a good learning situation for teachers also. In this way we get 'two for the price of one', time-wise.

Observe and listen, reflect, discuss

In more detail, the method is as follows.

(i) As a preliminary to using them with children, you need to do the activities yourself, preferably with fellow-students or colleagues. At this stage it is useful to discuss their mathematical content. It is also good to go through the activity enough times to become fluent in the procedures, so that more of one's attention can be free for observation of the children.
(ii) Do these activities with some children. Try not to be actively teaching every moment: allow the children time to think for themselves. This will also give you time to observe and listen.
(iii) Reflect on your observations, and make notes.
(iv) Discuss your experiences and observations with a colleague, fellow-student, or tutor, according to circumstances. In these discussions it is useful to try to relate the observations, and your inferences from them, to the parent model. This provides a language for talking about the learning processes involved, and helps to relate the particular instances being discussed to an integrated and more general knowledge structure.

When some of these activities were first used in the way described here, the results were very encouraging. Experienced teachers on an in-service course said to the organizing tutor (not the present writer) that they had learnt more from these observations than from all the books they had read at college. In Canada, a teacher said 'It's as though the children's thinking was out there on the table'. This was not only encouraging, but beyond what had been expected.
Subsequent reflection and analysis in terms of the present model, and particularly in terms of the ideas discussed in Chapter 5, have suggested the following explanation. What makes mathematics so powerful a tool for understanding, predicting, and sometimes controlling events in the physical world is the fact that it provides such an accurate and multi-purpose model of the physical world. But this correspondence is the one shown in Figure 6.2 by the lower arrow, not the upper one: between the deep structures of mathematics and the physical world, not the surface structures and the physical world. So when we give children materials to manipulate which embody these deep structures, we are doing two very important things. We are letting them experience, in simple examples, the power of mathematics to organize and understand the physical world in their own here-and-now. And when children are working with these manipulatives, they reveal the deep structures of their thinking more clearly than they do by words alone, still less by written symbols. Furthermore, the dominance of the deep structures in these activities helps to ensure that their spoken symbolism expresses these, rather than verbally memorized rules at a surface level. Children can sometimes be very articulate about mathematics in learning situations of this kind.

Figure 6.2 (diagram not reproduced: the surface and deep structures of mathematics, with the lower arrow linking the deep structures to the physical world)

Some observations which have revealed children's thinking

Here are two examples. Readers who use the activities at the end of this chapter in the ways suggested will be able to provide other examples for themselves.

1 A colleague1 was working on addition with three young children in a reception class. She asked them if they could say what five plus four made. All gave the same answer, nine: but each used a different method. The first counted out 5 counters, then counted 4 more, and then counted all of them beginning from 1.
From this we may deduce that he knew what the question meant, and knew how to use mode 1 schema building to find out something which he didn't yet have as a permanent part of his mental model. So he had a way of learning which did not depend on being told the answer, and which related his mathematics to the physical world. The second used her fingers, and was seen to be counting on from 5, 4 more. This child still needed concrete support for her thinking, but was at a more advanced stage in addition. Counting on implies at least an intuitive awareness that the first set is included in the larger set formed by the union of the two sets: so it does not need to be counted again. Put like this, we can see that the step from counting all to counting on is more sophisticated than perhaps we give children credit for. The third child just looked at the ceiling. My colleague said 'I could see how the other two got their answers, but I couldn't see how you did. Can you tell me?' He replied 'Five add five is ten. Four is one less than five, so the answer is one less: nine.' In my view, this is both good thinking and good explaining. This child was working at an abstract level, involving an inference between two relationships. He was also able to reflect on his own thinking, and was highly articulate about this. Though his chronological age was about the same as that of the other two, mathematically he was far ahead.

2 I was visiting a school in company with an advisory teacher. We were in different classrooms, but met for the lunch break. She said to me 'I'm thrilled with Capture. I've just seen two children go from concrete to abstract thinking.'2 This game is described in detail at the end of this chapter. Briefly, it is a game for two children, using two number tracks side by side. Each rolls a die, and puts this number of cubes on her own number track. The one with the larger number captures the difference.
After a while, these children had stopped using the number tracks. If (say) one threw 5 and the other 3, the first one just took 2 of the other's cubes. Both these observations support the view that the use of physical materials does not necessarily make children permanently dependent on them. If they are ready to work at a more abstract level of thinking, it seems that they will do so. It is, however, good also to have activities for which abstract thinking is essential, to make quite sure. Alias prime and How are these related?, at the end of this chapter, are examples of these.

Fourteen activities for classroom use

The following activities are offered for use in the ways described earlier in this chapter. They are taken from Structured Activities for Primary Mathematics.3 The activities themselves are as they appear there, but the accompanying discussions have been partly re-written for the present book. Since the English language has no pronoun which can mean either he or she, I have used these alternately in the descriptions of the activities. In choosing just fourteen from a collection of more than three hundred activities, to include some suitable for all ages from 5 to 11, one of the features most strongly emphasized in Part A, namely their structure, has been lost. This was inevitable, but unfortunate. Where each fits into the overall structure can, however, be found by reference to the concept maps and accompanying lists of activities in the volume from which they are taken. For this reason, their reference codes are included for the benefit of those who have access thereto. They may otherwise be ignored. Note that each activity is but one out of several for each topic. For any new concept, it is desirable to provide more than one embodiment.
So the statements of the new concept(s) to be acquired, and of the abilities which should result from having this concept, apply to the whole of the topic, and not just the single activity shown here. Apart from a spread of age and topic, the activities have been chosen with one other criterion in mind, which is that the materials should be easy to prepare. In the full collection, photo-masters are provided for materials such as game boards. Only two of the present collection require these, which should not be hard to copy from the illustrations provided here. The other activities require only materials which should already be available in primary schools, and institutions concerned with pre-service or in-service education of teachers for children of this age, or which are easy to make using coloured card and felt-tip pens. If access to the photo-masters is possible, this will reduce the time required for making things such as number cards.

MISSING STAIRS (Org 1.5/1)

Concept
An ordered sequence of sets with 'no gaps', i.e. in which each set is of number one more than the set before, and one less than the set after (except for the first and last sets).

Abilities
(i) To construct a sequence of this kind.
(ii) To extrapolate such a sequence.
(iii) To tell whether a sequence is complete or not; and if it is not, to locate and fill the gaps.

Discussion of concept
If we cut half-a-dozen milk straws in assorted lengths, we can arrange these in order of length. But between any two we can insert another, which will conform to the same ordering. And if we take one away, there is nothing to show which is missing, or where it was taken from. A sequence of counting numbers, however, has a special property. Provided we know where it starts and finishes, we can tell whether or not it is complete: and if not, which ones are missing and where they should go in order.
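The 'no gaps' property is what makes the missing rod deducible rather than guessable. As an illustration of my own (not part of the published activity), a complete staircase of rods 1..n has a known total, so a single missing rod can be found from what remains:

```python
# Hypothetical sketch: locating the missing rod in a staircase of 1..n cubes.
def missing_rod(rods, n):
    """Given the rod lengths left after one rod is removed from a
    complete staircase 1..n, deduce the length of the missing rod.
    The complete staircase has n*(n+1)//2 cubes in all."""
    return n * (n + 1) // 2 - sum(rods)

# Stage (c) of the activity, in miniature: the rod of 3 was removed.
assert missing_rod([1, 2, 4, 5], 5) == 3
```

A milk-straw collection has no such property: with arbitrary lengths there is no total to reason from, which is exactly the contrast the discussion above draws.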
This property can be shown very clearly in a staircase of rods made from Unifix or Multilink cubes.

The activity
This is an activity which children can play in pairs after you have introduced it to them. Its purpose is to introduce them to the concepts described, in a way which allows testable predictions.

Materials
Cubes, in two colours

What they do
Stage a
1 Child A makes a staircase from rods made up of one to five cubes, all the same colour. (See Figure 6.3.)
2 B removes one rod, hiding the missing rod from sight.
3 A then makes from loose cubes of a different colour a rod which he predicts will fit the gap.
4 This prediction is tested in two ways: by insertion into the gap in the staircase, and by comparison with the rod which was removed.
Stage b
As above, except that in step 2, child B closes the gap, as shown in Figure 6.4.
Stage c
As in stage (b), except that now child A closes his eyes during step 2. So he now has to decide where there is a missing rod, as well as make a matching replacement.
Stage d
The number of rods may gradually be increased to 10. Children will often make this extrapolation spontaneously.

The first three stages may usually be taken in fairly rapid succession.

Figure 6.3 Missing stairs
Figure 6.4 Missing stairs (cont.)

Discussion of activity
From this activity we can see that even such a simple mathematical model as the first five counting numbers, in order, can be used to make testable predictions. We can also see the pleasure of five-year-old children when their predictions are correct. It is also worth analysing in detail the mathematical model which children are using, particularly in stage (c). There is more here than meets the eye.

STEPPING STONES (Num 3.2/3)

Concept
Adding as something done mentally, with numbers.

Ability
To predict the results of actions based on the mathematical operation of addition.
Discussion of concept
The word 'adding' is used with two different meanings, one everyday and the other mathematical. When we talk about 'adding an egg', 'adding to his stamp collection', we are talking about physical actions with physical objects. When we are talking about 'adding seven', 'adding eighty-two', we are talking about mental actions on numbers. To avoid confusing these two distinct concepts, we shall hereafter use other words such as 'putting more' for physical actions, and 'adding' for what we do mentally with numbers. We shall also avoid using 'action' for the latter, and use 'operation' instead. The distinction we are making is therefore between physical actions and mathematical (i.e. mental) operations. Adding is thus a mathematical operation. Other mathematical operations are subtraction, multiplication, division, factorization… May I mention also that these others are not 'sums'? In mathematics, a sum is the result of an addition, although the word is loosely used in everyday speech to mean any kind of calculation.

The activity
This is a board game for two, three or four children. Its purpose is to give practice at adding, in a predictive situation, and with physical support for their thinking.

Materials
Game board, as illustrated in Figure 6.5
Die, 1 to 6
Shaker
Markers, one for each child (little markers, which do not hide the numbers, are best).

Rules of play
1 Players start from the near bank, which corresponds to zero.
2 Players in turn throw the die, add this number to that on the stone where they are, and move to the stone indicated. For example, a player is on stone 3, throws 5, so moves to stone 8. When starting from the bank, she moves to the stone with the number thrown.
3 If that stone is occupied, she should not move, since there is not room for two on the same stone.
4 If a player touches her marker, she must move it.
If this takes her to an occupied stone, she falls in the water and has to return to the bank.
5 The exact number must be thrown to reach the island.
Note. This can also be used as a subtraction game, to get back from the island.

Discussion of activity
Counting on is a useful method of addition while the addition facts (sometimes called 'number bonds', which they are not) are being built up in memory. This board provides visual support for this method. If children make the mistake of saying 'one' while pointing to the starting number instead of its successor, see the discussion on page 92. When they are near the island and need to throw an exact number to reach it, children initially count on by pointing. But the time comes when some of them begin to say 'I need a three', showing that they have now acquired adding three as a mathematical (mental) operation.

CAPTURE (NuSp 1.4/4)

Concept
The correspondence between subtraction and actions on the number track.

Abilities
(i) To link mathematical ideas relating to subtraction with actions on a number track.
(ii) To use the number track as a mental support for subtracting.

Discussion of concept
Though subtraction might seem to be no more than the inverse of addition, it is in fact a more complex concept, derived from as many as four simpler concepts. These are taking away, comparison, complement, and giving change. The simplest of these is 'taking away', the opposite of 'putting more'. If children have only learnt this one, they will have difficulty with word problems of the kind 'How many more grapes has Kate than Philip?'; so it is important for children to have practical activities which embody all four contributory concepts. The present activity uses the comparison aspect of subtraction.

The activity
This is a game for two. Its purpose is to introduce the comparison aspect of subtraction.
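The comparison aspect of subtraction that Capture embodies amounts to finding a difference between two counts. A minimal sketch, my own rather than the book's:

```python
# Comparison subtraction: two number tracks are filled side by side,
# and the difference in filled spaces is the number of cubes that
# change hands.  (The function name is invented for illustration.)

def cubes_captured(spaces_a, spaces_b):
    """How many cubes the player with fewer filled spaces must give up."""
    return abs(spaces_a - spaces_b)

# A fills two more spaces than B, so B gives A two cubes:
print(cubes_captured(6, 4))   # 2
```

No cubes are 'taken away' from either track to find this answer; the two tracks are simply compared, which is the point of the activity.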
Materials
Two number tracks 1–10
One die 1–6
10 cubes for each player, a different colour for each

Rules of play
1 The two number tracks are put beside each other.
2 Each player throws the die, and puts the number of cubes indicated on the track. The result might look like Figure 6.6.
3 Since A has filled two more spaces than B, B must give A two cubes. The cubes are taken off the track.
4 Both players throw again, and the process is repeated.
5 Captured cubes may not be used to put on the track, but may be used if cubes have to be given to the other player.
6 The game finishes when either player has had all her own cubes captured, or cannot put down what is required by the throw of the die. The other player is then the winner.

Discussion of activities
This is a straightforward physical embodiment of the concept described.

THE HANDKERCHIEF GAME (Num 4.6)

Concepts
(i) Complement, i.e. the remaining part of a set when one part is excluded.
(ii) The number of this remaining part, relative to the number of the whole set.

Ability
To state numerically the complement of any part relative to a given whole.

Discussion of concepts
An example will make these concepts clearer than the definitions (as is often the case). Suppose that there are six children sitting round a table, of whom four are girls and two are boys. Then in this set of children, the complement of the (sub-set of) boys is the (sub-set of) girls, and vice versa. And relative to 6, the complement of 4 is 2, and vice versa.
This concept forms a good bridge between the addition and subtraction networks. It fits into the addition network if we call it missing addend: e.g. 5 + ? = 8. It fits into the subtraction network if we ask, for example: what is the difference between 5 and 8? Counting on is a good method for both of these. Both relate to the comparison aspect of subtraction rather than the 'take away' aspect.

The activity
This is a game for children to play in pairs.
(More can play together, but there is more involvement with pairs.) Its purpose is to build the concept of complementary numbers in a physical situation which allows immediate testing.

Materials
Handkerchief
10 or more small objects such as shells, bottle caps, acorn cups, etc.
Number cards 1–10

Rules of play
1 The game is introduced by having one child put out ten small objects. (Suppose that shells are used.) The other children check the number.
2 All the children are asked to hide their eyes while a handkerchief is placed over some of the shells.
3 The players are told to open their eyes and are asked, 'How many shells are under the handkerchief?'
4 They check by removing the handkerchief.
5 The children then play in pairs, covering their eyes in turn.
6 Repeat, using other numbers of objects. For numbers other than 10, the children will need some kind of reminder of how many there are altogether. So, before putting down the handkerchief, a number card is put down for the total number.

Discussion of activity
This activity introduces the idea of complement in a physical embodiment. Children first see the whole set, and then part of it, from which they have to deduce the number of the part they cannot see. They are able immediately to test the correctness of their deduction. It is interesting to observe the various methods which children use for this. Using fingers is perfectly sound at this stage. Try to infer what is happening in their minds while they are doing this. Counting on, perhaps?

NUMBER TARGETS (Num 2.8/1)

Concept
That a particular digit can represent a number of units, tens, and later hundreds…according to where it is written.

Abilities
(i) To match numerals of more than one digit with physical representations of units, tens (and later hundreds…).
(ii) To speak the corresponding number-words.

Discussion of concept
First, let us be clear about what a digit is. It is any of the single-figure numerals 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
Just as we can have words of one letter (such as a), two letters (such as an), three letters (such as ant), and more, so also we can have written numerals of one digit (such as 7), two digits (such as 72), three digits (such as 702), and more.
The same numeral, say 3, can be used to represent 3 conkers, or shells, or cubes, or single objects of any kind. If we want to show which objects, we can do so in two ways. We can either write '3 conkers, 5 sea shells, and 8 cubes', or we can tabulate:

conkers     3
sea shells  5
cubes       8

Likewise the same numeral, say 3, can be used to represent 3 single objects, or 3 groups of ten, or 3 groups of ten groups of ten (which we call hundreds for short). We could write '3 hundreds, 5 tens, and 8 units'; or we could tabulate:

hundreds  3
tens      5
units     8

We are so used to thinking about (e.g.) 3 hundreds that we tend not to realize what a major step has been taken in doing this. We are first regarding a group of ten objects as a single entity, so that if we have several of these we can count 'One, two, three, four, five…groups of ten'. Then we are regarding a group of ten groups of ten as another entity, which can likewise be counted, 'One, two, three…'. And as our mathematical learning progresses, we shall no longer be regarding these as groups of physical objects, but as abstract mental entities which we can arrange and rearrange. We shall also have introduced a condensed and abstract notation (place-value).
These two steps need to be taken one at a time. While the first, described above, is being taken, we need to use a notation which states clearly and explicitly what is meant. Headed column notation does this well. Also, because the correspondence between written numerals and number words only becomes regular from 20 onwards, we start children's thinking about written numerals here, where the pattern is clear.
The written numerals 11–19 are also regular, but their spoken words are not, so these are postponed until the next topic.

The activity
This is a game for as many children as can sit so that they can all see the tray right way up; minimum 3. Its purpose is to link the spoken number words with the corresponding written numerals.

Materials
*Tens and units card
*Target cards
*Pencil and headed paper for each child
**Base 10 material, tens and units
*See Figure 6.7
**This game should if possible be played with a variety of base ten materials, such as milk straws in units and bundles of ten, as well as the commercially-made base ten materials in cubes and rods.

Figure 6.7

Rules of play
1 The target cards are shuffled and put face down.
2 In turn, each child takes the top card from the pile. He looks at this, but does not let the others see it.
3 Before play begins, 2 tens are put into the tray. (This is to start the game at 20.)
4 The objective of each player is to have in the tray his target number of tens and units.
5 Each player in turn may put in or take out a ten or a unit.
6 Having done this, he writes on his paper the corresponding numerals and speaks them aloud in two ways. For example, see Figure 6.8. He writes: 46. Speaks: 'Four tens, six units; forty-six.'
In the above example, if a player holding a 47 target card had the next turn, he would win by putting down one more unit. He would then show his target card to show that he had achieved his target.
Since players do not know each others' targets, they may unknowingly achieve someone else's target for them. In this case the lucky player may immediately reveal his target card, whether it is his turn next or not.
When a player has achieved a target, he then takes a new target card from the top of the pile, and play continues. The winner is the player who finishes with the most target cards.
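The two ways a player reads a numeral in Number targets can be sketched as integer division. This sketch is my own illustration, not part of the book; the function name is invented.

```python
# Headed-column reading of a two-digit numeral: 46 is 4 tens and
# 6 units, spoken first column by column and then as one word.

def headed_columns(n):
    """Split a (two-digit) number into its tens and units columns."""
    tens, units = divmod(n, 10)
    return tens, units

tens, units = headed_columns(46)
print(f"{tens} tens, {units} units")   # 4 tens, 6 units
```

Note that the zero cases in the book's notes fall out naturally: 40 gives 4 tens, 0 units, and 7 gives 0 tens, 7 units.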
Notes
(i) If one side of the card is empty, a corresponding zero must be written and spoken; e.g. in Figure 6.9, he writes 40, and speaks 'four tens, zero units; forty.' And also, in Figure 6.10, he writes 07, and speaks 'zero tens, seven units; seven.'
(ii) Players are only required to write the numbers they themselves make. It would be good practice for them to write every number, but we found it hard to get them to do it.

Variation
It makes the game more interesting if, at step 5, a player is allowed two moves. For example, he may put 2 tens, or put 2 units, or put 1 ten and take 1 unit, etc. This may also be used if no one is able to reach his target.

Discussion of activity
In preparation for place-value notation, it is important for children to have plenty of practice in associating the written symbols and their locations with visible embodiments of tens and units (later hundreds…), and in associating both of these with the spoken words. In this topic 'location' means 'headed column'; later, in place-value notation where there are no columns, it will mean 'relative position'. This activity uses concept building by physical experience (mode 1). The social context provided by a game links these concepts with communication (mode 2), using both written and spoken symbols.

'MY SHARE IS…' (Num 6.2/2)

Concepts
(i) Equal shares.
(ii) Remainders.

Abilities
(i) Starting with a set of given number, to separate this into a required number of equal shares.
(ii) To state the number in each share, and the remainder.

Discussion of concepts
Sharing is one of the two main contributors to the mathematical operation of division. In the present context, sharing is always taken to mean sharing equally, unless specifically stated otherwise. Physically it is quite different from grouping. This may be clearly shown by taking two sets of 15 cubes or counters, and arranging one in groups of 3, the other in shares of 3.
If the starting sets are 16 or 17, there will be a remainder of 1 or 2 respectively, the same in each case. Here we have yet another example of how the same mathematical model can represent quite different physical situations. This is what makes them so useful, because multi-purpose; but it is also what can so easily cause confusion if we do not take care in the building up of these multi-purpose, higher-order concepts.

The activity
This is a game for up to 6 children. Its purpose is to consolidate the concept of sharing by using it in a predictive game. This activity should be used after they have formed the concept, by sharing given sets of objects equally between varying numbers of players, without and with remainders.

Materials
*Game board
**Start cards 10–25
**Action cards 2–5
25 (or more) small objects
Pencil and paper for each child, and for scoring
*As in Figure 6.11.
**To fit the spaces on the board.

Rules of play
1 The start cards and the action cards are shuffled and put face-down in the upper parts of their spaces on the game board.
2 The top card of each pile is turned over.
3 A set of the specified number is put out.
4 Players then take turns, as follows. The first player looks at the action card, and decides what her (equal) share will be (using pencil and paper if she likes).
5 She then says 'My share is…', and takes this number of objects.
6 She may need some help, initially. Suppose that 19 objects are to be shared between 5. 'If you gave everyone 1 each, how many would that use? If you gave everyone 2 each, how many would that use?' (And so on.)
7 The correct number of other players take the same number of shares as the player in step 5. This will show whether she has decided correctly or not.
8 One point is scored for a correct prediction.
9 It is now another player's turn, and steps 2 to 7 are repeated. When players are proficient, they may agree to play without pencil and paper, except for scoring.
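The share-and-remainder decision in 'My share is…' is exactly integer division with remainder. A minimal sketch, my own rather than the book's:

```python
# Sharing a set equally: each player's share is the quotient, and
# what cannot be shared out is the remainder.

def share(objects, players):
    """Return (each player's equal share, objects left over)."""
    return divmod(objects, players)

each, left = share(19, 5)   # the book's example: 19 shared between 5
print(f"My share is {each}, remainder {left}")   # My share is 3, remainder 4
```

The 'give everyone 1 each, then 2 each…' questioning in the rules is a repeated-subtraction route to the same quotient.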
Extension
The player in step 5 may also say what the remainder will be. If correct, she scores another point.

Discussion of activities
At this stage, we may leave it to children to devise their own methods. For larger numbers it is necessary to use known multiplication facts, and later they will be taught this method. However, with these smallish numbers there are other suitable methods, and I think that it is good to give children the chance to exercise their ingenuity. One of the features of intelligent learning is to use one's existing knowledge in new situations, and in these days of calculators this aspect can be given increasing importance relative to the skills of calculation. Many children find sharing more difficult than grouping, so plenty of practice is necessary.

SLIPPERY SLOPE (Num 3.6/3)

Concept
Extension of everything in the existing addition concept to cases when the sum is greater than 10, but not greater than 20.

Ability
To be able to add across the 10 boundary.

Discussion of concept
Adding past the tens boundary is an important step in the transition from quite small number operations, which can easily be handled in physical embodiments, towards operations with large numbers for which physical embodiments offer little or no help. As a beginning for this transition, the present activity uses physical materials and symbols together.

The activity
This is a popular board game for 2 or 3 children. Its purpose is to consolidate the skill of adding past 10 in a predictive situation.

Materials
Game board, as illustrated in Figure 6.12
Three small markers (three cubes) of a different colour for each player
Die 1–6 and shaker

Rules of play
1 The board represents steps up a hillside. Steps 11, 12, 13 are missing. Here there is a slippery slope, and if a climber treads here he slides back to a lower step, as shown by the arrows.
2 The object is to reach the top.
Each player manages three climbers, represented by markers. (When first learning, they may start with two climbers each.)
3 Players in turn throw the die, and move one of their climbers that number of steps up. They begin at START, which corresponds to zero.
4 A climber may not move upwards to a step which is already occupied. Overtaking is allowed.
5 Players may choose not to move. However, if a climber has been touched, it must be moved (but see also step 6).
6 If a climber is touched and the move would take him to an occupied step, he must return to the start.
7 If a climber slides back to an occupied step, any climber already on that step is knocked off and must return to the start.
8 The exact number must be thrown to finish.

Discussion of activity
This activity is the third of a sequence for teaching this important concept and skill. It provides visual support in the form of a number track. There is also a predictive element, since in order to decide which is the best piece to move it is necessary to compare the outcomes of more than one possible move. The game can be played at different levels of sophistication, and it is interesting to watch children progress through these. This also makes it a good family game.
Adding by use of a number track is easier than using cubes, which are afterwards grouped or exchanged for a ten and some units. However, the number track method does not easily extrapolate, whereas the base 10 material provides very well for extrapolation to hundreds and thousands. So the present activity needs to be used as one of several embodiments of the concept, including that just described.

TAKING (NuSp 1.7/5)

Concepts
(i) Unit intervals on a line.
(ii) The number line.

Ability
To use the number line in the same ways as the number track, in preparation for other uses of the number line.
Discussion of concepts
The differences between a number track and a number line are appreciable, and not immediately obvious. (See Figure 6.13.) The number track is physical, though we may represent it by a diagram. The number line is conceptual; it is a mental object, though we often use diagrams to help us think about it. The number track is finite, whereas the number line is infinite. However far we extend a physical track, it has to end somewhere. But in our thoughts, we can think of a number line as going on and on to infinity.
On the number track, numbers are initially represented by the number of spaces filled, with one unit object to a space. So it is rather like a set loop, in which the number of objects is automatically counted. Even if physical objects are not used, it is the number of spaces counted which corresponds to a given number. So the number zero is represented by an empty track, corresponding to the empty set. The number one is represented by a single space filled, which means that the first space on the number track is marked 1 and not 0.
On the number line, numbers are represented by points, not spaces; and operations such as addition and subtraction are represented by movements over intervals on the line, to the right for addition and to the left for subtraction. The concept of a unit interval thus replaces that of a unit object. Also, the number line starts at 0, not at 1. For the counting numbers, and all positive numbers, we use only the right-hand half of the number line, starting at zero and extending indefinitely to the right. For positive and negative numbers we still use 0 for the origin, but now the number line extends indefinitely to the right (positive numbers) and left (negative numbers).
Thus, the number line is a much more sophisticated concept than that of a number track.
It gives strong support for our extrapolation of the number concept to fractional numbers, negative numbers, and onwards to more advanced topics such as irrational numbers and imaginary numbers. Two number lines at right angles provide co-ordinate axes for graphs. Truly, this beginning may be thought of as potentially leading into far distances of thought.

The activity
This is another capturing game for two, but a different one from that described earlier under the name of Capture. The former uses the number track; this uses the number line. Its purpose is to give practice in relating numbers to positions and movements on the number line.

Materials
One number line 0–20
3 markers for each player
Die 1–6

Rules of play
1 The markers begin at zero.
2 The die is thrown alternately, and according to the number thrown a player may jump one of her markers forward that interval on the line.
3 A piece which is jumped over is taken, and removed from the board for the rest of the game.
4 An occupied point may not be jumped onto.
5 A player does not have to move at all if she doesn't want to. (We introduced this rule when we found that starting throws of low numbers were likely to result in the piece being taken next throw, with no room for manoeuvre.)
6 The winner is the player who gets the largest number of pieces past 20. (It is not necessary to throw the exact number.)

Discussion of activity
This activity comes fifth in the topic which introduces the number line. The four preceding activities were concerned first with introducing the number line, and then linking it to concepts already familiar. The present activity is not unlike Slippery Slope, which is a number track activity. Like Slippery Slope, it involves mentally comparing a number of possible moves before deciding which one to make. Unlike Slippery Slope, none of the hazards stays in the same location. This makes the game more difficult.
Comparing several possible plans of action in the light of their expected outcomes, in a variety of possible circumstances, is one of the more important ways in which we use our intelligence in everyday life. So all games which involve this are helping to bridge the gap between 'school maths' and maths in the world outside school.

DOUBLES AND HALVES RUMMY (Num 1.9/3)

Concepts
(i) Doubling a number.
(ii) Halving a number.

Abilities
(i) Given an even number, to double or to halve it.
(ii) To recognize a number which is the double or half of another.

Discussion of concepts
We are now working at an abstract level, with numbers themselves as independent mental objects, as against numbers in some physical embodiment. We can do things to physical objects, and what we can do depends on their nature. Similarly we can do things to numbers, again depending on their nature. All numbers can be doubled; only even numbers can be halved, so long as we are talking about whole numbers.
We can also find relations between mental objects in the same way as we can between physical objects. A relationship between two numbers is a more abstract concept than the number-concepts themselves, so in this activity children are working at what is for them quite a high level of abstraction. We rely on symbols for manipulating these mental objects. In the present activity, the number-symbols are written; the symbols for double and half are only spoken.

The activity
This is a card game for up to four players. Up to six may play if a third pack of cards is introduced. Its purpose is to practise the concepts of halves and doubles of a given number, independently of physical materials. It is assumed that children have already formed these concepts. If they have not, the concepts should be introduced using physical materials, and children given enough practice to become familiar with these.
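The relation the players must recognize, that one number is the double (equivalently, the other the half) of another, can be sketched in one line. This is my own illustration, not part of the book.

```python
# The pair-checking relation in Doubles and Halves Rummy: two cards
# form a pair if either card is exactly twice the other.

def is_pair(a, b):
    """True if one number is the double of the other."""
    return a == 2 * b or b == 2 * a

print(is_pair(7, 14))   # True: 14 is double 7, 7 is half of 14
print(is_pair(6, 16))   # False
```

Because the relation is symmetric, checking 'double' in both directions also covers 'half', which is why the single spoken relation serves both concepts.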
Materials
*Two double-headed number packs 1–20, without the odd numbers over 10
*This just means that the numerals are written twice, so that a numeral is seen right way up at the top whichever way up the card is held.

Rules of play
1 The packs are put together and shuffled. 5 cards are dealt to each player.
2 The rest of the pack is put face-down on the table, with the top card turned over to start a face-upwards pile.
3 The object is to get rid of one's cards by putting down pairs of cards in which one is the half or double of the other.
4 Players begin by looking at their cards and putting down any pairs they can. They check each other's pairs.
5 The first player then picks up a card from either the face-down or the face-up pile, whichever he prefers. If he now has a pair, he puts it down. Finally he discards one of his cards onto the face-up pile.
6 In turn the other players pick up, put down a pair if they can, and discard.
7 The winner is the first to put down all his cards. Play then ceases.
8 Each player then scores the number of pairs he has made. The winner will thus score 3, the others 2, 1, or 0.
9 Another round may then be played, and the scores added to those of the previous round.

Discussion of activities
This activity uses mode 2 learning only: hence the importance of establishing the concepts well beforehand, using mode 1. The first level of sophistication in playing this game is, clearly, recognizing whether two cards form a pair. What is the next level?

MAKE A SET: MAKE OTHERS WHICH MATCH (Num 5.1/1)

Concepts
(i) The action of making a set.
(ii) The action of making a set of sets.
(iii) Starting and resulting numbers.

Abilities
(i) To make a given number of matching sets.
(ii) To state the number of a single set.
(iii) To state the number of matching sets.

Discussion of concepts
Multiplication is sometimes introduced as repeated addition.
This works well for the counting numbers, but it does not apply to multiplication of the other kinds of number which children will subsequently encounter; so to teach it this way is making difficulties for the future. This is one of the reasons why so many children have problems with multiplying fractions, and with multiplying negative numbers. The concept of multiplication which is introduced in the present topic is that of combining two operations, and this continues to apply throughout secondary school and university mathematics. And as a bonus, the correct concept is no harder to learn when properly taught.
In the present case, we are going to multiply natural numbers. A natural number is the number of objects in a set, and we start with the concept as embodied in physical actions. First action: make a set of number 5. Second action: make a set of number 3. To combine these, we do the first action (see Figure 6.14), and then apply the second action to the result, i.e. we make a set of 3 (sets of 5), as shown in Figure 6.15. This is equivalent to making a set of number 15.
At this stage, there is not a lot of difference between this and adding together 5 threes, just as near their starting points two diverging paths are only a little way apart. But in the present case one of these paths leads towards future understanding of more advanced topics, while the other is a dead end.

The activity
This is an activity for up to six children. Its purpose is to introduce the concept of multiplication, as described above, in a physical embodiment.

Materials
Five small objects for each child, which should be different for each child, e.g. shells, acorns, bottle tops…
*Six small set ovals
Large set loop
*Oval cards, about 6 cm by 7.5 cm

What they do
1 The first child makes a set, using some or all of her objects. A small set oval is used for this.
It is best to start with a set of fairly small number, say 3. (See Figure 6.16.)
2 Everyone makes a set which matches this, i.e. has the same number. They too use set ovals, and then check with each other for match.
3 All the sets are put in the set loop to make one combined set, which is counted. (See Figure 6.17.)
4 With your help, they say (in their own words) what they have done. E.g. 'Vicky made a set of 3 shells. We all made matching sets, so we made 5 sets of 3. When we put these together, there were 15 things altogether.' Or 'We made 5 sets of 3, making 15 altogether.' Or '5 sets, 3 in each, makes 15.'
5 The children take back their objects, and steps 1 to 4 are repeated.
6 To give variety of numbers, sometimes only some of the children should make matching sets. For example, everyone on the same side of the table, or all the boys, or all the girls.

Discussion of activities
This activity embodies in physical actions on physical objects the concept of multiplication as described at the beginning of this topic. The first action is making a set of (say) shells, and the second action is making a set of matching sets. Simple recording, including notation for addition, is introduced in the next two activities in this topic (not shown here), but in this introductory activity no writing is involved, to free as much attention as possible for concentrating on the formation of this new concept. Like place-value notation, it has proved on analysis to be more sophisticated than is usually recognized.

SETS UNDER OUR HANDS (Num 5.2/2)

Concept
Multiplication as a mathematical operation.

Ability
To do this mentally, independently of its physical embodiments.

Discussion of concept
Multiplication becomes a mathematical operation when it can be done mentally with numbers, independently of actions on sets or other physical embodiments. At this stage we concentrate on forming the concept, using easy numbers.
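The 'combining two operations' view of multiplication can be sketched directly: one action makes a set, the other makes a set of matching sets. This sketch is my own illustration, not from the book; the function names are invented.

```python
# Multiplication as two combined actions, following the book's example
# of 5 sets of 3: make a set, then make a set of matching sets.

def make_set(n):
    """First action: a set of n unit objects."""
    return ["o"] * n

def make_matching_sets(a_set, times):
    """Second action: a set of `times` matching sets."""
    return [list(a_set) for _ in range(times)]

sets = make_matching_sets(make_set(3), 5)      # 5 sets of 3
combined = [obj for s in sets for obj in s]    # put them in one loop
print(len(combined))   # 15
```

The total comes from applying the second action to the result of the first, not from a chain of additions, which mirrors the distinction the book is drawing.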
The activity
This is an activity for up to six children. It is like the one just described, Make a set: make others which match, but now the process of finding the total number has to be done mentally. Its purpose is to start building the concept described above.

Materials
*Five small objects for each child
*Six small set ovals
*Large set loop
Number cards 2 to 5
Pencil and paper for each child
*As for Make a set: make others which match.

What they do
1 The first child makes a set, using some or all of his objects. As before, a small set oval is used.
2 A number card is put out to remind them what is the number of this set.
3 All the children then make matching sets, using set ovals.
4 They cover the sets with their hands.
5 They try to predict how many objects there will be when they combine all these sets into a big set. This can be done by pointing to each hand in turn and mentally counting on. For example, if there are 4 in each set (pointing to first hand): '1, 2, 3, 4'; (pointing to second hand): '5, 6, 7, 8'; etc.
6 They speak or write their predictions individually.
7 The sets are combined and the predictions tested.
8 Steps 1 to 7 are repeated, with a different child beginning.
9 As in the activity before, the number of sets made should be varied, by involving only some of the children. All, however, should make and test their predictions.

Discussion of activities
In Make a set: make others which match, the physical activities were used for schema building. The activities came first, and the thoughts arose from the activities. In the present topic it is the other way about: thinking first, and then the actions to test the correctness of the thinking. First mode 1 building, then mode 1 testing. By this process we help children first to form concepts, and then to develop them into independent objects of thought. In the present activity, the visual support which was available in the activity before is only partly withdrawn.
They have to imagine how many objects there are under each hand, but they can still see how many hands there are. In this way we take them gently along the path towards purely mental operations.

THE RECTANGULAR NUMBERS GAME (Num 6.4/2)
Concept Rectangular numbers, as the number of unit dots in a rectangular array. (See Figure 6.18.)
Ability To recognize and construct rectangular numbers.

Discussion of concept
Here is another property which a number can have, or not have. The term 'rectangular' describes a geometrical shape, so when applied to a number it is being used metaphorically. Provided we know this, it is a useful metaphor, since the correspondence between geometry and arithmetic (also algebra) is of great importance in mathematics. Rectangular numbers are closely connected with multiplication and with calculating areas. They provide a useful contribution to both of these concepts, and a connection between them.

The activity
This is a game for children to play in pairs. It uses the concept of a rectangular number in a predictive situation. Children also discover prime numbers, though usually they do not yet know this name for them.

Materials
25 counters with dots on them
Pencil and paper for scoring

Rules of play
First explain that the counters represent dots which we can move about. We could use dots on paper, but then we would have to keep rubbing them out. Begin by introducing the concept of a rectangular number by giving examples, such as that in Figure 6.18 below, and asking the children for other examples. The game is then played as follows.

Figure 6.18

1 Each player in turn gives the other a number of counters.
2 If the receiving player can make a rectangle with these, she scores a point. If not, the other scores a point.
3 If, when the receiving player has made a rectangle (and scored a point), the giving player can make a different rectangle with the same counters, she too scores a point. (E.g. 12, 16, 18)
4 The same number may not be used twice. To keep track of this, the numbers 1 to 25 are written at the bottom of the score sheet and crossed out as used.
5 The winner is the player who scores most points.

Note If a child puts (say) three counters in a row and claims that these make a rectangle, remind her that they represent movable dots. Draw three dots in a row, and ask if these make a rectangle. I have not yet found a child who thought that they did.

Discussion of activity
Success at this game depends on predicting whether or not a given number is rectangular. Initially, each prediction is tested (mode 1) as part of the game. The time may come when they stop doing this. What do you infer?

ALIAS PRIME (Num 1.12/1)
Concepts
(i) A prime number as one which is not a rectangular number.
(ii) A prime number as one which is not the multiple of any other number except 1 and itself.
(iii) A prime number as one which is not divisible by any other number except 1 and itself.
Abilities
(i) To use these criteria to recognize prime numbers.
(ii) To give examples of prime numbers.

Discussion of concepts
This is a negative property, that of not having a given property. Children will already have formed this concept while playing the rectangular numbers game. In this topic we give the concept further meaning by relating it to other mathematical ideas.

The activity
This is a game for up to six players. Its purpose is to introduce children to the difference between composite and prime numbers, and give them practice in distinguishing between these two kinds of number.

Materials
Three counters for each player

Rules of play
1 Begin by explaining the meanings of 'composite number' and 'prime'. These concepts have been well prepared in earlier activities, and children have usually invented their own names for them.
2 Explain that 'alias' means 'another name for'. In this game, all prime numbers use the alias 'Prime' instead of their usual name.
3 Start by having the players say in turn 'Eight', 'Nine', 'Ten', … round the table.
4 The game now begins. They say the numbers round the table as before, but when it is a player's turn to say any prime number, he must not say its usual name, but say 'Prime' instead.
5 The next player must remember the number which wasn't spoken, and say the next one. Thus the game would begin (assuming no mistake) 'Eight', 'Nine', 'Ten', 'Prime', 'Twelve', 'Prime', 'Fourteen', 'Fifteen', 'Sixteen', 'Prime', 'Eighteen' and so on.
6 Any player who makes a mistake loses a life—i.e. one of his counters. Failing to say 'Prime', or saying the wrong composite number, are both mistakes.
7 When a player has lost all his lives he is out of the game, and acts as an umpire.
8 The winner is the last player to be left in the game.

Note When the players are experienced, they may begin counting at 'One'. This gives rather a lot of primes for beginners.

Discussion of activity
The concept of a prime number was implicit in the rectangular numbers game, when children encountered numbers which are not rectangular numbers. Here this negative property is made explicit and given a name. Alias Prime centres attention on prime numbers in a game based on this concept, and the concept is tested by mode 2 (comparison with the schemas of others, leading sometimes to discussion). Primes were initially conceptualized in relation to rectangular numbers, which is a physical and spatial metaphor. In this activity they are thought of in a different way: that of not being multiples. They are therefore not divisible except by 1 and themselves. Multiplication and division are abstract mathematical operations, so the concept of a prime has now become independent of its physical/spatial beginnings. This game strongly involves the activity of reflective intelligence, and requires much concentration.
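Computationally, the distinction both games rest on is simple: a number is rectangular when it has a factor pair with both factors at least 2, and prime when it has none. A hedged sketch in Python (the names are mine, for illustration only):

```python
def rectangles(n):
    """All ways to arrange n dots as an a-by-b rectangle with a, b >= 2."""
    return [(a, n // a) for a in range(2, n) if n % a == 0 and n // a >= 2]

def is_prime(n):
    """A number greater than 1 is prime exactly when it is not rectangular."""
    return n > 1 and not rectangles(n)

# 12 is rectangular in more than one way: 2 by 6 and 3 by 4 (and their turns)
assert rectangles(12) == [(2, 6), (3, 4), (4, 3), (6, 2)]
assert is_prime(13) and not is_prime(51)  # 51 = 3 x 17, so it is composite
```

Playing Alias Prime well is, in effect, running such a test mentally; for 51, trying 3 settles it at once.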
As the numbers get larger, beyond those in the usually-known multiplication tables, players have to find other ways of deciding whether a number is prime or composite. How about 51, for example: prime or composite? How do we know?

HOW ARE THESE RELATED? (Num 1.14/2)
Concept That all numbers are related, in many ways.
Ability To find several relationships between two given numbers.

Discussion of concept
One of the major emphases of the present approach is that learning mathematics involves learning, not isolated facts, but a connected knowledge structure. Here we make explicit a very general property of numbers: that every number is related to every other number, not just in one way but many. Indeed, the only limits to how many relationships we can find are those of time and patience.

The activity
This is a game for a small group.

Materials
A bowl of counters, say 3 per player
A pack of number cards (or any other way of generating assorted numbers). How high they should go depends on the ability of the players.

Rules of play
1 The first two cards are turned over, e.g. 25 and 7.
2 Each player in turn has to say how they are related. E.g. '25 is more than 7.' '25 is 18 more than 7.' '3 sevens plus 4 makes 25.' 'Both 25 and 7 are odd numbers.' 'The sum of 25 and 7 is an even number.' Etc.
3 Each time a player makes a new and correct statement, the others say 'Agree' and he takes a counter.
4 If it has been used already, the others say, 'Tell us something new'.
5 If an incorrect statement is made, they say, 'Tell us something true'.
6 When all the counters have been taken, many different properties and relationships have been stated about the same number.
7 The player with the most counters is the winner.
8 The game may then be played with a different number.

Discussion of activity
This activity develops fluency and inventiveness in the handling of numbers. It also increases the interconnections within children's schemas.
There is much use of mode 3 activity—the creative use of existing knowledge to find new relationships. Testing is by mode 2, agreement and if necessary discussion, which in turn is based on mode 3—testing by consistency with what is already known.

Summary
1 Our own teaching can be used as a learning experience for ourselves, as well as for our children. This has several advantages:
(i) Theory learnt in this way is closely related to the day-to-day needs of the classroom.
(ii) We get 'two for the price of one', time-wise.
(iii) Our own learning can continue in this way for many years, and thus compensate for shortage of time during pre-service preparation for teaching.
Activities of the kind exemplified in this chapter help to make this possible, since the physical materials externalize children's thinking at this stage much better than their written work. Also, their discussions based on these materials are often highly articulate. Application of the model described in Part I to one's own learning may be summarized in the reminders: OBSERVE AND LISTEN, REFLECT, DISCUSS.

The contents and structure of primary mathematics

Knowledge, plans, and skills
By knowledge I mean structured knowledge, not collections of isolated facts such as form the content of many television quiz shows. We already know that the latter are of low adaptability, as is well illustrated by the following example from Rees.1 Craft apprentices had learnt at school that the area of a circle is given by the formula πr², where r is its radius. They needed to calculate the area of cross-section of a given piece of wire, so they began by measuring its thickness. This gives the diameter of the circular cross-section, not the radius. No problem, you may think: they know that the radius of a circle is half the length of a diameter, so if they know the diameter they can easily find the radius and apply the formula. But hard though it is to believe, many of these apprentices could not do this.
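Their missing plan, halve the measured diameter and then apply the learnt formula, amounts to two lines of working. A sketch in Python (the function name is my own, for illustration):

```python
import math

def wire_cross_section_area(measured_diameter):
    """Area of a circular cross-section, starting from a measured diameter."""
    radius = measured_diameter / 2   # the step the apprentices failed to make
    return math.pi * radius ** 2     # the formula they had learnt: pi r squared

# A wire 2 mm thick has radius 1 mm, so its cross-section is pi square mm.
assert abs(wire_cross_section_area(2.0) - math.pi) < 1e-12
```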
Though they had the necessary facts, they were not able to combine them to make a plan for dealing with the requirements of this new situation.

The practical importance of structured knowledge, as a foundation for relational understanding, was well put by a mature student at the Polytechnic of the South Bank, after a talk given there about relational and instrumental understanding. He said, 'Instrumental understanding, which is what I was given at school, only enabled me to deal with yesterday's technology. This is why I've had to come back to college and take evening classes, to get the relational understanding which will enable me to cope with the technology of the future.'

This student had become aware of the inadequacy of learning which does not go beyond the memorizing of facts and rules, together with practice in using these. Moreover, he knew that there was an alternative, and that this was what he needed. More important still, he had the confidence in his own ability to acquire the kind of knowledge which led him to undertake the evening course where I met him. Unfortunately, this is not always the case. Many have had their confidence destroyed because they think that they 'can't do maths', as was evidenced by the survey quoted in Chapter 1. It would encourage them to know that what they failed at probably was not mathematics, but a look-alike under the same name which, as is often the case, was of much less worth.

Though structured knowledge is the first requirement, it is only the beginning. Next we need plans of action. These are what we have to do to reach a particular goal from a particular starting point. In the present context, both the goal and the starting point are mathematical, and the plans are plans for mental action (though they may usefully be represented on paper). It will be instructive to identify the plans involved in doing some of the activities provided in Chapter 6.
In all of them, as in everyday life, an important feature of a successful plan is to stay within the constraints of the situation. When driving, for example, these are both physical (we have to stay on the road: the car will not cross ploughed fields or climb walls) and social (we stop at red traffic lights, and drive on whichever side of the road is socially agreed in the country where we are). In mathematics, the constraints are those of an agreed body of knowledge, together with agreed ways of representing this. So every time we make a move, it is tested for consistency with this body of knowledge (mode 3 testing). (In the mathematics look-alike mentioned above, the constraints are arbitrary rules without reasons.) To us as adults, constraints of this kind are so familiar that we take them for granted: hence the need, as teachers, to make them conscious by reflecting on them.

The particular details of the plans will, of course, vary between activities, and between players at different levels of sophistication. Slippery Slope is a good situation in which to observe the latter. For example, beginners often do not see the advantage of putting several 'climbers' on the board early in the game, so that they have a choice of moves for a given throw of the die. However, when they do so, and have three climbers to manage, a superordinate plan is now required: a plan for each climber (i.e. a move to the step indicated by the fall of the die and the mathematical constraints embodied in the game), and a comparison of these plans in order to choose the most advantageous. This is one of the ways in which we use our intelligence in everyday life. It is also good exercise for reflective intelligence.
The third requirement is skill: that is, being able to put our plans easily and accurately into action, with a minimum of conscious attention except when non-routine situations are encountered. Knowledge then allows us to adapt existing plans, or devise new ones. The latter takes time, and is not always easy. So the best combination with which to equip children is a firm foundation of well-structured knowledge, together with a good repertoire of routine plans for frequently-encountered tasks, these plans being frequently practised in a variety of situations until they become skills.

Activities for developing skills
Nearly all games are good for this, since they provide constantly changing situations in which players have both to come up with the relevant bit of mathematical knowledge, and put this into action appropriate for the context of the game. It will be worth reviewing, for instance, The rectangular numbers game, and Alias prime, with this aspect in mind. There are, however, some skills which are so widely used, and also form the basis for other skills, that it is worth giving time to developing a high degree of fluency in these once children have a good grasp of the concepts. Multiplying any required pair of single-digit numbers is one of these.

A question which is often put, when one is emphasizing the importance of learning with understanding, is 'Are you saying they shouldn't learn their multiplication tables, then?' My own answer is that I wouldn't think it sensible to make children learn to spell words they didn't understand the meaning of, and neither would I teach multiplication tables in this way.
The result is children who can add and multiply well at a mechanical level, but when given a simple problem still have to ask 'Please, miss, is it an add or a multiply?' However, children do need to be able to spell fluently if they are to be able to use writing for putting ideas on paper, and give thought to choosing the best words to express their meaning. Likewise with multiplication. Once children understand multiplication, and having learnt addition and multiplication in practical situations can distinguish which is the appropriate mathematics for a given situation, then they do need to know their multiplication tables, and time is well spent in practising these until fluent recall is attained. Here is one of a group of activities which were devised for this purpose.

CARDS ON THE TABLE (Num 5.6/4)
Concepts
(i) Product tables, as an organized collection of ready-for-use results.
(ii) The complete set of products, up to 10×10.
Abilities
(i) To recall easily and accurately whatever results are needed for a particular job.
(ii) To build new results from those which are already known.

The activity
This is an activity for children in pairs; as many pairs as you have materials for. They may with advantage make their own, and practise in odd times which might otherwise be wasted. Its purpose is to practise fluent recall of all their product results up to 10×10.

Materials
9 sets of symbol cards, with 10 cards in each set, from 2×1 to 10×10
One multiplication square and L-card for each pair (see Figure 7.1)

What they do
1 In each pair, one child has in his hand a single pack of cards, shuffled and face-down. The other has on the table his multiplication square and L-card.

Figure 7.1

2 Child A looks at the top card, say 4×7, and tries to recall this result. Child B then checks by using his multiplication square and L-card.
This is done by placing the L-card on the multiplication square as shown in Figure 7.2.

Figure 7.2

3 If A's answer was correct, this card is put on the table. If incorrect, it is put at the bottom of the pile in his hand so that it will appear again later.
4 A continues until all the cards are on the table. This method gives extra practice with the cards he got wrong.
5 Steps 1 to 4 are repeated until A makes no mistake, and all his cards are put down first time.
6 The children then change roles, and repeat steps 1 to 5.
7 Steps 1 to 5 are then, possibly at some other time, repeated with a different pack until all the packs are known.
8 The foregoing may be repeated using two packs mixed together.
9 The final stage is to mix all the packs. Each child then takes from these a pack of 10 mixed cards, and repeats steps 1 to 5 with this pack.
10 This activity should be continued over quite a long period: say, one new pack a week, with revision of earlier packs, including mixed packs.

Discussion of activity
It is tempting to write the results on the back of the cards; e.g. to write 18 on the back of 6×3. I suggest that it is better not to do this, since it is a step in the direction of rote memory. In its present form, each time they use the multiplication square and L-card they are relating what they are learning to a structure. The number in the bottom right-hand corner of the rectangle shown by the L-card is the number of squares in the rectangle, and the squares in this rectangle correspond to the pattern of dots in the rectangular number game.
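The multiplication square and L-card can be sketched as a table lookup: the entry at the L-card's corner is the product, and it also counts the unit squares in the rectangle the card marks out. This is an illustrative sketch only, not a description of the published materials:

```python
# A 10-by-10 multiplication square: the entry at row r, column c is r * c.
square = [[r * c for c in range(1, 11)] for r in range(1, 11)]

def l_card_check(r, c):
    """Place the L-card at row r, column c and read the corner entry.

    The corner number equals the number of unit squares inside the
    r-by-c rectangle the L-card marks out on the square.
    """
    return square[r - 1][c - 1]

assert l_card_check(4, 7) == 28     # checks the child's answer to 4 x 7
assert l_card_check(6, 3) == 6 * 3  # same count as a 6-by-3 dot rectangle
```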
The place of written work
Teachers who understand the importance of a strong foundation of practical and oral work will appreciate the paragraph in the Cockcroft report in which we read:

…albeit with the best of intentions, some parents can exert undesirable pressure on teachers to introduce written recording of mathematics, and especially 'sums', at too early a stage, because they believe that the written record is a necessary stage of a child's progress. …A premature start on formal written arithmetic is likely to delay progress rather than hasten it.2

The foregoing observation is closely in accord with the present theoretical model. However, the word 'premature' is important, with its implication that written work does have a place when the appropriate degree of maturity has been reached in children's thinking. So what is this place, and what degree of maturity is needed for written work to be a help rather than a hindrance? Writing is putting thoughts on paper, so:
(i) Clearly children need the thoughts, i.e. the conceptual structures built and tested by practical experience and oral work in the ways already described and exemplified.
(ii) They need to be able to form the written symbols fluently enough for these actions not to require too much effort. Most of our conscious attention needs to be available for the thinking process. This is one of the reasons why plenty of oral work is useful: once children can speak, saying the mathematical words is normally effortless (though long words like 'perpendicular' may need a little practice initially). Learning to write numerals and other mathematical symbols fluently needs first to be practised as a separate skill from the mathematics itself.
(iii) We need strongly established mathematical schemas, so that the conceptual structure always dominates the symbolic structure. (See Chapter 5.)
Written work may usefully be introduced, initially in quite a small way, at any stage of children's learning when the foregoing requirements are satisfied, and when it serves one or more useful purposes. These include:

1 Reducing cognitive strain. By 'unloading' some of the ideas we are using onto paper, we have less to carry in our mind at a time. A simple example of this is in The Handkerchief Game (p. 125), when the total number of objects is shown by a numeral on a card, so that the children can concentrate on working out in their minds how many of this total are hidden by the handkerchief. This 'unloading' function becomes indispensable when we are doing a long calculation, or a long chain of reasoning. It also enables us easily to refer back to earlier steps in our thinking.

2 Showing structure. Compare these two ways of writing the same number.

Roman numeral: MMCDLXXVIII
Hindu-Arabic numeral: 2478

The advantages of place-value notation become even more apparent when we have to add two numbers.

MMCDLXXVIII
MDCCCXLIX

Place-value notation represents on paper the structure which the Romans were already using in their thinking, but not in their notation. It greatly simplifies calculations, which were so difficult in Roman numerals that they were done with the help of pebbles (in Latin, calculi).

3 Promoting reflective intelligence. It is by the use of symbols that we achieve voluntary control over our thinking, and become able to move from an intuitive to a reflective level in the functioning of our intelligence.3 Putting our thoughts into words is not always easy. At an oral level, we can concentrate on putting together the right words in the right order. When we have achieved this, we have not only communicated them to others, but we have a better grasp of our own thinking: a metaphor (i.e. 'grasp') which emphasizes the function of symbols in acting as combined labels and handles for our concepts.
This indicates another advantage of co-operative activities in which children talk to each other about the mathematical ideas they are learning. It is also why a good teacher, after an activity in which a new concept has been formed, asks the children to put it into words, and helps them towards a clear formulation. Writing, though more difficult, takes this process a stage further. It distances us a little from our thinking, allows us to consider it more critically and objectively, and allows us to return to it later for further consideration.

4 Recording and communication. If our concern was with the development of a culture, the transition from oral transmission of knowledge and folklore to written records is of such importance that it would come first on our list. In the individual development of a child, this function of writing tends to be used largely as a means for teachers and examiners to try to find out what children have learnt, by getting them to answer questions in writing so that the answers are in a permanent form which can be marked methodically and at leisure. In a book about teaching, the importance of assessment is that we must know how far children have reached in their understanding, to know what they are ready to learn next, or whether revision and/or consolidation are needed before going on. In the case of younger children, it is hard to make reliable inferences of this kind from their written work. Practical and oral work, in which follow-up questions can be put by a teacher, are much better for this purpose.

The benefits which can result from the appropriate use of written mathematics are so great that eventually, written work is indispensable. These benefits will, however, only be gained if this is introduced when children are ready, and in ways whereby children experience the additional powers which it gives. If this is done, then a desire to have something to show will also be fulfilled, and to better effect.
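The contrast drawn above under 'Showing structure' can be made concrete: a Roman numeral has to be decoded symbol by symbol before any calculation can start, whereas place-value digits already display the structure needed for column addition. A sketch of the decoding in Python (for illustration only):

```python
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(numeral):
    """Decode a Roman numeral: a symbol smaller than its right-hand
    neighbour is subtracted (as in CD = 400), otherwise it is added."""
    values = [ROMAN[ch] for ch in numeral]
    return sum(-v if i + 1 < len(values) and v < values[i + 1] else v
               for i, v in enumerate(values))

assert roman_to_int('MMCDLXXVIII') == 2478
assert roman_to_int('MDCCCXLIX') == 1849
assert roman_to_int('MMCDLXXVIII') + roman_to_int('MDCCCXLIX') == 4327
```

In place-value notation the same sum is a routine column addition; the Roman forms give no such help.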
Process and content
One often hears it said, among mathematics educators, that we should emphasize process rather than content. In my view these are interdependent. Good process needs good content, as may be seen from the example already given on p. 79. Almost everyone to whom this is described admires the process. What did this child do? She selected from her repertoire just those numerical relations which were useful for her purpose, and related them in a mathematically elegant way. To do this she had to know these relations, which is to say she had good content. She also needed to have them mentally 'at her fingertips'. It is hard to say whether this is content or process. Selecting and arranging is certainly process, and since this can be applied to many different contents it is process in a fairly context-free form. But there have to be contents to realize the potential which inheres in this process, and the way it is structured is part of the contents. Structured contents include relationships, as part of their structure. Building well-structured contents is an important kind of good process, and by acting on already structured contents it may add more relationships, in this way making better the contents already there. The relationship is mutually supportive.

Problem-solving is often singled out as an example of process. In terms of the present model, solving a problem is finding a way for achieving a particular goal from a given starting point when we do not have a ready-made plan, and putting this plan into action. This is a situation calling for adaptability, and involves just the same processes acting on just the same kind of content as those described in the previous paragraph. So the foundations of successful problem-solving lie in well-structured relational schemas, together with fluency in a repertoire of useful routines so that conscious attention is free to concentrate on what is unfamiliar.
An additional requirement is confidence in one's own ability to cope with new situations. This, and other emotional influences on learning and performance, will be discussed in Chapter 9.

Can 'learning how' lead to 'understanding why'?
The argument is sometimes advanced that if children are first taught how to do certain mathematical tasks, and are given enough of these to do, understanding will gradually come. While it would be unwise to suggest that this can never happen (someone would certainly find a counter-example), the present model suggests that this would not be a safe principle to adopt as a general basis for teaching unless certain important conditions were satisfied.

In general, understanding results from assimilating new experience to an appropriate schema. Applied to the present discussion, it will result if the new method, first learnt without understanding, is subsequently assimilated to an appropriate mathematical knowledge structure: which must therefore already exist. This being so, the method could have been derived from this knowledge structure; so there needs to be a good reason for not teaching it that way round, either directly, or by helping children to find the method for themselves. Either of these will greatly increase the likelihood of the new method being related to a knowledge structure. In the approach under discussion, it is natural for children to perceive what they are being specifically taught, namely the use of the methods, as what they are being asked to learn. And if they can get the right answers in this way, why should they bother about why the methods work? Though there will always be some children who want to know why, the situation itself is weighted towards rule learning, with all its disadvantages. As teachers, why should we expose children to this risk when it is not necessary?
Though the 'methods first' approach is not advocated as a general approach, there are some instances when 'Here is a method. Why does it work? Will it always work?' can be seen as a valid approach within the present model. These instances need to be carefully chosen, however, and come as extras to the 'understanding first' approach, not as the basis of a desirable teaching method. An example of this kind, which the reader may find intriguing, will be found at the end of this chapter as number 3 of the suggested activities for readers.

Calculators and computers
These have several things in common. First and most important, they manipulate symbols, not concepts. They neither store nor process information as such, but symbols which can represent information only in the minds of those who have the right concepts and knowledge structures available. They operate at the level of syntax (relations between symbols), not semantics (relations between concepts). However, they do these manipulations very fast and very accurately, and they never grow bored however long, monotonous, and repetitious the tasks.

Calculators
These release us from the drudgery of acquiring speed and accuracy in doing complicated calculations. They do not release us from the task of knowing what are the appropriate calculations to do, or whether the answer makes sense. But they make more time available for learning with the emphasis on understanding, and thereby help us to meet this obligation. They also allow us to choose which methods it is useful for children to learn, and which we may now hand over to calculators. The former include simple routines, fluency in which will enable us to do any job requiring these much faster than if we were to stop and use calculators every time. In the school hall we have set out eight rows of chairs, twelve chairs to a row. Are there enough chairs for 100 people, and if not how many more do we need?
The time saved can be put to better uses, such as the above. In contrast, I have not found any such benefit in teaching children to do long division, and my preference would be to replace this entirely with the use of calculators.

Another advantage of calculators is that when mathematics is applied to real-life situations, they allow us to work easily with the kind of numbers we are likely to get, rather than with the simpler figures chosen for the examples we give children to learn on. Thus, rightly used, calculators can help us to improve the quality of school mathematics.

Computers

At present, most primary schools do not have enough computers for individual children to spend more than a short time with them in each week, so they are not as yet to be seen as a major influence on their learning of mathematics. This situation is not without its advantages, since we still need to find out what are the right ways of using them to promote intelligent learning of mathematics. Many programs offered to schools merely replicate existing bad teaching methods. Others simulate practical activities which the children would much better be doing with the physical materials themselves. Moreover, the software scene is so rapidly changing that many detailed recommendations which might be made are likely to be out of date within a year or two of the publication of this book. Up-to-date information is better sought in periodicals given to this subject.

General principles, however, are more lasting, since they are applications to the particular situation provided by a microcomputer of the principles which have been discussed throughout this book. An example of good computer software would, ideally, provide a well-structured mathematical situation which allowed children to use all of the six modes of schema construction first listed in Figure 4.1, and repeated here because of their importance.
Figure 7.3 [The six modes of schema construction, repeated from Figure 4.1]

One of the best examples of software which does this is to be found in LOGO, invented by Seymour Papert as a computer language specifically for educational use. Though Papert’s thinking developed independently from mine, and in different contexts, the underlying similarity is striking. Many readers will already be familiar with LOGO. Those who are not will benefit by gaining at least a little acquaintance with it before reading the discussion which follows, and also for its own sake.

The language of LOGO enables the user to control the movements of a screen ‘turtle’, and to use it to draw patterns, in ways which become increasingly complex and interesting as the user’s knowledge progresses. The knowledge needed is of two kinds: of the computer language itself, and of the mathematical structure of the microworld provided by LOGO. ‘Microworld’ is an evocative term, and though Papert invented it in the context of LOGO, we may apply it to any part of our environment which behaves lawfully; and which can be understood, and in varying degrees controlled, by building a schema representing the system and its laws. A computer running LOGO provides good opportunity for schema construction in all six of the ways listed above in Figure 7.3.

Mode 1 building. In a typical LOGO learning situation, children are allowed freely to explore the behaviour of the screen turtle in response to the commands typed at the keyboard, singly and in combination. Thus they learn by direct physical experience of the LOGO microworld.

Mode 1 testing. Children set their own goals and devise plans of action, in the form of programs, which they predict will draw the pattern or other shape which they want. This prediction is immediately tested when the commands are typed in and the program is run.

Mode 2 building.
They learn the commands built into the LOGO language (known as primitives) by communication, either verbal from their teacher or written from a handbook or instruction sheet.

Mode 2 testing. A good way for children to work is two to a computer, not only because computers are usually in short supply, but for all the benefits of discussion, and co-operative learning, described earlier in this book.

Mode 3 building. The LOGO language allows children to create new procedures, and give them a name, after which the computer will treat this name as a command and execute this procedure whenever its name occurs as part of a program. These new procedures may then be combined to form superordinate procedures. For example, a procedure may be invented to draw a shape like the petal of a flower, and called PETAL. Another procedure, called (say) FLOWER, may then arrange a number of petals into a flower; and another, say GARDEN, may scatter flowers over the screen.

Mode 3 testing. All the time one is writing a program or procedure, one is testing by mode 3: whether it fits in with one’s existing knowledge of the LOGO environment.

These three modes are more powerful in combination, and LOGO provides a situation which is conducive to this. Devising new programs involves mode 3, as described above. If one is working with a partner, mode 2 testing is thereby introduced; and as soon as the program is run, mode 1 testing also takes place.

As an example of good process, learning with LOGO is outstanding in the field of educational computer software. This is interdependent with good content: LOGO is an elegant language, simple to get started in, but capable of development to complex and sophisticated levels of use. It is also enjoyable, for children and adults alike. These together often produce a halo effect in which the limitations of its mathematical content are overlooked.
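The way procedures built from primitives become commands in their own right does not depend on graphics. The sketch below is my own, in Python rather than LOGO (LOGO itself draws on screen; here the ‘turtle’ is reduced to a position and a heading), and is intended only to show how a defined procedure such as square composes the primitives, just as PETAL, FLOWER and GARDEN compose one another.

```python
import math

class Turtle:
    """A screen turtle reduced to its state: position and heading."""
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0  # heading in degrees

    def forward(self, distance):  # plays the role of the primitive FORWARD
        self.x += distance * math.cos(math.radians(self.heading))
        self.y += distance * math.sin(math.radians(self.heading))

    def right(self, angle):       # plays the role of the primitive RIGHT
        self.heading = (self.heading - angle) % 360

def square(turtle, side):
    """A new procedure built from the primitives; once defined, it can
    itself be used as a command inside superordinate procedures."""
    for _ in range(4):
        turtle.forward(side)
        turtle.right(90)

t = Turtle()
square(t, 50)
# Drawing a square returns the turtle to its starting point, facing its
# original heading: a mode 1 test of a prediction made by mode 3.
```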
Unfortunately only a small part of the necessary content of a mathematical curriculum, at primary or secondary level, can be learnt from the LOGO microworld. But the concept of a microworld itself is quite general, and applicable to many other areas within and outside mathematics. And the pioneering work of Papert has provided us with a high-quality exemplar by which to judge other software on offer.4,5

Some criteria for a curriculum

There is not space here for a detailed primary curriculum. One such curriculum may be found in my Structured Activities for Primary Mathematics. It provides detailed plans of action for teachers based on the theory, and the relation of theory to practice, described in this book. There is, however, much other material to choose from, so in this brief section are offered some criteria for choice.

(i) Is there a clear theoretical basis, which takes account of the process of intelligent learning as distinct from the mathematical content?

(ii) We cannot know for what uses children will need their mathematics in the future world for which we are trying to prepare them. We do not even know what this world will be like. It follows that the most important feature of what they learn is adaptability. This can be provided by helping them to acquire well-structured mathematical schemas, together with a repertoire of widely used routines, experience in adapting their knowledge to new uses, and confidence in their ability to go on doing so. Does the curriculum you are evaluating provide for these?

(iii) Does the curriculum take account of the hierarchic nature of mathematical knowledge, in which nearly everything children learn builds on past knowledge and prepares for future learning? Is the overall structure to be constructed over the seven years of primary schooling made explicit by concept maps, or in some other equivalent way?

(iv) Within each topic, is there a progression from concrete to abstract?
And are children given practice at re-embodying general concepts in particular instances?

The place of projects and investigations

Both of these have an important place in the mathematical curriculum, provided that they are introduced at the right stages, when children can benefit from them. One sometimes hears that ‘Mathematics is all around us’. This is not accurate: mathematics is in people’s minds, a kind of knowledge which can be used in many different ways to understand and organize what is around us. But first we need the knowledge.

To form a new concept, learners need to encounter a number of examples, fairly close together in time, in embodiments which do not contain too much irrelevant material. Projects do not meet these needs. The mathematics is too dilute: there is not enough mathematical content relative to all other materials, and too much material which does not belong to the concepts to be formed.

The value of projects is after new concepts have been formed and existing schemas enlarged. They provide opportunities for application, call for adaptation of available knowledge, and possibly also provide situations in which there are problems to solve. Children gain experience in the choice of appropriate mathematical models, and of plans of action based on these. These may involve combining ready-to-hand methods with new adaptations specifically devised for the job. Also related to these is the collection and organization of data, in a form which allows the mathematics to be put to use.

Environmental studies offer good opportunities for projects of many kinds, and for relating the mathematics learnt in school to the world outside. Children are also using their mathematics in ways similar to those in which they may need to use it in adult life. For the foregoing, children need already to have appropriate knowledge and skills as a starting point.
The sequence in which suitable situations for projects, environmental applications and investigations, are likely to arise is unlikely to be one which meets the requirements, already discussed in detail, for building up mathematically structured knowledge.

Mathematical investigations are, however, a different matter, since these can be chosen specifically to fit current states of children’s mathematical schemas. Indeed, many of the schema-building activities which appear in Structured Activities for Primary Mathematics, and of which a sample is offered in Chapter 6, may be regarded as investigations leading to new concepts. For example, The rectangular numbers game invites players to investigate the question: ‘Is there some way of predicting whether it is possible for the other player to make a rectangle from the counters I give her?’ By their nature, the mathematical content of investigations of this kind is more concentrated; and in some cases, they can be investigated at different levels of sophistication. One such example is offered as number 6 of the suggested activities for readers at the end of this chapter.

Summary

1 A mathematical curriculum should provide for the learning of structured knowledge, a good repertoire of routine plans for frequently-encountered tasks, and skills in putting these plans into action easily and accurately.

2 Written work should not be introduced prematurely, before concepts have been introduced by oral and practical work. Used at the right stage, however, written mathematics is indispensable. Among its functions are reducing the cognitive strain of trying to remember long sequences of calculation or reasoning; showing structure; promoting reflecting intelligence; recording and communication.

3 Process and content are interdependent.

4 Calculators release us from the drudgery of acquiring speed and accuracy in doing complicated calculations.
They do not release us from the requirement of knowing which are the correct calculations to do, or whether the answer makes sense.

5 Some methods of calculation embody important mathematical principles, and may be worth teaching with this emphasis, while still using calculators as in 4.

6 Computers can be used to provide microworlds for children to explore, using the three modes of schema building and testing already discussed.

7 We do not know what are the uses for which children will need their mathematics when they are grown up. The best preparation for this unknown future is the combination described in (1) above, together with enjoyment of mathematics and confidence in their ability to continue learning it and applying what they already know to new situations.

8 Projects and investigations form a valuable extension of the mathematical curriculum. Projects, in particular, allow teachers to make use of matters of current interest, and provide opportunities for adapting mathematics to new situations, and relating the mathematics learnt in school to the outside environment.

Suggested activities for readers

1 Teach the activity Cards on the table to some children.

2 The whole school, consisting of 103 teachers and children, are going on a school trip. They are to be shared between 4 hired coaches. How many persons will there be in each coach? Do the appropriate calculation mentally, and by the use of a calculator. Which gives the more sensible answer? Can you think of other examples in which the answer given by a calculator should not be accepted at face value?

3 (A mathematical investigation.) Here is a method for finding whether a two-digit number is a multiple of 9. Add the digits together, and if the sum is 9, the number is a multiple of 9. Example: 36. 3+6=9, so 36 is a multiple of 9.

(i) Why is this so? (Hint. If you are stuck, draw a number line and, beginning at 9, keep moving to the right 9 at a time.)
(ii) Does this work for larger numbers, and if so why?

(iii) (Harder.) Does a similar test apply to multiples of other numbers?

4 (Another mathematical investigation.) Only the first step in the analysis of long multiplication was given in the text. How about 4(30)? Do you know your ‘4 times’ tables as far as four thirties? If not, what other property of a number system is used? Continue the analysis further if you wish, or read about it in detail by following up reference 6.

5 Analyse any mathematical learning activity with which you are familiar in terms of the concept of a microworld.

6 (Another mathematical investigation.) What is the least set of weights by which one can weigh any whole number of grammes up to (say) 100 grammes in a balance (a) with all the weights on the same side (b) with weights on both sides if required?

Management for intelligent learning

A teacher’s dual authority: authority of position, and authority of knowledge

Unless learners accept the authority of their teacher, the teacher cannot function as such. But there are at least two distinct kinds of authority which a school teacher has to exercise, which may be called authority of position and authority of knowledge. In everyday life these are usually separated. A policeman on traffic duty, a customs officer, the captain of a ship, all occupy positions in which obedience is due to them. In these examples, their powers have the force of law, and disobedience is punishable by fines and/or imprisonment. We can think of other examples of positional authority which, though not enforced by the law of the land, are similar in other respects. A football referee has the power to award a penalty kick, or to send a player off the field. Here, obedience is required and enforced by the rules of the organization. In all these cases, the power belongs to the position, not to the person. When persons cease to occupy the positions, they no longer have the powers which go with them.
As a spectator, the football referee would no longer be in a position to exercise the powers described; and similarly for the other examples. In contrast, authority of knowledge is inherent in the persons themselves. Someone who is an authority on growing roses, or the history of the Second World War, remains so whatever their role at a given moment.

Another difference between the two kinds of authority is in our freedom to choose whether we accept their authority or not. With respect to positional authority, we have no choice. When driving a car, we must obey the directions of a policeman on traffic duty. But if as patients we consult a medical authority, whether a general practitioner or an expert on our particular ailment, we are free to choose whether we follow the advice they give. Unlike the first kind, this is a co-operative relationship: and it is of the essence of co-operation that it is voluntary on both sides.

For us as adults, the authority of a teacher is of the second kind. We go to them of our own free will because we believe them to be knowledgeable in a particular subject, and they are free to accept us as students, or not. When we follow their directions, it is because they are helping us to achieve a goal of our own choosing. If we do not want to do the work they give us to do between lessons, they cannot make us. Equally, if they feel that we are not co-operative students, they are free to discontinue teaching us. If there are conditions attached, e.g. about regular attendance, or withdrawal from a course, these will be clearly stated and agreed beforehand.

For children, however, their teachers’ authority is both of position and of knowledge. They are in school because the law requires it; they are in a particular teacher’s class because either the head teacher, or someone to whom this power has been delegated, has so decided; and their class teacher decides for the most part what they have to learn.
Many of us believe that there are good reasons for all these decisions, just as when driving it makes good sense under certain conditions to have someone regulate the traffic. Likewise, an orderly classroom environment is necessary for intelligent learning, and it is part of a teacher’s job to create and maintain this. But this does not change the fact that in this part of their authority, the relationship of teachers towards their pupils is a power relationship. To teach successfully, however, teachers have to be authorities of the second kind, knowledgeable both about their subjects and how to teach these; not to mention other aspects of child development. This is authority of knowledge, not of position; and, as we shall see in the next section, for intelligent learning this needs to be a co-operative relationship.

Obedience and co-operation: their different effects on the quality of learning

A characteristic of a power relationship is that obedience can be enforced by punishment or the threat of punishment. Sometimes it is also encouraged by rewards. The relationship between these and habit learning has already been described in Chapter 2, p. 33, and it will be worth reviewing that section briefly here.

From the extensive literature of the behaviourists, we find that as well as learning reinforced by rewards, animals (typically, rats and pigeons) will learn to do things by which they can avoid punishment (such as electric shocks). They can also be taught habits by symbolic rewards. For example, when a bell has become associated with a food reward, the bell itself acts as a reinforcer. And in the same way, stimuli which have become associated with punishment can be themselves used to produce avoidance learning. Reward and punishment are powerful methods of bringing about learning in nearly every kind of animal, not excluding our own species.
Adults in charge of children are in a strong position to give rewards of many kinds, physical, material, and verbal approval or other symbolic rewards; and to impose punishments of equally many kinds, including disapproval. Learning brought about by rewards and punishment is, however, more likely to be habit learning rather than intelligent learning.

Children are at the most learning age of the most learning species which has yet evolved on this planet. Teaching is an intervention in this process, by which we can greatly help them to use their learning abilities for the greatest benefit of themselves and others if we know how. These learning abilities include both intelligent learning and habit learning, and in view of the great difference between these, both in how they take place and in their long-term effects, knowing how begins with an understanding of these differences.

This means that as teachers, it is important for us (i) to analyse the nature of every learning task, so that we can make a considered decision whether in this case habit learning or intelligent learning is in the long-term interests of the child; and (ii) to choose, or if necessary devise, teaching methods such that the children we teach are likely to bring into use the appropriate kind of learning.

As has already been emphasized, in the learning of mathematics most of the learning needs to be intelligent learning. Habit learning is needed for the number-names and their written symbols. The development of fluency in the recall of useful facts, and of skills in the easy and accurate performance of useful routines, may be thought of as a partnership between habit learning and intelligent learning, in which intelligent learning should dominate.

There are other subjects for which a greater proportion of habit learning is appropriate. For example, I would regard learning to write easily and clearly as a valuable habit.
This is likely to remain useful throughout our lives, so lack of adaptability is no disadvantage. For anyone learning to use a computer, I would similarly regard learning to touch-type as a valuable habit to acquire early on, instead of forming a habit of two-finger typing with eyes fixed on the keyboard. But this also illustrates the disadvantages of habit learning, in that we are tied to the QWERTY keyboard, which was designed to minimize the mechanical limitations of early typewriters rather than for ergonomic efficiency.1 This lack of adaptability of habit learning has proved a major obstacle to improving keyboard design. However, the advantages of good touch-typing skills, in my view, outweigh this disadvantage. These are the kind of considerations to be taken into account, though for other subjects the analysis will be more complex.

Where intelligent learning is the appropriate method, we need to remember that we cannot take over the job of a learner’s delta-two. A teacher can, however, greatly help its function by providing a good learning situation and good materials to learn from. Within this environment, an ideal relationship would be one based entirely on co-operation, in which the learner freely consults the knowledge of the teacher, and is also free to learn by the other modes of schema construction summarized in Figure 4.1 (p. 74). When it is their teachers and not the children who have decided what they should learn, we cannot have this ideal situation. Nevertheless, activity methods involving co-operative learning by small groups of children, in which their teacher is involved for some but not all of the time, can come a long way towards it.

The resulting problem, for children and teachers

The problem intrinsic in the dual role of a school teacher is the need to avoid role confusion, and to reduce role conflict as much as possible.
This problem would not arise if habit learning were the only kind needed: but we have seen that this is not the case. So the first requirement is for both teachers and children to distinguish between these two roles, and to know which one is active in any given situation. It is hard for children to do so if their teachers do not. But there are many teachers who treat arithmetical mistakes as if they were a kind of disobedience, and teach children to do as they are told by repetition or reproof rather than by explanation. Since in the long term, habit learning increases the likelihood of mistakes, this role confusion is counterproductive.

The double meaning of the terms ‘right’ and ‘wrong’ does not help matters. On the one hand they mean ethically right or wrong, ‘good’ or ‘naughty’; and on the other hand they mean correct or incorrect, in the sense of schema testing by the three modes already described, or in the sense of a successful plan of action. This suggests that while teaching it is better to avoid using ‘right’ and ‘wrong’ with the latter meanings, but to use ‘correct’ and ‘incorrect’ instead; or in some cases, ‘I agree’ or ‘I don’t agree’.

Remembering that praise as well as reproof can bring about habit learning, we also need to think seriously about its overuse in situations where the learning goal is the building of structured knowledge. A desire to gain praise from their teacher may lead to an attitude based on ‘What does he want me to say?’, rather than ‘What makes sense in relation to my own knowledge?’ In one of the schools where we were field testing activities, it became clear that many of the children were concerned, not with using their own thinking to arrive at an answer which made sense to themselves, but to try to find out what answer we wanted. They were looking at our faces as much as at the materials.
This situation has been well described by John Holt in his classic How Children Fail.2

The achievement of understanding is itself a source of pleasure and satisfaction, as is evident from children’s expressions. This is also rewarding to oneself when teaching, and it would seem a cold-hearted suggestion that at these times children and teachers should not share each other’s pleasure. However, there does seem to be a possible conflict of principle between this and the argument of the previous paragraph. Reflecting on this, and after discussion with teachers and colleagues working with the project, we have agreed that responses like ‘That was a nice piece of thinking’, ‘You explained that very clearly’, are appropriate. These are not rewards for correct answers, but appreciation of intelligent thinking. If children acquire the habit of using their intelligence within the context of mathematics, then surely this is a good habit which they should not need to change.

The authority of the subject

Over a number of years, Buxton has engaged in remedial group work with adults who were convinced that they couldn’t do mathematics.3 One of his techniques involved asking them to visualize a 3×3×3 cube, as in Figure 8.1.

Figure 8.1 [A 3×3×3 cube]

They were asked to take time to stabilize this image in their mind (note the explicit removal of time pressure), after which they were given the following problem.

Imagine this cube painted black on the outside. Next, think of it as having been cut up into smaller cubes, each 1×1×1. How many small cubes have no paint on them?

Once the answer was seen, it was regarded as self-evident by some of Buxton’s subjects, needing no confirmation from anyone else. Many, however, so lacked confidence that they remained dependent on his agreement. They were still dependent on their teacher telling them they were right.
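That the ‘authority’ lies in the mathematics itself can be illustrated by settling Buxton’s problem with a brute-force count: anyone who runs it gets the same answer, with no teacher’s say-so required. The sketch below is my own (in Python), not part of Buxton’s procedure.

```python
from itertools import product

def unpainted(n):
    """Count the 1x1x1 cubes with no painted face when an n x n x n
    cube, painted on the outside, is cut up: a small cube is unpainted
    exactly when none of its coordinates lies in an outside layer."""
    return sum(
        1
        for x, y, z in product(range(n), repeat=3)
        if all(0 < c < n - 1 for c in (x, y, z))
    )

# The unpainted cubes form the interior block of the big cube, so for
# n >= 2 the count agrees with (n - 2) cubed.
for n in range(2, 7):
    print(n, unpainted(n), (n - 2) ** 3)
```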
Mathematics is a body of knowledge which, though it only exists in people’s minds,4 nevertheless has an existence which is independent of any one person or number of persons, in much the same way as an institution continues to exist although over the years its members change. The truth of a mathematical conclusion depends on its consistency with an accepted body of mathematical knowledge, and both this shared knowledge, and whether a particular statement is consistent with it, cannot be changed by the say-so of an individual. As Buxton has well put it, ‘the “authority” is within the mathematics’. In relation to this, teacher and pupil are on equal terms. If a teacher makes a mistake while working on the blackboard, and a pupil points this out, the teacher will correct it. The mathematics itself ‘says so’.

What kind of authority are we talking about now? Clearly, it has to be the second kind, authority of knowledge, with the additional force that this is knowledge widely shared and agreed. The basis of this agreement is shared human rationality: shared not only by the great mathematicians of the past and present who have pioneered its construction, but potentially by a majority of children and adults in whom it is part of their genetic endowment. The process can operate on quite a small amount of content, so it can be, and is, found in quite young children: as is exemplified by the third child in observation 1 on page 114.

Management of the learning situation

Here is an illuminating encounter described by Biggs.

A six-year-old said to me the other day: ‘Give me a number and I’ll double it for you.’ I gave him 37 (he was calculating in his head). He said ‘Two thirties that’s sixty, two sevens that’s fourteen, seventy-four.’ He continued, ‘Don’t tell my teacher.’ I asked why not since his method was a good one.
The boy answered ‘She makes me write it down and I don’t understand her method, so I do it in my head and then I write it down her way and I always get my mark, so don’t tell her.’5

Young as he was, this little boy had learned to manage his learning situation so that he could both use a method he understood, and gain the rewards of obedience. Not all children are likely to be so successful, however. Nor will they need to be, if their teachers can manage the learning situation in such a way that the two aspects of their authority are so far as possible kept distinct. The need for obedience to their teachers, together with reward and punishment in all their varieties, relate to the need for orderly behaviour without which an environment supportive of intelligent learning could hardly exist.

Children are not old enough to know what they need to learn to fit them for the world in which they are growing up. This is, or should be, largely the responsibility of the teaching profession. Teaching them ‘tricks for ticks’6 does not fulfil this responsibility, though it is an easy way of producing short-term results. We can influence children’s learning indirectly, through the power we have over children’s learning environment, just as strongly as we can do so directly by telling them exactly what they have to do and how they are to do it. This makes possible a co-operative learning situation of the kind described at the end of the last section.

This is not a book about the sociology of the classroom. The purpose of this short chapter is to show how social factors can strongly influence the quality of learning, and to encourage further thinking and reading in this area. In particular, the present model underlines the effect of perceived role-relationships on the quality of learning.

Management for ‘motivation’

It is perhaps unfortunate that the terms ‘motivate’ and ‘motivation’ have become widely used, especially the former.
For example, ‘What is his motivation for climbing the hill?’ means no more than ‘Why does he want to climb the hill?’, or just ‘Why does he climb the hill?’ Some possible answers might be ‘To enjoy the view from the top’, or ‘For exercise and fresh air’, or ‘Because he thinks there will be a good wind for flying his kite.’ In terms of the present model, we are asking what goal he expects to achieve by climbing the hill, as is clear from these answers. They do not go easily into the language of motivation: ‘He is motivated by a wish to see the view from the top’ is better said as ‘He wants to see the view from the top.’

These terms seem to stem from behaviourist psychology, in which behaviour is thought to be caused by external stimuli, rather than self-directed. Within this way of thinking (which is not confined to behaviourists), ‘motivate’ comes to be used as a transitive verb, and motivation takes on the meaning of something which can be applied externally to produce a given action, chosen in this case not by the subject but by the experimenter.

Transferred to the teaching situation, this leads to pronouncements such as, ‘It is part of a teacher’s role to motivate their pupils to learn.’ The dubiousness of this statement is concealed by the language. But when translated it becomes ‘It is part of a teacher’s role to make their pupils want to learn.’ How can teachers do this? Can they make someone want to see the view from the top of a hill, or want to take exercise, or want to fly kites? People want what they want, not what someone else wants them to want. So if I wanted this imaginary person to come with me to the top of the hill, I would have to know the sort of things he liked doing: the goals which gave him pleasure in pursuing. Or, of course, the opposite: anti-goals which he wanted to avoid, such as being left behind and getting lost. This would matter particularly if he were a child.
So if we want children to learn mathematics, we need to know what are their natural goals and anti-goals. Some which it is easy to identify are to gain approval and avoid disapproval from their parents, which in school extends to their teachers. Others are to increase their power over their environment, and to do many kinds of things with other children. In the next chapter it will be suggested that general characteristics such as these have evolved because of their survival value: and the survival value of all the above is easy to see, except perhaps the last, until we view it as learning to co-operate with their peers.

As teachers, we have considerable control over children’s environment. We cannot make them want to do any of the above, but assuming that they do, we can determine what they have to do to achieve their goals and avoid their anti-goals. Thus, we can award praise in the form of verbal approval, stars against their names, if they learn what we want them to learn, and also let them experience things they would rather avoid if they do not. This is easy to do, and a powerful means of bringing about habit learning. Under these conditions, learning with understanding may or may not take place.

Fortunately, the desire to understand, to make sense of our environment and thereby to increase our power over it, is not only a very general goal, but one which mathematics can help us to achieve. This is one of its major uses in the adult world, and as we have already seen, we can arrange for children to experience this at their own level in the mathematics they learn at school. Likewise, mathematics has important social value in the complex interactions of commerce and technology which are important to our culture; and it can form the basis of co-operative activities, including games, which provide worthwhile interactions at child level.
If we therefore replace the misleading question ‘How can we motivate children to learn mathematics?’ with ‘How can we manage the school situation so that by learning mathematics, children can satisfy some of their natural desires?’, then it can be seen that this is largely what the present book is about.

Summary

1 A teacher’s authority is of two kinds, authority of position and authority of knowledge. Outside school these are often distinct.
2 Authority of position embodies a power relationship in which obedience can be enforced by punishment or the threat of punishment, and encouraged by rewards or the hope of these.
3 Authority of knowledge implies a co-operative relationship in which we are free to seek the help of those who are knowledgeable in certain areas, or not; and to follow their advice, or not.
4 Both teachers and children need to distinguish between the two kinds of a teacher’s authority, so that the right kind of role-relationships can be used for the two categories of learning needed in school: orderly habits of behaviour, and intelligent learning.
5 Metaphorically, we may also talk about the authority of the subject; that is, of a shared body of knowledge, agreed as a result of human ability to reason.
6 Reward and punishment are conducive to habit learning based on obedience, rather than to intelligent learning based on co-operation.
7 Intelligent learning requires a co-operative learning situation within an orderly school environment.
8 The misleading question ‘How can teachers motivate children to learn mathematics?’ can be replaced by ‘In what ways can teachers arrange that learning mathematics becomes a way by which children can fulfil some of their natural desires?’

Suggested activities for readers

1 Identify some situation in your own experience where confusion between the two kinds of authority described has contributed to lack of success in learning.
Note that this confusion may be on the part of children, or teachers, or both.
2 Identify some school situations in your own experience in which the two role-relationships have been clearly distinguished. How was this brought about?

Emotional influences on learning

Emotions and survival

Mainstream psychology has tended either to ignore emotions, or to regard them as irrational influences, distractors, which disturb our normal thinking processes. This is in accordance with everyday usage: the Concise Oxford Dictionary defines emotion as ‘agitation of mind, feeling; excited mental state’. However, I believe that the separation of cognitive from affective processes is an artificial one, which does not accurately reflect human experience. In particular, many students when reflecting on their time at school have reported that strong emotions were aroused by their classroom experiences, and that these greatly influenced their learning for better or worse. As professionals in this area we need a better understanding of these influences than common sense provides. So in the present chapter, I shall suggest how the model of intelligence outlined in Chapter 2 can be extended to include this important influence on our behaviour and learning.

Given that emotions are, subjectively, an important feature of human experience, it seems reasonable to ask whether we should indeed regard them as a disturbance of our normal thinking processes, in which case we should try to minimize their influence; or whether emotions have a useful function, in which case we need to know what this is. I believe that our emotions, like many other questions relating to human nature and activity, can only be understood by considering them within the perspective of evolution.
So we shall begin with a wide-angle view of intelligence in relation to adaptation and survival.1

Any species which exists on earth today is here because over the centuries it has evolved physical, behavioural, and mental characteristics which are pro-survival. Homo sapiens has become dominant on this planet mainly because of one particular characteristic, intelligence. Why is intelligence pro-survival? Because it gives us the ability to achieve our goal states in a variety of ways to suit a variety of circumstances. Intelligence shows not in behaviour itself, but in adaptive changes of behaviour. Mathematics, as an adaptable and many-purpose mental tool, is an important contributor to these.

If the survival value of intelligence is adaptability, and the survival value of adaptability is that it enables us to achieve our goals, why is this last pro-survival? Because many of our goals are directly related to survival. In some cases the connection is direct and obvious: getting food, finding water, keeping the right body temperature. In others, perhaps, it is less direct. At the beginning of each day, many of us can be seen travelling by a variety of means to our places of work. When we achieve the goal of this journey we are no better off directly. But this is where we earn the money to pay for food, shelter, clothing, and other necessities of life; so indirectly this goal, and even more so the activities we do there, are pro-survival. When viewed in this way, a surprising number of the goals we seek can be seen as contributing to our survival. Keeping alive is the cumulative result of achieving and maintaining many different goal states: hence the survival value of this ability, in general: and of intelligent learning, in particular.
The above, necessarily brief, overview suggests that if we are looking for a possible answer to the question ‘Do emotions serve any useful purpose?’, a good place to begin might be by looking for survival value in emotions. At any time, our senses tell us of many changes in our relationships with our surroundings. Some of these changes take us nearer to, or further from, our goal states, and thus may affect our survival. Others are neutral. So it would be pro-survival in itself to have signals which call our attention to changes which do relate to goal states; and it would also be pro-survival if these signals were qualitatively different from other data reaching our consciousness, since this would enhance their attention-getting quality. This may be recognized as a fair description of emotions. They are hard to ignore, and this is because they are calling our attention to matters which relate to our survival.

Pleasure and unpleasure, fear and relief

The categories of emotion which follow are broad, and within each there are differentiations which I do not make here.

Pleasure

Emotions in this category signal changes towards a goal state. We feel pleasure while eating, taking exercise, resting when tired, enjoying the company of friends. The first three of these examples relate to bodily goal states: nourishment, keeping our muscles strong and our heart and lungs well functioning, physical and mental recuperation. The last relates to the need for mutual help, support, and encouragement, and to the benefits arising from the exchange of ideas in conversation. The enlargement of our schemas which results from understanding is of very general pro-survival value, since this increases the number of situations in which we can act appropriately. So it is no accident that we feel pleasure when we newly understand something.
Here is a student’s memory from early childhood.2

When ‘reading’ the letters of the headline in the newspaper, I suddenly came to the realisation that the words were made up of sound groups I could recognize, which were represented as groups of letters—basically, I had made the discovery that the letters on the chart on my nursery wall were related to the words that I heard people speaking, and that I was attempting to articulate. I had discovered that both were part of the same idea. My mother reports that this discovery sent me racing round the house in ecstasy, attempting to ‘pronounce’ every written word I came across, and positively gurgling with pleasure.

Unpleasure

This signals changes away from a goal state. We feel unpleasure when we miss our bus, when we lose our purse, or shiver in a cold wind. If at the same time there is nothing we can do about it, we also feel frustration, and all the examples of this kind contributed by my students have come into the latter category.

Fear

This signals changes towards an anti-goal state: that is, one which is counter-survival. It warns us of danger. Thus, we feel fear when our car goes into a skid, or when we encounter a venomous snake. But survival is more than bodily survival, and that which threatens our self-image is also experienced as threatening.

Fear has been the greatest emotion I have felt in a learning situation. Now, as I reflect on the situations in which I have been afraid, they are all related to a school environment. I have felt this fear as the form-positions are read out—the dread of coming lower than the position I should (in the teacher’s opinion) be coming in.

Relief

This signals changes away from an anti-goal state. We feel relieved when the driver of the car we are in regains control after a skid. Again, the threat need not be a physical one.

I passed the eleven-plus exam, which was an enormous relief to me as I had been expected to.
It is an awful amount of pressure for a child that age to be put under. I felt confident I would pass but then feared that would be asking fate to make me fail, so I used to walk home pretending that when the list of passes was read out in class, my name wasn’t on it. I did this to practise not crying at the news as a girl had done the year before.

Relief is not the same as pleasure, and is a poor substitute for it.

This cessation of finding pleasure in learning something in itself and instead experiencing relief, was also reflected in other areas of the classroom. The results of the tests were used as a basis for seating arrangements. The higher the mark, the further away from the teacher one sat. The unfortunate pupil who had got the lowest mark had the misfortune to sit right under the teacher’s nose. Added to this was the shame and humiliation of everyone else knowing your marks.

The foregoing categories are summarized in the diagram below.

Figure 9.1

Emotions in relation to competence: confidence and frustration, security and anxiety

The first four categories of emotion, described in the previous section, relate to situations which have actually arisen, and call our attention to the need to do something. The next four relate to whether or not we are in fact able to do whatever is necessary to bring about the goal state, and to prevent the anti-goal state. They signal our own competence relative to the situation. This, too, has clear implications for survival, since we need to be cautious about entering situations where we lack competence.

Confidence

Confidence signals competence: ability to move towards a goal state. Most of us, as experienced drivers, feel confidence while driving under normal conditions since we are well able to make the vehicle do what we want.
English has always been my strongest subject, and so whenever I received a low mark, I felt able to cope with it because I knew I had the ability to do better, reflecting on past marks. A friend however, found the subject difficult and so each ‘failure’ she found harder to cope with.

Frustration

This results from inability to move towards a goal state. When my computer screen shows uninformative error messages like ‘Bad program’ and I cannot discover the cause, I feel frustration.

As long as I can remember I have been better at English than Mathematics: pleasure and confidence in the former and frustration and anxiety in the latter are an integral part of my school experience. Constant frustration in Mathematics has affected, as the model suggests, my ability in attempting certain new tasks. For example, unknown subject areas like computing (which is an important part of my psychology course). My belief in my inability in the Mathematical field has led to basic lack of confidence in an area where I know little, and have no previous experience.

Notice that we are here talking about competence, the ability to achieve one’s goal by one’s own efforts. If it is a beautiful day, we may well feel pleasure; but we have no confidence in our ability to produce one of these when desired. The relation between these two emotions is well brought out by the next example.

When I am playing a piece of music on the piano I experience pleasure. If I play a wrong note I feel unpleasure, but if I quickly correct myself pleasure returns. However, if when I play the wrong note, my brother leans over my shoulder and says ‘F-sharp’, I feel frustrated because I’ve been deprived of the chance of moving to the goal state by my own efforts.

Security

This signals that we are able to move away, and/or stay away, from anti-goals.
A good climber will feel secure half way up a vertical rock face, not because there is no danger, but because he knows that he is in control of the situation.

From my own experience it appears to be beneficial to do a number of questions related to a new concept in order to create security and confidence…If one feels happy with what has just been learnt, there does not seem such a large risk involved in going on to the next stage.

Anxiety

If on the other hand we are in a situation where there are possible dangers, and we are unsure of our ability to avert these if they arise, then we are anxious. On an icy road, most drivers will feel anxious even when they are not actually in a skid. This student is describing her feelings during reading lessons.

As it neared my turn, I became more and more anxious as I was unable to do anything about it…When it was my turn, anxiety was superseded by fear; fear that I wouldn’t know all the words, that I’d lose my place or skip a line. I was fearful that the teacher would criticize me for not concentrating, but most fearful that my peers would laugh or think how stupid I was i.e. loss of peer-group status. When I was asked to stop reading, there was a feeling of immense relief.

Learning as a frontier activity

It is in the nature of learning that this takes place in regions where we are not as yet competent.

Figure 9.2

In Figure 9.2, the domain represents the region where we can achieve our goals and avoid our anti-goals. This is our region of competence, and within it we feel confident and secure. We might say that we feel ‘at home’ in a given situation.

Figure 9.3

In the region outside this domain we know that we are not competent. We can neither achieve our goals nor avoid our anti-goals, and feel both frustration and anxiety. Here a person might say ‘I’m lost’, or ‘I’m out of my depth.’ These are strong signals to keep out of regions such as these.
Ancient maps sometimes warned ‘Here be monsters’. The boundary between inside and outside a person’s domain is, however, usually not a sharp one. There is a frontier zone in which we can achieve our goals, and avoid our anti-goals, sometimes but not reliably. It is in this area that learning takes place; and learning is thus a process of changing frontier zone to established domain. The frontier zone then moves outwards, and the process continues.

It is pro-survival to expand our domains, so it fits in with this model that we have a strong exploratory urge. Since this takes us into our frontier zones, where we are not fully competent, the model also predicts that mixed emotions are to be expected in these situations: pleasure from success together with increase of confidence, unpleasure from failure and loss of confidence. The latter may be such as to bring learning to a halt, temporarily:

My brother is six years younger than me and I can remember very clearly when he first managed to take a few steps on his own. When he was 11 months old he tried to stand up on his own, but he fell down and he was really frightened. From that day he wouldn’t take another single step. It was five months later…that my little brother was persuaded to try again. Fortunately, he managed this time, and although he could not talk, one could see how satisfied and pleased he was with himself by the look on his face. He was obviously very happy and even his eyes were smiling.

Or even permanently:

I remember, a few years back, I used to envy my sister who could skate. Because it looked such fun, and because I didn’t want to remain a spectator all the time, I decided to learn this skill. I was very unsure of the consequences and did not commence my learning with much, if any, confidence at all…However, try I did, but no sooner had I set foot on ice than I found myself with a big crash on the floor, which was a very painful fall.
Immediately this put me off, and I did not even attempt to have another go. Needless to say, I’ve never set foot in an ice-rink since.

Managing children’s risks in learning

In both the above examples, one feels that it might have been possible to save the learner from such a severe result of failure. Children learning to ride a bicycle are often provided with a pair of extra wheels, one each side, which prevent them from falling over. Buoyancy aids are provided for children learning to swim. Risks are not only physical, however. Some frustration is bound to be experienced when we cannot always reach our goals. Repeated failure may result in loss of confidence and self-esteem. Teachers can do much to compensate for these negative emotions.

In the course of my education, I had one teacher in particular who strove to provide this emotional support. He tried to create as relaxed an atmosphere as possible, and tried to encourage everyone to make some contribution, no matter how small, without actually putting pressure on anyone to do so…Thus, emotional support was provided by the teacher and by the other pupils.

What a pity that this is not always the case.

If I did not understand his explanation of a point, instead of explaining it again slowly or in a different way, he seemed to just shout louder and thump his hand on the desk in emphasis of certain points.

The latter approach diminishes ability to think clearly.

The stress of the situation invariably diminished one’s ability to concentrate and think intelligently. Even if you were not the ‘victim’ at that precise moment, there was still the possibility that you would be next.

This may result in a vicious circle of anxiety leading to failure leading to further anxiety.

Friday morning was Mathematics, which for me was an awful experience.
I panicked and could never get my sums finished and correct…Mathematics then meant failure, punishment and ultimately humiliation.3

It has been shown experimentally4 that the functioning of reflective intelligence is diminished by time pressure. Buxton has also shown that questioning by someone in positional authority may be experienced as stressful, and that this in combination with time pressure may produce near-paralysis of rational thinking, accompanied by feelings of panic.5 In the psychological literature, we read that stress induces a high state of arousal which in turn hinders effective memory function.6

From the foregoing, we can derive five basic principles of risk management. The first of these is to distinguish clearly between the two kinds of authority of a teacher, and help children to do so also. It is appropriate to treat a genuine misdemeanour with disapproval: but errors in something that children are trying to learn are part of the learning situation. With this interpretation, ignorance is not something to hide. Rather, attention is focused on the error, and understanding of this error becomes a sub-goal in itself. By the process of understanding, that which began as error becomes a contribution to knowledge.

The second is to bear in mind that the frontier zone is liable to evoke mixed emotions, in which emotional support from teachers and peers can be a great help. The third is to allow time for reflective intelligence to function—time to think. The fourth is to provide opportunities, in the form of time and learning materials, for children to consolidate a new frontier zone, and convert it into established territory, before pushing on into the unknown. And fifth, we should allow children sometimes to revisit areas in which they are entirely competent and confident. Mathematical games are good for this.
Mixed emotions: confidence as a learner

Learning is a delta-two activity, concerned with improving the abilities of delta-one. The emotional signals summarized in Figures 9.1 and 9.2 will now apply at both levels. Thus, at a delta-two level, we feel pleasure when we make progress towards our learning goal, and frustration when we can make no progress. The pleasure of knowing that our ability is improving may compensate for unpleasure from mistakes at the doing level, or from other sources.

My father would very often shout, throwing me into a temper, however the pleasure in learning to drive came from my realization that toleration of his temper would eventually mean that I could drive the car single-handed.

A person may be in his frontier zone at a doing level, but comfortably inside his domain as a learner. For example, a pianist who heard a certain piece of music and wanted to play it himself might, on first sitting down at the piano with the written music in front of him, be able to play it only haltingly and with mistakes. As a performer, he would be in his frontier zone. But he would soon form a judgement of his ability to learn to play it well: that is, he would decide whether it was within his domain as a learner. If he decided that it was, he would approach the learning task with confidence, even though as a performer he would feel anxious and insecure if suddenly called upon to play the work in public.

This confidence in one’s ability to learn is a crucial factor in any learning situation. How long a person goes on trying, and how much frustration he can tolerate, will depend on the degree of confidence he brings to the learning task initially. His likelihood of success will also depend partly on how long he goes on trying. So a good level of initial confidence tends to be a self-fulfilling prophecy: the learner succeeds because he thinks he can. Lack of confidence will have the opposite effect.
Some are able to find resources of hope which enable them to view a problem constructively, while others are overcome by feelings of frustration or helplessness. I can illustrate this observation with my experiences as a student teacher on teaching practice in a local primary school. A lesson plan was drawn up involving the use of language and sentence structure. The children were Asian and, as English was their second language, they were unable to cope with the exercises set for them. One child ‘G’ remained calm and asked for help and spelling corrections, not understanding any better than the other children, but nevertheless willing to try. Another child ‘R’ immediately said ‘I don’t care if I can’t do it’ and gave up, speaking on the defensive to hide his lack of confidence in his own ability.

The present model also indicates that as teachers we can help to build children’s confidence if we distinguish clearly between these two levels, that of performance and that of learning, and help children to do likewise. At the level of knowledge construction, a mistake which by reflection and discussion leads to greater understanding can be as helpful to progress as a successful performance. This is to take a constructive approach to errors.

After good understanding has been achieved, together with plans of action based on this, there may be a case for developing skills in performing useful routines fluently and accurately. In this case, the goal is now to minimize errors, so an error itself will result in unpleasure. However, if the number of errors is getting smaller with repeated practice, the pleasure which results from this improvement will outweigh the unpleasure from making mistakes, and again the learner will go on trying.

Confidence in ability to learn has particularly far-reaching consequences, since it will influence whether, for emotional reasons, actual (as against potential) learning ability is increased or decreased.
During the long future in which children may need to use mathematics, two of the most influential factors will be whether they enjoyed the mathematics they learnt at school, and whether they are confident in their ability to learn whatever new they need as and when they encounter it. An important source of this confidence is past experience of success in learning with understanding. With habit learning, any new situation for which the learner has not memorized a rule throws him back on a teacher for new rules. With intelligent learning, however, a learner’s cognitive map may already extend part way into the frontier zone. Its features have something familiar about them, which can be understood by expanding and/or extrapolating existing concepts. For example, a child who can count in tens and units is able to see the same pattern repeated in hundreds and thousands. And by extrapolating this pattern in the reverse direction, place-value notation (if well understood) can help him to understand tenths and hundredths. In this new context, we see once again the importance of teaching which is based on conceptual analysis and concept maps.

The pleasure of exploration

Exploration involves working within a frontier zone and mapping it. Physical exploration often results in the arrival of settlers, who convert the frontier into established domain. The explorers may then move onward, into a new frontier region. In the process of schema building, we all have to be explorers, since the constructivist principle, embodied in the present model, tells us that conceptual knowledge cannot be communicated directly. It has to be constructed anew by every learner in his own mind. As Euclid said to King Ptolemy the first, ‘There is no royal road to geometry’; nor is there to any other branch of mathematics.
With the help of a good guide, however, we may become familiar with what is for us new territory with greater success and fewer hardships than we are likely to do by ourselves. The fact that many pupils fall by the wayside results partly from the shortage of guides who understand the special nature of the region to be explored. It is good also to have companions—fellow learners with whom to discuss what we encounter.

Children, like the young of many species, are natural explorers at a physical level. Particularly characteristic of humans is a proclivity towards mental exploration, which shows itself at an early age. For example, Papousek has shown that infants as young as two or three months show clear signs of pleasure from success in problem-solving, without any material reward.7 This, together with an innate ability to learn with understanding, forms a powerful combination which has been a major factor in the emergence of Homo sapiens as (currently) the dominant species on this planet.

This urge to explore is not confined to children. I believe that the research scientist in his laboratory and the child engaged in exploratory play have more in common than either has with most school children working for examinations. Any reader who deduces that I am suggesting a more playful approach to the learning of mathematics will be quite right.

The survival value of knowledge does not show at once, and at the time we acquire it we may not even know what we shall use it for. But, like money in the bank, it is good for a wide variety of purposes; and when our immediate needs are taken care of, the pursuit of knowledge has long-term survival value. In acquiring knowledge, we are providing ourselves with a resource from which we can construct a variety of plans of action to serve needs as they arise. Some of these may not have been foreseen when the knowledge resource was being acquired.
Like money, gaining it brings pleasure in itself; and, again like money, when we have taken care of the necessities, we can use it to enrich our lives in other ways. ‘If you have a loaf, sell half and buy a lily.’

If mathematics is seen as a particularly powerful, adaptable, and multi-purpose kind of knowledge, with aesthetic qualities of its own, the present evolutionary perspective on emotions indicates that with the right kind of guidance, there is much pleasure to be gained while learning it, by children and adults alike.

Summary

1 Emotions are important signals which relate to survival, both physical and in our own self-esteem. Diagrams summarizing some of the main categories will be found on pages 193, 196 and 197.
2 Within our region of competence (domain), we feel confident and secure. Outside this region we feel frustration and anxiety.
3 The boundary between these is not usually a sharp one. There is a frontier zone in which we feel mixed emotions.
4 Learning may be thought of as changing frontier zone into established domain, and therefore takes place in a frontier zone.
5 Mixed emotions are therefore likely to be encountered while learning. If the negative emotions are stronger, a person may give up trying to learn.
6 Emotional support from others can help to keep the balance on the positive side.
7 Stress diminishes the ability to think clearly. Causes of stress in a learning situation include questioning by someone in positional authority, and time pressure. This combination can lead to a vicious circle of anxiety leading to failure leading to further anxiety.
8 The risks inherent in most learning situations can be diminished by good management. Ways of doing this include:
(a) Distinguish between positional authority and authority of knowledge.
(b) Provide emotional support for learning.
(c) Allow time to think.
(d) Provide opportunities for consolidating newly learnt material before moving on to new topics.
(e) Allow children sometimes to work in areas with which they are comfortably familiar.
9 A person can be in a frontier zone as a performer, but still comfortably within his domain as a learner. Confidence in ability to learn will support continued efforts to learn, and greatly increase likelihood of success. Lack of confidence will have the reverse effect.
10 Past experience of learning with understanding is an important contributor to confidence as a learner.
11 The urge to explore, both physically and mentally, is an important characteristic of human nature.
12 In their adult life, two of the most influential factors in children’s ability to continue using mathematics are whether they enjoyed the mathematics they learnt at school, and whether they are confident in their ability to learn.

Suggested activities for readers

1 Reflect on your own learning experiences, both in and out of school, in relation to the theoretical model described in this chapter.
2 Discuss your own recollections with others who have also followed suggestion 1.
3 For those learning experiences associated with negative emotions, use hindsight to redesign them as you now think they should have been.

Continuing professional development

Learning while teaching

Continuing the exploration analogy used at the end of Chapter 9, the present book may be thought of as both a guide to guides, and a guide to explorers. At the mathematical level, teachers are acting as guides insofar as they understand the maths themselves—though it is common experience that there is nothing like trying to teach something to others for improving one’s own knowledge. So the book is intended to help teachers in their roles as guides to young explorers in what Papert has called ‘Mathland’. It offers a cognitive map to help in our thinking about the nature of the territory, and of this kind of exploration.
Those learning to be teachers, and those already in the profession wishing to improve their teaching of mathematics, are however more like explorers themselves. The new region to be familiarized and understood is the cluster of related abilities which together characterize intelligence, and which in use are the processes of intelligent learning, together with their applications to the learning of mathematics.

Explorations in the area of human intelligence have so far centred mainly on its measurement. Part A of the present book has emphasized its function, and in Chapter 6 suggestions were offered how this exploration might be approached. But this knowledge calls attention to another aspect to be explored, which is particularly the concern of all of us who perceive our work in terms of child development: the nurturing aspect. Hebb1 has pointed out that we need to distinguish between two meanings of the word ‘intelligence’, both important. The first, which he calls intelligence A, is an innate potential; the second, intelligence B, refers to comprehension and performance. In biological terms, intelligence A is the genotype, intelligence B the phenotype. How far, and in what directions, each child develops his or her innate intelligence will be greatly influenced by the quality of the nurture which their intelligence receives, in home and school. This is a frontier of knowledge in which there are still too few explorers.

If it is agreed that mathematics is a particularly clear and concentrated example of intelligent learning, then it may also be seen as a good area in which to begin one’s own exploration of this frontier. Chapter 6 was called ‘Making a start’, with the implied suggestion of continuing this throughout one’s professional career.
An important part of this process can be done simultaneously with one’s own teaching, if we give children activities which embody mathematical concepts in physical materials, and also involve communication and discussion. Just as these provide the children with opportunities for learning by all three modes, so do the children themselves, while doing these activities, provide us as teachers with embodiments of intelligent learning which we can observe in action, discuss with colleagues, and which provide a starting point for our own creativity as teachers. In this way we get ‘two for the price of one’, time-wise. What is more, as our own theoretical grasp increases, we find that our classroom observations increase in depth; and our discussions also gain by being related to a shared theoretical schema. So our own learning follows a rising curve.

The need for professional companionship

While there is no substitute for doing structured mathematical activities with children as a foundation for one’s own learning, it is not cost-effective in time to have to find out everything for oneself. Modes 1 and 2 are more powerful when used in combination; and knowledge has the benevolent property that however much we give to others, we still have as much ourselves. For this reason alone, it is a great help to have at least one companion while exploring a new approach to teaching mathematics.

[Figure 10.1]

But the difficulty of ‘going it alone’ arises not only from the lack of opportunities for discussion, but from a more subtle process which was strikingly demonstrated in a classic experiment by Asch.2 This was disguised as an experiment in perceptual judgement. The subject sat at a table with a group of others whom he thought were also subjects, but who in fact were accomplices of the experimenter. Figure 10.1 shows two of the kind of stimuli which were used.
After looking at display A, the subjects had to choose from display B the line which was of the same length. They all sat round a table together, and in turn stated their decisions aloud, the actual subject being in the last-but-one position. On most of the trials, all gave the same answer, since the task was an easy one. But on some of the trials, the confederates had all been told beforehand to give the same, incorrect, answer. The real point of the experiment was to see whether, under this social pressure, the genuine subject would give, not the answer which was obvious to his perception, but the socially conforming incorrect answer.

Under these conditions, about 74 per cent of the subjects conformed at least once; and about 32 per cent of all the answers by the (genuine) subjects conformed. If the effect of majority opinion is as strong as this under conditions where the correct answer was clear to see, we may expect it to be even stronger in cases when the choice is much less obvious, as it is when trying to choose the best teaching approach.

This experiment has since been repeated with variations by many other investigators. One noteworthy discovery was that if just one of the accomplices departed from the majority choice, conforming answers by the genuine subject dropped to about 6 per cent.3 Taken together with the positive benefits already discussed, these findings suggest that the advantages of having at least one companion within the same profession for one’s explorations of new and better ways of teaching mathematics are considerable.

Some rewards of intelligent teaching

The pleasures of mental exploration have already been seen in relation to the survival value of knowledge. And it may be worth repeating that although this may have been the cause of the development of intelligence to such a high degree in homo sapiens, now that we have it we can use it to enrich our lives in other ways.
In the teaching profession, one of these enrichments is that of seeing the illumination which results from understanding. When watching children’s faces, it is so like a light being switched on that one almost forgets that this is a metaphor. But once having experienced it, one wants to see it happen again. Teaching in ways which bring children’s intelligence into action has the effect that they too want to continue the pleasures of expanding their knowledge and understanding; and so human interaction can happen of a kind which, coupled with the knowledge that we are also contributing to the children’s long term benefit, makes all the hard work involved more than worth the effort.

Summary

Many years ago when I was first appointed as an assistant lecturer at Manchester University, I found the following on a brass plate near the entrance to the main (and oldest) building. To my regret, it has since disappeared from there, though not from memory. I ask the reader’s indulgence for my reproducing it here, instead of a more conventional summary to this last chapter.

Those who learn from one who is still learning drink from a running stream. Those who learn from one who has ceased to learn drink from a stagnant pond.

Notes and references

Prologue: Relational Understanding and Instrumental Understanding

1 Mathematics Teaching (Bulletin of the Association of Teachers of Mathematics), 77, December 1976.
2 R.R.Skemp, Understanding Mathematics, book 1, London, University of London Press, 1964.
3 For a fuller discussion of this, see pp. 82–84 in the present book.
4 However, it was recently pointed out to me that for some pupils, this pleasure is offset by feelings of insecurity resulting from not knowing why the method gave the right answers.
5 I would not put it quite like this today. Experienced mathematicians are fluent in a repertoire of routines which they can do with a minimum of conscious attention.
Relational understanding is still there if needed.
6 See note 5.
7 This was written before the present efforts to improve methods of examination.
8 H.Bondi, ‘The dangers of rejecting mathematics’, Times Higher Education Supplement, 23 March 1976.

1 Why is mathematics still a problem subject for so many?

1 W.H.Cockcroft (Chairman of Committee of Inquiry), Mathematics Counts, London, Her Majesty’s Stationery Office, 1982, p. 6, para. 16.
2 Cockcroft, op. cit., p. 10, para. 34.
3 S.J.Eggleston, Learning Mathematics, APU Occasional Paper No. 1, London, DES, 1983.
4 D.Foxman, G.Ruddock, L.Joffe, K.Mason, P.Mitchell, and B.Sexton, A Review of Monitoring in Mathematics 1978 to 1982, London, HMSO, 1986.
5 K.M.Hart et al., Children’s Understanding of Mathematics: 11–16, London, John Murray, 1981.
6 H.Whitney, ‘Taking responsibility in school mathematics education’, in L.Streefland (ed.), Proceedings of the Ninth International Conference for the Psychology of Mathematics Education, vol. 2, State University of Utrecht, 1985.
7 It was Euclid who said to Ptolemy the first, when he asked for an easy way to learn geometry: ‘There is no royal road to geometry.’
8 R.R.Skemp, Intelligence, Learning, and Action, Chichester and New York, Wiley, 1979.
9 ‘Theory is in the end…the most practical of all things.’ J.Dewey, Sources of a Science of Education, New York, Liveright, 1929.
10 S.H.Erlwanger, ‘Case Studies of children’s conceptions of mathematics—Part 1’, Journal of Children’s Mathematical Behavior, I, 3, Champaign, Ill., Study Group for Mathematical Behavior, 1975, p. 93.

2 Intelligence and understanding

1 R.R.Skemp, Intelligence, Learning, and Action, Chichester and New York, Wiley, 1979, pp. 41–50.
2 R.R.Skemp, The Psychology of Learning Mathematics, Harmondsworth, Penguin, 1971, p. 14; 2nd ed., 1986, p. 14.
3 Ibid., p. 46; 2nd ed., p. 43.

3 The formation of mathematical concepts

1 H.Poincaré (trans.
G.B.Halstead), ‘Mathematical creation’, in B.Ghiselin (ed.), The Creative Process, New York, Mentor, 1955, p. 33.
2 J.Wrigley, ‘The factorial nature of ability in elementary mathematics’, paper read to N. Ireland branch of the British Psychological Society, 17 November 1956, abstract in Bulletin of the British Psychological Society, May 1957.
3 N.Wiener, Cybernetics, New York, Wiley, 1948.
4 K.W.Gruenberg and A.J.Weir, Linear Geometry (2nd ed.), New York, Springer-Verlag, 1977, p. 10.

4 The construction of mathematical knowledge

1 Buddleia, sometimes known as the butterfly tree, is recommended for those who like to see butterflies in their gardens.
2 For a good discussion of this and other aspects of children’s learning, see H.Ginsberg, Children’s Arithmetic: The Learning Process, New York, Van Nostrand, 1977.
3 For further discussion of the importance of talk and discussion, and other matters relating to classroom organization for learning in the ways described in the present book, I highly recommend Tom Brissenden’s new book Talking about Mathematics: Mathematical Discussion in Primary Schools, Oxford, Blackwell, 1988. This was published after the present book had gone to press: otherwise, I would have referred to it at much greater length.
4 For a detailed account of this experiment, see R.R.Skemp, ‘The need for a schematic learning theory’, British Journal of Educational Psychology, xxxii, 1962. A shorter but more accessible account may be found in R.R.Skemp, The Psychology of Learning Mathematics, Harmondsworth, Penguin, 1971, pp. 40–2; 2nd ed., 1986, pp. 38–40.
5 E.T.Bell, Men of Mathematics, Harmondsworth, Penguin, chapter 19.
6 A full explanation of the ideas of a fraction and of fractional numbers would take many pages. Two such explanations can be found elsewhere; at adult level, in Skemp, op. cit. (note 3), 1st ed., pp. 186–96 and 2nd ed., pp. 174–84; and at child level, in R.R.Skemp, Structured Activities for Primary Mathematics, Vol.
2, London, Routledge, 1988, the whole of network Num. 7.
7 Here is an example for those who are already familiar with matrices as representing a geometric transformation (such as a translation). Here we might start with a line as operand, and apply a reflection to this line, and then a second reflection to the result. The matrix which represents the transformation (in this case a rotation) equivalent to the combination of these two translations is what is meant by the product of the two other matrices; and the (rather strange) way in which this product is obtained from the original two matrices is what is meant by multiplication of matrices. Even in this advanced example, the meaning of ‘multiplication’ is not a reconstruction but only an expansion of the original schema for multiplication, as here presented.
8 M.Luft, unpublished third year honours psychology project, Manchester University, c. 1970.
9 My colleague in these visits was Janet Ainley, now Lecturer in Primary Mathematics, University of Warwick.
10 R.R.Skemp, Intelligence, Learning, and Action, Chichester and New York, Wiley, 1979, Chapter 1.

5 Understanding mathematical symbolism

1 R.R.Skemp, The Psychology of Learning Mathematics, 2nd ed., Harmondsworth, Penguin, 1986, Chapter 4.
2 A recent and useful contribution to this field may be found in D.Pimm, Speaking Mathematically, London, Routledge & Kegan Paul, 1987.
3 Condensed from R.R.Skemp, Intelligence, Learning, and Action, Chichester, Wiley, 1979, pp. 131–41, where a more detailed discussion of this model can be found.
4 D.O.Tall, ‘Conflicts and catastrophes in the learning of mathematics’, Mathematical Education for Teaching, 2, 4, 81, 1977, pp. 2–18.
5 For further reading on this subject, I recommend the classic work of L.S.Vygotsky, Thought and Language, Cambridge Mass., MIT Press, 1962.
6 Twenty-five is a quarter of a hundred.

6 Making a start

1 J.N.Ainley, personal communication, 1988.
2 H.Gardner, personal communication, 1988.
3 R.R.Skemp, Structured Activities for Mathematics, London, Routledge, 1989.

7 The contents and structure of primary mathematics

1 R.Rees, personal communication, 1985.
2 W.H.Cockcroft (Chairman of Committee of Inquiry) et al., Mathematics Counts, London, HMSO, 1982, p. 89.
3 See chapter 5, pp. 92–93. For further discussion of this important topic see also R.R.Skemp, The Psychology of Learning Mathematics, 2nd ed., Harmondsworth, Penguin, 1986, the whole of Chapter 3; and Skemp, Intelligence, Learning, and Action, Chichester, Wiley, 1979, Chapter 11, sections 11.6–11.12.
4 S.Papert, Mindstorms, Brighton, Harvester, 1980. A good introduction to the use of LOGO in the classroom is provided by J.N.Ainley and R.N.Goldstein, Making Logo Work: A Guide for Teachers, Oxford, Blackwell, 1988.
5 Skemp, 1986, op. cit., pp. 154–61.

8 Management for intelligent learning

1 On the early mechanical typewriters, keys of letters which often came next to each other in words were spaced apart, to reduce the likelihood of jamming when typing fast.
2 John Holt, How Children Fail, London, Pitman, 1965. (Also available as a Penguin.)
3 L.G.Buxton, ‘Cognitive-affective interaction in foundations of human learning’, unpublished doctoral thesis, Warwick University, 1985. See also L.G.Buxton, Do You Panic about Maths? London, Heinemann Educational, 1981.
4 There are those who think that mathematics has an existence of its own, independently of human minds.
5 E.E.Biggs, ‘Investigational methods’, in L.R.Chapman (ed.), The Process of Learning Mathematics, Oxford, Pergamon, 1972, pp. 232–3.
6 I have borrowed this apt description from Judith Bamford (personal communication, 1988).

9 Emotional influences on learning

1 For a more detailed treatment, see R.R.Skemp, Intelligence, Learning, and Action, Chichester, Wiley, 1979, Chapter 2.
This and the other extracts quoted inset in this chapter have been contributed by students taking my course Foundations of Human Learning at Warwick University. They have agreed to my quoting them, anonymously; and I am grateful to them for this valuable collection of examples.
3 Mathematics was more frequently linked with negative emotions in these students’ recollections than in all other school subjects put together.
4 R.R.Skemp, ‘Difficulties of learning mathematics by children of good general intelligence’, unpublished doctoral thesis, Manchester University, 1958.
5 L.G.Buxton, ‘Cognitive-affective interaction in foundations of human learning’, unpublished doctoral thesis, Warwick University, 1985. And see also L.G.Buxton, Do you Panic about Maths?, London, Heinemann Educational, 1981.
6 W.Hockey, Stress and Fatigue in Human Performance, Chichester, Wiley, 1983.
7 H.Papousec, ‘Individual variability in learned responses in human infants’, in R.J.Robinson (ed.), Brain and Early Behaviour, London, Academic Press, 1969.

10 Continuing professional development

1 D.O.Hebb, The Organization of Behaviour, New York, Wiley, 1949.
2 S.E.Asch, ‘Effects of group pressure upon modification and distortion of judgement’, in E.E.Maccoby, T.M.Newcomb, and E.L.Hartley (eds), Readings in Social Psychology, New York, Holt, Rinehart & Winston, 1958.
3 V.L.Allen and J.M.Levin, ‘Social support and conformity: the role of independent assessment of reality’, Journal of Experimental Social Psychology, 7, 1971, pp. 48–58.
Index

abstract 56–7
abstraction 52–3, 70; and concept formation 52–62; successive 56–60
action, goal-directed 36; plans of, see plans of action; power of 413
activities, co-operative 165; for classroom use 115–56
adding: as mathematical operation 120; past tens boundary 135
algebra 170
Alias prime 152–4, 159
Amanda 79
analysis, conceptual, see conceptual analysis
anti-goals 186–7, 192
anxiety 195–6, 205; and vicious circle 200, 205
approval 187
Asch, S.E. 209
assessment 83
Assessment of Performance Unit 22
assimilation 80–1, 87–8
attribute 56; blocks 70
authority, co-operation with 179; and freedom 178–9; of knowledge 178–9, 188; of position 178–9, 183–4, 187–8; of subject 183–4; of teacher, dual nature 178–9, 187–8
behaviour, orderly 185, 188
behaviourism 24, 33, 180, 186; see also theory, behaviourist
behaviours, instinctual 39
Bell, E.T. 83
Betjeman, John 23
Biggs, E.E. 185
bicycles 45–6
binary notation 67
Bondi, H. 13
borrowing 106
butterflies 60–1
Buxton, L.G. 183–4, 200
calculation, and mathematical principles 176
calculator 168–9, 176–7
calculi, used for calculation 165
Capture 114–15, 123–4
capture effect 102
Cards on the table 160–3, 177
change, giving 123
child development 207
circle, circumference of 3
classroom, orderly 179
co-operation 81; effects on learning 180
cognitive: maps 37–9, 42, 44, 203; strain 164–5; element 33, 38
colour 56
common sense, events contrary to 46
communication 74; path taken by 101
commutative property, see multiplication
companionship, professional need for 208–10
comparison 67, 123, 125–6
competence, emotions relating to 193–6; region of 196–9, 204
composite number, see number, composite
comprehension 35
computers 168–9; and microworlds 176
concept: map 67–9, 70–1, 82, 88, 115, 174; formation 52–62, 174; formulation 166
concepts: communicating 70, 90; everyday and mathematical 60–2; lower- and higher-order 60; mathematical 49–71; particular instances 174; primary and secondary 56–60, 64, 70; ways of communicating 62–7, 70
conceptual: analysis 67–70, 82–3; structure 16, 102; structures, weak 105
confidence 43, 86, 167, 174, 194, 196, 204–5; as learner 201–3; as performer and as learner 201, 205; lack of 21–2, 201–5; loss of 35
conformity 209–10
connections 42
consistency 74, 184
constraints 158
constructivism 203
content: interdependent with process 176; and LOGO 173
contents, structured 167
counting 114; on 125
creativity 74, 77–80, 169
CSMS group 22
curriculum, criteria for 173–4, 176
cybernetics 36
decomposition 106
defensive reactions 83–4
definition 53
Department of Education and Science 22
definitions 63
delta-one and delta-two 40–3, 72–3, 106
delta-two 92
devil’s advocate 8–9
Dewey, J. 27
diagnosis 83
digit 127
director system 35
disapproval 187
discussion 74, 76, 111, 172
division, long 170
do-and-say 103
domain 196–9, 204–5; established 197–8; as learner 201–3; as performer and as learner 201, 205
doubling 142
Doubles and halves rummy 142–4
drudgery 169
efforts, to improve mathematics education 22–3, 30
Eggleston, S.J. 22
eleven-plus 192
emotional support 199–200, 205
emotions: attention-getting 190; and survival 189–206; mixed 198, 200–5; see also anxiety, competence, confidence, fear, pleasure, relief, unpleasure, frustration, security
entropy 51
equivalence 11
errors: mathematical 22; as part of learning 200
established territory 200
Euclid 203
evolution 190–8
examinations, backwash effect 12
examples, in concept formation 62
experience 74; regularities in 70
experiment 74; to find which are problem subjects 30
explanations 63; instrumental 3
exploration, pleasure of 203
exploratory urge 198, 205
explorers 207
exposition 44
extrapolation 79–80
faux ami 1–2, 7
fear 192–3
feelings 41–3; see also emotions
fluency 79, 139, 167, 169, 181
football, two kinds 4
fractions 83; multiplication of 3
frontier zone 197–8, 200–1, 204–5; consolidation 200; as performer and as learner 201, 205
frustration 194, 196, 204–5
functions 84
future, preparation for 174, 176
gear, multi-speed 46
goal 33, 186–7
grouping 132
guides 203–7
haemophilia 47
halving 142
Handkerchief game 125–6
Hebb, D.O. 207
hierarchy, of concepts 60
Holt, John 182
homo sapiens 29, 86, 190, 204
How are these related? 154–6
insecurity 43
intelligence: A and B 208; and adaptability 41, 190–1; functioning of 25–6; genotype and phenotype 208; vs habit learning 47–8; and learning 25; a new model of 26; psychometric models of 24; and understanding 32–48; see also reflective intelligence
intelligent thinking, appreciation of 183
internal consistency 74
interviews, refused 21
intuition 106
investigations, mathematical 174–7
invisible exports 60–1
journey, mental 23–4; round world 23
keyboard, QWERTY 181
knowledge 157–79; gives adaptability 33; and application 175; as starting point for project 175; informal 75; mathematical, construction of 72–89; power of 75; relational, as goal in itself 11; shared, authority of 184; structured 79, 157; structure(s) 37–9, 86, 155, 168; survival value 204
language, spoken 103
learner-teacher relationship 44–5
learning: conceptual, necessity of 51–2; and confidence 201–3; co-operative 76–7, 181; emotional influences 189–206; experiment in long-term 81–2; as frontier activity 196–9; habit 1, 43–5; habit vs intelligent 32–6, 180–1; intelligent 1, 25–6, 39–41, 188, 207–8; intelligent, economy of 40, 47; intelligent, management for 178–88; long-term 80–2, 174; management of 185–6; management of risks 199–200, 205; meta- 80; ‘on the job’ 110, 207–8; rote 88; rote vs understanding 83; and self correction 75–6; schematic 72; at surface level 105
lines, parallel 65
LOGO 172–3
listening 111
lost, literally or mentally 42
Make a set: make others which match 144–8
manipulatives, and deep structures 112
map 44; see also cognitive map, concept map
mappings 6, 11
mathematics: abstract and hierarchic 69; abstract nature of 50–1; in adult life 202, 205; aesthetic quality 29, 77–9; as amplifier of human intelligence 25–9, 31; and commerce 26; communicating 101–2; for co-operation 29; different meanings of word 7; a false friend 17; high information content 12–13; and human intelligence 30; and humiliation 200; instrumental 8–9; internal consistency of 103, 105; as a mental tool 25–9; ‘modern’ 6; in a new perspective 26; pleasure in learning 204; power of 90, 104; a problem subject 21–31; relational 8–11; in school and adult world 27–8, 30; and science 26; social value 187; special demands 49–50; and technology 26; two meanings of word 16–17; see also primary mathematics
Mathland 207
meaning, of word 53
memorizing, rote 32
memory 24
meta-learning 80
microworld 172
mis-match 4–6
Missing stairs 116–18
mistakes, correction of 76
modes 1, 2, 3, see under schema construction
models: behaviourist 24; common-sense 46; mathematical 175; mental 37–9; psychometric 25
‘motivation’ 186–8
multiplication 144–5; as combination of two operations 84; commutative property 169; learning tables 139–60; long 169–70; as mathematical operation 148–9; tables, see product tables
music 7
‘My share is…’ 132–4
natural selection 86
notation 67; headed column 128; place-value 165, 203; see also numeral
notations, informal 103
‘number bonds’ 122
number line 139–40
Number targets 127–81
number track 123–4, 139–40
numbers, composite 153; fractional 84; as mental objects 142; negative 83; prime 152–4; rectangular 150–2
numeral 127
numerals, Roman and Hindu-Arabic 165
nurture 207
obedience 185, 188; to authority 178; effects on learning 180–1
observation, of children learning 111–15; learning from 112
observations, revealing children’s thinking 114–15
operation 84
ostrich 64
plans, routine, and skills 159
play, exploratory 204
pleasure 191–3
pond, stagnant 211
power, over environment 187; relationship 179; see also knowledge, power of
practical activities 75; importance of 103
prediction 75, 119, 151–5, 172
primary mathematics: its conceptual complexity 109; contents and structure 157–76
prime, see numbers, prime
problem-solving 167; by infants 203
problem subjects, see experiment
problems, professional and theoretical 24
process, and content 166–7; and LOGO 173
product tables 160
professional development 207–11
projects 174–6; and irrelevant material 175; mathematical content of 175
Ptolemy 203
punishment 180, 185, 188
Pythagoras 83
QWERTY, see keyboard
Papert, S. 172, 207
Papousec, H. 203
pattern 32–3
perspective, need for wider 30
piano, experiment with 99–100
place-value notation 61, 96–7, 128, 131
plans 157–9; fixed 15; of action 34, 36–8, 47, 81, 142, 175, 202
rat 33
ratio see velocity ratio
reactions, defensive 83–4
reading: in school and outside world 27, 30; uses of 27
real-life situations 170
recording 103; and communication 166; saves cognitive strain 164
rectangle, area of 2
Rectangular numbers game 150–2, 159, 175
red 53
Rees, R. 157
reflection 111
reflective activity 92–3
reflective intelligence 106, 154, 159, 165
reinforcement 32
relationships 71; and structure 167
relief 192–3
remainders 132
resonance model 99–102
results, ready-for-use 160
rewards 11, 180, 185, 188
role: dual, of teacher 182–3; conflict 182–3; confusion 182–3; relationships 188
rote memorizing 32, 35
routines 91–2, 109, 169, 174, 181
royal road 23
rule learning 168
rules, without reasons 2, 35, 158
running stream 211
schema 41, 48, 60–1, 70; abstract 72; as attractor 100; building 203; see also schema construction
schema construction 72–89, 86–7, 110, 171; mode 1 175–6; tabulated 74; three modes 72–80, 87; and LOGO 171–3
schemas 37–9, 42, 86, 167; accommodating, see restructuring; difficulty of restructuring 13; and enjoyment of learning 85–7; foundation 82–4, 105, 164; and long-term learning 80–2, 87–8; organic quality 11; properties of 87; reconstruction of 6, 83–4, 88; shared 81; social functions of 81; state of development 83
schematic learning, organic quality 82
scientist 204
security 195–6, 204–5
self-esteem, loss of 35
semantics 168
sequence 32, 71; of counting numbers 116
set 84, 144–8
sets 6, 11; as countable objects 88
Sets under our hands 148–50
shares, equal 132
sharing 132
simplicity, two kinds 14
skills 157–9, 168, 202; activities for developing 159–63; as starting point for project 175
Skinner 33
Slippery slope 135–9, 158
sophistication, different levels 158
spelling 34–5
Stepping stones 120–2
stimulus 33, 186
stream, running 211
stress 205
structure 115, 163, 165; conceptual, see conceptual structure; importance of 73, 87; conceptual should dominate symbolic 164; deep and surface 94–7, 104; see also knowledge structures
subtraction 65–7, 106, 123
surveys, factual 22
survival 187; and emotions 189–206
syllabi, over-burdened 12–13
syllabus, changes 23; traditional 6
symbol system 93–4, 104
symbol, different ways of understanding same 97–8
symbolism, mathematical 90–106; power of 90–3
symbols: as interface 90; empty 105; inconsistency of meaning 96; problems with 105; uses of 90–1
syntax 168
system, director see director system
system, symbol see symbol system
Taking 139–42
taking away 65–7, 123
Tall 100
teacher-dependence 43–5, 86
teacher: dual authority of 200; dual role of 182
teachers, as guides and as explorers 207–8
teaching: as intervention in learning process 180; as learning experience for ourselves 109–15, 156; intelligent 35; intelligent, rewards of 210; for schematic learning 86–7; for understanding 82–4
technology 187; yesterday’s and future 157
textbook writers 69
thinking, time for 200
theories 48; what are they for? 45–7; why do we need? 45–7
theory: of flywheels 46; of intelligence 24, 48; of learning 24; power of 14; practical value of 27; related to classroom 111, 156; of spinning tops 46
thinking: children’s, externalised 156; voluntary control of 104, 165
Tolman 39
tools, physical and mental 26
triangles 65–6; angles of 10
understanding 24, 32–48, 41–3, 168, 182, 200, 202; and adaptability 81; and confidence 43–5; instrumental, reasons for teaching 12–13; relational and instrumental 1–17, 157; satisfaction of 44; symbolic 98–9, 104
unpleasure 192–3
variables 6
vector space 51
vicious circle, see anxiety
Whitney, H. 22
work: oral 163–4, 176; practical 163–4, 176; written 163–6
INTERNET-DRAFT                                             Mingui Zhang
Intended Status: Proposed Standard                      Donald Eastlake
                                                          Radia Perlman
                                                      Margaret Wasserman
                                                       Painless Security
                                                            Hongjun Zhai
Expires: January 7, 2016                                    July 6, 2015

        Single Area Border RBridge Nickname for TRILL Multilevel

Abstract

   A major issue in multilevel TRILL is how to manage RBridge
   nicknames. In this document, the area border RBridge uses a single
   nickname in both Level 1 and Level 2. RBridges in Level 2 must
   obtain unique nicknames but RBridges in different Level 1 areas may
   have the same nicknames.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at

   The list of Internet-Draft Shadow Directories can be accessed at

Copyright and License Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors. All rights reserved.

Mingui Zhang, et al       Expires January 7, 2016              [Page 1]

INTERNET-DRAFT       Single Nickname Multiple Levels       July 6, 2015
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 2
   2. Acronyms and Terminology . . . . . . . . . . . . . . . . . . . 3
      2.1. Acronyms . . . . . . . . . . . . . . . . . . . . . . . . 3
      2.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 3
   3. Nickname Handling on Border RBridges . . . . . . . . . . . . . 3
      3.1. Actions on Unicast Packets . . . . . . . . . . . . . . . 4
      3.2. Actions on Multi-Destination Packets . . . . . . . . . . 5
   4. Per-flow Load Balancing . . . . . . . . . . . . . . . . . . . 6
      4.1. Ingress Nickname Replacement . . . . . . . . . . . . . . 6
      4.2. Egress Nickname Replacement . . . . . . . . . . . . . . . 7
   5. Protocol Extensions for Discovery . . . . . . . . . . . . . . 7
      5.1. Discovery of Border RBridges in L1 . . . . . . . . . . . 7
      5.2. Discovery of Border RBridge Sets in L2 . . . . . . . . . 8
   6. One Border RBridge Connects Multiple Areas . . . . . . . . . . 8
   7. E-L1FS/E-L2FS Backwards Compatibility . . . . . . . . . . . . 9
   8. Security Considerations . . . . . . . . . . . . . . . . . . . 9
   9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 9
      9.1. TRILL APPsub-TLVs . . . . . . . . . . . . . . . . . . . . 9
   10. References . . . . . . . . . . . . . . . . . . . . . . . . . 10
      10.1. Normative References . . . . . . . . . . . . . . . . . . 10
      10.2. Informative References . . . . . . . . . . . . . . . . . 10
   Appendix A. Clarifications . . . . . . . . . . . . . . . . . . . 10
      A.1. Level Transition . . . . . . . . . . . . . . . . . . . . 11
   Author's Addresses . . . . . . . . . . . . . . . . . . . . . . . 12

1. Introduction

   TRILL multilevel techniques are designed to address TRILL
   scalability issues.
   As described in [MultiL], there have been two proposed approaches.
   One approach, referred to as the "unique nickname" approach, gives
   unique nicknames to all the TRILL switches in the multilevel campus,
   either by having the Level-1/Level-2 border TRILL switches advertise
   which nicknames are not available for assignment in the area, or by
   partitioning the 16-bit nickname into an "area" field and a
   "nickname inside the area" field. The other approach, referred to as
   the "aggregated nickname" approach, involves assigning nicknames to
   the areas, and allowing nicknames to be reused in different areas,
   by having the border TRILL switches rewrite the nickname fields when
   entering or leaving an area.

   The approach specified in this document is different from both the
   "unique nickname" and the "aggregated nickname" approaches. In this
   document, the nickname of an area border RBridge is used in both
   Level 1 (L1) and Level 2 (L2). No additional nicknames are assigned
   to the L1 areas. Each L1 area is denoted by the group of all
   nicknames of the border RBridges of that area. For this approach,
   nicknames in L2 MUST be unique, but nicknames inside different L1
   areas MAY be reused. The use of the approach specified in this
   document in one L1 area does not prohibit the use of other
   approaches in other L1 areas in the same TRILL campus.

2. Acronyms and Terminology

2.1. Acronyms

   Data Label: VLAN or FGL

   IS-IS: Intermediate System to Intermediate System [ISIS]

2.2. Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

   Familiarity with [RFC6325] is assumed in this document.

3. Nickname Handling on Border RBridges

   This section uses an illustrative example to describe how border
   RBridges learn and handle nicknames.

        Area {2,20}            level 2            Area {3,30}
   +-------------------+  +-----------------+  +--------------+
   |                   |  |                 |  |              |
   | S--RB27---Rx--Rz----RB2---Rb---Rc--Rd---Re--RB3---Rk--RB44---D
   |       27          |  |                 |  |     44       |
   |    ----RB20---    |  |                 |  |  ----RB30--- |
   +-------------------+  +-----------------+  +--------------+

        Figure 3.1: An example topology for TRILL multilevel

   In Figure 3.1, RB2, RB20, RB3 and RB30 are area border TRILL
   switches (RBridges). Their nicknames are 2, 20, 3 and 30
   respectively. Area border RBridges use the set of border nicknames
   to denote the L1 area that they are attached to. For example, RB2
   and RB20 use nicknames {2,20} to denote the L1 area on the left.

   A source S is attached to RB27 and a destination D is attached to
   RB44. RB27 has a nickname, say 27, and RB44 has a nickname, say 44
   (and in fact, they could even have the same nickname, since the
   TRILL switch nickname will not be visible outside these Level 1
   areas).

3.1. Actions on Unicast Packets

   Let's say that S transmits a frame to destination D and let's say
   that D's location is learned by the relevant TRILL switches already.
   These relevant switches have learned the following:

   1) RB27 has learned that D is connected to nickname 3.
   2) RB3 has learned that D is attached to nickname 44.

   The following sequence of events will occur:

   - S transmits an Ethernet frame with source MAC = S and destination
     MAC = D.

   - RB27 encapsulates with a TRILL header with ingress RBridge = 27
     and egress RBridge = 3, producing a TRILL Data packet.

   - RB2 and RB20 have announced in the Level 1 IS-IS instance in area
     {2,20} that they are attached to all those area nicknames,
     including {3,30}. Therefore, IS-IS routes the packet to RB2 (or
     RB20, if RB20 is on the least-cost route from RB27 to RB3).
   - RB2, when transitioning the packet from Level 1 to Level 2,
     replaces the ingress TRILL switch nickname with its own nickname,
     so replaces 27 with 2. Within Level 2, the ingress RBridge field
     in the TRILL header will therefore be 2, and the egress RBridge
     field will be 3. (The egress nickname MAY be replaced with an area
     nickname selected from {3,30}. See Section 4 for the details of
     the selection method. Here, suppose nickname 3 is used.) Also, RB2
     learns that S is attached to nickname 27 in area {2,20} to
     accommodate return traffic. RB2 SHOULD use the ESADI protocol
     [RFC7357] to synchronize with RB20 the fact that MAC = S is
     attached to nickname 27.

   - The packet is forwarded through Level 2 to RB3, which has
     advertised, in Level 2, its L2 nickname as 3.

   - RB3, when forwarding into area {3,30}, replaces the egress
     nickname in the TRILL header with RB44's nickname (44). (The
     ingress nickname MAY be replaced with an area nickname selected
     from {2,20}. See Section 4 for the details of the selection
     method. Here, suppose nickname 2 is selected.) So, within the
     destination area, the ingress nickname will be 2 and the egress
     nickname will be 44.

   - RB44, when decapsulating, learns that S is attached to nickname 2,
     which is one of the area nicknames of the ingress.

3.2. Actions on Multi-Destination Packets

   Distribution trees for flooding of multi-destination packets are
   calculated separately within each L1 area and L2. When a
   multi-destination packet arrives at the border, it needs to be
   transitioned either from L1 to L2, or from L2 to L1. All border
   RBridges are eligible for Level transition. However, for each
   multi-destination packet, only one of them acts as the Designated
   Border RBridge (DBRB) to do the transition, while the other
   non-DBRBs MUST drop the received copies.
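The DBRB choice above has to be deterministic so that all border RBridges of an area independently agree on which one transitions a given packet. As an illustration only — the draft requires agreement on *some* pseudorandom algorithm but does not mandate this particular construction — one could rank the area's border nicknames by a hash of per-packet inputs:

```python
import hashlib

def choose_dbrb(border_nicknames, tree_root, data_label):
    """Deterministically pick the Designated Border RBridge (DBRB).

    Every border RBridge of the area runs the same computation on the
    same inputs, so they all agree on the DBRB without any election
    protocol.  The SHA-256 construction here is an illustrative
    stand-in, not the normative algorithm.
    """
    ranked = sorted(
        border_nicknames,
        key=lambda n: hashlib.sha256(
            b"%d|%d|%d" % (n, tree_root, data_label)).digest(),
    )
    # The nickname with the smallest hash value "wins" the DBRB role.
    return ranked[0]
```

Because the inputs vary per distribution tree and Data Label, the DBRB role is spread across the border RBridges rather than pinned to one of them.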
   All border RBridges of an area SHOULD agree on a pseudorandom
   algorithm and locally determine the DBRB, as they do in the
   "Per-flow Load Balancing" section. It's also possible to implement
   an election protocol to elect the DBRB. However, such
   implementations are outside the scope of this document.

   As per [RFC6325], multi-destination packets can be classified into
   three types: unicast packets with unknown destination MAC address
   (unknown-unicast packets), multicast packets and broadcast packets.

   Now suppose that D's location has not been learned by RB27, or the
   frame received by RB27 is recognized as broadcast or multicast. What
   will happen, as it would in TRILL today, is that RB27 will forward
   the packet as multi-destination, setting its M bit to 1, choosing an
   L1 tree, and flooding the packet on the distribution tree, subject
   to possible pruning. When the copies of the multi-destination packet
   arrive at area border RBridges, non-DBRBs MUST drop the packet,
   while the DBRB, say RB2, needs to do the Level transition for the
   multi-destination packet. For an unknown-unicast packet, if the DBRB
   has learnt the destination MAC address, it SHOULD convert the packet
   to unicast and set its M bit to 0. Otherwise, the multi-destination
   packet will continue to be flooded as a multicast packet on the
   distribution tree. The DBRB chooses the new distribution tree by
   replacing the egress nickname with the new root RBridge nickname.
   The following sequence of events will occur:

   - RB2, when transitioning the packet from Level 1 to Level 2,
     replaces the ingress TRILL switch nickname with its own nickname,
     so replaces 27 with 2. RB2 also needs to replace the egress
     RBridge nickname with the L2 tree root RBridge nickname, say 2.
     In order to accommodate return traffic, RB2 records that S is
     attached to nickname 27 and SHOULD use the ESADI protocol to
     synchronize this attachment information with the other border
     RBridges (say RB20) in the area.

   - RB20 will receive the packet flooded on the L2 tree by RB2. It is
     important that RB20 does not transition this packet back to L1 as
     it does for a multicast packet normally received from another
     remote L1 area. RB20 should examine the ingress nickname of this
     packet. If this nickname is found to be a border RBridge nickname
     of the area {2,20}, RB20 must not forward the packet into this
     area.

   - The packet is flooded on the Level 2 tree to reach both RB3 and
     RB30. Suppose RB3 is the selected DBRB. The non-DBRB RB30 will
     drop the packet.

   - RB3, when forwarding into area {3,30}, replaces the egress
     nickname in the TRILL header with the root RBridge nickname, say
     3, of the distribution tree of L1 area {3,30}. (Here, the ingress
     nickname MAY be replaced with an area nickname selected from
     {2,20} as specified in Section 4.) Now suppose that RB27 has
     learned the location of D (attached to nickname 3), but RB3 does
     not know where D is. In that case, RB3 must turn the packet into a
     multi-destination packet and flood it on the distribution tree of
     L1 area {3,30}.

   - RB30 will receive the packet flooded on the L1 tree by RB3. It is
     important that RB30 does not transition this packet back to L2.
     RB30 should also examine the ingress nickname of this packet. If
     this nickname is found to be an L2 border RBridge nickname, RB30
     must not transition the packet back to L2.

   - The multicast listener RB44, when decapsulating the received
     packet, learns that S is attached to nickname 2, which is one of
     the area nicknames of the ingress.

4. Per-flow Load Balancing

   Area border RBridges perform ingress/egress nickname replacement
   when they transition TRILL data packets between Level 1 and Level 2.
   This nickname replacement enables the per-flow load balancing which
   is specified as follows.
4.1. Ingress Nickname Replacement

   When a TRILL data packet from other areas arrives at an area border
   RBridge, this RBridge MAY select one area nickname of the ingress to
   replace the ingress nickname of the packet. The selection is simply
   based on a pseudorandom algorithm as defined in Section 5.3 of
   [RFC7357]. With the random ingress nickname replacement, the border
   RBridge actually achieves a per-flow load balance for returning
   traffic.

   All area border RBridges in an L1 area MUST agree on the same
   pseudorandom algorithm. The source MAC address, ingress area
   nicknames, egress area nicknames and the Data Label of the received
   TRILL data packet are candidate factors of the input of this
   pseudorandom algorithm. Note that the value of the destination MAC
   address SHOULD be excluded from the input of this pseudorandom
   algorithm, otherwise the egress RBridge will see one source MAC
   address flip-flopping among multiple ingress RBridges.

4.2. Egress Nickname Replacement

   When a TRILL data packet originated from the area arrives at an area
   border RBridge, this RBridge MAY select one area nickname of the
   egress to replace the egress nickname of the packet. By default, it
   SHOULD choose the egress area border RBridge with the least-cost
   route to reach. The pseudorandom algorithm as defined in Section 5.3
   of [RFC7357] may be used as well. In that case, however, the ingress
   area border RBridge may take a non-least-cost Level 2 route to
   forward the TRILL data packet to the egress area border RBridge.

5. Protocol Extensions for Discovery

5.1. Discovery of Border RBridges in L1

   The following Level 1 Border RBridge APPsub-TLV will be included in
   an E-L1FS FS-LSP fragment zero [RFC7180bis] as an APPsub-TLV of the
   TRILL GENINFO-TLV. Through listening to this APPsub-TLV, an area
   border RBridge discovers all other area border RBridges in this
   area.
   | Type = L1-BORDER-RBRIDGE      |  (2 bytes)
   | Length                        |  (2 bytes)
   | Sender Nickname               |  (2 bytes)

   o  Type: Level 1 Border RBridge (TRILL APPsub-TLV type tbd1)

   o  Length: 2

   o  Sender Nickname: The nickname the originating IS will use as the
      L1 Border RBridge nickname. This field is useful because the
      originating IS might own multiple nicknames.

5.2. Discovery of Border RBridge Sets in L2

   The following APPsub-TLV will be included in an E-L2FS FS-LSP
   fragment zero [RFC7180bis] as an APPsub-TLV of the TRILL
   GENINFO-TLV. Through listening to this APPsub-TLV in L2, an area
   border RBridge discovers all groups of L1 border RBridges, and each
   such group identifies an area.

   | Type = L1-BORDER-RB-GROUP     |  (2 bytes)
   | Length                        |  (2 bytes)
   | L1 Border RBridge Nickname 1  |  (2 bytes)
   | ...                           |
   | L1 Border RBridge Nickname k  |  (2 bytes)

   o  Type: Level 1 Border RBridge Group (TRILL APPsub-TLV type tbd2)

   o  Length: 2*k. If the length is not a multiple of 2, the APPsub-TLV
      is corrupt and MUST be ignored.

   o  L1 Border RBridge Nickname: The nickname that an area border
      RBridge uses as the L1 Border RBridge nickname. The
      L1-BORDER-RB-GROUP TLV generated by an area border RBridge MUST
      include all L1 Border RBridge nicknames of the area. It's
      RECOMMENDED that these k nicknames are ordered in ascending order
      according to the 2-octet nickname considered as an unsigned
      integer.

   When an L1 area is partitioned [MultiL], border RBridges will
   re-discover each other in both L1 and L2 through exchanging LSPs. In
   L2, the set of border RBridge nicknames for this splitting area will
   change. Border RBridges that detect such a change MUST flush the
   reachability information associated with any RBridge nickname from
   this changing set.

6. One Border RBridge Connects Multiple Areas

   It's possible that one border RBridge (say RB1) connects multiple L1
   areas. RB1 SHOULD use a single area nickname for all these areas.
   Nicknames used within one of these areas can be reused within other
   areas. It's important that packets destined to those duplicated
   nicknames are sent to the right area. Since these areas are
   connected to form a layer 2 network, duplicated {MAC, Data Label}
   pairs across these areas ought not to occur.

   Now suppose a TRILL data packet arrives at the area border nickname
   of RB1. For a unicast packet, RB1 can look up the {MAC, Data Label}
   entry in its MAC table to identify the right destination area (i.e.,
   the outgoing interface) and the egress RBridge's nickname.

   For a multicast packet: if RB1 is not the DBRB, RB1 will not
   transition the packet; otherwise, RB1 is the DBRB, and

   - if this packet is originated from an area out of the connected
     areas, RB1 should replicate this packet and flood it on the proper
     Level 1 trees of all the areas in which it acts as the DBRB.

   - if the packet is originated from one of the connected areas, RB1
     should replicate the packet it receives from the Level 1 tree and
     flood it on the other proper Level 1 trees of all the areas in
     which it acts as the DBRB, except the originating area (i.e., the
     area connected to the incoming interface). RB1 may also receive a
     replication of the packet from the Level 2 tree. This replication
     must be dropped by RB1.

7. E-L1FS/E-L2FS Backwards Compatibility

   All Level 2 RBridges MUST support E-L2FS [RFC7356] [RFC7180bis]. The
   Extended TLVs defined in Section 5 are to be used in Extended Level
   1/2 Flooding Scope (E-L1FS/E-L2FS) PDUs. Area border RBridges MUST
   support both E-L1FS and E-L2FS. RBridges that do not support either
   E-L1FS or E-L2FS cannot serve as area border RBridges, but they can
   still appear in an L1 area acting as non-area-border RBridges.

8. Security Considerations

   For general TRILL Security Considerations, see [RFC6325].
   The newly defined TRILL APPsub-TLVs in Section 5 are transported in
   IS-IS PDUs whose authenticity can be enforced using the regular
   IS-IS security mechanisms [ISIS] [RFC5310]. This document raises no
   new security issues for IS-IS.

9. IANA Considerations

9.1. TRILL APPsub-TLVs

   IANA is requested to allocate two new types under the TRILL GENINFO
   TLV [RFC7357] for the TRILL APPsub-TLVs defined in Section 5. The
   following entries are added to the "TRILL APPsub-TLV Types under
   IS-IS TLV 251 Application Identifier 1" registry on the TRILL
   Parameters IANA web page.

      Type        Name                 Reference
      ---------   ----                 ---------
      tbd1[256]   L1-BORDER-RBRIDGE    [This document]
      tbd2[257]   L1-BORDER-RB-GROUP   [This document]

10. References

10.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC6325] Perlman, R., Eastlake 3rd, D., Dutt, D., Gai, S., and A.
             Ghanwani, "Routing Bridges (RBridges): Base Protocol
             Specification", RFC 6325, July 2011.

   [RFC7356] Ginsberg, L., Previdi, S., et al, "IS-IS Flooding Scope
             LSPs", RFC 7356, June 2014.

   [RFC7357] Zhai, H., Hu, F., Perlman, R., Eastlake 3rd, D., and O.
             Stokes, "Transparent Interconnection of Lots of Links
             (TRILL): End Station Address Distribution Information
             (ESADI) Protocol", RFC 7357, September 2014.

10.2. Informative References

   [ISIS] ISO, "Intermediate system to Intermediate system routeing
          information exchange protocol for use in conjunction with the
          Protocol for providing the Connectionless-mode Network
          Service (ISO 8473)", ISO/IEC 10589:2002.

   [RFC5310] Bhatia, M., Manral, V., Li, T., Atkinson, R., White, R.,
             and M. Fanto, "IS-IS Generic Cryptographic
             Authentication", RFC 5310, February 2009.

   [RFC7180bis] Eastlake, D., Zhang, M., et al, "TRILL: Clarifications,
             Corrections, and Updates",
             draft-eastlake-trill-rfc7180bis, work in progress.
   [MultiL] Perlman, R., Eastlake, D., et al, "Flexible Multilevel
            TRILL", draft-perlman-trill-rbridge-multilevel, work in
            progress.

Appendix A. Clarifications

A.1. Level Transition

   It's possible that an L1 RBridge is only reachable from a non-DBRB
   RBridge. If this non-DBRB RBridge refrains from Level transition,
   the question is, how can a multicast packet reach this L1 RBridge?
   The answer is, it will be reached after the DBRB performs the Level
   transition and floods the packet using an L1 distribution tree.

   Take the following figure as an example. RB77 is reachable from the
   border RBridge RB30 while RB3 is the DBRB. RB3 transitions the
   multicast packet into L1 and floods the packet on the distribution
   tree rooted at RB3. This packet will finally be flooded to RB77 via
   RB30.

   +--------------+        (root) RB3 o
   |              |                   |
   |   \ -RB3     |                   o RB30
   |              |                   |
   |   / -RB30-RB77                   |
   |              |            RB77 o
   +--------------+
   Example Topology             L1 Tree

   In the above example, the multicast packet is forwarded along a
   non-optimal path. A possible improvement is to have RB3 configured
   not to belong to this area. In this way, RB30 will surely act as the
   DBRB to do the Level transition.

Author's Addresses

   Mingui Zhang
   Huawei Technologies
   No.156 Beiqing Rd. Haidian District,
   Beijing 100095 P.R. China
   EMail: zhangmingui@huawei.com

   Donald E. Eastlake, 3rd
   Huawei Technologies
   155 Beaver Street
   Milford, MA 01757 USA
   Phone: +1-508-333-2270
   EMail: d3e3e3@gmail.com

   Radia Perlman
   2010 256th Avenue NE, #200
   Bellevue, WA 98007 USA
   EMail: radia@alum.mit.edu

   Margaret Wasserman
   Painless Security
   EMail: mrw@painless-security.com

   Hongjun Zhai
   Jinling Institute of Technology
   99 Hongjing Avenue, Jiangning District
   Nanjing, Jiangsu 211169 China
   EMail: honjun.zhai@tom.com
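To make the wire formats from Section 5 concrete, the following sketch encodes and decodes the L1-BORDER-RB-GROUP APPsub-TLV. This is an illustrative reading of the layout, not code from the draft: the type value is the placeholder tbd2[257] from Section 9.1, nicknames are 2-octet unsigned integers in ascending order, and an odd Length marks the TLV as corrupt, as Section 5.2 requires.

```python
import struct

L1_BORDER_RB_GROUP = 257  # "tbd2[257]" placeholder from Section 9.1

def encode_group(nicknames):
    """Encode an L1-BORDER-RB-GROUP APPsub-TLV.

    Nicknames go out in ascending order as unsigned 16-bit integers
    (network byte order); Length is 2*k as specified.
    """
    body = b"".join(struct.pack("!H", n) for n in sorted(nicknames))
    return struct.pack("!HH", L1_BORDER_RB_GROUP, len(body)) + body

def decode_group(data):
    """Return the nickname list, or None for a corrupt TLV."""
    tlv_type, length = struct.unpack("!HH", data[:4])
    if tlv_type != L1_BORDER_RB_GROUP or length % 2 != 0:
        return None  # odd length: corrupt, MUST be ignored
    return list(struct.unpack("!%dH" % (length // 2), data[4:4 + length]))
```

In a real implementation this TLV would of course sit inside the TRILL GENINFO-TLV of an E-L2FS FS-LSP rather than stand alone.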
{"url":"https://datatracker.ietf.org/doc/html/draft-zhang-trill-multilevel-single-nickname","timestamp":"2024-11-11T18:25:09Z","content_type":"text/html","content_length":"76450","record_id":"<urn:uuid:19513f7d-5bfd-4f9d-9096-7f05aaabfe57>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00766.warc.gz"}
Complementary And Supplementary Angles Worksheet Pdf

The geometry worksheets on this page can be used to introduce and review the concepts of complementary and supplementary angles. From the class notes: supplementary angles are two angles in which the sum of the measures is 180 degrees.

Typical exercises include:

- Find the measure of two complementary angles if the difference of the measures is 12.
- Find the value of x in each figure.
- Find the complement or supplement of the indicated angles, or find the measures of both angles in a pair using the given data.
- State what type of angles are illustrated in the diagram.
- Find the measure of angle b in a complementary or supplementary pair.

True-or-false statements the worksheets ask students to evaluate:

- Supplementary angles can be adjacent.
- Complementary angles can be adjacent.
- Vertical angles must have the same measure.
- Two acute angles can be supplementary.
- Any two right angles are supplementary.

Related worksheets on this page cover complementary, linear pair, vertical, or adjacent angle relationships; word problems; proving triangle congruence; area and perimeter; and the fact that the sum of the angles in a triangle is 180 degrees.
From the class notes (section 9.3, Complementary and Supplementary Angles): complementary angles are two angles in which the sum of the measures is 90 degrees. Write an equation and solve the following word problems, for example:

- Find the measure of two supplementary angles if the angle is twice the measure of its supplement.

Most worksheets on this page align with Common Core standard 7.G.B.5. Additional true-or-false prompts: a pair of vertical angles can be supplementary; a pair of vertical angles can be complementary. Grab these pdf worksheets on complementary and supplementary pairs, adjacent and non-adjacent angles, and multiple rays to demonstrate greater skill in finding complementary and supplementary angles, and let them help your ability to find angles in complementary and supplementary pairs flourish. Each worksheet comprises two problems, each of which consists of four questions to test your understanding. Related sets include a types-of-angles worksheet and a properties-of-parallelograms worksheet.
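The two word problems above reduce to small linear equations. A quick sketch of how they are set up and solved:

```python
def complement(angle):
    """Measure of the angle that pairs with `angle` to make 90 degrees."""
    return 90 - angle

def supplement(angle):
    """Measure of the angle that pairs with `angle` to make 180 degrees."""
    return 180 - angle

# "Find the measure of two complementary angles if the difference of
#  the measures is 12":  x + y = 90 and x - y = 12.
x = (90 + 12) / 2        # 51
y = complement(x)        # 39

# "Find the measure of two supplementary angles if the angle is twice
#  the measure of its supplement":  a = 2 * (180 - a)  =>  3a = 360.
a = 360 / 3              # 120; its supplement is 60
```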
{"url":"https://thekidsworksheet.com/complementary-and-supplementary-angles-worksheet-pdf/","timestamp":"2024-11-05T02:34:41Z","content_type":"text/html","content_length":"136361","record_id":"<urn:uuid:3a03ca56-5565-4c66-8450-d851a0a3c8cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00150.warc.gz"}
Vertical asymptotes online calculator

A vertical asymptote of the function y = f(x) is a straight line parallel to the y-axis that is closely approached by the plane curve: the distance between this straight line and the plane curve tends to zero as the curve recedes to infinity.

The vertical asymptote equation has the form

x = a,

where a is some constant (a finite number).

The vertical asymptote of the function y = f(x) exists at the point x = a if the value of one (or both) of the one-sided limits

lim f(x) as x → a−,  lim f(x) as x → a+

is equal to ∞ (or −∞). It should be noted that the limits described above are also used to test whether the point x = a is a discontinuity point of the function. Hence, vertical asymptotes should only be searched for at the discontinuity points of the function.

Use our online calculator, based on the Wolfram Alpha system, to find the vertical asymptotes of your function.
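The one-sided-limit test can be sketched numerically: probe f just to the left and right of the candidate point x = a and see whether either value blows up. This is a crude illustration only — the calculator's CAS backend computes the limits symbolically — and the step and threshold below are arbitrary choices:

```python
def has_vertical_asymptote(f, a, step=1e-8, big=1e6):
    """Numerically test whether x = a looks like a vertical asymptote.

    Evaluates f at a - step and a + step; if either one-sided value is
    huge, the corresponding one-sided limit is taken to be infinite.
    """
    for x in (a - step, a + step):
        try:
            if abs(f(x)) > big:
                return True
        except ZeroDivisionError:
            # f undefined right next to a; treat as blowing up
            return True
    return False
```

For example, x = 2 is a vertical asymptote of 1/(x − 2), but the removable discontinuity of (x² − 4)/(x − 2) at x = 2 is not one, because both one-sided limits there are finite (equal to 4).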
{"url":"https://mathforyou.net/en/online/calculus/asymptotes/vertical/","timestamp":"2024-11-14T11:16:37Z","content_type":"text/html","content_length":"19914","record_id":"<urn:uuid:3c21a11d-46d1-4843-92bf-8af930fe299e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00408.warc.gz"}
In physics, mass is a property of a physical body. It is generally a measure of an object's resistance to changing its state of motion when a force is applied.^[1] It is determined by the strength of its mutual gravitational attraction to other bodies, its resistance to acceleration or directional changes, and in the theory of relativity gives the mass–energy content of a system. The SI unit of mass is the kilogram (kg).

Mass is not the same as weight, even though we often calculate an object's mass by measuring its weight with a spring scale instead of comparing it to known masses. An object on the Moon would weigh less than it would on Earth because of the lower gravity, but it would still have the same mass. This is because weight is a force, while mass is the property that (along with gravity) causes this force.

In Newtonian physics, mass can be generalized as the amount of matter in an object. However, at very high speeds or for subatomic particles, special relativity shows that energy is an additional source of mass. Thus, any stationary body having mass has an equivalent amount of energy, and all forms of energy resist acceleration by a force and have gravitational attraction. In addition, "matter" is a loosely defined term in science, and thus cannot be precisely measured.

There are several distinct phenomena which can be used to measure mass. Although some theorists have speculated that some of these phenomena could be independent of each other,^[2] current experiments have found no difference among any of the ways used to measure mass:

• Inertial mass measures an object's resistance to being accelerated by a force (represented by the relationship F = ma).
• Active gravitational mass measures the gravitational force exerted by an object.
• Passive gravitational mass measures the gravitational force experienced by an object in a known gravitational field. • Mass–energy measures the total amount of energy contained within a body, using E = mc^2. The mass of an object determines its acceleration in the presence of an applied force. This phenomenon is called inertia. According to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field. If a first body of mass m[A] is placed at a distance r (center of mass to center of mass) from a second body of mass m[B], each body experiences an attractive force F[g] = Gm[A]m[B]/r^2, where G = 6.67×10^−11 N kg^−2 m^2 is the "universal gravitational constant". This is sometimes referred to as gravitational mass.^[note 1] Repeated experiments since the 17th century have demonstrated that inertial and gravitational mass are identical; since 1915, this observation has been entailed a priori in the equivalence principle of general relativity. Units of mass The standard International System of Units (SI) unit of mass is the kilogram (kg). The kilogram is 1000 grams (g), first defined in 1795 as one cubic decimeter of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the international prototype kilogram, and as such is independent of the meter, or the properties of water. As of January 2013, there are several proposals for redefining the kilogram yet again, including a proposal for defining it in terms of the Planck constant.^[3] Other units are accepted for use in SI: • the tonne (t) (or "metric ton") is equal to 1000 kg. • the electronvolt (eV) is a unit of energy, but because of the mass–energy equivalence it can easily be converted to a unit of mass, and is often used like one. In this context, the mass has units of eV/c^2. 
The electronvolt is common in particle physics.
• the atomic mass unit (u) is 1/12 of the mass of a carbon-12 atom, approximately 1.66×10^−27 kg.^[note 2] The atomic mass unit is convenient for expressing the masses of atoms and molecules.

Outside the SI system, other units of mass include the slug (sl), the pound (lb), the Planck mass (m_P) and the solar mass (M_☉).

Definitions of mass

In physical science, one may distinguish conceptually between at least seven different aspects of mass, or seven physical notions that involve the concept of mass.^[4] Every experiment to date has shown these seven values to be proportional, and in some cases equal, and this proportionality gives rise to the abstract concept of mass.

• The amount of matter in certain types of samples can be exactly determined through electrodeposition or other precise processes. The mass of an exact sample is determined in part by the number and type of atoms or molecules it contains, and in part by the energy involved in binding it together (which contributes a negative "missing mass", or mass deficit).
• Inertial mass is a measure of an object's resistance to changing its state of motion when a force is applied. It is determined by applying a force to an object and measuring the acceleration that results from that force. An object with small inertial mass will accelerate more than an object with large inertial mass when acted upon by the same force. One says the body of greater mass has greater inertia.
• Active gravitational mass^[note 3] is a measure of the strength of an object's gravitational flux (gravitational flux is equal to the surface integral of gravitational field over an enclosing surface). Gravitational field can be measured by allowing a small 'test object' to fall freely and measuring its free-fall acceleration. For example, an object in free-fall near the Moon will experience a weaker gravitational field, and hence accelerate more slowly, than the same object would if it were in free-fall near the Earth.
The gravitational field near the Moon is weaker because the Moon has less active gravitational mass. • Passive gravitational mass is a measure of the strength of an object's interaction with a gravitational field. Passive gravitational mass is determined by dividing an object's weight by its free-fall acceleration. Two objects within the same gravitational field will experience the same acceleration; however, the object with a smaller passive gravitational mass will experience a smaller force (less weight) than the object with a larger passive gravitational mass. • Energy also has mass according to the principle of mass–energy equivalence. This equivalence is exemplified in a large number of physical processes including pair production, nuclear fusion, and the gravitational bending of light. Pair production and nuclear fusion are processes through which measurable amounts of mass and energy are converted into each other. In the gravitational bending of light, photons of pure energy are shown to exhibit a behavior similar to passive gravitational mass. • Curvature of spacetime is a relativistic manifestation of the existence of mass. Curvature is extremely weak and difficult to measure. For this reason, curvature was not discovered until after it was predicted by Einstein's theory of general relativity. Extremely precise atomic clocks on the surface of the earth, for example, are found to measure less time (run slower) when compared to similar clocks in space. This difference in elapsed time is a form of curvature called gravitational time dilation. Other forms of curvature have been measured using the Gravity Probe B • Quantum mass manifests itself as a difference between an object's quantum frequency and its wave number. The quantum mass of an electron, the Compton wavelength, can be determined through various forms of spectroscopy and is closely related to the Rydberg constant, the Bohr radius, and the classical electron radius. 
The quantum mass of larger objects can be directly measured using a watt balance. In relativistic quantum mechanics, mass is one of the irreducible representation labels of the Poincaré group. Weight vs. mass In everyday usage, mass and "weight" are often used interchangeably. For instance, a person's weight may be stated as 75 kg. In a constant gravitational field, the weight of an object is proportional to its mass, and it is unproblematic to use the same unit for both concepts. But because of slight differences in the strength of the Earth's gravitational field at different places, the distinction becomes important for measurements with a precision better than a few percent, and for places far from the surface of the Earth, such as in space or on other planets. Conceptually, "mass" (measured in kilograms) refers to an intrinsic property of an object, whereas "weight" (measured in newtons) measures an object's resistance to deviating from its natural course of free fall, which can be influenced by the nearby gravitational field. No matter how strong the gravitational field, objects in free fall are weightless, though they still have mass.^[5] The force known as "weight" is proportional to mass and acceleration in all situations where the mass is accelerated away from free fall. For example, when a body is at rest in a gravitational field (rather than in free fall), it must be accelerated by a force from a scale or the surface of a planetary body such as the Earth or the Moon. This force keeps the object from going into free fall. Weight is the opposing force in such circumstances, and is thus determined by the acceleration of free fall. On the surface of the Earth, for example, an object with a mass of 50 kilograms weighs 491 newtons, which means that 491 newtons is being applied to keep the object from going into free fall. 
By contrast, on the surface of the Moon, the same object still has a mass of 50 kilograms but weighs only 81.5 newtons, because only 81.5 newtons is required to keep this object from going into a free fall on the moon. Restated in mathematical terms, on the surface of the Earth, the weight W of an object is related to its mass m by W = mg, where g = 9.80665 m/s^2 is the acceleration due to Earth's gravitational field, (expressed as the acceleration experienced by a free-falling object). For other situations, such as when objects are subjected to mechanical accelerations from forces other than the resistance of a planetary surface, the weight force is proportional to the mass of an object multiplied by the total acceleration away from free fall, which is called the proper acceleration. Through such mechanisms, objects in elevators, vehicles, centrifuges, and the like, may experience weight forces many times those caused by resistance to the effects of gravity on objects, resulting from planetary surfaces. In such cases, the generalized equation for weight W of an object is related to its mass m by the equation W = –ma, where a is the proper acceleration of the object caused by all influences other than gravity. (Again, if gravity is the only influence, such as occurs when an object falls freely, its weight will be zero). Macroscopically, mass is associated with matter, although matter is not, ultimately, as clearly defined a concept as mass. On the subatomic scale, not only fermions, the particles often associated with matter, but also some bosons, the particles that act as force carriers, have rest mass. Another problem for easy definition is that much of the rest mass of ordinary matter derives from the invariant mass contributed to matter by particles and kinetic energies which have no rest mass themselves (only 1% of the rest mass of matter is accounted for by the rest mass of its fermionic quarks and electrons). 
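The Earth/Moon weight figures above follow directly from W = mg. A minimal sketch in Python; the lunar surface gravity is an assumed reference value, not taken from the text:

```python
def weight(mass_kg, g):
    """Weight W = m*g in newtons, for free-fall acceleration g in m/s^2."""
    return mass_kg * g

g_earth = 9.80665  # m/s^2, standard gravity (value used in the text)
g_moon = 1.63      # m/s^2, approximate lunar surface gravity (assumed value)

m = 50.0  # kg -- the mass itself is the same everywhere
print(round(weight(m, g_earth), 1))  # about 490 N on Earth
print(round(weight(m, g_moon), 1))   # 81.5 N on the Moon
```

The small discrepancy with the 491 N quoted above comes only from rounding the value of g.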
From a fundamental physics perspective, mass is the number describing under which representation of the little group of the Poincaré group a particle transforms. In the Standard Model of particle physics, the rest mass of particles is described as arising from their coupling to a postulated additional field, known as the Higgs field. The total mass of the observable universe is estimated at between 10^52 kg and 10^53 kg, corresponding to the rest mass of between 10^79 and 10^80 protons.

Inertial vs. gravitational mass

Although inertial mass, passive gravitational mass and active gravitational mass are conceptually distinct, no experiment has ever unambiguously demonstrated any difference between them. In classical mechanics, Newton's third law implies that active and passive gravitational mass must always be identical (or at least proportional), but the classical theory offers no compelling reason why the gravitational mass has to equal the inertial mass. That it does is merely an empirical fact.

Albert Einstein developed his general theory of relativity starting from the assumption that this correspondence between inertial and (passive) gravitational mass is not accidental: that no experiment will ever detect a difference between them (the weak version of the equivalence principle). However, in the resulting theory, gravitation is not a force and thus not subject to Newton's third law, so "the equality of inertial and active gravitational mass [...] remains as puzzling as ever".^[6]

The equivalence of inertial and gravitational masses is sometimes referred to as the "Galilean equivalence principle" or the "weak equivalence principle". The most important consequence of this equivalence principle applies to freely falling objects. Suppose we have an object with inertial and gravitational masses m and M, respectively.
If the only force acting on the object comes from a gravitational field g, combining Newton's second law and the law of gravitation yields the acceleration

$a = \frac{M}{m} g.$

This says that the ratio of gravitational to inertial mass of any object is equal to some constant K if and only if all objects fall at the same rate in a given gravitational field. This phenomenon is referred to as the "universality of free-fall". (In addition, the constant K can be taken to be 1 by defining our units appropriately.)

The first experiments demonstrating the universality of free-fall were conducted by Galileo. It is commonly stated that Galileo obtained his results by dropping objects from the Leaning Tower of Pisa, but this is most likely apocryphal; actually, he performed his experiments with balls rolling down nearly frictionless inclined planes to slow the motion and increase the timing accuracy. Increasingly precise experiments have been performed, such as those of Loránd Eötvös,^[7] using the torsion balance pendulum, in 1889. As of 2008, no deviation from universality, and thus from Galilean equivalence, has ever been found, at least to the precision 10^−12. More precise experimental efforts are still being carried out.

The universality of free-fall only applies to systems in which gravity is the only acting force. All other forces, especially friction and air resistance, must be absent or at least negligible. For example, if a hammer and a feather are dropped from the same height through the air on Earth, the feather will take much longer to reach the ground; the feather is not really in free-fall because the force of air resistance upwards against the feather is comparable to the downward force of gravity.
On the other hand, if the experiment is performed in a vacuum, in which there is no air resistance, the hammer and the feather should hit the ground at exactly the same time (assuming the acceleration of both objects towards each other, and of the ground towards both objects, for its own part, is negligible). This can easily be done in a high school laboratory by dropping the objects in transparent tubes that have the air removed with a vacuum pump. It is even more dramatic when done in an environment that naturally has a vacuum, as David Scott did on the surface of the Moon during Apollo 15. A stronger version of the equivalence principle, known as the Einstein equivalence principle or the strong equivalence principle, lies at the heart of the general theory of relativity. Einstein's equivalence principle states that within sufficiently small regions of space-time, it is impossible to distinguish between a uniform acceleration and a uniform gravitational field. Thus, the theory postulates that the force acting on a massive object caused by a gravitational field is a result of the object's tendency to move in a straight line (in other words its inertia) and should therefore be a function of its inertial mass and the strength of the gravitational field. Origin of mass In theoretical physics, a mass generation mechanism is a theory which attempts to explain the origin of mass from the most fundamental laws of physics. To date, a number of different models have been proposed which advocate different views of the origin of mass. The problem is complicated by the fact that the notion of mass is strongly related to the gravitational interaction but a theory of the latter has not been yet reconciled with the currently popular model of particle physics, known as the Standard Model. Pre-Newtonian concepts Weight as an amount The concept of amount is very old and predates recorded history. 
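The universality of free-fall discussed in this section amounts to the statement that a = (M/m)g is the same for every body once gravitational mass M equals inertial mass m. A minimal numerical sketch (the masses and field strength are invented examples):

```python
g = 9.81  # m/s^2, strength of the gravitational field (assumed value)

def free_fall_acceleration(grav_mass, inertial_mass, field):
    """a = (M/m) * g; with M = m (equivalence principle) this is the
    same for every body, regardless of its mass."""
    return (grav_mass / inertial_mass) * field

# feather, apple, boulder -- wildly different masses, identical acceleration
for m in (0.005, 0.1, 1000.0):
    print(m, free_fall_acceleration(m, m, g))  # always 9.81 m/s^2
```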
Humans, at some early era, realized that the weight of a collection of similar objects was directly proportional to the number of objects in the collection: $W_n \propto n,$ where W is the weight of the collection of similar objects and n is the number of objects in the collection. Proportionality, by definition, implies that two values have a constant ratio: $\frac{W_n}{n} = \frac{W_m}{m}$, or equivalently $\frac{W_n}{W_m} = \frac{n}{m}.$ An early use of this relationship is a balance scale, which balances the force of one object's weight against the force of another object's weight. The two sides of a balance scale are close enough that the objects experience similar gravitational fields. Hence, if they have similar masses then their weights will also be similar. This allows the scale, by comparing weights, to also compare Consequently, historical weight standards were often defined in terms of amounts. The Romans, for example, used the carob seed (carat or siliqua) as a measurement standard. If an object's weight was equivalent to 1728 carob seeds, then the object was said to weigh one Roman pound. If, on the other hand, the object's weight was equivalent to 144 carob seeds then the object was said to weigh one Roman ounce (uncia). The Roman pound and ounce were both defined in terms of different sized collections of the same common mass standard, the carob seed. The ratio of a Roman ounce (144 carob seeds) to a Roman pound (1728 carob seeds) was: $\frac{\mathrm{ounce}}{\mathrm{pound}} = \frac{W_{144}}{W_{1728}} = \frac{144}{1728} = \frac{1}{12}.$ Planetary motion In 1600 AD, Johannes Kepler sought employment with Tycho Brahe, who had some of the most precise astronomical data available. Using Brahe's precise observations of the planet Mars, Kepler spent the next five years developing his own method for characterizing planetary motion. In 1609, Johannes Kepler published his three laws of planetary motion, explaining how the planets orbit the Sun. 
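One of those laws, Kepler's third, states that the square of a planet's orbital period is proportional to the cube of its orbit's semi-major axis. A quick numerical check with rough modern orbital data (the values below are approximate textbook figures supplied for illustration, not from this article):

```python
# (T in Earth years, a in astronomical units) -- approximate values
planets = {
    "Mercury": (0.241, 0.387),
    "Earth":   (1.000, 1.000),
    "Jupiter": (11.86, 5.203),
}

# T^2 / a^3 comes out (nearly) the same for every planet
for name, (T, a) in planets.items():
    print(name, round(T**2 / a**3, 2))  # ~1.0 in these units
```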
In Kepler's final planetary model, he described planetary orbits as following elliptical paths with the Sun at a focal point of the ellipse. Kepler discovered that the square of the orbital period of each planet is directly proportional to the cube of the semi-major axis of its orbit, or equivalently, that the ratio of these two values is constant for all planets in the Solar System.^[note 4] On 25 August 1609, Galileo Galilei demonstrated his first telescope to a group of Venetian merchants, and in early January of 1610, Galileo observed four dim objects near Jupiter, which he mistook for stars. However, after a few days of observation, Galileo realized that these "stars" were in fact orbiting Jupiter. These four objects (later named the Galilean moons in honor of their discoverer) were the first celestial bodies observed to orbit something other than the Earth or Sun. Galileo continued to observe these moons over the next eighteen months, and by the middle of 1611 he had obtained remarkably accurate estimates for their periods. Galilean free fall Sometime prior to 1638, Galileo turned his attention to the phenomenon of objects in free fall, attempting to characterize these motions. Galileo was not the first to investigate Earth's gravitational field, nor was he the first to accurately describe its fundamental characteristics. However, Galileo's reliance on scientific experimentation to establish physical principles would have a profound effect on future generations of scientists. It is unclear if these were just hypothetical experiments used to illustrate a concept, or if they were real experiments performed by Galileo,^ [8] but the results obtained from these experiments were both realistic and compelling. 
A biography by Galileo's pupil Vincenzo Viviani stated that Galileo had dropped balls of the same material, but different masses, from the Leaning Tower of Pisa to demonstrate that their time of descent was independent of their mass.^[note 5] In support of this conclusion, Galileo had advanced the following theoretical argument: He asked if two bodies of different masses and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing resolution to this question is that all bodies must fall at the same rate.^[9] A later experiment was described in Galileo's Two New Sciences published in 1638. One of Galileo's fictional characters, Salviati, describes an experiment using a bronze ball and a wooden ramp. The wooden ramp was "12 cubits long, half a cubit wide and three finger-breadths thick" with a straight, smooth, polished groove. The groove was lined with "parchment, also smooth and polished as possible". And into this groove was placed "a hard, smooth and very round bronze ball". The ramp was inclined at various angles to slow the acceleration enough so that the elapsed time could be measured. The ball was allowed to roll a known distance down the ramp, and the time taken for the ball to move the known distance was measured. 
The time was measured using a water clock described as "a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results."^[10] Galileo found that for an object in free fall, the distance that the object has fallen is always proportional to the square of the elapsed time: ${\text{Distance}} \propto {\text{Time}^2}$ Galileo had shown that objects in free fall under the influence of the Earth’s gravitational field have a constant acceleration, and Galileo’s contemporary, Johannes Kepler, had shown that the planets follow elliptical paths under the influence of the Sun’s gravitational mass. However, Galileo’s free fall motions and Kepler’s planetary motions remained distinct during Galileo’s lifetime. Newtonian mass Robert Hooke had published his concept of gravitational forces in 1674, stating that all celestial bodies have an attraction or gravitating power towards their own centers, and also attract all the other celestial bodies that are within the sphere of their activity. 
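Galileo's inclined-plane result above, Distance ∝ Time², means that d/t² is the same constant for every timed run down a given ramp. A small sketch, with a made-up acceleration value:

```python
a = 0.5  # m/s^2, acceleration on a gently inclined ramp (assumed value)

times = [1.0, 2.0, 3.0, 4.0]                 # s, elapsed times
distances = [0.5 * a * t**2 for t in times]  # d = (1/2) a t^2

# the ratio d / t^2 is the same constant (a/2) for every run
ratios = [d / t**2 for d, t in zip(distances, times)]
print(ratios)
```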
He further stated that gravitational attraction increases by how much nearer the body wrought upon is to its own center.^[11] In correspondence with Isaac Newton from 1679 and 1680, Hooke conjectured that gravitational forces might decrease in inverse proportion to the square of the distance between the two bodies.^[12] Hooke urged Newton, who was a pioneer in the development of calculus, to work through the mathematical details of Keplerian orbits to determine if Hooke's hypothesis was correct. Newton's own investigations verified that Hooke was correct, but due to personal differences between the two men, Newton chose not to reveal this to Hooke.

Isaac Newton kept quiet about his discoveries until 1684, at which time he told a friend, Edmond Halley, that he had solved the problem of gravitational orbits, but had misplaced the solution in his office.^[13] After being encouraged by Halley, Newton decided to develop his ideas about gravity and publish all of his findings. In November 1684, Isaac Newton sent a document to Edmond Halley, now lost but presumed to have been titled De motu corporum in gyrum (Latin for "On the motion of bodies in an orbit").^[14] Halley presented Newton's findings to the Royal Society of London, with a promise that a fuller presentation would follow. Newton later recorded his ideas in a three-book set, entitled Philosophiæ Naturalis Principia Mathematica (Latin: "Mathematical Principles of Natural Philosophy"). The first was received by the Royal Society on 28 April 1685–6; the second on 2 March 1686–7; and the third on 6 April 1686–7.
The Royal Society published Newton's entire collection at their own expense in May 1686–7.^[15]^:31

Isaac Newton had bridged the gap between Kepler's gravitational mass and Galileo's gravitational acceleration, resulting in the discovery of the following relationship, which governs both:

$g = \frac{\mu}{R^2},$

where g is the apparent acceleration of a body as it passes through a region of space where gravitational fields exist, μ is the gravitational mass (standard gravitational parameter) of the body causing gravitational fields, and R is the radial coordinate (the distance between the centers of the two bodies).

By finding the exact relationship between a body's gravitational mass and its gravitational field, Newton provided a second method for measuring gravitational mass. The mass of the Earth can be determined using Kepler's method (from the orbit of Earth's Moon), or it can be determined by measuring the gravitational acceleration on the Earth's surface, and multiplying that by the square of the Earth's radius. The mass of the Earth is approximately three millionths of the mass of the Sun. To date, no other accurate method for measuring gravitational mass has been discovered.^[16]

Newton's cannonball

Newton's cannonball was a thought experiment used to bridge the gap between Galileo's gravitational acceleration and Kepler's elliptical orbits. It appeared in Newton's 1728 book A Treatise of the System of the World. According to Galileo's concept of gravitation, a dropped stone falls with constant acceleration down towards the Earth. However, Newton explains that when a stone is thrown horizontally (meaning sideways or perpendicular to Earth's gravity) it follows a curved path. "For a stone projected is by the pressure of its own weight forced out of the rectilinear path, which by the projection alone it should have pursued, and made to describe a curve line in the air; and through that crooked way is at last brought down to the ground.
And the greater the velocity is with which it is projected, the farther it goes before it falls to the Earth."^[15]^:513 Newton further reasons that if an object were "projected in an horizontal direction from the top of a high mountain" with sufficient velocity, "it would reach at last quite beyond the circumference of the Earth, and return to the mountain from which it was projected." Universal gravitational mass In contrast to earlier theories (e.g. celestial spheres) which stated that the heavens were made of entirely different material, Newton's theory of mass was groundbreaking partly because it introduced universal gravitational mass: every object has gravitational mass, and therefore, every object generates a gravitational field. Newton further assumed that the strength of each object's gravitational field would decrease according to the square of the distance to that object. If a large collection of small objects were formed into a giant spherical body such as the Earth or Sun, Newton calculated the collection would create a gravitational field proportional to the total mass of the body,^[15]^:397 and inversely proportional to the square of the distance to the body's center.^[15]^:221^[note 6] For example, according to Newton's theory of universal gravitation, each carob seed produces a gravitational field. Therefore, if one were to gather an immense number of carob seeds and form them into an enormous sphere, then the gravitational field of the sphere would be proportional to the number of carob seeds in the sphere. Hence, it should be theoretically possible to determine the exact number of carob seeds that would be required to produce a gravitational field similar to that of the Earth or Sun. In fact, by unit conversion it is a simple matter of abstraction to realize that any traditional mass unit can theoretically be used to measure gravitational mass. 
Measuring gravitational mass in terms of traditional mass units is simple in principle, but extremely difficult in practice. According to Newton's theory, all objects produce gravitational fields and it is theoretically possible to collect an immense number of small objects and form them into an enormous gravitating sphere. However, from a practical standpoint, the gravitational fields of small objects are extremely weak and difficult to measure. Newton's books on universal gravitation were published in the 1680s, but the first successful measurement of the Earth's mass in terms of traditional mass units, the Cavendish experiment, did not occur until 1797, over a hundred years later. Cavendish found that the Earth's density was 5.448 ± 0.033 times that of water. As of 2009, the Earth's mass in kilograms is only known to around five digits of accuracy, whereas its gravitational mass is known to over nine significant figures.

Given two objects A and B, of masses M[A] and M[B], separated by a displacement R[AB], Newton's law of gravitation states that each object exerts a gravitational force on the other, of magnitude

$F = \frac{G M_A M_B}{|R_{AB}|^2},$

where G is the universal gravitational constant. The above statement may be reformulated in the following way: if g is the magnitude of the gravitational field at a given location, then the gravitational force on an object with gravitational mass M is

$F = Mg.$

This is the basis by which masses are determined by weighing. In simple spring scales, for example, the force F is proportional to the displacement of the spring beneath the weighing pan, as per Hooke's law, and the scales are calibrated to take g into account, allowing the mass M to be read off. Assuming the gravitational field is equivalent on both sides of the balance, a balance measures relative weight, giving the relative gravitational mass of each object.

Inertial mass

Inertial mass is the mass of an object measured by its resistance to acceleration.
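The two-step procedure described above can be sketched numerically: surface measurements give the gravitational mass μ = gR², and a Cavendish-style value of G converts it into kilograms via M = μ/G. The input figures are assumed modern reference values:

```python
g = 9.82       # m/s^2, measured surface gravitational acceleration (assumed)
R = 6.371e6    # m, Earth's mean radius (assumed)
G = 6.674e-11  # N m^2 kg^-2, from Cavendish-style measurements (assumed)

mu = g * R**2  # standard gravitational parameter, m^3/s^2
M = mu / G     # Earth's mass in traditional (kilogram) units

print(f"mu = {mu:.3e} m^3/s^2")
print(f"M  = {M:.2e} kg")  # on the order of 6e24 kg
```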
This definition has been championed by Ernst Mach^[17]^[18] and has since been developed into the notion of operationalism by Percy W. Bridgman.^[19]^[20] The simple classical mechanics definition of mass is slightly different than the definition in the theory of special relativity, but the essential meaning is the same. In classical mechanics, according to Newton's second law, we say that a body has a mass m if, at any instant of time, it obeys the equation of motion $\mathbf{F}=m \mathbf{a},$ where F is the resultant force acting on the body and a is the acceleration of the body's centre of mass.^[note 7] For the moment, we will put aside the question of what "force acting on the body" actually means. This equation illustrates how mass relates to the inertia of a body. Consider two objects with different masses. If we apply an identical force to each, the object with a bigger mass will experience a smaller acceleration, and the object with a smaller mass will experience a bigger acceleration. We might say that the larger mass exerts a greater "resistance" to changing its state of motion in response to the force. However, this notion of applying "identical" forces to different objects brings us back to the fact that we have not really defined what a force is. We can sidestep this difficulty with the help of Newton's third law, which states that if one object exerts a force on a second object, it will experience an equal and opposite force. To be precise, suppose we have two objects of constant inertial masses m[1] and m[2]. We isolate the two objects from all other physical influences, so that the only forces present are the force exerted on m[1] by m[2], which we denote F[12], and the force exerted on m[2] by m[1], which we denote F[21]. Newton's second law states that \begin{align} \mathbf{F_{12}} & =m_1\mathbf{a}_1,\\ \mathbf{F_{21}} & =m_2\mathbf{a}_2, \end{align} where a[1] and a[2] are the accelerations of m[1] and m[2], respectively. 
Suppose that these accelerations are non-zero, so that the forces between the two objects are non-zero. This occurs, for example, if the two objects are in the process of colliding with one another. Newton's third law then states that

$\mathbf{F}_{12} = -\mathbf{F}_{21},$

and thus

$m_1 = m_2 \frac{|\mathbf{a}_2|}{|\mathbf{a}_1|}.$

If |a[1]| is non-zero, the fraction is well-defined, which allows us to measure the inertial mass of m[1]. In this case, m[2] is our "reference" object, and we can define its mass m[2] as (say) 1 kilogram. Then we can measure the mass of any other object in the universe by colliding it with the reference object and measuring the accelerations.

Additionally, mass relates a body's momentum p to its linear velocity v:

$\mathbf{p} = m\mathbf{v},$

and the body's kinetic energy K to its velocity:

$K = \tfrac{1}{2} m |\mathbf{v}|^2.$

The primary difficulty with Mach's definition of mass is that it fails to take into account the potential energy (or binding energy) needed to bring two masses sufficiently close to one another to perform the measurement of mass.^[18] This is most vividly demonstrated by comparing the mass of the proton in the nucleus of deuterium to the mass of the proton in free space (which is greater by about 0.239%; this is due to the binding energy of deuterium). Thus, for example, if the reference weight m[2] is taken to be the mass of the neutron in free space, and the relative accelerations for the proton and neutron in deuterium are computed, then the above formula over-estimates the mass m[1] (by 0.239%) for the proton in deuterium. At best, Mach's formula can only be used to obtain ratios of masses, that is, as m[1]/m[2] = |a[2]|/|a[1]|.

An additional difficulty was pointed out by Henri Poincaré, which is that the measurement of instantaneous acceleration is impossible: unlike the measurement of time or distance, there is no way to measure acceleration with a single measurement; one must make multiple measurements (of position, time, etc.) and perform a computation to obtain the acceleration.
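Mach's operational recipe above, m₁ = m₂·|a₂|/|a₁|, can be illustrated with invented collision measurements (the numbers below are made up, chosen to be consistent with momentum conservation):

```python
m2 = 1.0  # kg, the reference mass (defined to be 1 kilogram)

# acceleration magnitudes recorded during a collision (invented data)
a1 = 2.0  # m/s^2, acceleration of the unknown body
a2 = 6.0  # m/s^2, acceleration of the reference body

m1 = m2 * a2 / a1  # inertial mass of the unknown body
print(m1)  # 3.0 kg: it resists acceleration three times as strongly
```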
Poincaré termed this to be an "insurmountable flaw" in the Mach definition of mass.^[21] Atomic mass Typically, the mass of objects is measured in relation to that of the kilogram, which is defined as the mass of the international prototype kilogram (IPK), a platinum alloy cylinder stored in an environmentally-monitored safe secured in a vault at the International Bureau of Weights and Measures in France. However, the IPK is not convenient for measuring the masses of atoms and particles of similar scale, as it contains trillions of trillions of atoms, and has most certainly lost or gained a little mass over time despite the best efforts to prevent this. It is much easier to precisely compare an atom's mass to that of another atom, thus scientists developed the atomic mass unit (or Dalton). By definition, 1 u is exactly one twelfth of the mass of a carbon-12 atom, and by extension a carbon-12 atom has a mass of exactly 12 u. This definition, however, might be changed by the proposed redefinition of SI base units, which will leave the Dalton very close to one, but no longer exactly equal to it.^[22]^[23] Mass in relativity Special relativity In special relativity, there are two kinds of mass: rest mass (invariant mass),^[note 8] and relativistic mass (which increases with velocity). Rest mass is the Newtonian mass as measured by an observer moving along with the object. Relativistic mass is the total quantity of energy in a body or system divided by c^2. The two are related by the following equation: $m_\mathrm{relative}=\gamma (m_\mathrm{rest})\!$ where $\gamma$ is the Lorentz factor: $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$ The invariant mass of systems is the same for observers in all inertial frames, while the relativistic mass depends on the observer's frame of reference. In order to formulate the equations of physics such that mass values do not change between observers, it is convenient to use rest mass. 
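The relation between the two kinds of mass above, m_relative = γ·m_rest with γ = 1/√(1 − v²/c²), is straightforward to evaluate; the speeds below are arbitrary examples:

```python
import math

c = 299_792_458.0  # m/s, speed of light

def lorentz_gamma(v):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

m_rest = 1.0  # kg, rest mass of an example body
for frac in (0.1, 0.5, 0.9):
    g = lorentz_gamma(frac * c)
    print(f"v = {frac}c: gamma = {g:.4f}, m_relative = {g * m_rest:.4f} kg")
```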
The rest mass of a body is also related to its energy E and the magnitude of its momentum p by the relativistic energy–momentum equation:

$E^2 = (pc)^2 + \left(m_\mathrm{rest}c^2\right)^2.$

So long as the system is closed with respect to mass and energy, both kinds of mass are conserved in any given frame of reference. The conservation of mass holds even as some types of particles are converted to others. Particles of matter may be converted to types of energy, but this does not affect the amount of mass. Although things like heat may not be matter, all types of energy still continue to exhibit mass.^[note 9]^[24] Thus, mass and energy do not change into one another in relativity; rather, both are names for the same thing, and neither mass nor energy appear without the other. Both rest and relativistic mass can be expressed as an energy by applying the well-known relationship E = mc^2, yielding rest energy and "relativistic energy" (total system energy) respectively:

$E_\mathrm{rest} = m_\mathrm{rest}c^2$ and $E_\mathrm{total} = m_\mathrm{relative}c^2.$

The "relativistic" mass and energy concepts are related to their "rest" counterparts, but they do not have the same value as their rest counterparts in systems where there is a net momentum. Because the relativistic mass is proportional to the energy, it has gradually fallen into disuse among physicists.^[25] There is disagreement over whether the concept remains useful pedagogically.^[26]^[27] In bound systems, the binding energy must often be subtracted from the mass of the unbound system, because binding energy commonly leaves the system at the time it is bound. Mass is not conserved in this process because the system is not closed during the binding process. For example, the binding energy of atomic nuclei is often lost in the form of gamma rays when the nuclei are formed, leaving nuclides which have less mass than the free particles (nucleons) of which they are composed.
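The energy–momentum relation is consistent with the relativistic expressions E = γmc² and p = γmv. A quick numerical check in natural units (the mass and speed are chosen purely for illustration):

```python
import math

c = 1.0    # natural units, c = 1
m = 2.0    # rest mass (illustrative)
v = 0.6    # speed as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
E = gamma * m * c ** 2       # total (relativistic) energy
p = gamma * m * v            # relativistic momentum

# Energy-momentum relation: E^2 = (pc)^2 + (m c^2)^2
lhs = E ** 2
rhs = (p * c) ** 2 + (m * c ** 2) ** 2
print(E, p, abs(lhs - rhs))
```

With v = 0.6c the Lorentz factor is exactly 1.25, giving E = 2.5 and p = 1.5, and both sides of the relation equal 6.25.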
Mass–energy equivalence also holds in macroscopic systems.^[29] For example, if one takes exactly one kilogram of ice, and applies heat, the mass of the resulting melt-water will be more than a kilogram: it will include the mass from the thermal energy (latent heat) used to melt the ice; this follows from the conservation of energy.^[30] This number is small but not negligible: about 3.7 nanograms. It is given by the latent heat of melting ice (334 kJ/kg) divided by the speed of light squared (c^2 = 9×10^16 m^2/s^2).

General relativity

In general relativity, the equivalence principle is any of several related concepts dealing with the equivalence of gravitational and inertial mass. At the core of this assertion is Albert Einstein's idea that the gravitational force as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (i.e. accelerated) frame of reference. However, it turns out that it is impossible to find an objective general definition for the concept of invariant mass in general relativity. At the core of the problem is the non-linearity of the Einstein field equations, making it impossible to write the gravitational field energy as part of the stress–energy tensor in a way that is invariant for all observers. For a given observer, this can be achieved by the stress–energy–momentum pseudotensor.^[31]

Mass in quantum physics

In classical mechanics, the inert mass of a particle appears in the Euler–Lagrange equation as a parameter m:

$\frac{\mathrm{d}}{\mathrm{d}t} \ \left( \, \frac{\partial L}{\partial \dot{x}_i} \, \right) \ = \ m \, \ddot{x}_i$.

After quantization, replacing the position vector x with a wave function, the parameter m appears in the kinetic energy operator:

$i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{r},\,t) = \left(-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})\right)\Psi(\mathbf{r},\,t)$.
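Stepping back to the macroscopic mass–energy example above: the melt-water figure of about 3.7 nanograms per kilogram of ice can be checked directly from the two numbers the text supplies.

```python
latent_heat = 334e3      # J/kg, latent heat of melting ice (from the text)
c_squared = 9e16         # m^2/s^2, speed of light squared (from the text)

delta_m_kg = latent_heat / c_squared   # extra mass in kg per kg of ice melted
delta_m_ng = delta_m_kg * 1e12         # 1 kg = 1e12 nanograms

print(delta_m_ng)   # ~3.71, matching "about 3.7 nanograms"
```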
In the ostensibly covariant (relativistically invariant) Dirac equation, and in natural units, this becomes:

$(-i\gamma^\mu\partial_\mu + m) \psi = 0\,$

where the "mass" parameter m is now simply a constant associated with the quantum described by the wave function ψ. In the Standard Model of particle physics as developed in the 1960s, there is the proposal that this term arises from the coupling of the field ψ to an additional field Φ, the so-called Higgs field. In the case of fermions, the Higgs mechanism results in the replacement of the term mψ in the Lagrangian with $G_{\psi} \overline{\psi} \phi \psi$. This shifts the explanandum of the value for the mass of each elementary particle to the value of the unknown couplings G[ψ]. The tentatively confirmed discovery of a massive Higgs boson is regarded as a strong confirmation of this theory. There is also indirect evidence for the reality of the electroweak symmetry breaking described by the Higgs mechanism; had Higgs bosons turned out not to exist, a "Higgsless" description of this mechanism would have been required.
Tachyonic particles and imaginary (complex) mass

A tachyonic field, or simply tachyon, is a quantum field with an imaginary mass.^[32] Although tachyons (particles that move faster than light) are a purely hypothetical concept not generally believed to exist,^[32]^[33] fields with imaginary mass have come to play an important role in modern physics^[34]^[35]^[36] and are discussed in popular books on physics.^[32]^[37] Under no circumstances do any excitations ever propagate faster than light in such theories – the presence or absence of a tachyonic mass has no effect whatsoever on the maximum velocity of signals (there is no violation of causality).^[38] While the field may have imaginary mass, any physical particles do not; the "imaginary mass" shows that the system becomes unstable, and sheds the instability by undergoing a type of phase transition called tachyon condensation (closely related to second order phase transitions) that results in symmetry breaking in current models of particle physics. The term "tachyon" was coined by Gerald Feinberg in a 1967 paper,^[39] but it was soon realized that Feinberg's model in fact did not allow for superluminal speeds.^[38] Instead, the imaginary mass creates an instability in the configuration: any configuration in which one or more field excitations are tachyonic will spontaneously decay, and the resulting configuration contains no physical tachyons. This process is known as tachyon condensation. Well known examples include the condensation of the Higgs boson in particle physics, and ferromagnetism in condensed matter physics. Although the notion of a tachyonic imaginary mass might seem troubling because there is no classical interpretation of an imaginary mass, the mass is not quantized. Rather, the scalar field is; even for tachyonic quantum fields, the field operators at spacelike separated points still commute (or anticommute), thus preserving causality.
Therefore, information still does not propagate faster than light,^[39] and solutions grow exponentially, but not superluminally (there is no violation of causality). Tachyon condensation drives a physical system that has reached a local limit and might naively be expected to produce physical tachyons, to an alternate stable state where no physical tachyons exist. Once the tachyonic field reaches the minimum of the potential, its quanta are not tachyons any more but rather are ordinary particles with a positive mass-squared.^[40] This is a special case of the general rule, where unstable massive particles are formally described as having a complex mass, with the real part being their mass in the usual sense, and the imaginary part being the decay rate in natural units.^[40] However, in quantum field theory, a particle (a "one-particle state") is roughly defined as a state which is constant over time; i.e., an eigenvalue of the Hamiltonian. An unstable particle is a state which is only approximately constant over time; if it exists long enough to be measured, it can be formally described as having a complex mass, with the real part of the mass greater than its imaginary part. If both parts are of the same magnitude, this is interpreted as a resonance appearing in a scattering process rather than a particle, as it is considered not to exist long enough to be measured independently of the scattering process. In the case of a tachyon the real part of the mass is zero, and hence no concept of a particle can be attributed to it. In a Lorentz invariant theory, the same formulas that apply to ordinary slower-than-light particles (sometimes called "bradyons" in discussions of tachyons) must also apply to tachyons.
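The arithmetic forced by applying the bradyon formulas to v > c can be sketched with Python's complex numbers: for a superluminal speed the factor √(1 − v²/c²) in E = mc²/√(1 − v²/c²) is purely imaginary, so only a purely imaginary rest mass yields a real total energy. The numbers below are illustrative, not a physical model.

```python
import cmath

c = 1.0            # natural units
v = 2.0 * c        # a superluminal speed, v > c

denom = cmath.sqrt(1 - (v / c) ** 2)   # sqrt(-3) = i*sqrt(3): purely imaginary
m = 5.0j                                # an imaginary rest mass (illustrative)

E = m * c ** 2 / denom                  # imaginary / imaginary -> real
print(E)
```

The quotient of two purely imaginary numbers is real, so E comes out real (here 5/√3 ≈ 2.89) even though m itself is imaginary.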
In particular the energy–momentum relation:

$E^2 = p^2c^2 + m^2c^4 \;$

(where p is the relativistic momentum of the bradyon and m is its rest mass) should still apply, along with the formula for the total energy of a particle:

$E = \frac{mc^2}{\sqrt{1 - \frac{v^2}{c^2}}}.$

This equation shows that the total energy of a particle (bradyon or tachyon) contains a contribution from its rest mass (the "rest mass–energy") and a contribution from its motion, the kinetic energy. When v is larger than c, the denominator in the equation for the energy is "imaginary", as the value under the radical is negative. Because the total energy must be real, the numerator must also be imaginary: i.e. the rest mass m must be imaginary, as a pure imaginary number divided by another pure imaginary number is a real number.

See also

1. ↑ When a distinction is necessary, M is used to denote the active gravitational mass and m the passive gravitational mass.
2. ↑ Since the Avogadro constant N[A] is defined as the number of atoms in 12 g of carbon-12, it follows that 1 u is exactly 1/(10^3N[A]) kg.
3. ↑ The distinction between "active" and "passive" gravitational mass does not exist in the Newtonian view of gravity as found in classical mechanics, and can safely be ignored by laypersons. In most practical applications, Newtonian gravity is used because it is usually sufficiently accurate, and is simpler than General Relativity; for example, NASA uses primarily Newtonian gravity to design space missions, although "accuracies are routinely enhanced by accounting for tiny relativistic effects". www2.jpl.nasa.gov/basics/bsf3-2.php The distinction between "active" and "passive" is very abstract, and applies to post-graduate level applications of General Relativity to certain problems in cosmology, and is otherwise not used.
There is, nevertheless, an important conceptual distinction in Newtonian physics between "inertial mass" and "gravitational mass", although these quantities are identical; the conceptual distinction between these two fundamental definitions of mass is maintained for teaching purposes because they involve two distinct methods of measurement. It was long considered anomalous that the two distinct measurements of mass (inertial and gravitational) gave the identical result. The observed property, noted by Galileo, according to which objects of different mass fall with the same rate of acceleration (ignoring air resistance), is an expression of the fact that inertial and gravitational mass are the same.
4. ↑ This constant ratio was later shown to be a direct measure of the Sun's active gravitational mass; it has units of distance cubed per time squared, and is known as the standard gravitational parameter: $\mu=4\pi^2\frac{\text{distance}^3}{\text{time}^2}\propto\text{gravitational mass}$
5. ↑ At the time when Viviani asserts that the experiment took place, Galileo had not yet formulated the final version of his law of free fall. He had, however, formulated an earlier version which predicted that bodies of the same material falling through the same medium would fall at the same speed. See Drake, S. (1978). Galileo at Work. University of Chicago Press. pp. 19–20. ISBN
6. ↑ These two properties are very useful, as they allow spherical collections of objects to be treated exactly like large individual objects.
7. ↑ In its original form, Newton's second law is valid only for bodies of constant mass.
8. ↑ It is possible to make a slight distinction between "rest mass" and "invariant mass". For a system of two or more particles, none of the particles are required to be at rest with respect to the observer for the system as a whole to be at rest with respect to the observer. To avoid this confusion, some sources will use "rest mass" only for individual particles, and "invariant mass" for systems.
9.
↑ For example, a nuclear bomb in an idealized super-strong box, sitting on a scale, would in theory show no change in mass when detonated (although the inside of the box would become much hotter). In such a system, the mass of the box would change only if energy were allowed to escape from the box as light or heat. However, in that case, the removed energy would take its associated mass with it. Letting heat out of such a system is simply a way to remove mass. Thus, mass, like energy, cannot be destroyed, but only moved from one place to another.

1. ↑ "New Quantum Theory Separates Gravitational and Inertial Mass". MIT Technology Review. 14 Jun 2010. Retrieved 3 Dec 2013.
2. ↑ Jacob Aron (10 Jan 2013). "Most fundamental clock ever could redefine kilogram". NewScientist. Retrieved 17 Dec 2013.
3. ↑ W. Rindler (2006). Relativity: Special, General, And Cosmological. Oxford University Press. pp. 16–18. ISBN 0-19-856731-6.
4. ↑ Kane, Gordon (September 4, 2008). "The Mysteries of Mass". Scientific American (Nature America, Inc.). pp. 32–39. Retrieved 2013-07-05.
5. ↑ Rindler, W. (2006). Relativity: Special, General, And Cosmological. Oxford University Press. p. 22. ISBN 0-19-856731-6.
6. ↑ Eötvös, R. V.; Pekár, D.; Fekete, E. (1922). "Beiträge zum Gesetz der Proportionalität von Trägheit und Gravität". Annalen der Physik 68: 11–66. Bibcode:1922AnP...373...11E. doi:10.1002/
7. ↑ Drake, S. (1979). "Galileo's Discovery of the Law of Free Fall". Scientific American 228 (5): 84–92. Bibcode:1973SciAm.228e..84D. doi:10.1038/scientificamerican0573-84.
8. ↑ Galileo, G. (1632). Dialogue Concerning the Two Chief World Systems.
9. ↑ Galileo, G. (1638). Discorsi e Dimostrazioni Matematiche, Intorno à Due Nuove Scienze 213. Louis Elsevier., translated in Crew, H.; de Salvio, A., eds. (1954). Mathematical Discourses and Demonstrations, Relating to Two New Sciences. Dover Publications. ISBN 1-275-10057-0. and also available in Hawking, S., ed. (2002). On the Shoulders of Giants.
Running Press. pp. 534–535. ISBN
10. ↑ Hooke, R. (1674). "An attempt to prove the motion of the earth from observations". Royal Society.
11. ↑ Turnbull, H. W., ed. (1960). Correspondence of Isaac Newton, Volume 2 (1676–1687). Cambridge University Press. p. 297.
12. ↑ Hawking, S., ed. (2005). Principia. Running Press. pp. 15ff. ISBN 978-0-7624-2022-3.
13. ↑ Whiteside, D. T., ed. (2008). The Mathematical Papers of Isaac Newton, Volume VI (1684–1691). Cambridge University Press. ISBN 978-0-521-04585-8. Retrieved 12 March 2011.
14. ↑ Sir Isaac Newton; N. W. Chittenden (1848). Newton's Principia: The mathematical principles of natural philosophy. D. Adee. Retrieved 12 March 2011.
15. ↑ Cuk, M. (January 2003). "Curious About Astronomy: How do you measure a planet's mass?". Ask an Astronomer. Retrieved 2011-03-12.
16. ↑ Ernst Mach, "Science of Mechanics" (1919)
17. ↑ Ori Belkind, "Physical Systems: Conceptual Pathways between Flat Space-time and Matter" (2012) Springer (Chapter 5.3)
18. ↑ P.W. Bridgman, Einstein's Theories and the Operational Point of View, in: P.A. Schilpp, ed., Albert Einstein: Philosopher-Scientist, Open Court, La Salle, Ill., Cambridge University Press, 1982, Vol. 2, p. 335–354.
19. ↑ D. A Gillies, "Operationalism" Synthese (1972) pp 1-24 D. Reidel Publishing DOI 10.1007/BF00484997
20. ↑ Henri Poincaré. "Classical Mechanics". Chapter 6 in Science and Hypothesis. London: Walter Scott Publishing (1905): 89-110.
21. ↑ Leonard, B.P. (2010). "Comments on recent proposals for redefining the mole and kilogram". Metrologia 47 (3): L5–L8. Bibcode:2010Metro..47L...5L. doi:10.1088/0026-1394/47/3/L01. Retrieved
22. ↑ Pavese, Franco (2011). "Some reflections on the proposed redefinition of the unit for the amount of substance and of other SI units". Accreditation and Quality Assurance 16 (3): 161–165. doi:
23. ↑ Taylor, E. F.; Wheeler, J. A. (1992). Spacetime Physics. W. H. Freeman. pp. 248–249. ISBN 0-7167-2327-1.
24. ↑ G. Oas (2005).
"On the Abuse and Use of Relativistic Mass". arXiv:physics/0504110 [physics.ed-ph]. 25. ↑ Okun, L. B. (1989). "The Concept of Mass" (PDF). Physics Today 42 (6): 31–36. Bibcode:1989PhT....42f..31O. doi:10.1063/1.881171. 26. ↑ Rindler, W.; Vandyck, M. A.; Murugesan, P.; Ruschin, S.; Sauter, C.; Okun, L. B. (1990). "Putting to Rest Mass Misconceptions" (PDF). Physics Today 43 (5): 13–14, 115, 117. Bibcode: 1990PhT....43e..13R. doi:10.1063/1.2810555. 27. ↑ Sandin, T. R. (1991). "In Defense of Relativistic Mass". American Journal of Physics 59 (11): 1032. Bibcode:1991AmJPh..59.1032S. doi:10.1119/1.16642. 28. ↑ Planck, Max (1907), "Zur Dynamik bewegter Systeme", Sitzungsberichte der Königlich-Preussischen Akademie der Wissenschaften, Berlin, Erster Halbband (29): 542–570 English Wikisource translation: On the Dynamics of Moving Systems (See paragraph 16.) 29. ↑ Eugene Hecht, "There Is No Really Good Definition of Mass", Phys. Teach. 44, 40 (2006); doi 10.1119/1.2150758 30. ↑ Misner, C. W.; Thorne, K. S.; Wheeler, J. A. (1973). Gravitation. W. H. Freeman. p. 466. ISBN 978-0-7167-0344-0. 31. 1 2 3 Lisa Randall, Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions, p.286: "People initially thought of tachyons as particles travelling faster than the speed of light...But we now know that a tachyon indicates an instability in a theory that contains it. Regrettably for science fiction fans, tachyons are not real physical particles that appear in 32. ↑ Tipler, Paul A.; Llewellyn, Ralph A. (2008). Modern Physics (5th ed.). New York: W.H. Freeman & Co. p. 54. ISBN 978-0-7167-7550-8. “... so existence of particles v > c ... Called tachyons ... would present relativity with serious ... problems of infinite creation energies and causality paradoxes.” 33. 1 2 Kutasov, David; Marino, Marcos & Moore, Gregory W. (2000). "Some exact results on tachyon condensation in string field theory". JHEP 0010: 045. doi:10.1088/1126-6708/2000/10/045. 
arXiv EFI-2000-32, RUNHETC-2000-34.
34. ↑ Sen, A. (2002). Rolling tachyon. JHEP 0204, 048. Cited 720 times as of 2/2012.
35. ↑ Gibbons, G.W. (2000). "Cosmological evolution of the rolling tachyon". Phys. Lett. B 537: 1–4. doi:10.1016/s0370-2693(02)01881-6.
36. ↑ Brian Greene, The Elegant Universe, Vintage Books (2000)
37. ↑ Aharonov, Y.; Komar, A.; Susskind, L. (1969). "Superluminal Behavior, Causality, and Instability". Phys. Rev. (American Physical Society) 182 (5): 1400–1403. Bibcode:1969PhRv..182.1400A. doi:
38. ↑ Feinberg, Gerald (1967). "Possibility of Faster-Than-Light Particles". Physical Review 159 (5): 1089–1105. Bibcode:1967PhRv..159.1089F. doi:10.1103/PhysRev.159.1089.
39. ↑ Peskin, M. E.; Schroeder, D. V. (1995). An Introduction to Quantum Field Theory. Perseus Books.

External links

Wikimedia Commons has media related to Mass (physical property).
Wikisource has the text of The New Student's Reference Work article Mass.

SI base quantities

Base quantity — the quantity (not the unit) can have a specification, for example T[max] = 300 K.
Definition — a quantity Q is expressed in the base quantities: $Q =f\left(\mathit{l, m, t, I, T, n, I}\mathrm{_v}\right)$
Derived dimension — dim Q = L^a · M^b · T^c · I^d · Θ^e · N^f · J^g (superscripts a–g are algebraic exponents, usually a positive, negative or zero integer.)
Derived quantity example — acceleration = l^1 · t^−2, dim acceleration = L^1 · T^−2; possible units: m^1 · s^−2, km^1 · Ms^−2, etc.
[Flattened navigation table omitted: it listed linear/translational and angular/rotational kinematic and dynamic quantities (time, distance, velocity, acceleration, jerk, momentum, force, energy, power, and their angular counterparts) together with their dimensions and SI units.]

This article is issued from - version of the Saturday, April 16, 2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.
P Values, Confidence Intervals and Clinical Trials

P values are so ubiquitous in clinical research that it's easy to take for granted that they are being understood and interpreted correctly. After all, one might say, they are just simple proportions, and it's not brain surgery. At times, however, it's the simplest of things that are easiest to overlook. In fact, the definitions and interpretations of p values are sufficiently subtle that even a minute pivot from an exact definition can lead to interpretations that are wildly misleading. In the case of clinical trials, p values have a momentous impact on decision making in terms of whether or not to pursue and invest further in the development and marketing of a given therapeutic. In the context of clinical practice, p values drive treatment decisions for patients, as they essentially comprise the foundational evidence upon which these treatment decisions are made. This is perhaps as it should be, as long as the definition of p values and their interpretations are sound. A counter-point to this is the bias towards publishing only studies with a statistically significant p value, as well as the fact that many studies are not sufficiently reproducible or reproduced. This leaves clinicians with an impression that the evidence for a given treatment is stronger than the full picture would suggest. This, however, is a publishing issue rather than an issue of significance tests themselves. This article focusses on interpretation issues only. As p values apply to the interpretation of both parametric and non-parametric tests in much the same way, this article will focus on parametric examples.

Interpreting p values in superiority/difference study designs

This refers to studies where we are seeking to find a difference between two treatment groups or between a single group measured at two time points.
In this case the null hypothesis is that there is no difference between the two treatment groups, or no effect of the treatment, as the case may be. According to the significance testing framework, all p values are calculated based upon the assumption that the null hypothesis is true. If a study yields a p value of 0.05, this means that, were the study to be repeated when the null hypothesis is true, we would expect to see a difference between the two groups at least as extreme as the observed effect 5% of the time. In other words, if there is no true difference between the two treatment groups and we ran the experiment 20 times on 20 independent samples from the same population, we would expect to see a result this extreme once out of the 20 times. This of course is not a very helpful way of looking at things if our goal is to make a statement about treatment effectiveness. The inverse likely makes more intuitive sense: if we were to run this study 20 times on distinct patient samples from the same population, 19 out of 20 times we would not expect a result this extreme if there was no true effect. Based on the rarity of the observed effect, we conclude that the likelihood of the null hypothesis being the optimal explanation of the data is sufficiently low that we can reject it. Thus our alternative research hypothesis, that there is a difference between the two treatments, is likely to be true. As the p value does not tell us whether the difference is in a positive or negative direction, care should of course be taken to confirm which of the treatments holds the advantage.

P values in non-inferiority or equivalence studies

In non-inferiority and equivalence studies a non-statistically significant p value can be a welcome result, as can a very low p value where the differences were not clinically significant, or where the new treatment is shown to be superior to the standard treatment. By only requiring the treatment not to be inferior, more power is retained and a smaller sample size can be used.
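The "once out of 20 times" interpretation described above can be checked by simulation. The sketch below uses made-up standard-normal outcomes and an approximate large-sample z-test (an illustration, not the analysis method of any particular trial): it repeatedly compares two groups drawn from the same population and counts how often the comparison crosses the two-sided 5% significance threshold.

```python
import random
import statistics

random.seed(0)

def one_null_experiment(n=50):
    """Two groups drawn from the SAME population, so the null is true."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96          # "significant" at the two-sided 5% level

trials = 2000
significant = sum(one_null_experiment() for _ in range(trials))
print(significant / trials)       # close to 0.05, i.e. about 1 in 20
```

Roughly 5% of these null experiments come out "significant", which is exactly the false-positive behaviour the text describes.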
The interpretation of the p value is much the same as for superiority studies; however, the implications are different. In these types of studies it is ideal for the confidence intervals for the individual treatment effects to be narrow, as this provides certainty that the estimates obtained are accurate in the absence of a statistically significant difference between the two estimates.

What p values do not tell you

A p value of 0.05 is not the same as saying that there is only a 5% chance that the treatment won't work. Whether or not the treatment works in the individual is another probability entirely. It is also not the same as saying there is a 5% chance of the null hypothesis being true. The p value is a statistic that is based on the assumption that the null hypothesis is true and on that basis gives the likelihood of the observed result. Nor does the p value represent the chance of making a type 1 error. As each repetition of the same experiment produces a different p value, it does not make sense to characterise the p value as the chance of incorrectly rejecting the null hypothesis, i.e. making a type 1 error. Instead, an alpha cut-off point of 0.05 should be seen as indicating a result rare enough under the null hypothesis that we are now willing to reject the null as the most likely explanation given the data. Under a type 1 error alpha of 0.05 this decision is expected to be wrong 5% of the time, regardless of the p value achieved in the statistical test. The relationship between the critical alpha and statistical power is illustrated below. Another misconception is that a small p value provides strong support for a given research hypothesis.
In reality a small p value does not necessarily translate to a big effect, nor a clinically meaningful one. The p value indicates a statistically significant result; however, it says nothing about the magnitude of the effect or whether this result is clinically meaningful in the context of the study. A p value of 0.00001 may appear to be a very satisfactory result; however, if the difference observed between the two groups is very small, then this is not always the case. All it would be saying is that "we are really, really sure that there is only minimal difference between the two treatments", which in a superiority design may not be as desired.

Minimally important difference (MID)

This is where the importance of pre-defining a minimally important difference (MID) becomes evident. The MID, or clinically meaningful difference, should be defined and quantified in the design stage before the study is to be undertaken. In the case of clinical studies this should generally be done in consultation with the clinician or disease expert concerned. The MID may take different forms depending on whether a study is a superiority design, versus an equivalence or non-inferiority design. In the case of a superiority design, or where the goal of the study is to detect a difference, the MID is the threshold of minimum difference at which we would be willing to consider the new treatment worth pursuing over the standard treatment or control being used as the comparator. In the case of a non-inferiority design, the MID would be the minimum lower threshold at which we would still consider the new treatment as equally effective or useful as the standard treatment. Equivalence designs, on the other hand, may sometimes rely on an interval around the standard treatment effect. When interpreting results of clinical studies it is of primary importance to keep a clinically meaningful difference in mind, rather than defaulting to the p value in isolation.
In cases where the p value is statistically significant, it is important to ask whether the difference between comparison groups is also as large as the MID or larger.

Confidence Intervals

All statistical tests that involve p values can produce a corresponding confidence interval for the estimates. Unlike p values, confidence intervals do not rely on an assumption of the null hypothesis but rather on the assumption that the sample approximates the population of interest. A common estimate in clinical trials where confidence intervals become important is the treatment effect. Very often this translates to the difference in means of a surrogate endpoint between two groups; however, confidence intervals are also important to consider for individual group means/treatment effects, which are an estimate of the population means of the endpoint in these distinct groups/treatment categories.

Confidence interval for the mean

A 95% confidence interval of the estimate of the mean indicates that, if this study were to be repeated, the mean value is expected to fall within this interval 95% of the time. While this estimate is based on the real mean of the study sample, our interest remains in making inferences about the wider population who might later be subject to this treatment. Thus, inferentially, the observed mean and its confidence interval are both considered an estimate of the population values. In a nutshell, the confidence interval indicates how sure we can be of the accuracy of the estimate. A narrower interval indicates greater confidence and a wider interval less. The p value of the estimate indicates how certain we can be of this result, i.e. the interval itself.

Confidence interval for the mean difference, treatment effects or difference in treatment effects

The mean difference in treatment effect between two groups is an important estimate in any comparison study, from superiority to non-inferiority clinical trial designs.
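A minimal sketch of such an interval, using made-up endpoint data for two independent groups and the two-sided 95% t critical value for df = 18 (the data, group sizes and critical value are illustrative assumptions, not from any real trial):

```python
import statistics

# Made-up endpoint measurements for two treatment groups (illustrative only).
group_a = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.3, 5.1, 4.7]
group_b = [3.6, 4.2, 3.9, 4.0, 4.5, 3.7, 4.1, 3.8, 4.4, 4.3]

n = len(group_a)
diff = statistics.mean(group_a) - statistics.mean(group_b)

# Standard error of the difference between two independent means
se = (statistics.variance(group_a) / n + statistics.variance(group_b) / n) ** 0.5

t_crit = 2.101   # two-sided 95% t critical value, df = n1 + n2 - 2 = 18
lower, upper = diff - t_crit * se, diff + t_crit * se

# If the interval excludes zero, the difference is significant at the 5% level.
print(round(diff, 2), round(lower, 2), round(upper, 2))
```

Here the whole interval lies above zero, so with these invented numbers the groups would be declared significantly different; had the interval straddled zero, no significant difference would have been found, exactly as the following section explains.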
Treatment response is mainly ascertained from repeated measures of surrogate endpoint data on the individual level. One form of mean difference is repeated measures data from the same individuals at different time points; these individuals' differences could then be compared between two independent treatment groups. In the context of clinical trials, confidence intervals of the mean difference can relate to an individual's treatment effect or to group differences in treatment effects. A 95% confidence interval of the mean difference in treatment effect indicates that 95 per cent of the time, if this study were to be repeated, the true difference in treatment effect between the groups is expected to fall within this interval. A confidence interval containing zero indicates that a statistically significant difference between the two groups has not been found. Namely, if part of the time the true population value representing the difference is expected to fall above zero on the number line and part of the time to fall below zero, indicating a difference in the opposite direction, we cannot be sure whether one group is higher or lower than the other. Much ho-hum has been made of p values in recent years, but they are here to stay. While alternatives to p values exist, such as Bayesian methods, these statistics have limitations of their own and are subject to the same propensity for misuse and misinterpretation as frequentist statistics are. Thus it remains important to take caution in interpreting all statistical results.

Sources and further reading:

Gao, P-Values – A chronic conundrum, BMC Medical Research Methodology (2020), 20:167

The Royal College of Ophthalmologists, The clinician's guide to p values, confidence intervals, and magnitude of effects, Eye (2022) 36:341–342; https://doi.org/10.1038/s41433-021-01863-w
{"url":"https://anatomisebiostats.com/biostatistics-blog/p-values-and-clinical-trials/","timestamp":"2024-11-03T20:16:37Z","content_type":"text/html","content_length":"166357","record_id":"<urn:uuid:b1dbfc8a-6cdf-480d-8345-19b81536c220>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00441.warc.gz"}
Finding Powers of Complex Numbers in Polar Form

Question Video: Finding Powers of Complex Numbers in Polar Form
Mathematics • Third Year of Secondary School

Given that z = 2√3(cos 240° + i sin 240°), find z² in exponential form.

Video Transcript

Given z is equal to two root three multiplied by cos of 240 degrees plus i sin of 240 degrees, find z squared in exponential form. We're currently given a complex number written in trigonometric form. And we're looking to find z squared in exponential form. There are two ways we can go about this. We can evaluate z squared in trigonometric form, and then convert it to exponential form. Or we can convert it to exponential form first, and then work out the value of z squared. Let's consider both of these. And to square this complex number, we recall De Moivre's theorem. And this said, for a complex number in trigonometric form r(cos θ + i sin θ), this complex number to the power of n is given by r to the power of n multiplied by cos nθ plus i sin nθ. And in this example, n is a natural number. We can see that the modulus of our complex number z is two root three. And θ, its argument, is 240 degrees. In fact, at some point, we're going to have to convert this to radians. So, we might as well do that now and get it out of the way. To do this, we recall the fact that two π radians is equal to 360 degrees. And we can find the value of one degree by dividing through by 360. One degree is equal to two π over 360 radians. And two π by 360 simplifies to π by 180. So, one degree is equal to π over 180 radians. So, we can change 240 degrees into radians by multiplying it by π over 180. That gives us four π by three. And so, we can work out the modulus of z squared by squaring the modulus of z. That's two root three squared. Root three squared is three. So, two root three squared is two squared multiplied by three, which is 12.
And then, to work out the argument of z squared, we multiply the argument of z by the power, that's two. Four π by three multiplied by two is eight π by three. So, we can see that in trigonometric form, z squared is 12 multiplied by cos of eight π over three plus i sin of eight π over three. And remember, to change a complex number in trigonometric form into exponential form, it's r times e to the iθ. And since, for z squared, the modulus r is 12 and the argument θ is eight π by three, we can say that z squared is equal to 12e to the eight π over three i. Remember though, we usually want to represent this using the principal argument. That's greater than negative π and less than or equal to π. In fact, eight π by three is greater than π. So, to find the principal argument, we add or subtract multiples of two π. Here, let's subtract two π from eight π by three. Two π is equal to six π over three. And when we subtract six π over three from eight π over three, we're left with two π over three. So, in exponential form, z squared is 12e to the two π by three i. Now, let's consider the alternative method. And that was to convert this complex number into exponential form first and then square it. Once again, we'll use this rule. A complex number with a modulus of r and an argument θ can be represented in exponential form as r times e to the iθ. We already solved 240 degrees is equal to four π by three radians. So, we can say that z in exponential form is two root three multiplied by e to the power of four π by three i. And this time, to find z squared, we consider the alternative form of De Moivre's theorem. And that says that if z is equal to r times e to the iθ, then z to the power of n is equal to r to the power of n multiplied by e to the inθ. And now you should be able to see the relationship between the two forms and the methods that we're using.
This time, z squared is equal to two root three squared multiplied by e to the power of two multiplied by four π by three i. And, once again, we know that two root three squared is 12. And two multiplied by four π by three is eight π by three. And, once again, changing our argument into the principal argument by subtracting two π, we can see that z squared is equal to 12e to the two π by three i.
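The result is easy to verify numerically with Python's standard-library cmath module (a quick check, not part of the original lesson):

```python
import cmath
import math

# z = 2√3 (cos 240° + i sin 240°), built from its modulus and argument
z = cmath.rect(2 * math.sqrt(3), math.radians(240))

# cmath.polar returns the modulus and the principal argument in (-π, π]
modulus, argument = cmath.polar(z ** 2)

print(round(modulus, 6))             # 12.0
print(round(argument / math.pi, 6))  # 0.666667, i.e. argument = 2π/3
```

This matches the transcript's answer z² = 12e^(2π/3 i).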
{"url":"https://www.nagwa.com/en/videos/924158150676/","timestamp":"2024-11-11T10:25:19Z","content_type":"text/html","content_length":"254826","record_id":"<urn:uuid:acecd955-8f19-4eec-b6c4-a5e770a49caa>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00351.warc.gz"}
Project URA-HPC Welcome to Project URA-HPC Website The URA-HPC project, an acronym for Ultra-Scalable Randomized Algorithms for Current and Future High Performance Supercomputers, is an FCT-sponsored project (reference 2022.08838.PTDC) starting on February 1st, 2023, with a duration of 3 years. We aim to make fundamental contributions to a method for computing matrix functions based on their Neumann series, consisting of weighted sums of powers of the matrix. We use a Monte Carlo algorithm where a matrix to the power k is computed using random walks of length k over the matrix. Monte Carlo methods have the key advantage of being embarrassingly parallel, achieving a high degree of efficiency on a parallel system. In our method, each walk is independent of the others and can be computed completely in parallel. A second, crucially important feature of this Monte Carlo method, which sets it apart from the classic algorithms, is that it allows calculating individual entries of the result, avoiding the computation of the entire matrix. While real-world matrices are typically sparse, the result matrix will, in general, be dense. Thus, for large problems the representation of the output matrix in memory is in itself unfeasible, rendering its computation using classic methods impractical. With our approach we can, for instance, compute the trajectory of a single state variable in a dynamic system (e.g., the voltage of a node in an electrical network) or the communicability of a single node in a network. In yet another research avenue, we have found that using this Monte Carlo approach with some modifications we are able to efficiently solve systems with derivatives of fractional order. Fractional calculus has a wide range of practical applications, and has not yet seen wider adoption due to its intrinsic computational difficulty.
Our expectation is that our idea can have a broad impact and be a significant contribution from this project, and we plan to apply it to both Magnetic Resonance Imaging (MRI) and the modeling of chemical reactions.
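As a toy illustration of the idea described above, the sketch below estimates a single entry of A^k with weighted random walks. The function name, the example matrix, and the row-proportional sampling scheme are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def mc_matrix_power_entry(A, k, i, j, n_walks, rng):
    """Estimate (A^k)[i, j] with n_walks random walks of length k.

    Each walk starts in state i; at every step it jumps from state u to
    state v with probability p[u, v] and multiplies its weight by
    A[u, v] / p[u, v].  Walks ending in state j contribute their weight.
    """
    p = np.abs(A)
    p = p / p.sum(axis=1, keepdims=True)   # row-wise transition probabilities
    n = A.shape[0]
    total = 0.0
    for _ in range(n_walks):
        state, weight = i, 1.0
        for _ in range(k):
            nxt = rng.choice(n, p=p[state])
            weight *= A[state, nxt] / p[state, nxt]
            state = nxt
        if state == j:
            total += weight
    return total / n_walks

A = np.array([[0.5, 0.5],
              [0.2, 0.8]])
rng = np.random.default_rng(0)
est = mc_matrix_power_entry(A, k=3, i=0, j=1, n_walks=20_000, rng=rng)
exact = np.linalg.matrix_power(A, 3)[0, 1]
```

Note that only the (i, j) entry is ever estimated, and that each walk is independent of the others, which is exactly the embarrassingly parallel structure the project exploits.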
{"url":"http://ura.inesc-id.pt/?Main_Page","timestamp":"2024-11-08T10:57:04Z","content_type":"text/html","content_length":"9475","record_id":"<urn:uuid:1e298bd4-793b-489f-b9db-191fa8a471e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00114.warc.gz"}
[Solved] Let A = {{1, 2, 3}, {4, 5}, {6, 7, 8}}. Determine whether the following is true or false: φ ∈ A | Filo

Let A = {{1, 2, 3}, {4, 5}, {6, 7, 8}}. Determine whether the following is true or false: φ ∈ A.

Solution: The statement is false. A null set is a subset of every set, but the empty set is not listed as an element of A; the correct form would be φ ⊂ A.

Topic: Sets
Subject: Mathematics
Class: Class 11
Answer Type: Text solution
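The subset-versus-element distinction behind this answer can be checked directly with Python sets (a quick illustration; the inner sets are written as frozensets because Python sets cannot contain mutable sets as elements):

```python
A = {frozenset({1, 2, 3}), frozenset({4, 5}), frozenset({6, 7, 8})}

empty = frozenset()
print(empty in A)   # False: the empty set is NOT an element of A
print(empty <= A)   # True:  the empty set IS a subset of every set, including A
```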
{"url":"https://askfilo.com/math-question-answers/let-a-1-2-3-4-5-6-7-8-determine-which-of-the-following-is-true-or-false-phi-in-a-95674","timestamp":"2024-11-11T13:15:12Z","content_type":"text/html","content_length":"433451","record_id":"<urn:uuid:f4ea6494-df8a-4422-9e8f-8a4e22d4562f>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00062.warc.gz"}
How to Convert Metric Tons to Pounds with the Weight Conversion Formula Converting metric tons (t) to pounds (lbs) is a straightforward mathematical process that involves a simple conversion factor: One metric ton equals 2,204.62 pounds. This ratio is all you need to perform the calculation. To calculate the conversion, start with the weight in metric tons you need to convert and multiply the value by 2,204.62. This will output the corresponding weight in pounds. x metric tons * 2,204.62 = Number of Pounds For example: If you have a weight of 1 metric ton, multiplying it by 2,204.62 will convert it to 2,204.62 pounds. This formula, centering on multiplication by 2,204.62, is suitable for any metric ton to pound conversion, and will provide accurate and reliable results. Conversely, if you wanted to convert pounds to metric tons, you would simply divide the number of pounds by 2,204.62, as there are 2,204.62 pounds in a ton. For the most part, converting between two units of mass is a simple matter of division or multiplication. Just keep in mind, you might end up with a fraction or decimal, so keep a calculator handy or use our online converter for convenience! Common Metric Tons to lbs Conversion Table Metric Tons (t) Pounds (lbs) 0.1 t 220.462 lbs 0.5 t 1,102.31 lbs 1 t 2,204.62 lbs 2 t 4,409.24 lbs 5 t 11,023.1 lbs 10 t 22,046.2 lbs 20 t 44,092.4 lbs In-Depth on the Metric Ton! The metric ton, also known as a tonne, is a unit of mass in the metric system. It is equivalent to 1,000 kilograms or approximately 2,204.62 pounds. This unit is commonly used in industries such as shipping, freight, and various forms of bulk material handling. It is the standard measurement for large quantities of material and is widely used internationally, especially in countries that follow the metric system. In-Depth on the Pound! 
The pound, a unit of weight in the Imperial system, is commonly used in the United States, the United Kingdom, and other countries that use Imperial measurements. A pound is equivalent to 16 ounces and is denoted by the symbol 'lb'. It's a key unit in various industries where precision in weighing is crucial, such as food and goods trading. Fun fact: The pound can also be symbolized by the hashtag symbol (#). Good luck, and don't forget to bookmark this metric t to lb converter to save time when you need help converting a metric system number to the imperial system.
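The two formulas above are a one-liner each in code (a minimal sketch using the article's rounded factor of 2,204.62; the function names are illustrative):

```python
LBS_PER_METRIC_TON = 2204.62  # rounded conversion factor used in the article

def tons_to_pounds(tons: float) -> float:
    return tons * LBS_PER_METRIC_TON

def pounds_to_tons(pounds: float) -> float:
    return pounds / LBS_PER_METRIC_TON

print(round(tons_to_pounds(5), 2))       # 11023.1
print(round(pounds_to_tons(2204.62), 2)) # 1.0
```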
{"url":"https://www.calculyte.com/convert/mass/ton-to-pound","timestamp":"2024-11-10T01:11:49Z","content_type":"text/html","content_length":"16055","record_id":"<urn:uuid:c52e01c5-0200-4138-9512-0d03024e2ff7>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00442.warc.gz"}
American Mathematical Society
Towards the ample cone of $\overline {M}_{g,n}$
by Angela Gibney, Sean Keel and Ian Morrison; J. Amer. Math. Soc. 15 (2002), 273-294
DOI: https://doi.org/10.1090/S0894-0347-01-00384-8
Published electronically: December 20, 2001

In this paper we study the ample cone of the moduli space $\overline {M}_{g,n}$ of stable $n$-pointed curves of genus $g$. Our motivating conjecture is that a divisor on $\overline {M}_{g,n}$ is ample iff it has positive intersection with all $1$-dimensional strata (the components of the locus of curves with at least $3g+n-2$ nodes). This translates into a simple conjectural description of the cone by linear inequalities, and, as all the $1$-strata are rational, includes the conjecture that the Mori cone is polyhedral and generated by rational curves. Our main result is that the conjecture holds iff it holds for $g=0$. More precisely, there is a natural finite map $r: \overline {M}_{0, 2g+n} \rightarrow \overline {M}_{g,n}$ whose image is the locus $\overline {R}_{g,n}$ of curves with all components rational. Any $1$-stratum either lies in $\overline {R}_{g,n}$ or is numerically equivalent to a family $E$ of elliptic tails, and we show that a divisor $D$ is nef iff $D \cdot E \geq 0$ and $r^{*}(D)$ is nef. We also give results on contractions (i.e. morphisms with connected fibers to projective varieties) of $\overline {M}_{g,n}$ for $g \geq 1$, showing that any fibration factors through a tautological one (given by forgetting points) and that the exceptional locus of any birational contraction is contained in the boundary.

References
[Gibney00] A. Gibney, Fibrations of ${\overline {M}_{g,n}}$, Ph.D. Thesis, Univ. of Texas at Austin, 2000.
[KeelMcKernan96] S. Keel and J. McKernan, Contractible extremal rays of $\overline {M}_{0,n}$, preprint alg-geom/9707016 (1996).
[Logan00] A. Logan, Relations among divisors on the moduli space of curves with marked points, preprint math.AG/0003104, 2000.
[LosevManin00] A. Losev and Y. Manin, New Moduli Spaces of Pointed Curves and Pencils of Flat Connections, Michigan Math. J. 48 (2000), 443–472.
[Moriwaki00] A. Moriwaki, Nef divisors in codimension one on the moduli space of stable curves, preprint math.AG/0005012, 2000.
[Moriwaki01] A. Moriwaki, The $Q$-Picard group of the moduli space of curves in positive characteristic, Internat. J. Math. 12 (2001), no. 5, 519–534.
[Rulla00] W. Rulla, Birational Geometry of ${\overline {M}_{3}}$, Ph.D. Thesis, Univ. of Texas at Austin, 2000.

Similar Articles
• Retrieve articles in Journal of the American Mathematical Society with MSC (2000): 14H10, 14E99
• Retrieve articles in all journals with MSC (2000): 14H10, 14E99

Bibliographic Information
• Angela Gibney • Affiliation: Department of Mathematics, University of Michigan, Ann Arbor, Michigan 48109 • MR Author ID: 689485 • Email: agibney@math.lsa.umich.edu
• Sean Keel • Affiliation: Department of Mathematics, University of Texas at Austin, Austin, Texas 78712 • MR Author ID: 289025 • Email: keel@fireant.ma.utexas.edu
• Ian Morrison • Affiliation: Department of Mathematics, Fordham University, Bronx, New York 10458 • Email: morrison@fordham.edu
• Received by editor(s): September 5, 2000
• Published electronically: December 20, 2001
• Additional Notes: During this research, the first two authors received partial support from a Big XII faculty research grant, and a grant from the Texas Higher Education Coordinating Board.
The first author also received partial support from the Clay Mathematics Institute, and the second from the NSF. The third author’s research was partially supported by a Fordham University Faculty Fellowship and by grants from the Centre de Recerca Matemàtica for a stay in Barcelona and from the Consiglio Nazionale delle Ricerche for stays in Pisa and Genova.
Dedicated: To Bill Fulton on his sixtieth birthday
• © Copyright 2001 American Mathematical Society
• Journal: J. Amer. Math. Soc. 15 (2002), 273-294
• MSC (2000): Primary 14H10, 14E99
• DOI: https://doi.org/10.1090/S0894-0347-01-00384-8
• MathSciNet review: 1887636
{"url":"https://www.ams.org/journals/jams/2002-15-02/S0894-0347-01-00384-8/","timestamp":"2024-11-02T08:54:30Z","content_type":"text/html","content_length":"72036","record_id":"<urn:uuid:b9466d8c-8b69-4497-bdf7-1d9ed2748d7c>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00202.warc.gz"}
Do you Know these things about the Mathematician Ramanujan? - RA Academy

If you are a student, some of you might not know about the mathematician Ramanujan. His full name was Srinivasa Ramanujan. The world lost a gem at a very early age: he left us at the age of 32, but gave the world a lot in those 32 years. I don't want to make this boring, so let's look at some important incidents, facts and information about him:
• He once said that an equation has no meaning for him unless it expresses a thought of God.
• He finished the book on advanced trigonometry by S. L. Loney at the age of 13 and discovered some sophisticated theorems on his own.
• He was very close to the mathematician G. H. Hardy of the University of Cambridge, to whom he sent 120 theorems in his first two letters alone.
• Ramanujan completed approximately 3,900 results, mostly identities and equations, during his short life.
• Ramanujan's three notebooks have 351 pages in the first, 256 in the second, and 33 in the third. The first notebook has 16 organised chapters and some material which is unorganised, the second has 21 organised chapters and 100 unorganised pages, and the third, which he wrote during the last days of his life, has 33 unorganised pages.
• SASTRA University, a private university in Tamil Nadu, has instituted a prize of $10,000 for a mathematician's outstanding contribution to mathematics influenced by Srinivasa Ramanujan. The age of the mathematician should not be more than 32.
• A movie based on Robert Kanigel's biography, The Man Who Knew Infinity, was released in 2015, starring Dev Patel as Ramanujan.
• On the occasion of Ramanujan's 125th birth anniversary, in 2011 the Indian government declared 22nd December as National Mathematics Day.
{"url":"https://academy.mithilanchalgroup.com/do-you-know-these-things-about-the-mathematician-ramanujan/","timestamp":"2024-11-11T00:34:11Z","content_type":"text/html","content_length":"92858","record_id":"<urn:uuid:69081038-5576-4daa-8641-b756ab45f047>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00683.warc.gz"}
How Much Can I Loan For Mortgage - indo877.site To determine how much you can afford using this rule, multiply your monthly gross income by 28%. For example, if you make $10, every month, multiply $10, To determine how much you can afford for your monthly mortgage payment, just multiply your annual salary by and divide the total by This will give you. Ideally, borrowers should aim to spend 28% or less of their gross annual income on a mortgage. Monthly debt — Monthly debts impact how much of a mortgage you. Most lenders base their home loan qualification on both your total monthly gross income and your monthly expenses. These monthly expenses include property. Please specify how much you would like to consider as down payment. Please This tool does not include mortgage loan insurance when you have a down. It states that a household should spend no more than 28% of its gross monthly income on the front-end debt and no more than 36% of its gross monthly income on. Your home equity gives you financial flexibility. Find out how much you may qualify to borrow through a mortgage or line of credit. Input high level income and expense information, along with some loan specific details to get an estimate of the mortgage amount for which you may qualify. You can afford a home worth up to $, with a total monthly payment of $1, · LOAN & BORROWER INFO · TAXES & INSURANCE · ASSUMPTIONS. Determine what you could pay each month by using this mortgage calculator to calculate estimated monthly payments and rate options for a variety of loan. How much home can you afford? Use the RBC Royal Bank mortgage affordability calculator to see how much you can spend and determine your monthly payments. The following housing ratios are used for conservative results: 29% for down payments of less than 20% and 30% for down payments of 20% or more. A debt ratio of. How much house can I afford if I make $50,, $70,, or $, a year?
As noted in our 28/36 DTI rule section above, multiplying your gross monthly. Based on information provided, you may be able to afford a home worth up to $, with a total monthly payment of $1, ; LOAN & BORROWER INFO. Using a percentage of your income can help determine how much house you can afford. For example, the 28/36 rule suggests your housing costs should be limited to. The 28% and 36% ratios are standard in the mortgage world, but lenders may have other combinations available, such as 33%/38%. mortgages available in your area. How We Calculate Your Home Value. First, we calculate how much money you can borrow based on your income and monthly debt. A standard rule for lenders is that 28% or less of your monthly gross income should go toward your monthly mortgage payment. This maximum mortgage calculator collects these important variables and determines the maximum monthly housing payment and the resulting mortgage amount. Use the home affordability calculator to help you estimate how much home you can afford It does not reflect fees or any other charges associated with the loan. To calculate "how much house can I afford," one rule of thumb is the 28/36 rule, which states that you shouldn't spend more than 28% of your gross monthly. How much money do you make each year? Rule of thumb says that your monthly home loan payment shouldn't total more than 28% of your gross monthly income. Gross. Lenders usually require housing expenses plus long-term debt to less than or equal to 33% or 36% of monthly gross income. A general guideline for the mortgage you can afford is % to % of your gross annual income. However, the specific amount you can afford to borrow depends. Use our online mortgage calculator to get an indication of the maximum amount you could borrow based on your income today. FHA's floor of $, is set at 65% of the national conforming loan limit of $, This limit differs based on county and the amount you enter may. How Much Can You Borrow? 
· You may qualify for a loan amount ranging from $, (conservative) to $, (aggressive) · Related Resources. How lenders assess what you can afford. Mortgage lenders base their decisions on what's known as the loan-to-income ratio – the amount you want to borrow. This rule asserts that you do not want to spend more than 28% of your monthly income on housing-related expenses and not spend more than 36% of your income. This range will help you figure out what you can afford and also helps lenders determine your approval status for a mortgage loan. A DTI score of 36% or less is. How much of a down payment do you need? To get the best mortgage interest rates and terms, you'll want a down payment amounting to 20% of a home's sale price. How much house can I afford? Buying a home is a major commitment and many factors determine what a mortgage lender is willing to offer you. loan amount you. How much house can you afford? Use our affordability calculator to estimate An FHA loan will come with mandatory mortgage insurance for the life of the loan. One influential factor in determining the amount of money you can borrow on a home loan is your debt-to-income (DTI) ratio. It is recommended that your DTI.
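The 28/36 rule that the page keeps returning to is simple to compute (a sketch; the function name and the example income figures are illustrative):

```python
def max_housing_payment(gross_monthly_income, monthly_debts,
                        front_ratio=0.28, back_ratio=0.36):
    """Largest monthly housing payment allowed by the 28/36 rule:
    housing <= 28% of gross monthly income, and housing plus other
    debt payments <= 36% of gross monthly income."""
    front = front_ratio * gross_monthly_income
    back = back_ratio * gross_monthly_income - monthly_debts
    return min(front, back)

# Someone earning $10,000/month with $1,000/month of other debt payments:
print(round(max_housing_payment(10_000, 1_000)))  # 2600
```

Here the back-end (36%) limit binds: 28% of income allows $2,800, but housing plus the $1,000 of other debt must stay under $3,600.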
{"url":"https://indo877.site/categories/how-much-can-i-loan-for-mortgage.php","timestamp":"2024-11-10T21:31:53Z","content_type":"text/html","content_length":"12817","record_id":"<urn:uuid:a4af1546-e650-4edf-b663-d71eb23651dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00084.warc.gz"}
Heron's Formula Class 9 Maths Notes - Leverage Edu

What is Heron's formula? How can we use it in our daily life? These are the questions that arise when we talk about Chapter 12 of class 9 maths. Named after one of the famous mathematicians of all time, Heron's formula opens a door to a new and interesting concept of maths. As Heron's formula is a vital topic as per the CBSE class 9 maths syllabus, it is important to master it if you want to pass class 9 with flying colours. So, let's get started and go through this informational blog about the topic.

What is Heron's formula?

Heron of Alexandria, a Greek mathematician, worked out this formula. He was widely praised for his proficiency in mathematics and is also known as Hero, which is why the formula is also called 'Heron's formula'. He compiled his procedures in three volumes, each covering formulas for a different mathematical field; Book I contains the Heron formulae, with area equations for different types of shapes.

Where Can We Use the Heron Formula?

With Heron's formula, you can calculate the area of a triangle whatever its side lengths. In the case of equilateral and isosceles triangles, finding the height of the triangle leads you to its area: we first derive the height h using the Pythagorean theorem, then compute

Area of a triangle = ½ × b × h

But what will we do if we cannot find the height of the triangle? For a scalene triangle (one whose three sides all have different lengths), the height is not easy to find. That's where Heron's formula comes into play. For example, a triangle with side lengths 10 m, 15 m, and 20 m is a scalene triangle.
Heron's Formula

In the Heron formula, we use the semi-perimeter to obtain the area. S stands for semi-perimeter, which is nothing but half of the perimeter (P/2). As we all know, the perimeter is the boundary or border of the shape: the total of all the side lengths. For a triangle with sides a, b and c, you can calculate the perimeter and area using the formulae:

Perimeter of the triangle, P = a + b + c
Semi-perimeter, S = P/2 = (a + b + c)/2
Area of the triangle, A = √(S(S − a)(S − b)(S − c)) m²

Example 1: A boy is crossing a triangular playground with side lengths 10 m, 5 m, and 6 m. He wishes to find the area of the ground. Here, we will use Heron's formula to calculate the playground's area.

Area of the triangle, A = √(S(S − a)(S − b)(S − c))

So, the semi-perimeter will be S = (a + b + c)/2 = (10 + 5 + 6)/2 = 10.5 m

Now, apply the values in Heron's formula:
S − a = 10.5 − 10 = 0.5
S − b = 10.5 − 5 = 5.5
S − c = 10.5 − 6 = 4.5
A = √(S(S − a)(S − b)(S − c)) = √(10.5 × 0.5 × 5.5 × 4.5) m² = √129.94
Area of the triangle, A ≈ 11.4 m²

This is how we can derive the area of an irregular triangle using Heron's formula.

Example 2: Now, let's see another example using Heron's formula. Rohit has decided to calculate the cost of the tiles for his bedroom, which has a triangular floor. The side lengths of the room are 5 m, 4 m, and 7 m, and the cost of a single 2 m × 2 m tile is Rs. 10. Find the total area and its cost. The problem involves the same set of steps that we used in the previous example.
First, find the perimeter: P = 5 m + 4 m + 7 m = 16 m
Now, find the semi-perimeter: S = P/2 = 16/2 = 8 m

Substituting the values in Heron's formula:
A = √(S(S − a)(S − b)(S − c))
Area of the triangle = √(8 × (8 − 5) × (8 − 4) × (8 − 7)) m² = √(8 × 3 × 4 × 1) m² = √96
A ≈ 9.8 m²

Area of a single tile = 2 × 2 = 4 m²
Number of tiles required for the room = Area of the triangle ÷ Area of a single tile = 9.8 m² ÷ 4 m² ≈ 2.45, so 3 tiles are needed.
Having the number of tiles, we can now calculate the cost of tiles for the bedroom:
Cost = 3 × 10 = Rs. 30

Practice Questions

Now that you are clear with the concept of Heron's formula, here are some questions for you to practice.
• Assume you have a garden in the form of a triangle. Calculate the cost of the fence needed if the side lengths of your garden are 15 m, 12 m, and 10 m and the cost of a single metre of barbed wire is Rs. 50. Calculate the area of the triangle and the cost of fencing.
• The sides of a triangular field are in the ratio of 4:5:6. Find the area of the field if its perimeter is 300 m.
• Find the area of a triangle having a perimeter of 32 cm, one side equal to 11 cm, and a difference of 5 cm between the other two sides.
• The perimeter of a rhombus is 100 m and its diagonal is 40 m. Find the area of the rhombus.
• Two parallel sides of a trapezium are 60 cm and 77 cm and the other sides are 25 cm and 26 cm. Find the area of the trapezium.
• The perimeter of a triangular field is 135 cm and its sides are in the ratio 25:17:12. Find its area.
• The perimeter of a square is (4x + 20) cm. What will be the length of its diagonal?
• If the area of a regular hexagon is 24√3 cm², find its perimeter.
• Two sides of a triangular field are 112 m and 50 m. Find the length of the altitude on the side of 78 m length if the perimeter of the triangle is 240 m.
• Find the length of the hypotenuse of an isosceles right-angled triangle if its area is 50 m².
• The perpendicular distance between the parallel sides of a trapezium is 8 cm. Find the area of the trapezium if the lengths of its parallel sides are 9 cm and 11 cm.

We hope that this blog about Heron's formula has helped you in preparing for your class 9 exams. For expert advice on important career decisions, reach out to our Leverage Edu experts.
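For readers who want to check their practice answers, Heron's formula translates directly into a few lines of code (a quick sketch; the 3-4-5 right triangle, whose area is exactly 6, makes a handy sanity check):

```python
import math

def heron_area(a: float, b: float, c: float) -> float:
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))               # 6.0 (right triangle: ½ × 3 × 4)
print(round(heron_area(10, 15, 20), 1))  # the scalene example from the text
```

The same function works for any of the triangle questions above, whatever the side lengths.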
{"url":"https://leverageedu.com/blog/herons-formula/","timestamp":"2024-11-06T04:12:08Z","content_type":"text/html","content_length":"331972","record_id":"<urn:uuid:641c3159-32f6-409e-860c-233e47ab983d>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00445.warc.gz"}
SPPU CGPA to Percentage Calculator - Cgpa Calculator

At SPPU, the CGPA system is used for evaluating students' performance in various undergraduate and postgraduate courses. Each subject is graded, and these grades are then converted into grade points. The CGPA is calculated by taking the average of these grade points over all subjects.

Key Features of the SPPU CGPA System
1. Scale: The CGPA is calculated on a 10-point scale.
2. Weightage: Different subjects might have different weightage based on their credit points.
3. Exclusions: Certain additional courses or non-credit subjects may not be included in the CGPA calculation.

Conversion Formula: SPPU CGPA to Percentage

SPPU has a specific formula for converting CGPA to percentage. The formula used is:

Percentage = (CGPA − 0.75) × 10

This formula standardizes the conversion process across the university.

Why This Formula?

The subtraction of 0.75 from the CGPA before multiplying by 10 accounts for the grading scale used by SPPU, ensuring a more accurate reflection of a student's performance when converted to percentage.

Step-by-Step Conversion Process

Converting CGPA to percentage involves a simple calculation using the formula provided by SPPU. Here's how to do it:
1. Determine Your CGPA: Identify your CGPA from your academic records.
2. Apply the Conversion Formula: Use the formula Percentage = (CGPA − 0.75) × 10 to calculate the percentage.
3. Round Off (if needed): Depending on the requirements, you may need to round off the percentage to the nearest whole number or decimal place.

Example Calculation

Let's convert a CGPA of 8.5 to percentage using the SPPU formula:
1. Identify the CGPA: CGPA is 8.5.
2. Apply the Formula: Percentage = (8.5 − 0.75) × 10 = 7.75 × 10 = 77.5.
Therefore, a CGPA of 8.5 converts to 77.5%.

Detailed Example

Consider another example where the CGPA is 7.2:
1. Identify the CGPA: CGPA is 7.2.
2. Apply the Formula: Percentage = (7.2 − 0.75) × 10 = 6.45 × 10 = 64.5.
Thus, a CGPA of 7.2 converts to 64.5%.
Conversion Table

To simplify the conversion process, here is a conversion table for quick reference:

CGPA   Percentage
10.0   92.5%
9.9    91.5%
9.8    90.5%
9.7    89.5%
9.6    88.5%
9.5    87.5%
9.4    86.5%
9.3    85.5%
9.2    84.5%
9.1    83.5%
9.0    82.5%
8.9    81.5%
8.8    80.5%
8.7    79.5%
8.6    78.5%
8.5    77.5%
8.4    76.5%
8.3    75.5%
8.2    74.5%
8.1    73.5%
8.0    72.5%
7.9    71.5%
7.8    70.5%
7.7    69.5%
7.6    68.5%
7.5    67.5%
7.4    66.5%
7.3    65.5%
7.2    64.5%
7.1    63.5%
7.0    62.5%

Practical Applications of CGPA to Percentage Converter

Academic Admissions
Many higher education institutions, especially those outside of India, require applicants to submit their academic records in percentage format. Converting CGPA to percentage using the SPPU formula helps meet these requirements.

Job Applications
Some employers prefer or require academic records in percentage format. Converting your SPPU CGPA to percentage ensures that your qualifications are presented in a format that is easily understood by hiring managers.

Scholarships and Grants
Scholarship committees often use percentage scores to evaluate academic performance. Converting your CGPA to percentage can be essential for eligibility and selection processes.

Performance Analysis
Converting CGPA to percentage can help in analyzing and comparing academic performance across different grading systems, providing a clearer picture of a student's achievements.

Using an Online SPPU CGPA to Percentage Calculator

While manual conversion is straightforward, online calculators offer a quick and accurate alternative. These calculators require you to input your CGPA, and they automatically compute the percentage using the SPPU formula.

Steps to Use an Online Calculator
1. Enter Your CGPA: Input your CGPA score into the designated field.
2. Click on Calculate: The calculator will instantly provide the corresponding percentage.

Benefits of Online Calculators
• Accuracy: Online calculators minimize the risk of manual calculation errors.
• Time-Saving: They provide instant results, allowing for quick conversions.
• User-Friendly: Designed to be easy to use without requiring advanced mathematical skills.

Understanding the conversion from SPPU CGPA to percentage is essential for students and professionals navigating different academic and professional pathways. This guide has provided a detailed explanation of the SPPU CGPA system, the conversion formula, and practical applications. With the help of an online CGPA to percentage calculator, the conversion process becomes even more straightforward, ensuring accuracy and saving time.
{"url":"https://cgpacalc.com/sppu-cgpa-to-percentage-calculator/","timestamp":"2024-11-04T18:08:38Z","content_type":"text/html","content_length":"61536","record_id":"<urn:uuid:0c18bc9e-6f63-4fcb-86bd-75376148cc32>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00065.warc.gz"}
Number Theory and Automorphic Forms

Research topics include:
• Analytic number theory
• Both number-field and function-field cases
• Arithmetic algebraic geometry, especially of Shimura varieties and moduli spaces
• Automorphic forms in many different guises
• Combinatorial representation theory
• Analytical representation theory
• Harmonic analysis applied to number theory
• Random matrix theory
• Applications of mathematical physics ideas to number theory

Seminars:
• A student seminar
• The Geometric Langlands seminar
• A Lie theory seminar
• A number theory and automorphic forms seminar

Faculty:
[email protected]: automorphic forms, p-adic representations, combinatorial representation theory, statistical lattice models
[email protected]: automorphic forms, L-functions, representations, harmonic analysis, number theory
[email protected]: automorphic forms, L-functions, number theory, harmonic analysis, representation theory
[email protected]: number theory, automorphic forms, Shimura varieties and related topics in arithmetic geometry
Professor Emeritus, [email protected]: p-adic Galois representations associated with algebraic varieties via étale cohomology, the connections between the latter and de Rham cohomology
[email protected]: computational complexity, cryptography, number theory, combinatorics, coding theory, analysis, probability theory, e-commerce, and economics of data networks
{"url":"https://cse.umn.edu/math/number-theory-and-automorphic-forms","timestamp":"2024-11-05T23:12:32Z","content_type":"text/html","content_length":"99924","record_id":"<urn:uuid:0cbbd2ee-768a-47bf-a806-c918b45aa7a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00734.warc.gz"}
Contents: Introduction · The CHM15k-Nimbus ceilometer · Method · Physical basis · Outline of the algorithm · Results · Case study: 16 June 2014 · Long-term variability · Effect of the internal temperature · Effect of the overlap correction on edge detection · Case study: 15 July 2014 · Long-term variability · Summary and conclusions · Algorithm details · Determination of the fitting intervals · Quality check of the fits and determination of a set of overlap correction candidates · Quality check of the overlap correction candidates · Final selection · Acknowledgements · References

The lidar equation relates received power per pulse, $P$, as a function of range, $r$, and time, $t$, to instrumental and atmospheric parameters as follows:

$$P(r,t)=\frac{1}{r^2}\,C_L(t)\,C_{\mathrm{CHM}}(t)\,O(r,t)\,\beta(r,t)\,e^{-2\int_0^r \alpha(r',t)\,\mathrm{d}r'}+B(t). \quad (1)$$

$C_L$ is the time-dependent calibration factor, and $C_{\mathrm{CHM}}$ is a factor accounting for variations in the sensitivity of the receiver. $C_{\mathrm{CHM}}$ is the product of the variables "p_calc" and "scaling" provided by the manufacturer. $\alpha$ and $\beta$ are the extinction and backscatter coefficient, respectively, and $B$ is the background normalized by the number of laser pulses. $O(r,t)$ is the range- and time-dependent overlap function, which can be expressed with a temporally constant overlap function provided by the manufacturer, $O_{\mathrm{CHM}}(r)$, and a correction function, $f_c(r,t)$, as follows:

$$O(r,t)=O_{\mathrm{CHM}}(r)/f_c(r,t). \quad (2)$$

The standard instrument output, $\beta_{\mathrm{raw}}$ (variable "beta_raw" provided by the manufacturer), is the normalized and background-, range- and overlap-corrected signal defined as

$$\beta_{\mathrm{raw}}(r,t)=\frac{\left(P(r,t)-B(t)\right)r^2}{C_{\mathrm{CHM}}(t)\,O_{\mathrm{CHM}}(r)}. \quad (3)$$

We define the corrected instrument output as

$$\beta_{\mathrm{corrected}}(r,t)=\beta_{\mathrm{raw}}(r,t)\,f_c(r,t), \quad (4)$$

which is proportional to the attenuated backscatter coefficient, defined as

$$\beta_{\mathrm{att}}(r,t)=\beta(r,t)\,e^{-2\int_0^r \alpha(r',t)\,\mathrm{d}r'}. \quad (5)$$

The factor of proportionality is the calibration factor, as can be shown using Eqs. (1) and (4).
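The quantities in Eq. (3) can be illustrated with a toy profile. All numbers below, including the power and overlap models, are invented for illustration; they are not CHM15k data or the instrument's actual functions:

```python
import math

C_CHM = 1.5   # receiver sensitivity factor (made-up value)
B = 2e-7      # background per pulse (made-up value)

def P(r):
    """Fake received power: an attenuated 1/r^2 return plus background."""
    return 1e-3 * math.exp(-r / 4000.0) / r**2 + B

def O_CHM(r):
    """Crude stand-in for the manufacturer overlap function (ramp to 1)."""
    return min(r / 1500.0, 1.0)

def beta_raw(r):
    """Eq. (3): background-, range- and overlap-corrected signal."""
    return (P(r) - B) * r**2 / (C_CHM * O_CHM(r))

for r in (150.0, 1500.0, 6000.0):
    print(r, beta_raw(r))
```

With these choices the background and the 1/r² falloff cancel out of beta_raw, leaving only the (fake) attenuated return divided by the overlap model.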
The algorithm to calculate the correction function $f_c(r,t)$ is based on two main assumptions:

1. The aerosol extinction and backscatter coefficients are constant in a range interval $[0,R]$ and during the time period of observation (assumption of a homogeneous atmosphere).
2. The overlap function is known with low uncertainty in the range interval $[R_{\mathrm{OK}},\infty]$, with $R_{\mathrm{OK}}\le R$.

Under these assumptions, the aerosol lidar ratio (also defined in the literature as extinction-to-backscatter ratio) is constant in the range $[0,R]$. The aerosol backscatter coefficient ($\beta_p$) is therefore proportional to the aerosol extinction coefficient ($\alpha_p$) in the considered range. The molecular backscatter and extinction coefficients, respectively $\beta_m$ and $\alpha_m$, depend on atmospheric density and vary with range. In the range $[0,R]$, Eqs. (1) to (3) can be written as follows, with time dependence neglected for clarity:

$$\log\beta_{\mathrm{raw}}(r)+\log f_c(r)=\log C_L+\log\beta_p-2\alpha_p r+\log\left(1+\frac{\beta_m(r)}{\beta_p}\right)-2\int_0^r\alpha_m(r')\,\mathrm{d}r'. \quad (6)$$

Using the aerosol lidar ratio $L$ and a molecular lidar ratio equal to $\frac{8\pi}{3}$, Eq. (6) can be rewritten as follows:

$$\log\beta_{\mathrm{raw}}(r)+\log f_c(r)=\log C_L+\log\frac{\alpha_p}{L}-2\alpha_p r+\underbrace{\log\left(1+\frac{3L\,\alpha_m(r)}{8\pi\,\alpha_p}\right)}_{A_1(r)}\underbrace{-2\int_0^r\alpha_m(r')\,\mathrm{d}r'}_{A_2(r)}. \quad (7)$$

[Fig. 1] Left panel: logarithm of the absolute value of the range-corrected signal measured at Payerne on 15 July 2014 from 00:25 to 01:20. The red line represents the linear fit performed between the two black dashed lines. Right panel: corresponding correction function.

[Figure] CHM15k measurements at Payerne for 16 June 2014. (a, c): logarithm of the range-corrected signal. (b, d): gradient of the range-corrected signal; (a) and (b): without correction, and (c, d): with overlap correction. The reference zones from which the overlap correction was calculated are circled with black dashed lines.

[Figure] Overlap functions for 16 June 2014. The thick black line is the median overlap function for this day. The dashed line represents the overlap function provided by the manufacturer.

[Figure] Success rate of the algorithm for 2 years of data.
For a standard atmosphere and at a wavelength of 1064 nm, assuming a lidar ratio between 20 and 120 sr and a particle extinction coefficient between 0 and 100 Mm$^{-1}$, the fifth term ($A_2$) is of the order of 0.01 % of the total signal. $A_2$ is neglected for the rest of the calculations. Noting that the fourth term ($A_1$) is close to a straight line, the right-hand side of Eq. (7) forms itself, in good approximation, into a straight line:

$$\log\beta_{\mathrm{raw}}(r)+\log f_c(r)=A+Br \quad \forall\, r\in[0,R]. \quad (8)$$

Assuming further that $O_{\mathrm{CHM}}(r)$ is correct in the range $[R_{\mathrm{OK}},R]$, i.e. $\log f_c(r)=0\ \forall\, r\in[R_{\mathrm{OK}},R]$, the coefficients $A$ and $B$ are obtained from fitting Eq. (8) to the data in this same range. The correction function in the range $[0,R]$ is given by the difference between the fit (right-hand side of Eq. 8) and the data as follows:

$$f_c(r)=e^{-\left(\log\beta_{\mathrm{raw}}(r)-(A+Br)\right)} \quad \forall\, r\in[0,R]. \quad (9)$$

An example of fitting Eq. (8) to real data is presented in Fig. 1, left panel. The corresponding correction function $f_c$ is represented on the right panel.
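The fit in Eqs. (8)-(9) can be sketched numerically. The profile below is synthetic, and the interval bounds, noise level, and artefact shape are all made up for illustration; this is not the operational implementation:

```python
import math
import random

random.seed(0)

# Synthetic "homogeneous atmosphere" profile: log(beta_raw) is a straight
# line A + B*r, distorted by a fake overlap artefact at short range.
A_true, B_true = 2.0, -1.0e-4       # illustrative intercept / slope
R_OK, R = 300.0, 1200.0             # illustrative interval bounds [m]

r_grid = [15.0 * k for k in range(1, 81)]   # range gates 15 m ... 1200 m

def artefact(r):
    """Fake missing-overlap distortion below R_OK, zero above it."""
    return -0.5 * (1.0 - r / R_OK) if r < R_OK else 0.0

log_beta_raw = {r: A_true + B_true * r + artefact(r)
                   + random.gauss(0.0, 0.005) for r in r_grid}

# Fit Eq. (8) on [R_OK, R], where the manufacturer overlap is trusted,
# using ordinary least squares for the line A + B*r.
pts = [(r, log_beta_raw[r]) for r in r_grid if R_OK <= r <= R]
n = len(pts)
sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
B_fit = (n * sxy - sx * sy) / (n * sxx - sx * sx)
A_fit = (sy - B_fit * sx) / n

# Eq. (9): correction function from the fit residual, over [0, R].
f_c = {r: math.exp(-(log_beta_raw[r] - (A_fit + B_fit * r))) for r in r_grid}
```

As expected, f_c stays close to 1 in the trusted zone and rises above 1 at short range, where the artefact suppresses the signal.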
{"url":"https://amt.copernicus.org/articles/9/2947/2016/amt-9-2947-2016.xml","timestamp":"2024-11-02T15:20:01Z","content_type":"application/xml","content_length":"128201","record_id":"<urn:uuid:469ba625-f4c0-4f88-afd1-db9b23523bad>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00441.warc.gz"}
How do you solve e^x=2? | Socratic

How do you solve #e^x=2#?

1 Answer

Make use of the law of logarithms:

$\log x^n = n \log x$

Using this law allows us to obtain x as a multiplier.

First step: take ln of both sides.

$\ln e^x = \ln 2 \Rightarrow x \ln e = \ln 2 \Rightarrow x = \ln 2$, since $\ln e = 1$.
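A quick numerical check of the result with Python's standard math module:

```python
import math

x = math.log(2)      # natural logarithm, so e**x == 2
print(x)             # 0.6931471805599453
print(math.exp(x))   # recovers 2 (up to floating-point rounding)
```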
{"url":"https://socratic.org/questions/how-do-you-solve-e-x-2","timestamp":"2024-11-07T04:10:29Z","content_type":"text/html","content_length":"32429","record_id":"<urn:uuid:4e0f84ef-8525-4fd0-8e1c-83cdb871ba8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00287.warc.gz"}
Bayesian Sample Size Determination for Two Group Models

For two group models (i.e., treatment and control group with no covariates), we denote the parameter for the treatment group by \(\mu_t\) and the parameter for the control group by \(\mu_c\). The default null and alternative hypotheses are given by \[H_0: \mu_t - \mu_c \ge \delta\] and \[H_1: \mu_t - \mu_c < \delta,\] where \(\delta\) is a prespecified constant. We use the following definition of Bayesian power / type I error rate.

1. Two Group Cases with Binary Outcomes

We first demonstrate a model for binary outcomes for treatment and control groups with no covariates. We consider the non-inferiority design application of Chen et al. (2011). The goal was to design a trial to evaluate a new generation of drug-eluting stent (DES) ("test device") with the first generation of DES ("control device"). The primary endpoint is the 12-month Target Lesion Failure (TLF) (binary). Historical information can be borrowed from two previously conducted trials involving the first generation of DES. The table below summarizes the historical data.

Summary of the historical data:

                     % TLF (# of failures / sample size)
Historical Trial 1   8.2% (44/535)
Historical Trial 2   10.9% (33/304)

Let \(\textbf{y}_t^{(n_t)}=(y_{t1},\cdots, y_{tn_t})\) and \(\textbf{y}_c^{(n_c)}=(y_{c1},\cdots, y_{cn_c})\) denote the responses from the current trial for the test device and the control device, respectively. The total sample size is \(n=n_t+n_c\). We assume the \(i\)-th observation from the test group \(y_{ti}\) follows Bern(\(\mu_t\)), and the \(i\)-th observation from the control group \(y_{ci}\) follows Bern(\(\mu_c\)). We will illustrate Bayesian sample size determination (SSD) incorporating historical data using the power prior with fixed \(a_0\) and the normalized power prior with \(a_0\) modeled as random.
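As a quick arithmetic check (in Python, outside the R vignette), the pooled control proportion across the two historical trials, which is used below as the point-mass sampling prior for \(\mu_c\), works out to about 9.2%:

```python
# Pooled TLF proportion across the two historical control trials
# (Historical Trial 1: 44/535, Historical Trial 2: 33/304).
failures = 44 + 33
n_hist = 535 + 304
pooled = failures / n_hist
print(round(100 * pooled, 1))  # 9.2
```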
The hypotheses for non-inferiority testing are \[H_0: \mu_t - \mu_c \ge \delta\] and \[H_1: \mu_t - \mu_c < \delta,\] where \(\delta\) is a prespecified non-inferiority margin. We set \(\delta=4.1\%\).

We choose beta\((10^{-4}, 10^{-4})\) for the initial prior for \(\mu_c\), which performs similarly to the uniform improper initial prior for \(\log\left(\frac{\mu_c}{1-\mu_c}\right)\) used in Chen et al. (2011) in terms of operating characteristics. Power is computed under the assumption that \(\mu_t=\mu_c\), and the type I error rate is computed under the assumption that \(\mu_t=\mu_c+\delta\). For sampling priors, a point mass prior at \(\mu_c = 9.2\%\) is used for \(\pi^{(s)}(\mu_c)\), where \(9.2\%\) is the pooled proportion for the two historical control datasets, and a point mass prior at \(\mu_t = \mu_c\) is used for \(\pi^{(s)}(\mu_t)\). For all computations, we use \(N=10,000\), \(\frac{n_t}{n_c} = 3\), and \(\gamma=0.95\).

1.1 Power Prior with Fixed \(a_0\)

When \(a_0\) is fixed, the historical matrix is defined where each row represents a historical dataset, and the three columns represent the sum of responses, the sample size and \(a_0\), respectively, of the historical control data. We use \(a_{01}=a_{02}=0.3\).

historical <- matrix(0, ncol=3, nrow=2)
historical[1,] <- c(44, 535, 0.3)
historical[2,] <- c(33, 304, 0.3)

We consider \(n_t\) values ranging from \(600\) to \(1000\) to achieve the desired power of \(0.8\). Since point mass sampling priors are used for \(\mu_t\) and \(\mu_c\), samp.prior.mu.t and samp.prior.mu.c are both scalars. For Bernoulli outcomes, beta initial priors are used for \(\mu_t\) and \(\mu_c\), with hyperparameters specified by prior.mu.t.shape1, prior.mu.t.shape2, prior.mu.c.shape1 and prior.mu.c.shape2.
n.t_vals <- seq(from=600, to=1000, by=50)
powers <- NULL
for(i in 1:length(n.t_vals)){
  n.t <- n.t_vals[i]
  results <- power.two.grp.fixed.a0(data.type="Bernoulli", n.t=n.t, n.c=round(n.t/3),
                                    historical=historical,
                                    samp.prior.mu.t=0.092, samp.prior.mu.c=0.092,
                                    prior.mu.t.shape1=0.0001, prior.mu.t.shape2=0.0001,
                                    delta=0.041, N=10000)
  power <- results$`power/type I error`
  powers <- c(powers, power)
}
powers
#> [1] 0.7819 0.8112 0.8220 0.8383 0.8588 0.8763 0.8865 0.8922 0.9084

We can see that a sample size of \(650\) is required to achieve a power of at least \(0.8\). A power curve is plotted below for sample sizes ranging from \(600\) to \(1000\).

library(ggplot2)
df <- data.frame(sample_size=n.t_vals, power=powers)
ggplot(data=df, aes(x=sample_size, y=powers)) +
  geom_smooth(method = lm, formula = y ~ x, se = FALSE) +
  geom_point() +
  xlab("Sample Size") +
  ylab("Power")

We then compute the type I error rate for these sample sizes. Since the type I error rate is computed under the assumption that \(\mu_t=\mu_c+\delta\), we use a point mass at \(\mu_c = 9.2\%\) for the sampling prior for \(\mu_c\), and a point mass at \(\mu_t = 9.2\% + 4.1\%\) for the sampling prior for \(\mu_t\). The following type I error rate calculations match the results given in Table 2 of Chen et al. (2011).

TIEs <- NULL
for(i in 1:length(n.t_vals)){
  n.t <- n.t_vals[i]
  results <- power.two.grp.fixed.a0(data.type="Bernoulli", n.t=n.t, n.c=round(n.t/3),
                                    historical=historical,
                                    samp.prior.mu.t=0.092+0.041, samp.prior.mu.c=0.092,
                                    prior.mu.t.shape1=0.0001, prior.mu.t.shape2=0.0001,
                                    delta=0.041, N=10000)
  TIE <- results$`power/type I error`
  TIEs <- c(TIEs, TIE)
}
TIEs
#> [1] 0.0275 0.0299 0.0310 0.0290 0.0307 0.0313 0.0295 0.0300 0.0316

1.2 Normalized Power Prior (\(a_0\) Modeled as Random)

When \(a_0\) is modeled as random, the normalized power prior is used and the priors for \(a_{01}\) and \(a_{02}\) are beta(1,1), as in Chen et al. (2011). We run 10,000 iterations of the slice sampler.
We use the default settings for the upper limits, lower limits and slice widths for \(a_{01}\) and \(a_{02}\). The same initial priors and sampling priors are used as in the fixed \(a_0\) case.

When \(a_0\) is modeled as random, the historical matrix is defined where each row represents a historical dataset, and the two columns represent the sum of the responses and the sample size, respectively.

The code below computes the power when \(n_t=750\).

n.t <- 750
results <- power.two.grp.random.a0(data.type="Bernoulli", n.t=n.t, n.c=round(n.t/3),
                                   historical=historical,
                                   samp.prior.mu.t=0.092, samp.prior.mu.c=0.092,
                                   prior.mu.t.shape1=0.0001, prior.mu.t.shape2=0.0001,
                                   delta=0.041, gamma=0.95, nMC=10000, nBI=250, N=10000)

2. Two Group Cases with Normally Distributed Outcomes

We now demonstrate a model for normally distributed outcomes for treatment and control groups with no covariates. We use simulated data for this example. We assume the \(i\)-th observation from the treatment group \(y_{ti}\) follows N(\(\mu_t\), \(\tau^{-1}\)) and the \(i\)-th observation from the control group \(y_{ci}\) follows N(\(\mu_c\), \(\tau^{-1}\)), where \(\tau\) is the precision parameter for the current data. The null hypothesis is \(H_0: \mu_t - \mu_c \ge \delta\). We set \(\delta=0\). We assume the treatment group sample size (\(n_t\)) and the control group sample size (\(n_c\)) are both \(100\).

2.1 Power Prior with Fixed \(a_0\)

First, we assume \(a_0\) is fixed. We simulate three historical datasets. For normally distributed data, the historical matrix is defined where each row represents a historical dataset, and the four columns represent the sum of the responses, the sample size, the sample variance and \(a_0\), respectively.
data.type <- "Normal"
n.t <- 100
n.c <- 100
# Simulate three historical datasets
K <- 3
historical <- matrix(0, ncol=4, nrow=K)
# The columns are the sum of the responses, the sample size, the sample variance and a_0
historical[1,] <- c(50, 50, 1, 0.3)
historical[2,] <- c(30, 50, 1, 0.5)
historical[3,] <- c(20, 50, 1, 0.7)

To calculate power, we can provide the sampling prior of \(\mu_t\) and \(\mu_c\) such that the mass of \(\mu_t - \mu_c < 0\). We generate the sampling prior for the variance parameter from a Gamma(1, 1) distribution.

# Generate sampling priors
samp.prior.mu.t <- rnorm(50000)
samp.prior.mu.c <- rnorm(50000)
sub_ind <- which(samp.prior.mu.t < samp.prior.mu.c)
# Here, mass is put on the alternative region, so power is calculated.
samp.prior.mu.t <- samp.prior.mu.t[sub_ind]
samp.prior.mu.c <- samp.prior.mu.c[sub_ind]
samp.prior.var.t <- rgamma(100, 1, 1)
samp.prior.var.c <- rgamma(100, 1, 1)

We run \(10,000\) iterations of the Gibbs sampler for \(N=100\) simulated datasets. Note that \(N\) should be larger in practice.

results <- power.two.grp.fixed.a0(data.type=data.type, n.t=n.t, n.c=n.t,
                                  historical=historical,
                                  samp.prior.mu.t=samp.prior.mu.t, samp.prior.mu.c=samp.prior.mu.c,
                                  samp.prior.var.t=samp.prior.var.t, samp.prior.var.c=samp.prior.var.c,
                                  delta=0, nMC=10000, nBI=250, N=100)
#>      average posterior mean   bias
#> mu_t                 -0.580 -0.007
#> mu_c                  0.535 -0.026
#> The power/type I error rate is 0.79 .
#> The average of the posterior probabilities of P(mu_t - mu_c < delta) is 0.842 .

Next, to calculate type I error, we can provide the sampling prior of \(\mu_t\) and \(\mu_c\) such that the mass of \(\mu_t - \mu_c \ge 0\).

# Generate sampling priors
samp.prior.mu.t <- rnorm(50000)
samp.prior.mu.c <- rnorm(50000)
sub_ind <- which(samp.prior.mu.t >= samp.prior.mu.c)
# Here, mass is put on the null region, so type I error rate is calculated.
samp.prior.mu.t <- samp.prior.mu.t[sub_ind]
samp.prior.mu.c <- samp.prior.mu.c[sub_ind]

results <- power.two.grp.fixed.a0(data.type=data.type, n.t=n.t, n.c=n.t,
                                  historical=historical,
                                  samp.prior.mu.t=samp.prior.mu.t, samp.prior.mu.c=samp.prior.mu.c,
                                  samp.prior.var.t=samp.prior.var.t, samp.prior.var.c=samp.prior.var.c,
                                  delta=0, nMC=10000, nBI=250, N=100)
#>      average posterior mean   bias
#> mu_t                  0.488 -0.079
#> mu_c                 -0.454  0.111
#> The power/type I error rate is 0.14 .
#> The average of the posterior probabilities of P(mu_t - mu_c < delta) is 0.186 .

2.2 Normalized Power Prior (\(a_0\) Modeled as Random)

Next, we model \(a_0\) as random with the normalized power prior. We simulate three historical datasets. Here, the historical matrix is defined where each row represents a historical dataset, and the three columns represent the sum of the responses, the sample size, and the sample variance, respectively.

data.type <- "Normal"
n.t <- 100
n.c <- 100
# Simulate three historical datasets
K <- 3
historical <- matrix(0, ncol=3, nrow=K)
# The columns are the sum of the responses, the sample size, and the sample variance
historical[1,] <- c(50, 50, 1)
historical[2,] <- c(30, 50, 1)
historical[3,] <- c(20, 50, 1)

To calculate power, we can provide the sampling prior of \(\mu_t\) and \(\mu_c\) such that the mass of \(\mu_t - \mu_c < 0\). We generate the sampling prior for the variance parameter from a Gamma(1, 1) distribution.

# Generate sampling priors
samp.prior.mu.t <- rnorm(50000)
samp.prior.mu.c <- rnorm(50000)
sub_ind <- which(samp.prior.mu.t < samp.prior.mu.c)
# Here, mass is put on the alternative region, so power is calculated.
samp.prior.mu.t <- samp.prior.mu.t[sub_ind]
samp.prior.mu.c <- samp.prior.mu.c[sub_ind]
samp.prior.var.t <- rgamma(100, 1, 1)
samp.prior.var.c <- rgamma(100, 1, 1)

We use the default prior on \(a_0\), the uniform prior. The average posterior means of \(a_0\) and \(\tau\) are also returned below.
We run \(10,000\) iterations of the Gibbs sampler (for \(\mu_c\)) and the slice sampler (for \(a_0\)) for \(N=100\) simulated datasets. Note that \(N\) should be larger in practice.

results <- power.two.grp.random.a0(data.type=data.type, n.t=n.t, n.c=n.t,
                                   historical=historical,
                                   samp.prior.mu.t=samp.prior.mu.t, samp.prior.mu.c=samp.prior.mu.c,
                                   samp.prior.var.t=samp.prior.var.t, samp.prior.var.c=samp.prior.var.c,
                                   delta=0, nMC=10000, nBI=250, N=100)
#>      average posterior mean   bias
#> mu_t                 -0.451  0.122
#> mu_c                  0.574  0.013
#> The power/type I error rate is 0.78 .
#> The average of the posterior probabilities of P(mu_t - mu_c < delta) is 0.865 .

results$`average posterior means of a0`
#>           [,1]
#> [1,] 0.4217424
#> [2,] 0.4130283
#> [3,] 0.4330019

results$`average posterior mean of tau`
#> [1] 1.121705

Next, to calculate type I error, we can provide the sampling prior of \(\mu_t\) and \(\mu_c\) such that the mass of \(\mu_t - \mu_c \ge 0\).

# Generate sampling priors
samp.prior.mu.t <- rnorm(50000)
samp.prior.mu.c <- rnorm(50000)
sub_ind <- which(samp.prior.mu.t >= samp.prior.mu.c)
# Here, mass is put on the null region, so type I error rate is calculated.
samp.prior.mu.t <- samp.prior.mu.t[sub_ind]
samp.prior.mu.c <- samp.prior.mu.c[sub_ind]

results <- power.two.grp.random.a0(data.type=data.type, n.t=n.t, n.c=n.t,
                                   historical=historical,
                                   samp.prior.mu.t=samp.prior.mu.t, samp.prior.mu.c=samp.prior.mu.c,
                                   samp.prior.var.t=samp.prior.var.t, samp.prior.var.c=samp.prior.var.c,
                                   delta=0, nMC=10000, nBI=250, N=100)
#>      average posterior mean   bias
#> mu_t                  0.531 -0.037
#> mu_c                 -0.235  0.330
#> The power/type I error rate is 0.16 .
#> The average of the posterior probabilities of P(mu_t - mu_c < delta) is 0.207 .
{"url":"https://cran.mirror.garr.it/CRAN/web/packages/BayesPPD/vignettes/bayesppd-vignette.html","timestamp":"2024-11-14T12:04:01Z","content_type":"text/html","content_length":"83307","record_id":"<urn:uuid:3bec9d9d-4b58-431f-bf9e-c862d87b2c62>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00402.warc.gz"}