Infinitely long waves, equal everywhere and forever the same

Quotes from "FTL? (Faster Than Light)" by Chiao, Kwiat and Steinberg, Scientific American, August 1993, page 43.

"Infinitely long waves, equal everywhere and forever the same. A wave plus another wave with a slightly larger frequency and a wave with a slightly smaller frequency give a pulse-like object. Adding sufficient frequencies yields a real pulse or wave packet. A larger spread of added frequencies yields a shorter pulse; a smaller spread gives a longer pulse."

This is the bandwidth-duration tradeoff Δf · Δt ≳ 1; with E = hf it becomes the uncertainty relation ΔE · Δt ≳ h, where f is the frequency, Δt the pulse duration, and h Planck's constant.

"Usually the blue pulse speed is a little slower than the red pulse speed, causing dispersion: an originally white pulse changes into a rainbow-like longer pulse, the reds a little ahead, the blues a little behind."

"Every kind of glass has dispersion; beam splitters and mirrors suffer from it too. When a photon passes through glass 2.5 cm thick, its pulse width becomes 4 times as large."

One such wave, infinitely long, equal everywhere and forever the same, corresponds to a photon of infinitely precise frequency and thus infinitely uncertain place and time (the uncertainty relation). It is a photon, and so it has an existence of its own, just like ordinary photons. Call this a field frequency: one frequency out of the field of all possible frequencies.

Imagine a real particle, e.g. an electron, existing in one definite state. Observation of a particle always proceeds by means of another particle; there is simply no other way. Take our electron to be observed by a photon. Take the photon as a superposition of an infinite number of nearly equal field frequencies drawn from the field of all possible frequencies. The phase of each frequency is taken identical at one specific location. That location is then the maximum of the pulse's amplitude and is the location of the photon. Subsequently the electron absorbs the photon. The electron is observed simultaneously by a herd of nearly equal photons, resulting in a similar herd of observed electron states. The electron then exists in a superposition of a quite narrow group of nearly identical states, governed by the Heisenberg uncertainty relations, just as the original photon already was.

When the electron then emits a photon, it is this herd of electrons in superposition that all, at the same moment and at the same place, emit one specific frequency out of the field of all possible frequencies, one such infinitely long wave, equal everywhere and forever the same. The phase of all emitted photons is identical at the moment of emission. The electrons in superposition were nearly identical, and after the emission they are identical. The electron's definite end state is the result of all possible ways the electron could have arrived there. The electron is back in one definite state, and the photon is a herd of nearly identical frequencies again.

One starts with a single-state electron and a herd of photons in superposition. Then the photon is absorbed and gone, and the electron splits into as many electrons as there were photon states. The herd of electron states each emit a photon. The electron states merge back into the single state the electron had at the start; simultaneously the herd-of-photons superposition is restored. Usually this process is described by the Heisenberg uncertainty relations. In the process there is no collapse of a wavefunction. This paragraph describes the coupling between a photon and an electric charge.
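The quoted superposition picture is easy to reproduce numerically. The following sketch (plain Python with NumPy, not part of the original article; the frequency spreads and time grid are arbitrary choices) adds many waves of nearly equal frequency, all in phase at t = 0, and estimates how the pulse duration shrinks as the frequency spread grows.

```python
import numpy as np

# Superpose many waves with nearly equal frequencies, in phase at t = 0.
# The sum is a localized pulse; a wider frequency spread gives a shorter
# pulse, a narrower spread a longer one (the Δf·Δt tradeoff quoted above).
t = np.linspace(-50.0, 50.0, 2001)   # arbitrary time units
f0 = 1.0                             # central frequency

def pulse(spread, n_waves=201):
    freqs = np.linspace(f0 - spread, f0 + spread, n_waves)
    return sum(np.cos(2 * np.pi * f * t) for f in freqs) / n_waves

wide = pulse(spread=0.2)     # large Δf -> short pulse
narrow = pulse(spread=0.02)  # small Δf -> long pulse

def duration(sig):
    """Rough pulse duration: span where |signal| exceeds half its maximum."""
    env = np.abs(sig)
    above = t[env > 0.5 * env.max()]
    return above.max() - above.min()

print(f"Δf = 0.2  -> Δt ≈ {duration(wide):.1f}")    # short pulse
print(f"Δf = 0.02 -> Δt ≈ {duration(narrow):.1f}")  # ~10x longer pulse
```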
There is one more peculiarity to view at page 2 of the storyline EXPERIMENTS ON THE COLLAPSE OF THE WAVEFUNCTION, which tries to take spin into account.

There are only two massless particles in the Standard Model: the photon and the gluon. They have zero net absorption from the Higgs field and thus no mass and no gravitational field. Aren't they entirely "outside of time" as well? (See the conjecture at page 6 of QCD.) Their entire world line? This seems to fit the following two points.

1) Because of special relativity, time stands still for the photon, and it will be so for the gluon as well. So we do not miss events happening at the photon: such events are not there. For the gluon we would miss all gluon-gluon couplings.

2) According to QED the photon does not exist as a localized particle. You cannot clip out a part of empty space, e.g. a cubic meter, with a photon in it and say, "Look, I caught a photon!" According to QED the photon travels all possible routes from source to goal, and when you clip out a part of space with the photon in it, only the most important contributions are in there, but not all the other contributions that visit every point of spacetime outside the clipped part. It would be convenient if all those contributions had a place "outside of time", where they could do their job without being hindered by the usual limitations like causality and the speed of light. (Feynman said "Particles are immoral there!") Is this a candidate for the ground beneath the QED picture?

One is tempted to suggest that renormalization in QED would work just as well if only the photons traveled all paths from source to goal and the electrons did not. But the two-slit experiment works with electrons instead of photons as well. So no, that cannot be, can it?

The field of all possible frequencies

The article by T. H. Boyer, The Classical Vacuum, Scientific American, August 1985, discusses the zero-point radiation field. The article makes clear that a Lorentz-invariant field of all possible frequencies has to be of intensity I
{"url":"http://leandraphysics.nl/sea3.html","timestamp":"2024-11-01T20:37:15Z","content_type":"text/html","content_length":"11942","record_id":"<urn:uuid:bf8b4985-ad02-4392-ac1b-3a2be77ef3ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00891.warc.gz"}
Toltén Bridge’s response under extreme conditions analysis through numerical models

This article presents the structural health analysis of a full-scale vehicular bridge, using a twin model calibrated with experimental information. The structure consists of concrete arches, built more than 80 years ago and reinforced in the 1990s with a steel structure. Different load combinations were evaluated in this model to determine the strength of the structure according to current design standards. Finally, it was found that several of its components do not meet current design requirements, leaving the structure vulnerable to seismic hazards and restricting its service under traffic loads.

1. Introduction

Bridges are a vital component of any transport network [1]. The proper performance of these structures provides a foundation on which the economy of a region can grow. However, these structures are continuously subjected to different types of dynamic loads, such as traffic, wind and earthquakes. These loads can produce excessive vibrations in the structure, causing damage to its components, limiting its service and making its users uncomfortable [2]. The control of structural vibrations is one of the main objectives of the design process; based on it, minimum material specifications, boundary conditions and the geometry of structural components are defined [3]. However, the actual behavior of the structure can differ significantly from the initial design, especially in older bridges. A very useful method to evaluate this behavior is Operational Modal Analysis (OMA), by which the dynamic properties of the structure are identified from its dynamic response [4]. Several algorithms have been developed in the last decades for implementation in OMA [5-9]. Yet this methodology has certain limitations, for example in the calculation of the mass and stiffness matrices of the structure. Although some studies use clustering methods for damage identification without the need for a numerical representation of the structure [10], the identification algorithms are usually complemented with a finite element model. In this way, the model can be calibrated so that its dynamic properties match those identified experimentally, leading to a high-fidelity digital representation of the structure [11]. This calibration process can be approached in different ways. One of them is shown in [12], where a sensitivity analysis is first performed to estimate the influence of some parameters of the model on the variation of its dynamic properties. This information is then used to calibrate the model iteratively, minimizing the difference between its dynamic properties and those identified experimentally. Once the numerical model has been calibrated, different types of analysis can be performed with this digital twin (Fig. 1), such as monitoring the structural health from changes in the dynamic properties or predicting the maximum stresses of the physical twin under new service conditions [13, 14]. For example, in [15, 16] a modal-observer approach was implemented in a calibrated model to estimate the time-history response of all degrees of freedom of the structure, based on the acquired time histories of only some of its degrees of freedom. With the information obtained, the maximum demand-capacity ratios of each element were then evaluated.
It is also worth highlighting the work done in [17], where Bayesian methods were implemented in a calibrated model to identify the location and magnitude of structural damage. Fig. 1. Digital twin and structural health analysis (adapted from [18]). In this article, the structural health of a vehicular bridge more than 80 years old is studied. For this purpose, a preliminary finite element model is built, based on structural drawings and visual inspections. The dynamic properties of the structure are then identified experimentally, and with this information the numerical model is calibrated. Finally, the digital twin of the structure is subjected to different load combinations to evaluate its behavior according to current design standards.

2. Case study of Toltén Bridge

The Toltén Bridge is composed of two structures, one called the Original Structure (Fig. 2), built in the 1930s, and a more recent one called the New Structure. The Original Structure is made up of ten open-arch spans, built in reinforced concrete, with a total length of 440 meters. It has two 7.9-meter-wide road lanes, one in each direction, and two 0.8-meter-wide sidewalks. The original deck consists of a reinforced concrete slab with joints between spans, which lies on reinforced concrete arches attached to concrete piers. Each span is formed by two parallel arches joined by eight braces. The slab is supported by 22 columns in each span, 18 of which rest on the arches and 4 on the piers at the ends of the span. The lower part of the slab is equipped with transversal beams that join the heads of the columns two by two, giving the slab transversal stiffness. Loss of stability due to scour at pier 5 (Fig. 3), compromising the stability of spans 2 and 5, forced rehabilitation works in 1993. Thus, a New Structure consisting of a concrete slab resting on four steel girders was executed over three consecutive spans (20.8+28.6+21.2 meters), partially replacing the Original Structure. The girders rest on piers formed by circular cross-section columns. Two transversal struts join the heads of the piers near the Original Structure, while the eight central piers are joined by beams, forming two pier caps of four adjacent piers each. In addition, under these pier caps there is lateral bracing formed by L-shaped and tubular profiles. The longitudinal beams are braced by L-profile trusses. Fig. 3. Scheme of the lateral view of both structures. The Original and New Structures do not share any element: the original substructure was disconnected from the deck in the section where the New Structure was built (Fig. 4), and there are transverse joints in the deck that decouple the displacements of the two structures. Therefore, it can be considered that the structures have no compatible degrees of freedom, work independently, and can be modeled separately. Fig. 4. Disconnection of the original substructure (left) and disconnection between the New Structure and the original elevation (right). According to the inspection carried out by the Ministerio de Obras Publicas of Chile in November 2016 (Fig. 5), the bridge is in poor condition and, with the new configuration, the support of the external arches is compromised. The bridge presents serious structural damage, with burst pillars and highly undermined footings, which may make its response unpredictable with respect to how it was designed.

3. Finite element model

A preliminary finite element model of the Original and New Structures of the Toltén Bridge was built.
The model accurately represents the individual element details and sizes, as is necessary for a successful condition assessment of the structures [19]. Girders, beams, trusses, columns and piers were modeled as BEAM elements, suitable for slender pieces. These are 3D elements based on Timoshenko beam theory, with six degrees of freedom at each node. The deck was modeled using SHELL elements, suitable for shell structures, which have four nodes with six degrees of freedom at each node. Due to the slab discontinuities between spans and its simple support on the abutments, slab displacements between spans are not coupled, and only the vertical displacements between the slab and abutments are coupled. The connections between the other concrete elements are modeled as completely rigid (Fig. 6). Fig. 5. Inspection of the Toltén Bridge by the Ministerio de Obras Publicas of Chile, in November 2016. Fig. 6. Finite element model of the Original Structure. The slab is also discontinuous between spans and is simply supported on the pier caps, so only vertical displacements between these elements are coupled. The concrete slab works compositely with the beams, so the 6 degrees of freedom between slab and beams have been coupled. The connections between the other elements have been modeled assuming they are completely rigid (Fig. 7 and Fig. 8). The New Structure has a deep pier-type foundation, so it is important to establish how the stiffness of the ground varies with depth in order to properly reproduce the response of the ground. In this case, the ground stiffness was calculated according to the Manual de Carreteras V3, using Eqs. (1) to (3), of which Eq. (1) is:

$G_c = 53\,K_2\sqrt{\bar{\sigma}_{vi}},$ (1)

where $G_c$ is the shear modulus of the ground for seismic excitations in tonf/m^2, $H$ the buried height of the structure in m, $K_2$ the shear coefficient, $K_{2max}$ the maximum shear coefficient, $k_{hi}$ the horizontal interaction spring constant at the center of the $i$-th layer in tonf/m^2, $Z_i^{*}$ the distance to the center of the $i$-th layer, measured from the roof level of the structure in m, and $\bar{\sigma}_{vi}$ the vertical effective stress in the $i$-th layer in tonf/m^2. With this formulation, the stiffness of the soil varies with depth and is obtained at any depth from the stiffness at the surface. This surface stiffness is unknown and is therefore a parameter to be determined in the calibration process. Fig. 7. Finite element model of the New Structure. Fig. 8. Spans 4 to 6 of the New Structure.
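As a rough illustration of how Eq. (1) produces depth-varying springs, the sketch below assigns each soil layer a shear modulus proportional to the square root of its vertical effective stress. The unit weight, K2 value and layer thickness are illustrative assumptions, not values from the paper, and the conversion from G_c to the horizontal spring constants (Eqs. (2)-(3)) is omitted since those equations are not reproduced above.

```python
import math

# Assumed illustrative parameters (not from the paper):
K2 = 0.5               # shear coefficient
GAMMA = 1.8            # soil unit weight, tonf/m^3
LAYER_THICKNESS = 1.0  # m, discretization of the foundation depth

def shear_modulus(sigma_v):
    """Eq. (1): shear modulus for seismic excitations, tonf/m^2."""
    return 53.0 * K2 * math.sqrt(sigma_v)

def layer_moduli(depth):
    """G_c evaluated at the center of each layer down to `depth` meters."""
    out = []
    z = LAYER_THICKNESS / 2.0
    while z < depth:
        sigma_v = GAMMA * z  # vertical effective stress at the layer center
        out.append((z, shear_modulus(sigma_v)))
        z += LAYER_THICKNESS
    return out

for z, g in layer_moduli(7.0):
    print(f"z = {z:4.1f} m   G_c = {g:7.1f} tonf/m^2")
```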
4. Data gathering and dynamic properties identification

The use of operational modal analysis for structural health assessment has been widely studied [20-29]. This method is based on measuring the response of the structure and, through signal processing and model fitting, finding the dynamic properties that best represent the acquired signals. In this case, accelerometers were used to measure ambient vibrations in the bridge, and with the identified properties the numerical model presented above was calibrated. A modal analysis was performed on the preliminary models to verify the location of the sensors. The modal coordinates of the model were obtained at the points where the sensors would be located, and it was verified that none of those locations coincided with a node of, at least, the first three vibration modes. Based on this, the sensors were installed at 1/4 and 3/4 of the spans of the Original Structure, and at the center of the spans of the New Structure (Fig. 9). The accelerometers installed in the Original Structure are AltIMU-10 v5 inertial sensors, while unidirectional analog MMA2241KEG accelerometers were installed in the New Structure. Both sensor types used a sampling frequency of 100 Hz. Fig. 10 shows two sensors installed on the deck and on a column of the Original Structure. Fig. 9. Sensor locations on the structures. Fig. 10. Sensors installed on the Original Structure. The responses of Toltén Viejo (the Original Structure, Fig. 11) and Toltén Nuevo (the New Structure, Fig. 12) to the passage of vehicles were measured. Subsequently, the PSDs of each accelerometer were obtained, so that the natural frequencies ($f_{exp}$) of each structure (Table 1) could be determined by Peak Picking. Fig. 11. Measured response of the Original Structure in the longitudinal, transverse and vertical directions.

Table 1. Natural frequencies identified for the Original Structure and the New Structure:

Structure            $f_{exp1}$ (Hz)   $f_{exp2}$ (Hz)   $f_{exp3}$ (Hz)
Original Structure   1.74              10.04             14.50
New Structure        9.24              14.00             17.60

Fig. 12. Measured response of the New Structure in the vertical direction.

5. Model calibration

The calibration procedure is based on [30], as detailed in Fig. 13. The approach is an iterative method that reduces the error between the experimental and numerical results. Fig. 13. Diagram of the model calibration process. The sensitivity-based calibration method consists of three phases: (i) selection of the reference parameters, usually experimental data such as measured natural frequencies and modes of vibration; (ii) selection of the material properties to be modified; and (iii) an iterative model-fitting stage that modifies the material properties to be updated based on the reference parameters and an objective function [31]. Eqs. (4) and (5) express the experimental reference parameters ($R_e$) in terms of the analytical reference parameters ($R_a$), the structural characteristics $(P, P_a)$ and a sensitivity coefficient ($S$), as a first-order Taylor series [19]:

$R_e = R_a + S\left(P - P_a\right),$ (4)

$\Delta R = S\,\Delta P,$ (5)

where $\Delta R$ is the difference between the experimental and analytical reference parameters, $\Delta P$ is the difference between the experimental structural characteristics and the estimates introduced in the model to be updated, and $S$ is the sensitivity matrix of the reference parameters with respect to the characteristics to be updated, Eq. (6), in which $R_{a,i}$ and $P_j$ are the analytical reference parameters and the structural characteristics:

$S_{ij} = \frac{\partial R_{a,i}}{\partial P_j}.$ (6)

The reference parameters used in model calibration usually include natural frequencies and eigenmodes, as they can be determined from ambient vibration [20, 23, 24, 32] such as wind, earthquakes and traffic. Therefore, the objective of model calibration is to minimize an objective function based on the residual between the experimental eigenfrequencies and modes and the analytical ones. In this article, the reference parameters used for model calibration are the frequencies and modes of the structures themselves. The process of updating parameters and calculating errors has been implemented in MATLAB, by reading the output files and modifying the script to be entered into ANSYS with the updated parameters.
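The iterative loop of Fig. 13 can be sketched as follows. The `model()` wrapper stands in for the ANSYS runs driven from MATLAB in the paper; its internals, the parameter set and the starting values are placeholders, and each update solves ΔR ≈ S·ΔP in the least-squares sense using finite-difference sensitivities.

```python
import numpy as np

f_exp = np.array([1.74, 10.04, 14.50])  # measured frequencies (Hz), Table 1

def model(params):
    # Stand-in for the FE solver: frequencies scale like sqrt(E / rho).
    E, rho = params
    return np.array([1.5, 9.0, 13.0]) * np.sqrt(E / rho) / np.sqrt(25.0 / 2.3)

def update_step(params, h=1e-3):
    f_an = model(params)
    S = np.zeros((len(f_exp), len(params)))  # sensitivity matrix, Eq. (6)
    for j in range(len(params)):
        p = params.copy()
        p[j] += h
        S[:, j] = (model(p) - f_an) / h      # finite-difference dR_a,i/dP_j
    dR = f_exp - f_an                        # residual, Eq. (5)
    dP, *_ = np.linalg.lstsq(S, dR, rcond=None)
    return params + dP                       # updated characteristics

params = np.array([25.0, 2.3])  # E of concrete (GPa), density (t/m^3); placeholders
for _ in range(5):
    params = update_step(params)
print("updated parameters:", params)
```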
The geometry of the bridge, the boundary conditions and the material properties were obtained from the construction project. A visual inspection made it possible to corroborate the boundary conditions between elements, as the transverse joints were operational and there was no visible damage to the connections. Discrepancies are therefore associated with variations in material properties and uncertainty about the boundary conditions in the field. To avoid physically meaningless values of the updated characteristics after the calibration process, upper and lower limits were set for them. The chosen material properties to be updated and their ranges of variation were:
– Density of materials (steel: 7-8.5 t/m^3; concrete: 2.1-2.8 t/m^3).
– Modulus of elasticity of the materials (steel: 150-220 GPa; concrete: 15-30 GPa).
– Poisson coefficient (0.15-0.35).
– Stiffness of the ground (30-100 MPa).

5.1. Updating the model and convergence criteria

To update the characteristics of the preliminary model, the average deviation of the fundamental frequencies was used as the objective function to be minimized, expressed as the average of the relative errors between the analytical and experimental frequencies [31]:

$e_f = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|\Delta f_i\right|}{f_i}\times 100,$

where $n$ is the total number of frequencies considered in the calibration, $\Delta f_i$ is the error between the analytical and experimental frequency, and $f_i$ is the experimental frequency. The convergence criteria established for the iterative process were: (i) a value of the objective function below 5 %; (ii) a minimum improvement in the objective function between two iterations below 0.1 %.

5.2. Comparison between experimental and analytical frequencies

The analytical modal parameters after the calibration process ($\phi$) were compared with the experimental ones ($\hat{\phi}$) using the modal assurance criterion (MAC):

$\mathrm{MAC}_i\left(\phi_i,\hat{\phi}_i\right) = \frac{\left(\phi_i^T \hat{\phi}_i\right)^2}{\left(\phi_i^T \phi_i\right)\left(\hat{\phi}_i^T \hat{\phi}_i\right)}\times 100.$

Besides, the natural frequencies of the model ($f_{an}$) and those identified by Peak Picking ($f_{exp}$) were compared in terms of the relative error, consistent with the values reported in Table 2:

$\mathrm{Error} = \frac{f_{exp} - f_{an}}{f_{exp}}\times 100.$

MAC values above 90 % are generally accepted as an indicator of a good correlation between modes. If the differences between the experimental and analytical natural frequencies are small, the calibration can be considered satisfactory [19]. In Table 2 the experimental and analytical natural frequencies are compared before and after the calibration process. The high MAC values, along with the low frequency errors, indicate that the model successfully reproduces the real behavior of the structure. At the beginning of the calibration process, the materials of the model were assigned typical mechanical properties, which were modified until obtaining the results shown in Table 3.
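Both comparison metrics are one-liners; the sketch below transcribes them directly (`phi` is an analytical mode shape, `phi_hat` its experimental counterpart, both as vectors of modal coordinates) and checks the mode 1 frequency error of the Original Structure against Table 2.

```python
import numpy as np

def mac(phi, phi_hat):
    """Modal assurance criterion between two mode shapes, in percent."""
    num = np.dot(phi, phi_hat) ** 2
    den = np.dot(phi, phi) * np.dot(phi_hat, phi_hat)
    return 100.0 * num / den

def freq_error(f_exp, f_an):
    """Relative frequency error, in percent."""
    return 100.0 * (f_exp - f_an) / f_exp

# Illustrative mode-shape vectors (placeholders, not measured data):
phi = np.array([1.0, 0.8, -0.5])
phi_hat = np.array([0.98, 0.83, -0.47])
print(f"MAC   = {mac(phi, phi_hat):.1f} %")
print(f"error = {freq_error(1.74, 1.67):.2f} %")  # mode 1 after calibration: 4.02 %
```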
Table 2. Experimental and analytical frequencies and evaluation of their correlation:

Original Structure
                          Before calibration                 After calibration
Mode   $f_{exp}$ (Hz)     $f_{an}$ (Hz)  MAC (%)  Error (%)  $f_{an}$ (Hz)  MAC (%)  Error (%)
1      1.74               1.37           57.60    21.12      1.67           92.10    4.02
2      10.04              8.47           83.70    15.69      10.07          97.70    -0.29
3      14.50              12.22          47.10    15.74      14.39          95.00    0.76

New Structure
1      9.24               7.72           32.90    16.43      9.14           98.10    1.08
2      14.00              10.44          46.60    25.46      12.37          90.80    11.64
3      17.60              15.35          75.60    12.80      18.20          92.30    -3.41

Table 3. Initial and final material properties during the calibration process:

Material property of the model                  Initial   Final
Steel density (t/m^3)                           7.80      7.38
Concrete density (t/m^3)                        2.30      2.22
Modulus of elasticity of steel (GPa)            210.00    202.50
Modulus of elasticity of concrete (GPa)         25.00     27.05
Steel Poisson coefficient (-)                   0.30      0.28
Concrete Poisson coefficient (-)                0.20      0.19
Stiffness of the ground at the surface (MPa)    30.00     39.40

6. Considered scenarios

In order to find out whether the bridge, in its current state, complies with the regulations applicable to newly built bridges in Chile, a series of load combinations based on the regulations in force is simulated. The loads considered were traffic, wind and earthquake under scour conditions, following the Manual de Carreteras. This manual refers to the American Association of State Highway and Transportation Officials Standard Specifications for Highway Bridges (AASHTO Standard) [33]. The methods used to obtain the wind, scour, earthquake and traffic loads are presented below.

6.1. Traffic load

Traffic loads can be calculated as standard trucks or as equivalent strip loads according to the AASHTO Standard. In this article, the HS20 standard truck [33] is applied, with its loads increased by 20 % as indicated in the Manual de Carreteras. This load has been modeled as point forces on the concrete slab. The impact load is calculated according to the AASHTO Standard as a percentage increase of the live load, using the following expression:

$I = \frac{50}{L + 125},$

where $I$ is the impact fraction (maximum 30 percent) and $L$ is the length in feet of the portion of the span that is loaded to produce the maximum stress in the member. Considering the span lengths of the structures, an increase of 19 % is adopted for the Original Structure and 23 % for the New Structure.
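A quick numerical check of the adopted impact fractions (the 44 m arch span is inferred from the 440 m, ten-span geometry of Section 2, and the governing loaded length is assumed here to be one full span):

```python
def impact_fraction(span_m: float) -> float:
    """AASHTO impact fraction: I = 50 / (L + 125), L in feet, capped at 30 %."""
    L_ft = span_m / 0.3048
    return min(50.0 / (L_ft + 125.0), 0.30)

print(f"Original Structure (44.0 m span): {impact_fraction(44.0):.0%}")  # ~19 %
print(f"New Structure (28.6 m span):      {impact_fraction(28.6):.0%}")  # ~23 %
```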
6.2. Wind load

Wind load was considered as horizontal point loads orthogonal to the bridge's axis, equivalent to 3.6 kN/m^2 on arches and frames and 2.4 kN/m^2 on beams and crossbeams, as stated in the AASHTO Standard for superstructures under transverse wind.

6.3. Earthquake scenario

The bridge's behavior under the seismic scenario was simulated using Response Spectrum Analysis, which, for a given excitation, calculates the maximum response based on the input spectrum. The structure's mode shapes are required to carry out the Response Spectrum Analysis, so the vibration mode shapes of the bridge are taken from the Modal Analysis. The excitation spectrum gives the absolute acceleration as a function of the natural period of the structure, and hence the relation between accelerations and the structure's natural frequencies. The excitation is calculated following the Manual de Carreteras formulation, whose response spectrum is based on the subduction earthquake of magnitude 8.0 on the Moment Magnitude scale that took place in the central area of Chile in 1985 [34], and is calculated as follows:

$S_a\left(T_m\right) = \begin{cases} 1.5\,K_1 S A_0, & T_m \le T_1, \\ \dfrac{K_1 K_2 S A_0}{T_m^{2/3}}, & T_1 < T_m, \end{cases}$

where $T_m$ is the natural period of mode $m$, $K_1$ and $K_2$ are coefficients depending on the importance of the bridge and the soil, $S$ takes into account the type of soil, $A_0$ is the maximum effective acceleration of the ground, and $T_1$ is the threshold period. Since $f = 1/T$, the absolute acceleration can be obtained as a function of frequency from the excitation spectrum. According to the Manual de Carreteras, the Toltén Bridge is located in seismic zone 2, so a maximum effective acceleration of 0.3 g is considered. The structure is considered of importance I, and the soil on which it lies of type III. The mode combination method for the Response Spectrum Analysis was the Square Root of the Sum of Squares (SRSS), which combines the maximum values of each mode:

$R_a = \sqrt{\sum_{m=1}^{n} R_m^2},$

where $R_m$ is the modal response of mode $m$ and $R_a$ is the total modal response.

6.4. Scour scenario

Currently, a scour depth of 7 meters is reached at piers 5 and 6 due to erosion caused by the Toltén River. Therefore, two scour scenarios were combined with each of the other loads: current scour (the actual 7 meters at piers 5 and 6, with no scour at the other piers) and maximum scour (the actual 7 meters at piers 5 and 6, plus 4 meters at piers 2, 3 and 4). Scour has been introduced into the model by removing the springs equivalent to the different layers from the surface down to the scour depth in each case, leaving the part of the foundation above that level free.

6.5. Load combinations

The allowable stress design (ASD) method is adopted to check whether the bridge complies with the current standard. The formula used for the calculation of the load combinations is Eq. (3-10) of the AASHTO Standard, as the Manual de Carreteras derives from this standard. Considering the loads applied to the Toltén Bridge, the following reduced formula is arrived at:

$\mathrm{Group}\left(N\right) = \gamma\left[\beta_D D + \beta_L\left(L+I\right) + \beta_C CF + \beta_E E + \beta_B B + \beta_S SF + \beta_W W + \beta_{WL} WL + \beta_L LF + \beta_R\left(R+S+T\right) + \beta_{EQ} EQ + \beta_{ICE} ICE\right].$

The values of the $\gamma$ and $\beta$ coefficients are selected for each of the AASHTO Standard load combination groups for in-service loads. According to the loads considered, and taking into account the criteria for Orthogonal Seismic Forces established in the Manual de Carreteras, the load groups to be calculated are those corresponding to dead and live loads, the impact of live loads, wind and earthquake (groups I, II and VII). Two directions have been considered for the earthquake: longitudinal (VIIa) and transversal (VIIb).
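A sketch of the Section 6.3 spectrum and the SRSS combination follows. The coefficient values are placeholders: the paper only fixes zone 2 (A0 = 0.3 g), importance I and soil type III, and the actual K1, K2, S and T1 values come from the Manual de Carreteras tables.

```python
import numpy as np

G = 9.81
A0, K1, K2, S, T1 = 0.3 * G, 1.0, 1.0, 1.0, 0.25  # assumed coefficients

def Sa(Tm):
    """Two-branch design spectrum Sa(Tm), per the piecewise formula above."""
    if Tm <= T1:
        return 1.5 * K1 * S * A0                   # short-period plateau
    return K1 * K2 * S * A0 / Tm ** (2.0 / 3.0)    # descending branch

def srss(modal_responses):
    """SRSS combination of the peak modal responses."""
    R = np.asarray(modal_responses)
    return float(np.sqrt(np.sum(R ** 2)))

print(Sa(0.1), Sa(1.0))        # plateau vs descending branch (m/s^2)
print(srss([12.0, 5.0, 2.0]))  # ~13.15, any peak response quantity
```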
7. Results and discussion

The results obtained will be analyzed from two points of view: on the one hand, whether the structure meets the current design criteria and, on the other, whether its integrity can be impaired under any of the above-mentioned solicitations:
– Compliance with the current regulations, comparing the results with the admissible stresses stated in the AASHTO Standard.
– Stress limit of the material, comparing the results with the elastic limit for steel and the compressive strength for concrete.
To quantify how stressed the bridge is in each of its parts, the following demand-capacity ratio is used:

$DCR = \frac{\sigma_{max}}{\sigma_{adm}},$

where $\sigma_{max}$ is the maximum Von Mises stress obtained from the FEM analysis and $\sigma_{adm}$ is either the maximum allowable stress set by the AASHTO Standard or that of the material, per the material stress limit criterion. Therefore, a DCR value of 1 means that the limit value set by the regulations has been reached or that the capacity of the section has been exhausted. The material properties are a yield strength ($f_y$) of 248 MPa for structural steel and an ultimate strength ($f'_c$) of 20.6 MPa for concrete. With the above criteria, the permissible stresses according to the AASHTO Standard for reinforced concrete and steel are obtained ($\sigma_{adm,steel}$ and $\sigma_{adm,concrete}$). In concrete elements subjected to bending, the stress in the most compressed fiber ($f_c$) must not exceed $0.4 f'_c =$ 8.24 MPa. Axial stresses in steel elements without gaps must not exceed $0.55 f_y =$ 136.4 MPa. These stresses can be increased for some groups of load combinations according to the AASHTO Standard. In Table 4 the permissible stresses are shown as a function of the load combination and the material.

Table 4. Permissible stresses (AASHTO Table 10.32.1.A and Chapter 8.15.2.1 [35]):

Group                              I        II       VIIa     VIIb
Percentage                         100 %    125 %    133 %    133 %
$\sigma_{adm,steel}$ (MPa)         136.40   170.50   181.41   181.41
$\sigma_{adm,concrete}$ (MPa)      8.24     10.30    10.96    10.96
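The DCR checks reduce to a stress ratio against either Table 4 or the material limits. The sketch below reproduces the two worked values quoted in Sections 7.1 and 7.2 (the 0.88 quoted in the text is the same ratio with a different rounding).

```python
SIGMA_ADM = {  # allowable stresses in MPa, per load group (Table 4)
    "steel":    {"I": 136.40, "II": 170.50, "VIIa": 181.41, "VIIb": 181.41},
    "concrete": {"I": 8.24,   "II": 10.30,  "VIIa": 10.96,  "VIIb": 10.96},
}
F_Y, F_C = 248.0, 20.6  # material limits in MPa (steel yield, concrete strength)

def dcr(sigma_max, sigma_adm):
    """Demand-capacity ratio: DCR = sigma_max / sigma_adm."""
    return sigma_max / sigma_adm

# Material-limit checks quoted in the text:
print(f"Original Structure columns, VIIa: DCR = {dcr(23.40, F_C):.2f}")   # 1.14
print(f"New Structure piles, VIIa:        DCR = {dcr(219.66, F_Y):.2f}")  # ~0.89
# Regulatory check against the Table 4 allowable stress:
print(f"vs AASHTO steel limit, VIIa:      DCR = {dcr(219.66, SIGMA_ADM['steel']['VIIa']):.2f}")
```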
7.1. Original Structure

The response of the Original Structure was calculated for each load case and scour condition. For example, the maximum stress in the columns for case VIIa with the current scour is 23.40 MPa (Fig. 14), which corresponds to a DCR of 1.14 when compared with the admissible stress of the material. A summary of the DCRs for each component is presented in the following subsections. Fig. 14. Von Mises stress of the Original Structure for the earthquake in the longitudinal direction.

7.1.1. Compliance with current regulations

The maximum DCR values for the AASHTO allowable stresses, for each element of the Original Structure and each scenario, are shown in Fig. 15 and discussed below. As shown in Fig. 15, the maximum stresses in the Original Structure are highly influenced by the scour values. The slab, columns and arches are the elements most affected by the scour increment, especially under traffic (I) and wind (II) conditions. On the other hand, cross-beams, stumps and braces are not significantly affected by this factor. In the current scour situation, the slab, columns, crossbeams, arches and braces all exceed the permissible limits established by the regulations in the earthquake combinations (VIIa and VIIb), while in the other combinations the values of all the elements are below the limits. In the case of maximum scour, at least two of the element types exceed the permissible limits in each load combination. In the combinations of traffic (I) and wind (II) the limits are exceeded by the slab, columns and arches, while in the longitudinal earthquake (VIIa) they are exceeded by the slab and columns. In the transverse earthquake (VIIb) only the slab and the columns do not exceed the limit values. Fig. 15. Maximum DCR for AASHTO allowable stresses in the Original Structure.

7.1.2. Stress limit of the material

The maximum DCR values for the material stress limit, for each element of the Original Structure and each scenario, are shown in Fig. 16 and discussed below. Using the strengths of steel and concrete as a reference, no element of the Original Structure exhausts its resistance capacity in the traffic (I) and wind (II) combinations for either scour scenario. The depletion in the load combinations with an earthquake is similar in the current-scour and maximum-scour scenarios, exceeding the resistance of the concrete in the columns in a longitudinal earthquake (VIIa) and in the arches and braces in a transversal earthquake (VIIb). The results show that the Original Structure is highly influenced by an earthquake due to its high mass of concrete, and that its stability is compromised in this case, as the concrete reaches its maximum strength in the arches, which are the main supporting elements of the substructure, putting the safety of the users at risk. Fig. 16. Maximum DCR for the stress limit of the material in the Original Structure.

7.2. New Structure

The response of the New Structure was calculated for each load case and scour condition. For example, the maximum stress in the piles for case VIIa with the current scour is 219.66 MPa (Fig. 17), which corresponds to a DCR of 0.88 when compared with the admissible stress of the material. A summary of the DCRs for each component is presented in the following subsections. Fig. 17. Von Mises stress of the New Structure for the earthquake in the longitudinal direction.

7.2.1. Compliance with current regulations

The maximum DCR values for the AASHTO allowable stresses, for each element of the New Structure and each scenario, are shown in Fig. 18 and discussed below. Scour does not affect the New Structure's behavior as much as the Original Structure's. The maximum stress in the strut increases strongly under traffic loading in the earthquake scenario, while the stress increments due to scour in the rest of the elements are not significant. Under current scour values, the allowable stresses are reached in the slab in the traffic load case (I). Since the New Structure is slender, it is little influenced by wind load. Besides that, under maximum scour the steel stress in the diagonal tubes increases significantly in comparison with the current state, while the stresses in the rest of the elements remain equal. In all combinations of seismic loads, the permissible limits are reached in the piers, and only in the case of a transverse earthquake under current scour conditions are they reached in the slab. In addition, the limits at the cross-beams for the traffic combination (I) are reached in the situation of maximum scour. The values for the remaining elements are below the permissible stresses. Fig. 18. Maximum DCR for AASHTO allowable stresses in the New Structure. Fig. 19. Maximum DCR for the stress limit of the material in the New Structure.

7.2.2. Stress limit of the material

The maximum DCR values for the material stress limit, for each element of the New Structure and each scenario, are shown in Fig. 19 and discussed below.
Using the strengths of steel and concrete as a reference, it can be seen that the stresses in the slab reach the strength of the concrete in the traffic combinations for both scour scenarios. This compression failure of the (brittle) concrete would compromise the safety of the users and leave the bridge out of service until repaired. On the other hand, the New Structure behaves well in earthquakes due to the low weight-to-strength ratio of the steel, which provides a light and rigid structure that is therefore less susceptible to ground acceleration than massive structures.

8. Conclusions

The lack of maintenance of bridges can lead to the deterioration of their structural components, restricting the service of the structure and its capacity to resist natural events. In this article, the structural health of the Toltén Bridge has been analyzed by evaluating the capacity of its sections under different load combinations. For this purpose, a finite element model was built based on the information gathered from structural drawings and visual inspections. Subsequently, a calibration process was carried out on the model so that its dynamic properties matched those identified experimentally. The respective load combinations were calculated, including the cases of wind, earthquake and traffic, for both scour conditions. The maximum demand-capacity ratios (DCR) were evaluated for each component of the structure, based on the AASHTO design standards and the material capacities. These results were analyzed, leading to the following conclusions:
1) The Original Structure does not meet the regulatory criteria in the earthquake combinations in the current scour situation, nor in any of the load combinations in the case of maximum scour.
2) The New Structure only meets the regulatory criteria in the wind combination, under both the current and the maximum scour situations.
3) The New Structure behaves better in an earthquake due to the strength-to-mass ratio of steel compared with massive concrete elements, which are more influenced by ground accelerations, and due to its deep foundation.
4) The New Structure behaves better than the Original Structure in wind due to its greater slenderness, meeting the regulatory criteria in both scour scenarios.
5) The stability of the Original Structure would be compromised in an earthquake similar to that prescribed by the design regulations, because the material stress limit is reached in the columns (longitudinal earthquake) or the arches (transversal earthquake), thus posing a risk to the safety of the users.
6) Under a traffic load as set out in the regulations, the New Structure would pose a risk to the safety of users and would be out of service due to insufficient slab capacity.
• Murachi Y., Orikowski M. J., Dong X., Shinozuka M. Fragility analysis of transportation networks. Smart Structures and Materials 2003: Smart Systems and Nondestructive Evaluation for Civil Infrastructures, Vol. 5057, 2003.
• Deng L., Wang W., Yu Y. State-of-the-art review on the causes and mechanisms of bridge collapse. Journal of Performance of Constructed Facilities, Vol. 30, Issue 2, 2016, 04015005.
• Oviedo J. A., Duque M. del P. Seismic response control systems in buildings. Antioquia Engineering School Journal, Vol. 6, 2006, p. 105-120.
• Reynders E. System identification methods for (operational) modal analysis: review and comparison. Archives of Computational Methods in Engineering, Vol. 19, Issue 1, 2012, p. 51-124.
• Saridis G., Stein G. Stochastic approximation algorithms for linear discrete-time system identification. IEEE Transactions on Automatic Control, Vol. 13, Issue 5, 1968, p. 515-523.
• Van Overschee P., De Moor B. N4SID: subspace algorithms for the identification of combined deterministic-stochastic systems. Automatica, Vol. 30, Issue 1, 1994, p. 75-93.
• Chang M., Pakzad S. N. Modified natural excitation technique for stochastic modal identification. Journal of Structural Engineering, Vol. 139, Issue 10, 2013, p. 1753-1762.
• Van Overschee P., De Moor B. Subspace Identification for Linear Systems. Springer, Boston, 1996.
• Peeters B., De Roeck G. Stochastic system identification for operational modal analysis: a review. Journal of Dynamic Systems, Measurement, and Control, Vol. 123, Issue 4, 2001, p. 659-667.
• Diez A., Khoa N. L. D., Makki Alamdari M., Wang Y., Chen F., Runcie P. A clustering approach for structural health monitoring on bridges. Journal of Civil Structural Health Monitoring, Vol. 6, Issue 3, 2016, p. 429-445.
• Sevim B., Bayraktar A., Altunişik A. C., Atamtürktür S., Birinci F. Finite element model calibration effects on the earthquake response of masonry arch bridges. Finite Elements in Analysis and Design, Vol. 47, Issue 7, 2011, p. 621-634.
• Zhang Q. W., Chang T. Y. P., Chang C. C. Finite-element model updating for the Kap Shui Mun cable-stayed bridge. Journal of Bridge Engineering, Vol. 6, Issue 4, 2001, p. 285-293.
• Orlowitz E., Brandt A. Comparison of experimental and operational modal analysis on a laboratory test plate. Measurement, Vol. 102, 2017, p. 121-130.
• Tuegel E. J., Ingraffea A. R., Eason T. G., Spottswood S. M. Reengineering aircraft structural life prediction using a digital twin. International Journal of Aerospace Engineering, Vol. 2011, 2011, 154798.
• Hernandez E., Roohi M., Rosowsky D. Estimation of element-by-element demand-to-capacity ratios in instrumented SMRF buildings using measured seismic response. Earthquake Engineering and Structural Dynamics, Vol. 47, Issue 12, 2018, p. 2561-2578.
• Roohi M., Erazo K., Rosowsky D., Hernandez E. M. An extended model-based observer for state estimation in nonlinear hysteretic structural systems. Mechanical Systems and Signal Processing, Vol. 146, 2021, 107015.
• Behmanesh I., Moaveni B. Probabilistic identification of simulated damage on the Dowling Hall footbridge through Bayesian finite element model updating. Structural Control and Health Monitoring, Vol. 22, Issue 3, 2015, p. 463-483.
• Muñoz E., Nuñez F., Rodríguez J. A., Ramos A., Otálora C. Seismic vulnerability and load capacity of a cable-stayed bridge based on structural reliability. Construction Engineering Journal, Vol. 25, Issue 2, 2010, p. 285-323.
• Brownjohn J. M. W., Xia P. Q., Hao H., Xia Y. Civil structure condition assessment by FE model updating: methodology and case studies. Finite Elements in Analysis and Design, Vol. 37, Issue 10, 2001.
• Brownjohn J. M. W., Magalhaes F., Caetano E., Cunha A. Ambient vibration re-testing and operational modal analysis of the Humber Bridge. Engineering Structures, Vol. 32, Issue 8, 2010.
• Brownjohn J. M. W., Moyo P., Omenzetter P., Lu Y. Assessment of highway bridge upgrading by dynamic testing and finite-element model updating. Journal of Bridge Engineering, Vol. 8, Issue 3, 2003, https://doi.org/10.1061/(ASCE)1084-0702(2003)8:3(162).
• Butt F., Omenzetter P.
Seismic response trends evaluation and finite element model calibration of an instrumented RC building considering soil-structure interaction and non-structural components. Engineering Structures, Vol. 65, 2014, p. 111-123.
• Wu J. R., Li Q. S. Finite element model updating for a high-rise structure based on ambient vibration measurements. Engineering Structures, Vol. 26, Issue 7, 2004, p. 979-990.
• Bayraktar A., Birinci F., Altunışık A. C., Türker T., Sevim B. Finite element model updating of Senyuva historical arch bridge using ambient vibration tests. Journal of Road and Bridge Engineering, Vol. 4, Issue 4, 2009, p. 177-185.
• Stavroulaki M. E., Riveiro B., Drosopoulos G. A., Solla M., Koutsianitis P., Stavroulakis G. E. Modelling and strength evaluation of masonry bridges using terrestrial photogrammetry and finite elements. Advances in Engineering Software, Vol. 101, 2016, p. 136-148.
• Zampieri P., Zanini M. A., Faleschini F., Hofer L., Pellegrino C. Failure analysis of masonry arch bridges subject to local pier scour. Engineering Failure Analysis, Vol. 79, 2017, p. 371-384.
• Malm R., Andersson A. Field testing and simulation of dynamic properties of a tied arch railway bridge. Engineering Structures, Vol. 28, Issue 1, 2006, p. 143-152.
• Chung W., Sotelino E. D. Three-dimensional finite element modeling of composite girder bridges. Engineering Structures, Vol. 28, Issue 1, 2006, p. 63-71.
• Hong A. L., Ubertini F., Betti R. Wind analysis of a suspension bridge: identification and finite-element model simulation. Journal of Structural Engineering, Vol. 137, Issue 1, 2011.
• Friswell M., Mottershead J. E. Finite Element Model Updating in Structural Dynamics. Springer Science and Business Media, Vol. 38, 2013.
• Butt F., Omenzetter P. Finite element model calibration of an instrumented RC building based on seismic excitation including non-structural components and soil-structure-interaction. 22nd Australasian Conference on the Mechanics of Structures and Materials, 2013, p. 251-256.
• Chen X., Omenzetter P., Beskhyroun S. Calibration of the finite element model of a twelve-span prestressed concrete bridge using ambient vibration data. 7th European Workshop on Structural Health Monitoring, 2nd European Conference of the Prognostics and Health Management (PHM) Society, 2014.
• American Association of State Highway and Transportation Officials, 2010.
• Instructions and Design Criteria. Ministerio de Obras Públicas, Vol. 1, 2008.
• AASHTO. Standard Specifications for Highway Bridges. 17th Edition, American Association of State Highway and Transportation Officials, Washington, DC, 2002.
About this article. Keywords: vibration in transportation engineering, finite element simulation, twin model, structural response. Copyright © 2021 Julia Real, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.extrica.com/article/21246","timestamp":"2024-11-12T12:36:26Z","content_type":"text/html","content_length":"171010","record_id":"<urn:uuid:4e972142-b312-459f-b557-659ef24abf79>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00475.warc.gz"}
Cogsworth: Byzantine View Synchronization

Most methods for Byzantine fault tolerance (BFT) in the partial synchrony setting divide the local state of the nodes into views, and the transition from one view to the next dictates a leader change. In order to provide liveness, all honest nodes need to stay in the same view for a sufficiently long time. This requires view synchronization, a requisite of BFT that we extract and formally define here. Existing approaches for Byzantine view synchronization incur quadratic communication (in $n$, the number of parties). A cascade of $O(n)$ view changes may thus result in $O(n^3)$ communication complexity. This paper presents a new Byzantine view synchronization algorithm named $\text{Cogsworth}$, which has optimistically linear communication complexity and constant latency. Faced with benign failures, $\text{Cogsworth}$ has expected linear communication and constant latency. The result here serves as an important step towards reaching solutions that have overall quadratic communication, the known lower bound on Byzantine fault tolerant consensus. $\text{Cogsworth}$ is particularly useful for a family of BFT protocols that already exhibit linear communication under various circumstances, but suffer quadratic overhead due to view synchronization.

Keywords: Distributed systems

1. Introduction

Logical synchronization is a requisite for progress to be made in asynchronous state machine replication (SMR). Previous Byzantine fault tolerant (BFT) synchronization mechanisms incur quadratic message complexity, frequently dominating over the linear cost of the consensus cores of BFT solutions. In this work, we define the view synchronization problem and provide the first solution in the Byzantine setting whose latency is bounded and whose communication cost is linear, under a broad set of scenarios.

1.1 Background and Motivation

Many practical reliable distributed systems do not rely on network synchrony, because networks go through outages and periods of Distributed Denial-of-Service (DDoS) attacks, and because synchronous protocols have hard-coded steps that wait for a maximum delay. Instead, asynchronous replication solutions via state machine replication (SMR) [1] usually optimize for stability periods. This approach is modeled as partial synchrony [2]. It allows for periods of asynchrony in which progress might be compromised, but consistency never is. In the crash-failure model, this paradigm underlies most successful industrial solutions, for example, the Google Chubby lock service [1], Yahoo's ZooKeeper [1], etcd [3], Google's Spanner [1], Apache Cassandra [4], and others. The algorithmic cores of these systems, e.g., Paxos [5], Viewstamped Replication [6], or Raft [7], revolve around a view-based paradigm. In the Byzantine model, where parties may act arbitrarily, this paradigm underlies many blockchain systems, including VMware's Concord [8], Hyperledger Fabric [9], Cypherium [10][11], Celo [12], PaLa [13], and Libra [14]. The algorithmic cores of these BFT systems are view-based, e.g., PBFT [15], SBFT [16], and HotStuff [17]. The advantage of the view-based paradigm is that each view has a designated leader from among the parties that can drive a decision efficiently. Indeed, in both models, there are protocols that have per-view linear message and communication complexity, which is optimal. In order to guarantee progress, nodes must give up when a view does not reach a decision after a certain timeout period.
Mechanisms for changing the view with linear communication exist both for the crash model (all the above) and, recently, for the Byzantine model (HotStuff [17]). An additional requirement for progress is that all nodes overlap in the same view for a sufficiently long period. Unfortunately, all of the above protocols incur quadratic message complexity for view synchronization. In order to address this, we first define the view synchronization problem independently of any specific protocol and in a fault-model agnostic manner. We then introduce a view synchronization algorithm called $\text{Cogsworth}$ whose message complexity is linear in expectation, as well as in the worst case under a broad set of conditions.

1.2 The View Synchronization Problem

We introduce the problem of view synchronization. All nodes start at view zero. A view change occurs as an interplay between the synchronizer, which implements a view synchronization algorithm, and the outer consensus solution. The consensus solution signals that it wishes to end the current view via a $\textsf{wish\_to\_advance}()$ notification. The synchronizer eventually invokes a consensus $\textsf{propose\_view}(v)$ signal to indicate when a new view $v$ starts. View synchronization requires eventually bringing all honest nodes to execute the same view for a sufficiently long time, so that the outer consensus protocol can drive progress. The two measures of interest to us are the latency and the communication complexity between these two events. Latency is measured only during periods of synchrony, when a bound $\delta$ on message transmission delays is known to all nodes, and is expressed in $\delta$ units. View synchronization extends the PaceMaker abstraction presented in [17], formally defines the problem it solves, and captures it as a separate component. It is also related to the seminal work of Chandra & Toueg [18], [19] on failure detectors. Like failure detectors, it is an abstraction capturing the conditions under which progress is guaranteed, without involving explicit engineering details such as packet transmission delays, timers, and computation. Specifically, Chandra & Toueg define a leader election abstraction, denoted $\Omega$, where eventually all non-faulty nodes trust the same non-faulty node as the leader. $\Omega$ was shown to be the weakest failure detector needed in order to solve consensus. Whereas Chandra & Toueg's seminal work focuses on the possibility/impossibility of an eventually elected leader, here we care about how quickly a good leader emerges (i.e., the latency), at what communication cost, and how to do so repeatedly, allowing the extension of one-time single-shot consensus to SMR. We tackle the view synchronization problem against asynchrony and the most severe type of faults, Byzantine [20][21]. This makes the synchronizers we develop particularly suited for Byzantine Fault Tolerance (BFT) consensus systems relevant in today's cryptoeconomic systems. More specifically, we assume a system of $n$ nodes that need to form a sequence of consensus decisions that implement SMR. We assume up to $f < n/3$ nodes are Byzantine, the largest number of Byzantine nodes for which Byzantine agreement is solvable [22]. The challenge is that during "happy" periods, progress might be made among a group of Byzantine nodes cooperating with a "fast" sub-group of the honest nodes.
Indeed, many solutions advance when a leader succeeds in proposing a value to a quorum of $2f+1$ nodes, but it is possible that only the $f+1$ "fast" honest nodes learn it and progress to the next view. The remaining $f$ "slow" honest nodes might stay behind, and may not even advance views at all. Then, at some point, the $f$ Byzantine nodes may stop cooperating. A mechanism is needed to bring the "slow" nodes to the same view as the $f+1$ "fast" ones. Thus, our formalism and algorithms may be valuable for the consensus protocols mentioned above, as well as others, such as Casper [23] and Tendermint [24][25], which have reported problems around liveness [26][27].

1.3 View Synchronization Algorithms

We first extract two synchronization mechanisms that borrow from previous BFT consensus protocols, casting them into our formalism and analyzing them. One is a straw-man mechanism that requires no communication at all and achieves synchronization, albeit with unbounded latency. This synchronizer works simply by doubling the duration of each view; eventually, it guarantees a sufficiently long period in which all the nodes are in the same view. The second is the broadcast-based synchronization mechanism built into PBFT [15] and similar Byzantine protocols, such as [16]. This synchronizer borrows from the Bracha reliable broadcast algorithm [28]. Once a node hears of $f+1$ nodes who wish to enter the same view, it relays the wish reliably, so that all the honest nodes enter the view within a bounded time. The properties of these synchronizers in terms of latency and communication cost are summarized in Table 1. For brevity, these algorithms and their analysis are deferred to Appendix A.

Table 1: Comparison of the different protocols for view synchronization. $t$ is the number of failures, $\delta$ is the upper bound on message delivery after GST.

$\text{Cogsworth}$: leader-based synchronizer

The main contribution of our work is $\text{Cogsworth}$, a leader-based view synchronization algorithm. $\text{Cogsworth}$ utilizes views that have an honest leader to relay messages, instead of broadcasting them. When a node wishes to advance a view, it sends the message to the leader of the view, and not to all the other nodes. If the leader is honest, it gathers the messages from the nodes and multicasts them (sends the same message to all the other nodes) using a threshold signature [29][30][31], incurring only linear communication cost. The protocol implements additional mechanisms to advance views despite faulty leaders. The latency and communication complexity of this algorithm depend on the number of actual failures and their type. In the best case, the latency is constant and the communication is linear. Faced with $t$ benign failures, the communication is linear in expectation and $O(t \cdot n)$ in the worst case, as mandated by the lower bound of Dolev & Reischuk [32]; the latency is constant in expectation and $O(t \cdot \delta)$ in the worst case. Byzantine failures do not change the latency, but they can drive the communication to an expected $O(n^2)$ complexity, and in the worst case up to $O(t \cdot n^2)$. It remains open whether a worst-case linear synchronizer with constant latency is possible. To summarize, $\text{Cogsworth}$ performs just as well as a broadcast-based synchronizer in terms of latency and message complexity, and in certain scenarios shows up to $O(n)$ better results in terms of message complexity. Table 1 summarizes the properties of all three synchronizers.
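For concreteness, the view-doubling straw-man can be sketched in a few lines. This is an illustrative sketch, not the paper's Appendix A pseudocode: the timings and the callback wiring are assumptions.

```python
import time

class ViewDoublingSynchronizer:
    """No-communication straw-man: double the duration of each view, so that
    eventually all nodes overlap in the same view for long enough."""

    def __init__(self, on_propose_view, initial_duration=1.0):
        self.on_propose_view = on_propose_view  # propose_view(v) callback
        self.initial_duration = initial_duration

    def run(self, num_views):
        duration = self.initial_duration
        for view in range(num_views):
            self.on_propose_view(view)  # signal the consensus layer
            time.sleep(duration)        # stay in the view for its full duration
            duration *= 2               # double the duration for the next view

sync = ViewDoublingSynchronizer(lambda v: print(f"propose_view({v})"),
                                initial_duration=0.01)
sync.run(num_views=5)
```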
1.4 Contributions

The contributions of this paper are as follows:
• To the best of our knowledge, this is the first paper to formally define the problem of view synchronization.
• It includes two natural synchronizer algorithms cast into this framework, used as a basis for comparison.
• It introduces $\text{Cogsworth}$, a leader-based Byzantine synchronizer exhibiting faultless and expected linear communication complexity and constant latency.

The rest of this paper is structured as follows: Section 2 discusses the model; Section 3 formally presents the view synchronization problem; Section 4 presents the $\text{Cogsworth}$ view synchronization algorithm with a formal correctness proof and latency and communication cost analysis; Section 5 describes real-world implementations where the view synchronization algorithms can be integrated; Section 6 presents related work; and Section 7 concludes the paper. The descriptions of the two natural view synchronization algorithms, view doubling and broadcast-based, are presented in Appendix A.

2. Model

We follow the eventually synchronous model [2], in which the execution is divided into two periods: first, an unbounded period of asynchrony, where messages do not have a bounded time until delivery; and then, a period of synchrony, where messages are delivered within a bounded time, denoted $\delta$. The switch between the two periods occurs at a moment named the Global Stabilization Time ($\text{GST}$). We assume all messages sent before GST arrive at or before $\text{GST} + \delta$. Our model consists of a set $\Pi = \left\lbrace \mathcal{P}_i \right\rbrace_{i=1}^n$ of $n$ nodes, and a known mapping, denoted $\text{Leader}(\cdot)\colon \mathbb{N} \mapsto \Pi$, that continuously rotates among the nodes. Formally, $\forall j \geq 0 \colon \bigcup_{i=j}^{\infty} \text{Leader}(i) = \Pi$. We use a cryptographic signing scheme, a public key infrastructure (PKI) to validate signatures, as well as a threshold signing scheme [29][30][31]. The threshold signing scheme is used to create a compact signature of $k$-of-$n$ nodes and is used in other consensus protocols such as [30]. Usually $k = f+1$ or $k = 2f+1$. We assume a non-adaptive adversary who can corrupt up to $f < n/3$ nodes at the beginning of the execution. This corruption is done without knowledge of the mapping $\text{Leader}(\cdot)$. The set of the remaining $n-f$ honest nodes is denoted $H$. We assume the honest nodes may start their local execution at different times. In addition, as in [1][30], we assume the adversary is polynomial-time bounded, i.e., the probability that it breaks the cryptographic assumptions in this paper (e.g., the cryptographic signatures, threshold signatures, etc.) is negligible.

3. Problem Definition

We define a synchronizer, which solves the view synchronization problem, to be a long-lived task with an API that includes a $\textsf{wish\_to\_advance}()$ operation and a $\textsf{propose\_view}(v)$ signal, where $v \in \mathbb{N}$. Nodes may repeatedly invoke $\textsf{wish\_to\_advance}()$, and in return get a possibly infinite sequence of $\textsf{propose\_view}(\cdot)$ signals. Informally, the synchronizer should be used by a higher-level abstraction (e.g., a BFT state-machine replication protocol) to synchronize view numbers in the following way: all nodes start in view $0$, and whenever they wish to move to the next view they invoke $\textsf{wish\_to\_advance}()$.
However, they move to view $v$ only when they get a ${\textsf{propose\_view}}(v)$ signal.

Formally, a time interval $\mathcal{I}$ consists of a starting time $t_1$, an ending time $t_2 \ge t_1$, and all the time points between them. $\mathcal{I}$'s length is $\left| \mathcal{I} \right| = t_2 - t_1$. We say $\mathcal{I}' \subseteq \mathcal{I}''$ if $\mathcal{I}'$ begins after or when $\mathcal{I}''$ begins, and ends before or when $\mathcal{I}''$ ends. We denote by $t^{\textit{prop}}_{\mathcal{P},v}$ the time when node $\mathcal{P}$ gets the signal ${\textsf{propose\_view}}(v)$, and assume that all nodes get ${\textsf{propose\_view}}(0)$ at the beginning of their execution. We denote by $t=0$ the time when the last honest node began its execution; formally, $\max_{\mathcal{P} \in H} t^{\textit{prop}}_{\mathcal{P},0} = 0$. We further denote by $\Delta^{\textit{exec}}_{\mathcal{P},v}$ the time interval in which node $\mathcal{P}$ is in view $v$, i.e., $\Delta^{\textit{exec}}_{\mathcal{P},v}$ begins at $t^{\textit{prop}}_{\mathcal{P},v}$ and ends at $t^{\textit{end}}_{\mathcal{P},v} \triangleq \min_{v' > v} \left\lbrace t^{\textit{prop}}_{\mathcal{P},v'} \right\rbrace$. We say node $\mathcal{P}$ is at view $v$ at time $t$, or executes view $v$ at time $t$, if $t \in \Delta^{\textit{exec}}_{\mathcal{P},v}$.

We are now ready to define the two properties that any synchronizer must achieve. The first property, named view synchronization, ensures that there is an infinite number of views with an honest leader that all the correct nodes execute for a sufficiently long time:

Property 1 (View Synchronization): For every $c \ge 0$ there exist $\alpha > 0$ and an infinite number of time intervals and views $\left\lbrace \mathcal{I}_k, v_k \right\rbrace_{k=1}^{\infty}$, such that if the interval between every two consecutive calls to $\textsf{wish\_to\_advance}()$ by an honest node is $\alpha$, then for any $k \ge 1$ and any $\mathcal{P} \in H$ the following holds:

1. $\left| \mathcal{I}_k \right| \ge c$
2. $\mathcal{I}_k \subseteq \Delta^{\textit{exec}}_{\mathcal{P},v_k}$
3. $\text{Leader}(v_k) \in H$

The second property ensures that a synchronizer will only signal a new view if an honest node wished to advance to it. Formally:

Property 2 (Synchronization Validity): The synchronizer signals ${\textsf{propose\_view}}(v')$ only if there exists an honest node $\mathcal{P} \in H$ and some view $v$ s.t. $\mathcal{P}$ calls $\textsf{wish\_to\_advance}()$ at least $v' - v$ times while executing view $v$.

The parameter $\alpha$, which is used in Property 1, is the time an honest node waits between two successive invocations of $\textsf{wish\_to\_advance}()$, and may differ between view synchronization algorithms. This parameter is needed to make sure that $\textsf{wish\_to\_advance}()$ is called an infinite number of times in an infinite run. In reality, it is likely that in most view synchronization algorithms $\alpha$ is larger than some value $d$ which is a function of the message delivery bound $\delta$, and also of $c$ from Property 1, i.e., the synchronization algorithm will work for any $\alpha \geq d \left( \delta, c \right)$. In this case, a consensus protocol using the synchronizer can execute the same view as long as progress is made, and trigger a new view synchronization in case liveness is lost. See Appendix A.3 for concrete examples.
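To make this API concrete, here is a minimal interface sketch in Python. The class name, callback wiring, and method signatures are our illustrative choices, not part of the paper's formalism; a real synchronizer would plug its own logic into this shape.

# Minimal sketch of the synchronizer API from Section 3 (illustrative only).
from abc import ABC, abstractmethod
from typing import Callable

class Synchronizer(ABC):
    """Long-lived view-synchronization task.

    The upper layer (e.g., a BFT SMR protocol) calls wish_to_advance() every
    alpha time units when it wants to leave its current view, and moves to
    view v only upon receiving a propose_view(v) signal via the callback.
    """

    def __init__(self, on_propose_view: Callable[[int], None]):
        self.on_propose_view = on_propose_view
        # All nodes are assumed to get propose_view(0) when they start.
        self.on_propose_view(0)

    @abstractmethod
    def wish_to_advance(self) -> None:
        """Invoked repeatedly by the upper-layer protocol."""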
The requirement that the leader of all the synchronized views is honest is needed to ensure that once a view is synchronized, the leader of that view will drive progress in the upper-layer protocol, thus ensuring liveness. Without this condition, a synchronizer might only synchronize views with faulty leaders. Synchronization validity (Property 2) ensures that the synchronizer does not suggest a new view to the upper-layer protocol unless an honest node running that upper-layer protocol wanted to advance to that view.

Latency and communication complexity

In order to define how the latency and message communication complexity are calculated, we first define $\mathcal{I}_k^{\textit{start}}$ to be the time at which the $k$-th view synchronization is reached. Formally, $\mathcal{I}_k^{\textit{start}} \triangleq \max_{\mathcal{P} \in H} \left\lbrace t^{\textit{prop}}_{\mathcal{P},v_k} \right\rbrace$, where $v_k$ is defined according to Property 1. With this we can define the latency of a synchronizer implementation:

Definition 3.1 (Synchronizer Latency): The latency of a synchronizer is defined as $\lim_{\ell \to \infty} \left( \left( \mathcal{I}_1^{\textit{start}} - \text{GST} \right) + \sum_{k=2}^{\ell} \left( \mathcal{I}_{k}^{\textit{start}} - \mathcal{I}_{k-1}^{\textit{start}} \right) \right) / \ell$.

Next, in order to define communication complexity, we first need to introduce a few more notations. Let $M_{\mathcal{P},v_1 \to v_2}$ be the total number of messages $\mathcal{P}$ sent between $t^{\textit{prop}}_{\mathcal{P},v_1}$ and $t^{\textit{prop}}_{\mathcal{P},v_2}$. In addition, denote by $M_{\mathcal{P},\to v}$ the total number of messages sent by $\mathcal{P}$ between the beginning of $\mathcal{P}$'s execution and $t^{\textit{prop}}_{\mathcal{P},v}$. With this, we define the communication complexity of a synchronizer implementation:

Definition 3.2 (Synchronizer communication complexity): Denote by $v_k$ the $k$-th view in which view synchronization occurs (Property 1). The message communication cost of a synchronizer is defined as $\lim_{\ell \to \infty} \left( \sum_{\mathcal{P} \in H} M_{\mathcal{P},\to v_1} + \sum_{k=2}^{\ell} \left( \sum_{\mathcal{P} \in H} M_{\mathcal{P},v_{k-1} \to v_{k}} \right) \right) / \ell$.

This concludes the formal definition of the view synchronization problem. Next, we present $\text{Cogsworth}$, a view synchronization algorithm with expected constant latency and linear communication complexity in a variety of scenarios.

4. Cogsworth: Leader-Based Synchronizer

Before presenting $\text{Cogsworth}$, it is worth mentioning that we assume that all messages between nodes are signed and verified; for brevity, we omit the details about the cryptographic signatures. In the algorithm, when a node collects messages from $x$ senders, it is implied that these messages carry $x$ distinct signatures. We also assume that the $\text{Leader}(\cdot)$ mapping is based on a permutation of the nodes such that every consecutive $f+1$ views have at least one honest leader, e.g., ${\text{Leader}(v) = \left( v \bmod n \right) + 1}$. The algorithm can easily be altered for a scenario where this is not the case.

4.1 Overview

$\text{Cogsworth}$ is a new approach to view synchronization that leverages leaders to optimistically achieve linear communication. The key idea is that instead of nodes broadcasting synchronization messages all-to-all and incurring quadratic communication, nodes send messages to the leader of the view they wish to enter.
If the leader is honest, it will relay a single broadcast containing an aggregate of all the messages it received, thus incurring only linear communication. If the leader of a view $v$ is Byzantine, it might not help as a relay. In this case, the nodes time out and then try to enlist the leaders of subsequent views, one by one, up to view $v+f+1$, to help with relaying. Since at least one of those leaders is honest, one of them will successfully relay the aggregate.

The full protocol is presented in Algorithm 1, and consists of several message types. The first two are sent from a node to a leader. They are used to signal to the leader that the node is ready to advance to the next stage in the protocol. Those messages are named $\text{``}\textsf{WISH},v\text{''}$ and $\text{``}\textsf{VOTE},v\text{''}$, where $v$ is the view the message refers to. The other two message types are sent from leaders to nodes. The first is called $\text{``}\textsf{TC},v\text{''}$ (short for “Time Certificate”) and is sent when the leader receives $f+1$ $\text{``}\textsf{WISH},v\text{''}$ messages; the second is called $\text{``}\textsf{QC},v\text{''}$ (short for “Quorum Certificate”) and is sent when the leader receives $2f+1$ $\text{``}\textsf{VOTE},v\text{''}$ messages. In both cases, a leader aggregates the messages it receives using threshold signatures such that each broadcast message from the leader contains only one signature.

The general flow of the protocol is as follows: When $\textsf{wish\_to\_advance}()$ is invoked, the node sends $\text{``}\textsf{WISH},v\text{''}$ to $\text{Leader}(v)$, where $v$ is the view succeeding $\textit{curr}$ (Line 5). Next, there are two options: (i) If $\text{Leader}(v)$ forms a $\text{``}\textsf{TC},v\text{''}$, it broadcasts it to all nodes (Line 7). The nodes then respond with a $\text{``}\textsf{VOTE},v\text{''}$ message to the leader (Line 10). (ii) Otherwise, if $2\delta$ time elapses after sending $\text{``}\textsf{WISH},v\text{''}$ to $\text{Leader}(v)$ without receiving $\text{``}\textsf{TC},v\text{''}$, a node gives up and sends $\text{``}\textsf{WISH},v\text{''}$ to the next leader, i.e., $\text{Leader}(v+1)$ (Line 24). It then waits again $2\delta$ before forwarding $\text{``}\textsf{WISH},v\text{''}$ to $\text{Leader}(v+2)$, and so on, until $\text{``}\textsf{TC},v\text{''}$ is received.

Whenever $\text{``}\textsf{TC},v\text{''}$ has been received, a node sends $\text{``}\textsf{VOTE},v\text{''}$ (even if it did not send $\text{``}\textsf{WISH},v\text{''}$) to $\text{Leader}(v)$. Additionally, as above, it enlists leaders one by one until $\text{``}\textsf{QC},v\text{''}$ is obtained. Here, the node sends leaders $\text{``}\textsf{TC},v\text{''}$ as well as $\text{``}\textsf{VOTE},v\text{''}$. When a node finally receives $\text{``}\textsf{QC},v\text{''}$ from a leader, it enters view $v$ immediately (Line 17).

4.2 Correctness

We will prove that $\text{Cogsworth}$ achieves eventual view synchronization (Property 1) for any $\alpha \ge 4\delta$, as well as synchronization validity (Property 2). Thus, the claims and lemmas below assume this.
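Before turning to the correctness claims, the flow of Section 4.1 can be summarized in a compact, single-threaded Python sketch. This is our illustration, not the paper's Algorithm 1: the network object (net), handler names, and timeout wiring are assumptions, threshold-signature aggregation is elided, and the TC-forwarding step of Line 12 is simplified away.

# Illustrative sketch of the Cogsworth message flow (assumptions noted above).
from collections import defaultdict

class CogsworthNode:
    def __init__(self, node_id, n, f, net):
        self.id, self.n, self.f, self.net = node_id, n, f, net
        self.curr = 0                      # current view
        self.wishes = defaultdict(set)     # leader state: WISH senders per view
        self.votes = defaultdict(set)      # leader state: VOTE senders per view

    def leader(self, v):
        return (v % self.n) + 1            # rotating mapping from Section 4

    def wish_to_advance(self):
        v = self.curr + 1
        self.net.send(self.leader(v), ("WISH", v, self.id))      # Line 5

    def on_wish(self, v, sender):          # leader role
        self.wishes[v].add(sender)
        if len(self.wishes[v]) == self.f + 1:
            self.net.multicast(("TC", v))  # aggregate f+1 wishes; Line 7

    def on_tc(self, v):
        # Sent even if this node never sent WISH for v.
        self.net.send(self.leader(v), ("VOTE", v, self.id))      # Line 10

    def on_vote(self, v, sender):          # leader role
        self.votes[v].add(sender)
        if len(self.votes[v]) == 2 * self.f + 1:
            self.net.multicast(("QC", v))  # aggregate 2f+1 votes; Lines 14-16

    def on_qc(self, v):
        self.curr = max(self.curr, v)      # enter view v immediately; Line 17

    def on_timeout(self, v, k):
        # After 2*delta with no TC (resp. QC), enlist Leader(v+k), k <= f+1;
        # the real protocol forwards TC and VOTE here as well.
        self.net.send(self.leader(v + k), ("WISH", v, self.id))  # Line 24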
We start by proving that if an honest node entered a new view, and the leader of that view is honest, then all the other honest nodes will also enter that view within a bounded time.

Claim 4.1: After $\text{GST}$, if an honest node enters view $v$ at time $t$, and the leader of view $v$ is honest, then all the honest nodes enter view $v$ by $t+4\delta$, i.e., if $\text{Leader}(v) \in H$ then ${\max_{\mathcal{P}_{i} \in H} \left\lbrace t^{\textit{prop}}_{\mathcal{P}_{i},v} \right\rbrace - \min_{\mathcal{P}_{j} \in H} \left\lbrace t^{\textit{prop}}_{\mathcal{P}_{j},v} \right\rbrace \le 4\delta}$.

PROOF: Let $\mathcal{P}_{i}$ be the first honest node that entered view $v$ at time $t$. $\mathcal{P}_{i}$ entered view $v$ since it received $\text{``}\textsf{QC},v\text{''}$ from $\text{Leader}(r)$ such that $v \le r \le v+f+1$ (Line 17). If $r=v$ then we are done, since when $\text{Leader}(v)$ sent $\text{``}\textsf{QC},v\text{''}$ it also sent it to all the other honest nodes (Line 16), who will receive it by $t + \delta$, and all the honest nodes will enter view $v$. Next, if $r > v$ then the only way for $\text{Leader}(r)$ to send $\text{``}\textsf{QC},v\text{''}$ is if it gathered $2f+1$ $\text{``}\textsf{VOTE},v\text{''}$ messages (Line 14), meaning at least $f+1$ of the $\text{``}\textsf{VOTE},v\text{''}$ messages were sent by honest nodes. An honest node will send a $\text{``}\textsf{VOTE},v\text{''}$ message only after first receiving $\text{``}\textsf{TC},v\text{''}$ from $\text{Leader}(r')$ s.t. $v \le r' \le v+f+1$ (Line 10). Since when receiving a $\text{``}\textsf{TC},v\text{''}$ an honest node sends the $\text{``}\textsf{TC},v\text{''}$ to $\text{Leader}(v)$ (Line 12), $\text{Leader}(v)$ will receive $\text{``}\textsf{TC},v\text{''}$ by $t+\delta$ and will forward it to all other nodes by $t+2\delta$, who will send $\text{``}\textsf{VOTE},v\text{''}$ to $\text{Leader}(v)$ by $t+3\delta$; by $t+4\delta$ all honest nodes will receive $\text{``}\textsf{QC},v\text{''}$ from $\text{Leader}(v)$ and enter view $v$.

Next, assuming an honest node entered a new view, we bound the time it takes at least $f+1$ honest nodes to enter the same view. Note that this time we do not assume anything about the leader of the new view, and it might be Byzantine.

Claim 4.2: After $\text{GST}$, when an honest node enters view $v$ at time $t$, at least $f+1$ honest nodes enter view $v$ by $t+2\delta(f+2)$, i.e., after $\text{GST}$, for every $v$ there exists a group $S$ of honest nodes s.t. $\left| S \right| \ge f+1$ and $\max_{\mathcal{P}_{i} \in S} \left\lbrace t^{\textit{prop}}_{\mathcal{P}_{i},v} \right\rbrace - \min_{\mathcal{P}_{j} \in S} \left\lbrace t^{\textit{prop}}_{\mathcal{P}_{j},v} \right\rbrace \le 2\delta(f+2)$.

PROOF: Let $\mathcal{P}_{i}$ be the first node that entered view $v$ at time $t$. $\mathcal{P}_{i}$ entered $v$ since it received $\text{``}\textsf{QC},v\text{''}$ from $\text{Leader}(r)$, $v \le r \le v+f+1$ (Line 17). If $\text{Leader}(r)$ is honest then we are done, since $\text{Leader}(r)$ multicast $\text{``}\textsf{QC},v\text{''}$ to all honest nodes (Line 16), and all honest nodes will also enter view $v$ by $t+\delta$.
Next, if $\text{Leader}(r)$ is Byzantine, then it might have sent $\text{``}\textsf{QC},v\text{''}$ to a subset of the honest nodes, potentially only to $\mathcal{P}_{i}$. In order to form a $\text{``}\textsf{QC},v\text{''}$, $\text{Leader}(r)$ had to receive $2f+1$ $\text{``}\textsf{VOTE},v\text{''}$ messages (Line 14), meaning that at least $f+1$ honest nodes sent $\text{``}\textsf{VOTE},v\text{''}$ to $\text{Leader}(r)$. Denote by $S$ the group of those $f+1$ honest nodes. Each node in $S$ sent a $\text{``}\textsf{VOTE},v\text{''}$ message since it received $\text{``}\textsf{TC},v\text{''}$ from $\text{Leader}(r')$ for ${v \le r' \le v+f+1}$ (Line 10). Note that different nodes in $S$ might have received $\text{``}\textsf{TC},v\text{''}$ from a different leader, i.e., $\text{Leader}(r')$ might not be the same leader for each node in $S$. After a node in $S$ sent $\text{``}\textsf{VOTE},v\text{''}$ it will either receive a $\text{``}\textsf{QC},v\text{''}$ within $2\delta$ and enter view $v$, or time out after $2\delta$ and send $\text{``}\textsf{VOTE},v\text{''}$ together with $\text{``}\textsf{TC},v\text{''}$ to $\text{Leader}(v+1)$ (Line 30). They will continue to do so while not receiving $\text{``}\textsf{QC},v\text{''}$ for the next $f+1$ views after $v$. This ensures that at least one honest leader will receive $\text{``}\textsf{TC},v\text{''}$ by $t+2\delta f + \delta$ at the latest. Then, this honest leader will multicast the $\text{``}\textsf{TC},v\text{''}$ it received (Line 7), and at most by $t+2\delta(f+1)$ all the honest nodes will receive $\text{``}\textsf{TC},v\text{''}$. The honest nodes will then send $\text{``}\textsf{VOTE},v\text{''}$ to the honest leader, which will be able to create a $\text{``}\textsf{QC},v\text{''}$ and multicast it. The $\text{``}\textsf{QC},v\text{''}$ will thus be received by all the honest nodes by $t+2\delta(f+2)$, and we are done.

Next, we show that during the execution, an honest node will enter some new view.

Claim 4.3: After $\text{GST}$, some honest node $\mathcal{P}_{i}$ enters a new view.

PROOF: From Claim 4.2, if an honest node enters some view $v$, the time by which at least $f$ other honest nodes also enter $v$ is bounded. Eventually, those honest nodes will time out and $\textsf{wish\_to\_advance}()$ will be invoked (Line 5), which will cause them to send $\text{``}\textsf{WISH},v+1\text{''}$ to $\text{Leader}(v+1)$. If $\text{Leader}(v+1)$ is honest, then it will send a $\text{``}\textsf{TC},v+1\text{''}$ to all the nodes (Line 7), which will be followed by the leader sending a $\text{``}\textsf{QC},v+1\text{''}$ (Line 16), and all honest nodes will enter view $v+1$.
If $\text{Leader}(v+1)$ is not honest, then the protocol dictates that the honest nodes that wished to enter $v+1$ will continue to forward their $\text{``}\textsf{WISH},v+1\text{''}$ message to the next leaders (up to $\text{Leader}(v+f+1)$, Line 24) until each of them receives $\text{``}\textsf{TC},v+1\text{''}$. This is guaranteed since at least one of those $f+1$ leaders is honest. The same process is then followed for $\text{``}\textsf{QC},v+1\text{''}$ (Line 28), and eventually all of those $f+1$ honest nodes will enter view $v+1$.

Lemma 4.4: $\text{Cogsworth}$ achieves eventual view synchronization (Property 1).

PROOF: From Claim 4.3, an honest node will eventually enter a new view, and by Claim 4.2, at least $f+1$ honest nodes will enter the same view within a bounded time. By applying these claims again and again, eventually a view with an honest leader is reached, and by Claim 4.1 all honest nodes will enter that view within $4\delta$. Thus, for any $c \ge 0$, if the $\text{Cogsworth}$ protocol is run with $\alpha = 4\delta + c$, it is guaranteed that all honest nodes will eventually execute the same view for $\left| \mathcal{I} \right| = c$. The above arguments can be applied inductively, i.e., there exists an infinite number of such intervals and views in which view synchronization is reached, ensuring also that the synchronized views have an honest leader.

Lemma 4.5: $\text{Cogsworth}$ achieves synchronization validity (Property 2).

PROOF: To enter a new view $v$, a $\text{``}\textsf{QC},v\text{''}$ is needed, which consists of $2f+1$ $\text{``}\textsf{VOTE},v\text{''}$ messages, i.e., at least $f+1$ are from honest nodes. An honest node will send a $\text{``}\textsf{VOTE},v\text{''}$ message only when it receives a $\text{``}\textsf{TC},v\text{''}$ message, which requires $f+1$ $\text{``}\textsf{WISH},v\text{''}$ messages, meaning at least one of those messages came from an honest node. An honest node will send $\text{``}\textsf{WISH},v\text{''}$ when the upper-layer protocol invokes $\textsf{wish\_to\_advance}()$ while it was in view $v-1$.

This concludes the proof that $\text{Cogsworth}$ is a synchronizer for any $\alpha \ge 4\delta$. Similar to the broadcast-based synchronizer, it allows upper-layer protocols to determine the time they spend in each view.

4.3 Latency and communication

Let $v^{\text{GST}}_{\textit{max}}$ be the maximum view an honest node is in at $\text{GST}$, and let $X$ denote the number of consecutive Byzantine leaders after $v^{\text{GST}}_{\textit{max}}$. Assuming that leaders are randomly allocated to views, $X$ is a random variable of a geometric distribution with a mean of $n / (n-f)$. This means that in the worst case of $t = f = \left\lfloor n/3 \right\rfloor$, ${\mathbb{E}(X) = (3f+1)/(2f+1) \approx 3/2}$. Since when $f+1$ honest nodes at view $v$ want to advance to view $v+1$, and $\text{Leader}(v+1)$ is honest, all honest nodes enter view $v+1$ in constant time (Claim 4.1), the latency for view synchronization, in general, is $O(X \cdot \delta)$. For the same reasoning, this is also the case for any two intervals between view synchronizations (see Definition 3.1). In the worst case of $X = t$, where $t$ is the number of actual failures during the run, the latency is linear in the view duration, i.e., $O(t \cdot \delta)$.
But, in the expected case of a constant number of consecutive Byzantine leaders after $v^{\text{GST}}_{\textit{max}}$, the expected latency is $O(\delta)$.

For communication complexity, there is a difference between Byzantine failures and benign ones. If a Byzantine leader of a view $r$ obtains $\text{``}\textsf{TC},v\text{''}$ for $r-(f+1) \le v \le r$, then it can forward the $\text{``}\textsf{TC},v\text{''}$ to all the $f+1$ leaders that follow view $v$, and those leaders will multicast the message (Line 7), leading to an expected $O(n^2)$ communication complexity in the case of at least one Byzantine leader after $v^{\text{GST}}_{\textit{max}}$. In the worst case of a cascade of $t$ failures after $v^{\text{GST}}_{\textit{max}}$, the communication complexity is $O(t \cdot n^2)$.

In the case of benign failures, communication complexity depends on $X$, since the first correct leader after $v^{\text{GST}}_{\textit{max}}$ will get all nodes to enter its view and achieve view synchronization, and the benign leaders before it will only cause delays in terms of latency, but will not increase the overall number of messages sent. Thus, in general, the communication complexity with benign failures is $O(X \cdot n)$. In the worst case of $X = t$ the communication complexity is $O(t \cdot n)$, but in the average case it is linear, i.e., $O(n)$. For the same reasoning, this is also the case between any consecutive occurrences of view synchronization (see Definition 3.2).

To sum up, the expected latency for both benign and Byzantine failures is $O(\delta)$, and worst-case ${O(t \cdot \delta)}$. Communication complexity for Byzantine nodes is optimistically $O(n)$, expected ${O(n^2)}$, and worst-case $O(t \cdot n^2)$; for benign failures it is expected $O(n)$ and worst-case $O(t \cdot n)$.

$\text{Cogsworth}$ achieves expected constant latency and linear communication under a broad set of assumptions. It is another step in the direction of reaching the quadratic communication lower bound of Byzantine consensus in an asynchronous model [32]. In addition to $\text{Cogsworth}$ we present in Appendix A two more view synchronization algorithms. The first one is view doubling, where nodes simply double their view duration when entering a new view, which guarantees that eventually all nodes will be in the same view for sufficiently long. The other algorithm is borrowed from consensus protocols such as PBFT [15] and SBFT [16]. In Appendix A.3 we present a comprehensive discussion of all three algorithms.

5. Usages and Implementations of Synchronizers

In this section, we describe real-world usages of the view synchronization algorithms. Often, in different works, the terms “phase,” “round,” and “view” are mixed. In this work, when “view” is mentioned, the meaning is that all the nodes agree on some integer value, mapped to a specific node that acts as the leader. There are SMR protocols where, as long as the leader is driving progress in the protocol, it is not changed. This corresponds to all the nodes staying in the same view, and this view can be divided into many phases; e.g., in PBFT [15] a single-shot consensus consists of two phases. In an SMR protocol based on PBFT, a view can consist of many more phases, all with the same leader as long as progress is made, and there is no bound on the view duration.
As mentioned in Section 1.2, in HotStuff [17] the view synchronization logic is encapsulated in a module named a PaceMaker, but the paper does not provide a formal definition of what the PaceMaker does, nor an implementation. The most developed work which adopted HotStuff as the core of its consensus protocol is LibraBFT [33]. In LibraBFT, a module also named a PaceMaker is in charge of advancing views. In this module, whenever a node times out of its current view, say view $v$, it sends a message named “TimeoutMsg, $v$,” and whenever it receives $2f+1$ of these messages, it advances to view $v$. In addition, the node sends an aggregated signature of these messages to the leader of view $v$, which, according to the paper, if the leader of $v$ is honest, guarantees that all other nodes will enter view $v$ within $2\delta$. The current implementation of the PaceMaker has linear communication as long as there are honest leaders, but quadratic communication upon reaching a view with a Byzantine one. The latency is constant.

Many other works on consensus rely on view synchronization as part of their design. For example, in [34] a doubling view synchronization technique is used: “For the view-change process, each replica will start with a timeout $\delta$ and double this timeout after each view-change (exponential backoff). When communication becomes reliable, exponential backoff guarantees that all replicas will eventually view-change to the same view at the same time.”

6. Related Work

View synchronization in consensus protocols The idea of doubling round duration to cope with partial synchrony borrows from the DLS work [2], and has been employed in PBFT [15] and in various works based on DLS/PBFT [33][25][17]. In these works, nodes double the length of each view when no progress is made. The broadcast-based synchronization algorithm is also employed as part of the consensus protocol in works such as PBFT. HotStuff [17] encapsulates view synchronization in a separate module named a PaceMaker. Here, we provide a formal definition, concrete solutions, and performance analysis of such a module. HotStuff is the core consensus protocol of various works such as Cypherium [11], PaLa [13], and LibraBFT [33]. Other consensus protocols such as Tendermint [25] and Casper [23] reported issues related to the liveness of their design [26][27].

Notion of time in distributed systems Causal ordering is a notion designed to give a partial ordering to events in a distributed system. The best-known protocols to provide such an ordering are Lamport timestamps [35] and vector clocks [36]. Both works assume a non-crash setting. Another line of work stemmed from Awerbuch's work on synchronizers [37]. The synchronizer in Awerbuch's work is designed to allow an algorithm that is designed to run in a synchronous network to run in an asynchronous network without any changes to the synchronous protocol itself. This work is orthogonal to the work in this paper. Recently, Ford published preliminary work on Threshold Logical Clocks (TLC) [38]. In a crash-fail asynchronous setting, TLC places a barrier on view advancement, i.e., nodes advance to view $v+1$ only after a threshold of them reached view $v$. A few techniques are also described on how to convert TLCs to work in the presence of Byzantine nodes. The TLC notion of a view “barrier” is orthogonal to view synchronization, though a 2-phase TLC is very similar to our reliable broadcast synchronizer.
Failure detectors The seminal work of Chandra & Toueg [18][19] introduces the leader election abstraction, denoted $\Omega$, and proves it is the weakest failure detector needed to solve consensus. By using $\Omega$, consensus protocols can usually be written in a more natural way. The view synchronization problem is similar to $\Omega$, but differs in several ways. First, it lacks any notion of leader and isolates the view synchronization component. Second, view synchronization adds recurrence to the problem definition. Third, it has a built-in notion of view duration: nodes commit to spend a constant time in a view before moving to the next. Last, this paper focuses on the latency and communication costs of synchronizer implementations.

Latency and message communication for consensus Dutta et al. [39] look at the number of rounds it takes to reach consensus in the crash-fail model after a time defined as GSR (Global Stabilization Round), which only correct nodes enter. This work provides an upper and a lower bound for reaching consensus in this setting. Other works such as [40][41] further discuss the latency of reaching consensus in the crash-fail model. These works focus on the latency of reaching consensus after $\text{GST}$. Both bounds are tangential to our performance measures, as they analyze round latency. GIRAF [42][43] is a view-based framework to analyze consensus protocols, and specifically analyzes protocols in the crash-fail model. Dolev et al. [32] showed a quadratic lower bound on the communication complexity of reaching deterministic Byzantine broadcast, which can be reduced to consensus. This lower bound is an intuitive baseline for work like ours, though it remains open to prove a quadratic lower bound on view synchronization per se.

Clock synchronization The clock synchronization problem [44] in a distributed system requires that the maximum difference between the local clocks of the participating nodes is bounded throughout the execution, which is possible since most works assume a synchronous setting. The clock synchronization problem is well-defined and well-studied, and there are many different algorithms to ensure it in different models, e.g., [45][46][47]. In practical distributed networks, the most prevalent protocol is NTP [48]. Again, clock synchronization is an orthogonal notion to view synchronization: the latter guarantees that nodes enter and stay in the same view within a bounded time window, but does not place any bound on the views of different nodes at any point in time.

7. Conclusion

We formally defined the Byzantine view synchronization problem, which bridges classic works on failure detectors aimed to solve one-time consensus, and SMR, which consists of multiple one-time consensus instances. We presented $\text{Cogsworth}$, a view synchronization algorithm that displays linear communication cost and constant latency under a broad variety of scenarios. This project was partially funded by a grant from the Technion Hiroshi Fujiwara Cyber Security Research Center.

A. Protocols for View Synchronization

In this section we place into the view synchronization framework two view synchronization algorithms which are used in various consensus protocols, prove their correctness, and discuss their latency and message complexity. All protocol messages between nodes are signed and verified; for brevity, we omit the details about the cryptographic signatures.
A.1 View Doubling Synchronizer

A.1.1 Overview

A solution approach inspired by PBFT [15] is to use view doubling as the view synchronization technique. In this approach, each view has a timer, and if no progress is made the node tries to move to the next view and doubles the timer duration for the next view. Whenever progress is made, the node resets its timer. This approach is intertwined with the consensus protocol itself, making it hard to separate, as the messages of the consensus protocol are part of the mechanism used to reset the timer. We adopt this approach and turn it into an independent synchronizer that requires no messages.

First, the nodes need to agree on some predefined constant $\beta > 0$, which is the duration of the first view. Next, there exists some global view duration mapping $\textit{VD}(\cdot) \colon \mathbb{N} \mapsto \mathbb{R}^+$, which maps a view $v$ to its duration: $\textit{VD}(v) = 2^v \beta$. A node in a certain view must move to the next view once this duration passes, regardless of the outer protocol actions.

The view doubling protocol is described in Algorithm 2. A node starts at view $0$ with a view duration of $\beta > 0$ (Line 4). Next, when $\textsf{wish\_to\_advance}()$ is called, a counter named $\textit{wish}$ is incremented (Line 5). This counter guarantees validity by moving to a view $v$ only when the $\textit{wish}$ counter reaches $v$. Every time a view ends (Line 7), an internal counter $\textit{curr}$ is incremented, and if $\textit{wish}$ allows it, the synchronizer outputs ${\textsf{propose\_view}}(v)$ with a new view $v$.

A.1.2 Correctness

We show that the view doubling protocol achieves the properties required of a synchronizer.

Lemma A.1: The view doubling protocol achieves view synchronization (Property 1).

PROOF: Since this protocol does not require sending messages between nodes, the Byzantine nodes cannot affect the behavior of the honest nodes, and we can treat all nodes as honest. Recall that $t=0$ denotes the time by which all the honest nodes started their local execution of Algorithm 2. Let $\textit{init}_{i}$ be the view at which node $\mathcal{P}_{i}$ is at time $t=0$. W.l.o.g. assume ${\textit{init}_{1} \le \textit{init}_{2} \le \cdots \le \textit{init}_{n}}$ at time $t=0$. It follows from the definition of $\textit{init}_{i}$ and the sum of a geometric series that

$t^{\textit{prop}}_{\mathcal{P}_{i},v} = \beta \left( 2^v - 2^{\textit{init}_{i}} \right).$ (1)

We begin by showing that for every $i \le j$ the following condition holds: $t^{\textit{prop}}_{\mathcal{P}_{i},v} \ge t^{\textit{prop}}_{\mathcal{P}_{j},v}$ for any view $v$. Let $k = \textit{init}_{i}$ and $l = \textit{init}_{j}$. From the ordering of the node starting times, $k \le l$. We get: $t^{\textit{prop}}_{\mathcal{P}_{i},v} \ge t^{\textit{prop}}_{\mathcal{P}_{j},v} \Leftrightarrow \beta \left( 2^v - 2^k \right) \ge \beta \left( 2^v - 2^l \right) \Leftrightarrow l \ge k.$ Hence, for $i \le j$, since at $t=0$ node $\mathcal{P}_{j}$ had a view number larger than $\mathcal{P}_{i}$'s, node $\mathcal{P}_{j}$ will start all future views before $\mathcal{P}_{i}$. Next, let $k = \textit{init}_{1}$ and $l = \textit{init}_{n}$, i.e., the minimal view and the maximal view at $t=0$, respectively.
To prove that the first interval of view synchronization is achieved, it suffices to show that for any constant $c \ge 0$ there exists a time interval $\mathcal{I}$ and a view $v$ such that $\left| \mathcal{I} \right| \ge c$ and $t^{\textit{prop}}_{n,v+1} - t^{\textit{prop}}_{1,v} \ge |\mathcal{I}|$. Using this, we will show that there exists an infinite number of such intervals and views, which will conclude the proof. This also ensures that there is an infinite number of such views with honest leaders.

Indeed, first note that, as shown above, node $\mathcal{P}_{n}$ will start view $v$ before any other node in the system. The left-hand side of the inequality is the length of time in which node $\mathcal{P}_{n}$ and node $\mathcal{P}_{1}$ both execute view $v$. If the left-hand side is negative, then no overlap exists, and if it is positive then an overlap exists. We get

$t^{\textit{prop}}_{n,v+1} - t^{\textit{prop}}_{1,v} \ge |\mathcal{I}| \Leftrightarrow \beta \left( 2^{v+1} - 2^l \right) - \beta \left( 2^v - 2^k \right) \ge |\mathcal{I}| \Leftrightarrow \beta \left[ 2^v + \left( 2^k - 2^l \right) \right] \ge |\mathcal{I}|.$ (2)

For any $c \ge 0$ there exists a minimum view number $v'$ such that the inequality holds, and since $k$ is the minimum view number at $t = 0$, this solution holds for any other node $\mathcal{P}_{i}$ as well. In addition, for any $v \ge v'$ the inequality also holds, meaning there is an infinite number of solutions for it, including an infinite number of views with an honest leader. If $\textsf{wish\_to\_advance}()$ is called in intervals with $0 < \alpha \le \beta$, then by the time the value of $\textit{curr}$ reaches some view value $v$, $\textit{wish}$ will always be bigger than $\textit{curr}$, meaning the condition in Line 10 will always be true, and the synchronizer will always propose view $v$ by the time stated in Equation 1.

Lemma A.2: The view doubling protocol achieves synchronization validity (Property 2).

PROOF: The if condition in Line 10 ensures that the output of the synchronizer will always be a view that a node wished to advance to.

This concludes the proof that view doubling is a synchronizer for any $0 < \alpha \le \beta$.

A.1.3 Latency and communication

Since the protocol sends no messages between the nodes, it is immediate that the communication complexity is $0$. As for latency, the minimal $v^*$ satisfying Equation 2 grows with $c \left( 2^{\textit{init}_{n}} - 2^{\textit{init}_{1}} \right)$. Since the initial view gap $\textit{init}_{n} - \textit{init}_{1}$ is unbounded, so is the view $v^*$ in which synchronization is reached. The latency to synchronization is $t^{\textit{prop}}_{\mathcal{P}_{1},v^*} = \beta \left( 2^{v^*} - 2^{\textit{init}_{1}} \right)$, which is also unbounded.

A.2 Broadcast-Based Synchronizer

A.2.1 Overview

Another leaderless approach is based on the Bracha reliable broadcast protocol [28] and is presented in Algorithm 3. In this protocol, when a node wants to advance to the next view $v$ it multicasts a $\text{``}\textsf{WISH},v\text{''}$ message (multicast means sending the message to all the nodes, including the sender) (Line 3). When at least $f+1$ $\text{``}\textsf{WISH},v\text{''}$ messages are received by an honest node, it multicasts $\text{``}\textsf{WISH},v\text{''}$ as well (Line 5). A node advances to view $v$ upon receiving $2f+1$ $\text{``}\textsf{WISH},v\text{''}$ messages (Line 7).
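For comparison with the sketch in Section 4, the broadcast-based flow is even simpler. Again, this is our illustration under the same assumptions (abstract network object, hypothetical handler wiring), not Algorithm 3's actual pseudocode; the $f+1$ relay and $2f+1$ entry thresholds follow the text.

# Illustrative sketch of the broadcast-based (Bracha-style) synchronizer.
from collections import defaultdict

class BroadcastSynchronizer:
    def __init__(self, node_id, n, f, net, on_propose_view):
        self.id, self.n, self.f, self.net = node_id, n, f, net
        self.on_propose_view = on_propose_view
        self.curr = 0
        self.wish_senders = defaultdict(set)  # distinct WISH,v senders seen
        self.sent_wish = set()                # views we already multicast for

    def wish_to_advance(self):                               # Line 3
        self._multicast_wish(self.curr + 1)

    def _multicast_wish(self, v):
        if v not in self.sent_wish:
            self.sent_wish.add(v)
            # multicast includes the sender itself
            self.net.multicast(("WISH", v, self.id))

    def on_wish(self, v, sender):
        self.wish_senders[v].add(sender)
        if len(self.wish_senders[v]) >= self.f + 1:          # Line 5: relay
            self._multicast_wish(v)
        if len(self.wish_senders[v]) >= 2 * self.f + 1 and v > self.curr:
            self.curr = v                                    # Line 7: enter v
            self.on_propose_view(v)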
A.2.2 Correctness

We start by showing that the broadcast-based synchronizer achieves eventual view synchronization (Property 1) for any $\alpha \geq 2\delta$. Thus, the claims and lemmas below assume this.

Claim A.3: After GST, whenever an honest node enters view $v$ at time $t$, all other honest nodes enter view $v$ by $t+2\delta$, i.e., $\max_{\mathcal{P}_{i} \in H} \left\lbrace t^{\textit{prop}}_{\mathcal{P}_{i},v} \right\rbrace - \min_{\mathcal{P}_{j} \in H} \left\lbrace t^{\textit{prop}}_{\mathcal{P}_{j},v} \right\rbrace \le 2\delta.$

PROOF: Suppose an honest node $\mathcal{P}_{i} \in H$ enters view $v$ at time $t^{\textit{prop}}_{\mathcal{P}_{i},v} = t$; then it received $2f+1$ $\text{``}\textsf{WISH},v\text{''}$ messages, from at least ${f+1}$ honest nodes (Line 7). Since the only option for an honest node to disseminate a $\text{``}\textsf{WISH},v\text{''}$ message is by multicasting it, by $t + \delta$ all nodes will receive at least $f+1$ $\text{``}\textsf{WISH},v\text{''}$ messages. Then, any remaining honest nodes (at most $f$ nodes) will receive enough $\text{``}\textsf{WISH},v\text{''}$ messages to multicast the message on their own (Line 5), which will be received by all the nodes by $t + 2\delta$. This ensures that all the honest nodes receive $2f+1$ $\text{``}\textsf{WISH},v\text{''}$ messages and enter view $v$ by $t + 2\delta$.

Claim A.4: After GST, eventually an honest node $\mathcal{P}_{i}$ enters some new view.

PROOF: All honest nodes begin their local execution at view $0$, potentially at different times. Based on the protocol, eventually at least $f+1$ nodes (some of them might be Byzantine) send $\text{``}\textsf{WISH},1\text{''}$. This is because $\textsf{wish\_to\_advance}()$ is called every $\alpha$. Thus, eventually all honest nodes will reach view $1$, and from Claim A.3 the difference between their entry times is at most $2\delta$ after $\text{GST}$. The above argument can be applied inductively. Suppose at time $t$ node $\mathcal{P}_{i}$ is at view $v$. We again know that by $t+2\delta$ all other honest nodes are also at view $v$, and once $f+1$ $\text{``}\textsf{WISH},v+1\text{''}$ messages are sent, all honest nodes will eventually enter view $v+1$, and we are done.

Lemma A.5: The broadcast-based protocol achieves view synchronization (Property 1).

PROOF: From Claim A.4, an honest node will eventually advance to some new view $v$, and from Claim A.3, within $2\delta$ all other honest nodes will join it. For any $c \ge 0$, if the honest nodes call $\textsf{wish\_to\_advance}()$ every $\alpha = 2\delta + c$, then it is guaranteed that all the honest nodes will execute view $v$ together for at least $\left| \mathcal{I} \right| = c$ time, since it requires $f+1$ messages to move to view $v+1$, i.e., at least one message is sent from an honest node. This argument can be applied inductively, and each view after $\text{GST}$ is synchronized, thus making an infinite number of time intervals and views which all honest nodes execute at the same time.

Lemma A.6: The broadcast-based synchronizer achieves synchronization validity (Property 2).

PROOF: In order for an honest node to advance to view $v$, it has to receive $2f+1$ $\text{``}\textsf{WISH},v\text{''}$ messages (Line 7). Of those, at least $f+1$ originated from honest nodes. An honest node can send $\text{``}\textsf{WISH},v\text{''}$ in two scenarios: (i) $\textsf{wish\_to\_advance}()$ was called when the node was at view $v-1$ (Line 3), and we are done.
(ii) It received $f+1$ $\text{``}\textsf{WISH},v\text{''}$ messages (Line 5), meaning at least one honest node which already sent the message was at view $v-1$ and called $\textsf{wish\_to\_advance}()$, and again we are done.

This concludes the proof that the broadcast-based synchronizer is a view synchronizer for any ${\alpha \ge 2\delta}$.

A.2.3 Latency and communication

The broadcast-based algorithm synchronizes every view after $\text{GST}$ within $2\delta$. Since the leaders of each view are allocated by the mapping $\text{Leader}(\cdot)$, in expectation an honest leader is reached within $\approx 3/2$ views (see the analysis done for $\text{Cogsworth}$ in Section 4.3). Therefore, for latency, the broadcast-based synchronizer will take an expected constant time to reach view synchronization after $\text{GST}$, as we have proved, and also the same between every two consecutive occurrences of view synchronization. Thus, the latency of this protocol is expected $O(\delta)$. In the worst case of $t$ consecutive failures, the latency is $O(t \cdot \delta)$. For communication costs, the protocol requires that every node sends one $\text{``}\textsf{WISH},v\text{''}$ message to all the other nodes, and since the latency is expected constant, the overall communication costs are also expected quadratic, i.e., $O(n^2)$. In the worst case of $t$ consecutive failures, the communication complexity is $O(t \cdot n^2)$.

A.3 Discussion

The three synchronizers presented in the paper have tradeoffs in their latency and communication costs, which are summarized in Table 1. Hence, a protocol designer may choose a synchronizer based on their needs and constraints. It might be possible to create combinations of the three protocols and achieve hybrid characteristics; we leave such variations for future work.

In addition, there are differences in the constraints on the parameter $\alpha$ in these protocols, which is the time interval between two successive calls to $\textsf{wish\_to\_advance}()$ (see Property 1). The view doubling synchronizer prescribes a precise $\alpha$, which results in each view duration being exactly twice that of its predecessor. In the other two synchronizers there is only a lower bound on $\alpha$: in the broadcast-based it is $2\delta$, and in $\text{Cogsworth}$ it is $4\delta$. This difference is significant. Suppose an upper-layer protocol utilizing the synchronizer wishes to spend an unbounded amount of time in each view as long as progress is made, and triggers a view change upon detecting that progress is lost. While the broadcast-based and $\text{Cogsworth}$ algorithms allow this upper-layer behavior, the view doubling technique does not, and thus this may influence the decision of which view synchronization algorithm to choose.

Another difference between the algorithms is that the view doubling and broadcast-based synchronizers both guarantee that after the first synchronized view, all subsequent views are also synchronized, regardless of whether the leaders are honest or not. $\text{Cogsworth}$ only guarantees synchronization after GST in views that have an honest leader. For most leader-based consensus protocols, this guarantee suffices to ensure progress; other protocols using a synchronizer might find the strengthened guarantee preferable.
{"url":"https://cryptoeconomicsystems.pubpub.org/pub/naor-cogsworth-synchronization/release/5?readingCollection=a1e776d2","timestamp":"2024-11-08T12:19:00Z","content_type":"text/html","content_length":"1049728","record_id":"<urn:uuid:e5b31c1f-b520-4ec6-acf9-9d7dc44b0cb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00062.warc.gz"}
How do you simplify 3/4 div 6? | HIX Tutor

How do you simplify #3/4 div 6#?

Answer 1

To simplify ( \frac{3}{4} \div 6 ), you divide the numerator (3) by the product of the denominator (4) and the divisor (6). So, ( \frac{3}{4} \div 6 = \frac{3}{4 \times 6} = \frac{3}{24} ). Then, simplify the fraction: ( \frac{3}{24} = \frac{1}{8} ).

Answer 2

See a solution process below:

Rephrase the expression as follows:

#3/4 -: 6 => 3/4 -: 6/1 => (3/4)/(6/1)#

To finish the simplification, apply the following rule for dividing fractions:

#(a/b)/(c/d) = (a xx d)/(b xx c)#

#(3/4)/(6/1) => (3 xx 1)/(4 xx 6) => (cancel(3) xx 1)/(4 xx cancel(6)2) => 1/8#
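As a quick sanity check of the result (our addition, not part of either answer), Python's exact rational arithmetic gives the same value:

# Verify that 3/4 divided by 6 equals 1/8 using exact fractions.
from fractions import Fraction

result = Fraction(3, 4) / 6
assert result == Fraction(1, 8)
print(result)  # prints 1/8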
{"url":"https://tutor.hix.ai/question/how-do-you-simplify-3-4-div-6-8f9af8efd8","timestamp":"2024-11-10T14:43:19Z","content_type":"text/html","content_length":"575058","record_id":"<urn:uuid:7dca4df7-d907-446b-a32f-1ef20d9e2c04>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00857.warc.gz"}
Supertropical quadratic forms I

We initiate the theory of a quadratic form q over a semiring R, with a view to studying tropical linear algebra. As customary, one can write q(x+y) = q(x) + q(y) + b(x,y), where b is a companion bilinear form. In contrast to the classical theory of quadratic forms over a field, the companion bilinear form need not be uniquely defined. Nevertheless, q can always be written as a sum of quadratic forms q = q[QL] + ρ, where q[QL] is quasilinear in the sense that q[QL](x+y) = q[QL](x) + q[QL](y), and ρ is rigid in the sense that it has a unique companion. In case R is supertropical, we obtain an explicit classification of these decompositions q = q[QL] + ρ and of all companions b of q, and see how this relates to the tropicalization procedure.
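For readability, the two identities stated in the abstract can be displayed in LaTeX (writing q[QL] as q_{QL}; this is a typeset restatement of the abstract, not new material):

\[
  q(x+y) = q(x) + q(y) + b(x,y), \qquad
  q = q_{\mathrm{QL}} + \rho, \quad
  q_{\mathrm{QL}}(x+y) = q_{\mathrm{QL}}(x) + q_{\mathrm{QL}}(y).
\]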
{"url":"https://cris.ariel.ac.il/en/publications/supertropical-quadratic-forms-i","timestamp":"2024-11-09T20:15:09Z","content_type":"text/html","content_length":"51121","record_id":"<urn:uuid:dd2167f6-199d-4076-9966-04e539681c0e>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00326.warc.gz"}
Detecting lines in image with OpenCV – Hough Line Transform

There are times when you need to find straight lines on an image. This can be done using OpenCV's built-in implementation of Hough Line Transform. It is very interesting to see how Hough Line Transform actually works.

Assume (as displayed in figure below) we have a line segment AB. In the Cartesian coordinate system, the line can be represented as y = mx + c. Now if we want to represent the same line in the polar coordinate system, it can be represented as

y = (−cosθ/sinθ)x + r/sinθ

which is equivalent to r = x cosθ + y sinθ.

Now, let's assume our line segment AB has 3 points m, n, o lying on it. Since all the 3 points lie on the same line they will satisfy the equation for that particular line. This is the concept that is used in Hough Line Transform to identify lines. Basically, lines are drawn from each of the points that are equal to 255 (white pixels in binary image) in all possible directions (180 degrees), and the corresponding r (radius) and θ (angle) are noted down. This is done for each pixel with value 255 on the image. Now if there are multiple points on the image and they happen to lie on a line, they will generate the same value of radius and θ (angle). Assume that we increment a count for each occurrence of the same value of radius and θ (angle). When we are finished going through all the points on the image, we will have a few combinations of radius and θ (angle) which will have a count of more than 1. All these points that have the same value for radius and θ (angle) can be joined using a straight line.

Steps for the Hough Line Transform

1. First it creates a 2D array of accumulator (to hold values of the two parameters); all the values in the array are set to 0 initially
2. Assume R (radius) is represented as columns and θ is represented as rows
3. The size of the array (or accumulator) depends on the accuracy you need. If you need the accuracy of angles to be 1 degree, you need 180 columns. For R (radius), the maximum distance possible is the diagonal length of the image. So if we are taking one-pixel accuracy, the number of rows can be the diagonal length of the image.

Hough Line Transform goes through all pixels in the image and looks for all the possible angles (with a precision of 1 degree if you are passing pi/180). This involves a lot of computation. Probabilistic Hough Transform reduces this computation by not taking into account all the points. With OpenCV's cv2.HoughLinesP you can easily find lines and join gaps in lines as demonstrated below.

The code displayed below can be used to run the example. The code is very basic: it imports the necessary packages and uses OpenCV to read the image and convert it to a binary image. Remember we are using THRESH_BINARY_INV since we want the lines to be white on a black background. Then we simply run HoughLinesP to find the lines and then draw them on the image.

Download code and sample image HoughLine

# import necessary packages
import cv2
import numpy as np

# Reading the sample image
img = cv2.imread('./demo.png')
imgLines = img.copy()
imgGaps = img.copy()

# Convert the img to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# If the line is black on white it will not work. Remember to have a
# white line on a black background, hence THRESH_BINARY_INV
(T, thresh) = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)

# Let's find all lines in the image
lines = cv2.HoughLinesP(thresh, 1, np.pi/180, 50)

# The below for loop runs over all the detected lines
for line in lines:
    for x1, y1, x2, y2 in line:
        # Draw lines on image
        cv2.line(imgLines, (x1, y1), (x2, y2), (0, 255, 0), 1)

# display the image
cv2.imshow('ImageWithLineDetected', imgLines)

# Doing the same thing as above again, however this time we are
# interested in filling the gaps. The max gap that will be bridged
# when joining lines would be 300 px
lines = cv2.HoughLinesP(thresh, 1, np.pi/180, 50, maxLineGap=300)

for line in lines:
    for x1, y1, x2, y2 in line:
        cv2.line(imgGaps, (x1, y1), (x2, y2), (0, 255, 0), 1)

cv2.imshow('ImageWithGapsClosed', imgGaps)

# Wait for a key press before closing the windows (without this, the
# windows would close immediately)
cv2.waitKey(0)
cv2.destroyAllWindows()

Feel free to add comments or ask questions
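To make the accumulator voting described in the steps above concrete, here is a minimal NumPy sketch of the standard (non-probabilistic) voting stage. This is our illustration of the idea only; it is not OpenCV's implementation, which is far more optimized.

# Accumulator voting for the Hough Line Transform (1-degree, 1-pixel bins).
import numpy as np

def hough_accumulator(binary_img):
    h, w = binary_img.shape
    diag = int(np.ceil(np.hypot(h, w)))      # maximum possible radius
    thetas = np.deg2rad(np.arange(0, 180))   # one bin per degree
    # radius can be negative, so shift the row index by `diag`
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=np.int32)
    ys, xs = np.nonzero(binary_img)          # white (255) pixels vote
    for x, y in zip(xs, ys):
        rs = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rs + diag, np.arange(len(thetas))] += 1
    return acc, thetas, diag

# Cells of `acc` with high counts correspond to lines
# r = x*cos(theta) + y*sin(theta) passing through many white pixels.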
{"url":"https://codedeepai.com/detecting-lines-in-image-with-opencv-hough-line-transform/","timestamp":"2024-11-11T03:05:16Z","content_type":"text/html","content_length":"75542","record_id":"<urn:uuid:a2c9428a-80a4-4d46-b08e-dd304bc00a36>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00562.warc.gz"}
David Cushing

It’s been almost two years since I last sat down with my friend David Cushing and did what God put us on this Earth to do: review integer sequences. This week I lured David into my office with promises of tasty food and showed him some sequences I’d found. Thanks to (and also in spite of) my Windows 10 laptop, the whole thing was recorded for your enjoyment. Here it is:

I can only apologise for the terrible quality of the video – I was only planning on using it as a reminder when I did a write-up, but once we’d finished I decided to just upload it to YouTube and be done with it.

Review: Unique polyhedral dice from Maths Gear

Our good friends at Maths Gear have sent us a tube of “unique polyhedral dice” to review. The description on mathsgear.co.uk says they’re “made from polyhedra you don’t normally see in the dice world”. My first thought was that we should test they’re fair by getting David to throw them a few thousand times but — while David was up for it — I’d have to keep score, which didn’t sound fun. So instead we thought of some criteria we can judge the dice on, and sat down with a teeny tiny video camera. Here’s our review:

Integer Sequence Review – Sloane’s birthday edition!

The Online Encyclopedia of Integer Sequences contains over 200,000 sequences. It contains classics, curios, thousands of derivatives entered purely for completeness’s sake, short sequences whose completion would be a huge mathematical achievement, and some entries which are just downright silly. For a lark, David and I have decided to review some of the Encyclopedia’s sequences. We’re rating sequences on four axes: Novelty, Aesthetics, Explicability and Completeness.

CP: It’s Neil Sloane’s 75th birthday today! As a special birthday gift to him, we’re going to review some integer sequences.
DC: His birthday is 10/10, that’s pretty cool.
CP: <some quick oeis> there’s a sequence with his birthdate in it! A214742 contains 10,10,39.
DC: We can’t review that. It’s terrible.
CP: I put it to you that you have just reviewed it.
DC: Shut up.
CP: Anyway, I’ve got some birthday sequences to look at.
DC: About cake?
CP: No.

Diaconis-Mosteller approximation to the Birthday problem function.

1, 23, 88, 187, 313, 459, 622, 797, 983, 1179, 1382, 1592, 1809, 2031, 2257, 2489, 2724, 2963, 3205, 3450, 3698, 3949, 4203, 4459, 4717, 4977, 5239, 5503, 5768, 6036, 6305, 6575, 6847, 7121, 7395, 7671, 7948, 8227, 8506, 8787, 9068, 9351

CP and Cushing take the National Numeracy Challenge

Cushing was injured in a serious maths accident recently (he fell out of the bath) so I wanted to assess the damage to his number-wrangling faculties. Fortunately, there’s the National Numeracy Challenge, which begins with a test to pinpoint your weak areas. National Numeracy is a charity that wants every adult in the UK to “reach a level of numeracy skills that allow them to meet their full potential.” Well, if there’s one thing we’ve got, it’s bags of potential.

MC Hammer is mathematically untouchable

Happy birthday to MC Hammer who, at age 52, is now mathematically untouchable.

Integer sequence review: A193430

The Online Encyclopedia of Integer Sequences contains over 200,000 sequences. It contains classics, curios, thousands of derivatives entered purely for completeness’s sake, short sequences whose completion would be a huge mathematical achievement, and some entries which are just downright silly.
For a lark, David and I have decided to review some of the Encyclopedia’s sequences. We’re rating sequences on four axes: Novelty, Aesthetics, Explicability and Completeness. This is the triumphant return of the integer sequence reviews! Primes p such that p+1 is in A055462. 23, 6911, 5944066965503999, ... A morning in the office with CP and Cushing
{"url":"https://aperiodical.com/author/cushydom/","timestamp":"2024-11-05T22:43:35Z","content_type":"text/html","content_length":"45126","record_id":"<urn:uuid:5af6a411-8b45-4097-af41-43b990fde87d>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00240.warc.gz"}
Real Quantifier Elimination: recent algorithmic progress and applications

Presenter: Matthew England
Topic: Quantifier Elimination – a form of simplification in mathematical logic
Date: 15th November 2023
Time: 18:30 for 18:40 lecture start
Location: Webinar – recording available from this link

Quantifier Elimination (QE) may be considered as a form of simplification in mathematical logic: given a quantified logical statement, QE will produce an equivalent statement which does not involve the logical quantifiers (there exists / for all). Real QE refers to the case where the logical atoms are constraints on polynomials over the real numbers: in this case the work of Tarski shows that QE is always possible. The first implemented method to achieve this was Cylindrical Algebraic Decomposition (CAD), proposed by Collins. However, CAD is known to have doubly exponential complexity, in effect producing a wall beyond which its application is infeasible. In this talk we will introduce the ideas behind QE and CAD, and describe some recent algorithmic advances which “push back” that doubly exponential wall. We will also discuss some recent applications of this technology to problems emerging in domains as varied as bio-chemistry and economics.

Dr Matthew England is co-Director of the Coventry University Research Centre for Computational Science and Mathematical Modelling. His main research interest is in algorithms for symbolic algebra and symbolic logic, in particular for solving problems with real polynomial systems. His work encompasses the design of new algorithms, their analysis, their integration with other tools, their application, and their optimisation using data science approaches. He currently leads the EPSRC-funded DEWCAD project, EP/T015748/1 (Pushing Back the Doubly Exponential Wall of Cylindrical Algebraic Decomposition), and a group of related post-doctoral and PhD students. He previously led projects on the use of machine learning for algorithm optimisation and the integration of computer algebra systems and satisfiability-modulo-theory solvers.
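For readers new to the topic, the standard textbook example (our illustration, not taken from the talk itself) shows what real QE produces: eliminating the quantifier from the statement that a monic quadratic has a real root leaves an equivalent condition on the coefficients alone.

```latex
% Classic real QE example: a monic quadratic has a real root
% if and only if its discriminant is non-negative.
\exists x \in \mathbb{R}\;\bigl(x^2 + b\,x + c = 0\bigr)
\quad\Longleftrightarrow\quad
b^2 - 4c \;\ge\; 0
```

The quantified input mentions the variable x; the output is a quantifier-free constraint on b and c only, which is exactly the "simplification" the abstract describes.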
{"url":"https://coventry.bcs.org/real-quantifier-elimination-recent-algorithmic-progress-and-applications/","timestamp":"2024-11-14T08:04:01Z","content_type":"text/html","content_length":"42597","record_id":"<urn:uuid:0911373a-ba00-4f5e-a168-ecdfc63fcc3f>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00347.warc.gz"}
Everything posted by cjohnso0

1. I sort of agree with doG. Sort of, because I had it right on the first try, but looking back, doG’s observation makes sense.

2. The current record (from the NHRA) is 4.428 by Tony Schumacher back in ’06. This is a Top Fuel dragster I believe. I’m looking for a reliable source for the 3 second rocket cars still.

3. Neat! It’s definitely not faster for numbers with larger digits. Try 9999 x 9999: not only do you need a big sheet of paper, it gets confusing real fast. It seems to take about the same amount of time (as standard multiplication) for smaller numbers though.

4. Assuming it works, what are the benefits of this over the standard crankshaft we all know and love? Along with a working design, this is a pretty important question.

5. This one’s got me stumped so far.

6. river_rat, I’m following you now; I was missing that you were referring to the original problem posted. All of my work was for the .pdf version. After playing with the original question, I cannot find a solution, at least not one that makes sense, yet. By this I mean a mathematical solution for the .pdf question. If the original question posted is correct, there must be some obscure solution that’s beyond me at this point.

7. I think your sketches need some more detail, for example: Where is the combustion chamber? Assuming the crank is rotating CCW, what happens to the piston after ‘B’? It looks like it will fall back onto the springs, then when the crank comes around, it will tear the teeth off on the left side. I’m also not sure that gear teeth can take that kind of a sudden impact load without immediately shearing off or prematurely failing in fatigue. For a proper analysis, you really need to flesh out your design more.

8. river_rat, in your example, your system of equations does not have a solution. If you were to start with a solution, you could then build any number of equations which would satisfy it, e.g. x, y, z = 1, 1, 1:
x + y = 2
x − z = 0
y + 2z = 3
2x − 5y + z = −2
Given these 4 equations, you can remove any one and still get the same solution, assuming the equations you choose have at least one occurrence of each variable. I just ran the numbers using 3 different sets of 23 from the available 25, and the solution came up the same. I’m not going to try all 2300. I’m still not sure that there will be a solution to the initial problem.

9. My methodology started out poorly... First up was to boot up Excel, my tried and true numerical analysis program. Then I simply laid out a matrix of all 23 used letters, each row being an equation for one of the names. Then I attempted Gaussian elimination. This is when I saw that Y can equate to anything I choose, without affecting the rest of the equations. At this point, my answers were way off base, so I tried using the Solver tool in Excel on the first 23 names (23 because I only had 23 potential letters; got the method from googling “Excel Simultaneous Equations”; initial guess was 1 for all letters). Bammo! I got my answers in seconds. So I tried the linear algebra, but failed, then used my easy tool to solve. Given the problem definition, I’m still not convinced that there is a definite solution, or that the missing letters are of any consequence. We’ll need to see if anyone can find a pattern in this. Not sure if I can, but I can post the Excel file if anyone wants, or e-mail, as that will work.

10. Whoa – just looked at my data in my last post and the numbers go 2–24, omitting 19. If Y = 19, then Feynman = 94. Thoughts?
M 2, T 3, V 4, H 5, A 6, E 7, O 8, I 9, U 10, B 11, G 12, K 13, P 14, S 15, F 16, W 17, Z 18, Y 19, C 20, L 21, N 22, R 23, D 24

11. Just chiming in here. I’ve been thinking / working on this one for a week now, and finally I’ve come to the conclusion that the answer is indeterminate. Given the problem specifications, and using the data from the .pdf linked above, the letter Y can have any value. Methodology was to make a huge matrix and reduce until I had some equations I could solve; at that point I saw the problem: only 1 occurrence of the letter Y. Here are the values I’ve gotten for all the others:

A 6, B 11, C 20, D 24, E 7, F 16, G 12, H 5, I 9, K 13, L 21, M 2, N 22, O 8, P 14, R 23, S 15, T 3, U 10, V 4, W 17, Z 18

Using these values, all names add up correctly to the .pdf values, and Y can have any value. Thinking about it, I would not be surprised if the answer is something like this, as they can’t expect you to solve 23 equations at once, high IQ or not. Anyhow, that’s just my take; hopefully there aren’t too many errors...

12. Engineers Edge is a pretty good one; it’s got a lot of mechanical engineering related stuff, calculators, graphs, etc. And not totally engineering, but http://www.treasure-troves.com/ has a ton of science and math; I use the math section pretty often as an engineer. As stated above, give a clue on what you need, and I’m sure a few more resources will be provided.
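The approach described in posts 8–11 (treating each name as a linear equation over letter values and solving the system) is easy to reproduce outside Excel. Here is a minimal Python sketch; the names and totals are invented placeholders for illustration, since the puzzle's actual list isn't reproduced in the thread:

```python
# Minimal sketch of the "names as linear equations" approach from the thread:
# each name contributes one equation sum(count(letter) * value(letter)) = total.
# The names/totals here are invented placeholders, not the puzzle's real data.
import numpy as np

names = {"ABBA": 34, "CAB": 37, "BAD": 41}      # hypothetical (name, total) pairs
letters = sorted(set("".join(names)))            # one unknown per letter used

A = np.array([[name.count(c) for c in letters] for name in names], dtype=float)
b = np.array(list(names.values()), dtype=float)

# Least squares handles square, overdetermined, and underdetermined systems;
# for an underdetermined one (like the thread's missing letter Y) it silently
# returns the minimum-norm solution, so check the rank to spot free variables.
values, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(letters, values.round(3))), "rank:", rank)
```

A rank lower than the number of letters is exactly the situation posts 9 and 11 ran into: at least one letter (Y in the thread) is a free variable, so the puzzle's solution is indeterminate.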
{"url":"https://www.scienceforums.net/profile/4234-cjohnso0/content/page/2/?all_activity=1","timestamp":"2024-11-08T18:19:16Z","content_type":"text/html","content_length":"92260","record_id":"<urn:uuid:ff8d2ea4-8d2a-4b4d-ac94-fce6a0378f06>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00223.warc.gz"}
Understanding 6 out of 13 as a Percentage: A Simple Guide

Understanding the Basics of Percentages: What Does 6 Out of 13 Mean?

To grasp the concept of percentages, it's essential to start with a basic understanding of fractions. When we say "6 out of 13," we are expressing a fraction – specifically, the fraction 6/13. This fraction represents a part of a whole, where 6 is the part, and 13 is the whole. To convert this fraction into a percentage, one must follow a simple mathematical formula.

The Conversion Formula

The formula to convert a fraction into a percentage is straightforward:
• Divide the numerator (top number) by the denominator (bottom number).
• Multiply the result by 100.

Applying this to our fraction, we would calculate:
• 6 ÷ 13 ≈ 0.4615
• 0.4615 × 100 ≈ 46.15

The result tells us that 6 out of 13 is equivalent to approximately 46.15%.

Applications of Percentages

Understanding what 6 out of 13 means in percentage form can be incredibly useful in real-world scenarios. For instance, this percentage can be employed in contexts such as:
• Statistical analysis, where understanding results from a sample size is crucial.
• Finance, calculating discounts, interest rates, and returns.
• Education, assessing how many students passed versus failed in a class.

Each of these situations relies on the ability to interpret fractions as percentages accurately.

Visualizing Percentages

Sometimes, visual aids can help better understand percentages. For example, if you visualize 13 objects and highlight 6 of them, it becomes clear how these numbers relate to the overall set. Visual representations can include pie charts or bar graphs that depict the comparison of parts to a whole, making it easier to grasp how 46.15% fits into a larger context.

Common Misconceptions

One common misunderstanding is equating the part directly with the whole. Saying "6 out of 13" doesn't automatically imply that 6 is half or some relatable fraction of the total 13; it requires calculation to understand its significance fully. Additionally, beginners may confuse decimal values with percentages, mistakenly thinking that a decimal like 0.4615 is the finished answer when instead, it needs to be converted by multiplying by 100 to represent a percentage.

In summary, understanding what 6 out of 13 means in terms of percentages is foundational in math and everyday life. By grasping how to convert fractions to percentages and applying this knowledge in various contexts, individuals can improve their numeracy skills and enhance their ability to make informed decisions.

How to Calculate 6 Out of 13 as a Percentage: A Step-by-Step Guide

To determine how to calculate 6 out of 13 as a percentage, it's essential to understand the basic formula for converting a fraction into a percentage. This calculation involves simple mathematical operations that you can easily follow. In this step-by-step guide, we'll break down the process to ensure clarity and ease of understanding.

Step 1: Understand the Formula

The formula to convert any fraction into a percentage is:
• Percentage = (Part / Whole) × 100

In our case, the "part" is 6 and the "whole" is 13. Plugging these values into the formula will help you find the percentage of how much 6 is out of 13.

Step 2: Substitute the Values

Next, substitute the values into the formula:
• Percentage = (6 / 13) × 100

This step transforms the fraction into a fraction over 100, which is crucial for calculating percentage values.
Step 3: Perform the Division

Now, carry out the division:
• 6 ÷ 13 ≈ 0.4615 (rounded to four decimal places)

By dividing 6 by 13, you obtain a decimal representation of the fraction, which is a fundamental aspect of the calculation process.

Step 4: Multiply by 100

The next step is to convert the decimal into a percentage:
• 0.4615 × 100 ≈ 46.15

This multiplication gives you the final percentage. In this case, 6 out of 13 translates to approximately 46.15%.

Step 5: Interpret the Result

The final result of approximately 46.15% indicates that 6 represents about 46.15% of 13. This information can be useful in various situations, such as statistical analysis, performance assessments, or comparing values within a range.

Percentage Conversion: Why Knowing 6 Out of 13 Matters

Understanding percentage conversion is crucial in various fields, from education and finance to data analysis and marketing. When we say "6 out of 13," we are essentially discussing a situation where you need to convert a fraction into a percentage. Knowing how to do this can provide valuable insights and facilitate better decision-making.

To compute the percentage conversion from a fraction, the formula is straightforward:
• Percentage (%) = (Part / Whole) × 100

In the case of "6 out of 13," the calculation is as follows:
• Part: 6
• Whole: 13
• Calculation: (6 / 13) × 100

When you perform this calculation, you find that:
• The percentage conversion of 6 out of 13 is approximately 46.15%.

This percentage is significant for several reasons. First, it provides a clear and concise way to compare quantities. For example, if you are evaluating responses in a survey, understanding that 46.15% of participants chose option A gives context to the data.

Moreover, percentages are universally understood metrics. In business settings, stakeholders often prefer discussing figures in percentages rather than raw numbers, as this helps to convey trends and important insights more effectively. Knowing that 6 out of 13 translates to around 46.15% allows for easier comparison against other data points or benchmarks.

In summary, when you know the percentage conversion of 6 out of 13, you empower yourself with a useful tool for data analysis and decision-making. Whether you’re a student, a business professional, or someone simply interested in understanding data better, this knowledge enhances your ability to interpret and communicate critical information.

Real-Life Applications of Calculating 6 Out of 13 as a Percentage

Calculating 6 out of 13 as a percentage has various real-life applications that span multiple fields. Understanding the concept of percentages is fundamental, as it can help individuals make informed decisions based on data representation. Here are some tangible instances where this specific calculation is crucial.

1. Educational Assessments

In educational settings, teachers often evaluate students based on their performance in assessments. If a student answers 6 out of 13 questions correctly on a quiz, the percentage score can help educators assess the student's understanding of the material. This can influence grading, teacher feedback, and curriculum adjustments.

2. Sports Statistics

In sports, calculating performance metrics is crucial. For instance, if a basketball player makes 6 successful shots out of 13 attempts, fans, coaches, and analysts can assess the player's shooting percentage. This statistic helps in evaluating player performance and can affect decisions like trades or starting positions.

3.
Market Research and Surveys

Market researchers often conduct surveys to elicit opinions from a sample population. If 6 out of 13 respondents express a preference for a new product, the findings can help businesses make marketing and production decisions. Understanding how many customers favor a product can guide inventory and pricing strategies.

4. Health and Nutrition

In health and nutrition, calculating percentages can help individuals track progress towards their wellness goals. For instance, if someone consumes 6 out of 13 servings of recommended fruits and vegetables in a day, they can calculate their intake percentage. This aids in making healthier dietary choices and encourages better nutrition planning.

5. Financial Planning

Financial planning often requires precise calculations to understand budget allocations. If one is evaluating an investment portfolio and finds that 6 out of 13 investments have performed well, they can calculate the percentage to gauge overall investment efficiency. This metric can help in making future investment decisions and risk assessments.

In summary, the ability to convert 6 out of 13 to a percentage is a versatile skill applicable in various scenarios, from education and sports to market research, health, and finance. This calculation does not merely provide numerical insights but also aids in strategy and decision-making across diverse fields.

Common Mistakes When Calculating Percentages: Learning from 6 Out of 13

Understanding the Basics

When it comes to calculating percentages, many people struggle with fundamental concepts. One common mistake occurs when individuals confuse the numerator and denominator when setting up their equations. For example, if you need to find out what percentage 6 is of 13, a common error is to calculate it as 6/6 instead of the correct 6/13. This oversight can lead to incorrect conclusions and affects decision-making processes in various scenarios.

Misinterpretation of Percentage Terms

An additional challenge arises from the misinterpretation of percentage terms. For example, if someone states that 6 out of 13 represents a certain percentage, they may mistakenly assume it equates to 60%. This confusion stems from thinking in whole numbers rather than recognizing the need to divide by the total amount (13). This crucial step must not be overlooked to avoid gross inaccuracies.

Incorrect Conversion of Fractions to Percentages

Another frequent mistake is the incorrect conversion of fractions to percentages. It is vital to remember that to convert a fraction into a percentage, you should multiply the resulting decimal by 100. For instance, in the case of 6 out of 13, first divide 6 by 13 to obtain approximately 0.4615, and then multiply by 100 to get around 46.15%. Neglecting this step can lead to mistakenly interpreting the values involved.

Failing to Use a Calculator Appropriately

While calculators can simplify percentage calculations, improper use can lead to significant errors. A common mistake occurs when users enter the wrong numbers or neglect to follow the proper order of operations. Double-checking entries, especially when working with fractions, can prevent these errors. When calculating 6 out of 13, ensure that the correct fraction is inputted to avoid inaccurate results.

Overlooking the Importance of Rounding

Lastly, many overlook the importance of rounding in percentage calculations. Depending on the context, a percentage may need to be rounded to the nearest whole number or decimal place.
For instance, rounding 46.15% to 46% could be suitable for certain applications, while finer precision could require retaining the full decimal. Thus, it's important to be mindful of when and how to round to maintain accurate and relevant data.

Tools and Resources for Calculating Percentages: Making 6 Out of 13 Simple

Calculating percentages can often seem daunting, particularly when you're faced with fractions such as 6 out of 13. However, various tools and resources are readily available to simplify this process. Understanding how to leverage these tools can transform a complicated calculation into a straightforward task.

Online Percentage Calculators

One of the simplest and most accessible ways to calculate percentages is by utilizing online percentage calculators. These tools typically require you to input the numbers you want to compare. For instance, to find out what percentage 6 is of 13, you would enter 6 as the part and 13 as the whole. Most calculators will provide you with an instant result, showcasing that 6 out of 13 is approximately 46.15%. Popular online calculators include sites like Calculator.net and RapidTables.

Spreadsheet Software

Another effective way of calculating percentages is through spreadsheet software like Microsoft Excel or Google Sheets. By entering the numbers directly into cells, you can use functions to determine the percentage effortlessly. For instance, you can input 6 into cell A1 and 13 into cell A2, then use the formula =A1/A2 and format the result as a percentage. This method is particularly beneficial for those who often work with larger datasets, allowing for quick and efficient calculations.

• Excel Functionality: You can also utilize Excel's built-in functions such as PERCENTRANK for more complex scenarios.
• Charting Features: Graph your results to visualize the proportion, making it easier to interpret the numbers at a glance.

Mobile Apps

If you're often on the go, smartphone applications can assist in calculating percentages quickly and easily. There are numerous apps available for both Android and iOS that serve this function. Look for apps designed for students or financial planning, as they commonly include percentage calculators. By simply entering your values, such apps can provide you with the percentages you need at your fingertips.

Educational Websites and Videos

For those looking to deepen their understanding of percentage calculations, educational websites and YouTube videos can serve as valuable resources. Websites like Khan Academy and Math is Fun offer comprehensive tutorials on how to work with percentages, including specific examples like the one involving 6 out of 13. These resources often break down the calculation process step-by-step, making it easier for you to grasp the concepts and apply them to similar calculations in the future.

Statistical Software

For advanced users, statistical software like R or Python libraries can automate percentage calculations, especially in large datasets. These tools allow users to write scripts that conduct complex analyses including percentage calculations among various data points. This approach not only saves time but also reduces the risk of human error, making it a preferred method for statisticians and data analysts.

By incorporating these tools and resources into your calculations, you can make determining percentages like 6 out of 13 straightforward and manageable.
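As a minimal example of the scripting route just mentioned, the whole calculation is a few lines of Python (our sketch, using only the numbers from this guide):

```python
# Convert "6 out of 13" to a percentage: (part / whole) * 100.
part, whole = 6, 13
percentage = part / whole * 100
print(f"{part} out of {whole} = {percentage:.2f}%")  # 6 out of 13 = 46.15%
```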
{"url":"https://cteec.org/6-out-of-13-as-a-percentage/","timestamp":"2024-11-13T00:01:26Z","content_type":"text/html","content_length":"103853","record_id":"<urn:uuid:0e9252d6-6f01-4d50-95df-e3b71af2ea89>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00048.warc.gz"}
NCERT Solutions Class 6 Maths Chapter 4 Basic Geometrical Ideas Exercise 4.6

Basic Geometrical Ideas Exercise 4.6 Examples for Practice

1). From the figure, identify:
(a) the centre of the circle: O is the centre of the circle.
(b) three radii: radius OA, OB, OC.
(c) a diameter: AC is the diameter.
(d) a chord: ED is a chord. AC is also a chord.
(e) two points in the interior: O and P are the two points in the interior.
(f) a point in the exterior: Q is a point in the exterior.
(g) a sector: AOB is a sector.
(h) a segment: ED is a segment.

2). (a) Is every diameter of a circle also a chord?
Yes. The diameter is the longest chord of a circle.
(b) Is every chord of a circle also a diameter?
No. A chord is a diameter only if it passes through the centre of the circle.

3). Draw any circle and mark
(a) its centre
(b) a radius
(c) a diameter
(d) a sector
(e) a segment
(f) a point in its interior
(g) a point in its exterior
(h) an arc

4). Say true or false:
(a) Two diameters of a circle will necessarily intersect. True: every diameter passes through the centre, so any two diameters intersect there.
(b) The centre of a circle is always in its interior. True.
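Question 3 asks for a drawing. For anyone following along on a computer, a small matplotlib sketch (our illustration, not part of the NCERT solutions) produces a figure with the required parts marked:

```python
# Draw a circle and mark its centre, a radius, a diameter, a chord,
# an interior point and an exterior point (cf. Question 3).
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(figsize=(5, 5))
theta = np.linspace(0, 2 * np.pi, 200)
ax.plot(np.cos(theta), np.sin(theta))                 # the circle, radius 1

ax.plot(0, 0, "ko"); ax.annotate("O (centre)", (0, 0))
ax.plot([0, np.cos(0.8)], [0, np.sin(0.8)], label="radius")
ax.plot([-1, 1], [0, 0], label="diameter")
ax.plot([np.cos(3.5), np.cos(5.0)], [np.sin(3.5), np.sin(5.0)], label="chord")
ax.plot(0.3, -0.3, "g^", label="interior point")
ax.plot(1.4, 1.0, "rs", label="exterior point")

ax.set_aspect("equal"); ax.legend(loc="upper left"); plt.show()
```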
{"url":"https://mathswallahpadhai.com/ncert-solutions-class-6-maths-chapter-4-basic-geometrical-ideas-exercise-4-6/","timestamp":"2024-11-07T05:28:33Z","content_type":"text/html","content_length":"149652","record_id":"<urn:uuid:90aa0b0a-79fc-448c-9f41-0932f8528231>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00826.warc.gz"}
The Joined Pentachoron

The joined pentachoron is a Catalan polychoron bounded by 10 triangular bipyramids, 30 isosceles triangles, 30 edges (10 long, 20 short), and 10 vertices. It is the dual of the rectified 5-cell. The cells are triangular bipyramids with isosceles triangular faces with an edge length ratio of 2 : 2 : 3. They are duals of the uniform triangular prism. The projection envelope of the joined pentachoron is a triakis tetrahedron, a Catalan solid. “Joined” in the name refers to the join operator in Conway's polyhedron notation (suitably generalized to 4D), which can be applied to the 5-cell to obtain this polytope. We will explore the structure of the joined pentachoron using its parallel projections into 3D.

The Near Side

The following images show the 4 cells facing the 4D viewpoint. These cells look flatter than they are in 3D; this is because they lie at an angle to the 4D viewpoint and therefore have been foreshortened by the parallel projection. All 4 cells share a common vertex in the middle of the projection, which is the closest vertex to the 4D viewpoint. It is where 4 apices of the cells meet. There are 5 such vertices in the joined pentachoron. These are all the cells that lie on the near side of the polytope.

The Far Side

Following this, we come to the far side of the polytope. There are 6 cells here, as shown below in 3 pairs. These cells appear distorted because of their oblique angle with the 4D viewpoint. However, this is merely an artifact of the parallel projection. The vertex in the center of the projection is shared by all 6 cells, and is where these cells' equators meet. It is antipodal to the central point on the near side of the polytope. There are 5 such vertices in the joined pentachoron. The following image shows all 6 far side cells together:

In summary, there are 4 cells on the near side of the polychoron and 6 cells on the far side, for a total of 10 cells. Here's an animation of the joined pentachoron rotating in the WY plane:

The Cartesian coordinates of the joined pentachoron are:
• (−5/(6√10), −5/(6√6), −5/(6√3), ±5/6)
• (−5/(6√10), −5/(6√6), 5/(3√3), 0)
• (−5/(6√10), 5/(2√6), 0, 0)
• (5/(4√10), 5/(4√6), 5/(4√3), ±5/4)
• (5/(4√10), 5/(4√6), −5/(2√3), 0)
• (5/(4√10), −15/(4√6), 0, 0)
• (10/(3√10), 0, 0, 0)
• (−5/√10, 0, 0, 0)

These coordinates correspond with a dual rectified 5-cell of edge length 2.
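Since the coordinates are explicit, they invite a quick numerical sanity check. The short numpy sketch below (our own check, not from the page) builds the 10 vertices listed above and tabulates all pairwise distances; the resulting multiset can then be compared against the 10 long and 20 short edges described in the opening paragraph.

```python
# Numerical sanity check (ours) of the joined pentachoron's vertex list:
# tabulate the 45 pairwise distances among the 10 vertices and compare the
# distance counts with the stated 30 edges (10 long, 20 short).
from collections import Counter
from math import sqrt
import numpy as np

s10, s6, s3 = sqrt(10), sqrt(6), sqrt(3)
V = np.array([
    (-5/(6*s10), -5/(6*s6), -5/(6*s3),  5/6),
    (-5/(6*s10), -5/(6*s6), -5/(6*s3), -5/6),
    (-5/(6*s10), -5/(6*s6),  5/(3*s3),  0),
    (-5/(6*s10),  5/(2*s6),  0,         0),
    ( 5/(4*s10),  5/(4*s6),  5/(4*s3),  5/4),
    ( 5/(4*s10),  5/(4*s6),  5/(4*s3), -5/4),
    ( 5/(4*s10),  5/(4*s6), -5/(2*s3),  0),
    ( 5/(4*s10), -15/(4*s6), 0,         0),
    (10/(3*s10),  0,         0,         0),
    (-5/s10,      0,         0,         0),
])

dists = Counter(round(float(np.linalg.norm(V[i] - V[j])), 6)
                for i in range(len(V)) for j in range(i + 1, len(V)))
for d, count in sorted(dists.items()):
    print(f"distance {d}: {count} pair(s)")
```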
{"url":"http://www.qfbox.info/4d/inv_rect5cell","timestamp":"2024-11-07T23:20:51Z","content_type":"text/html","content_length":"8073","record_id":"<urn:uuid:0572cf15-12a7-466f-a92f-78786a4bb823>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00691.warc.gz"}
Mixed does not exist in Math node complete overview

Can someone please tell me why "mixed" does not appear in the types listed in the Math Node Complete Overview? I'm implementing an application in Math. When I analyzed the recognition of mixed fractions, the type was set to "mixed". However, "mixed" did not exist among the types in the Math Node Complete Overview.

Best Answer

Dear Yusuke-oka San, thank you for the update. Indeed the "mixed" type is missing in the documentation. This will be corrected in a future release of the documentation. Thank you, Best regards,

Dear Yusuke-oka San, thank you for contacting us. First, which MyScript product are you using? The iink cloud, or the iink native (on device)? Currently, I am not sure I understand your question. What do you mean by Math Node Complete Overview? What are you trying to achieve? Indeed, I just tried to recognize mixed fractions using the webdemo (that uses the iink cloud), and these were properly recognized and exported as LaTeX and MathML.

The LaTeX export: 5\dfrac{1}{3}+7\dfrac{3}{7}\simeq 12.762

And the MathML export:
<math xmlns='http://www.w3.org/1998/Math/MathML'> <mn> 5 </mn> <mfrac> <mrow> <mn> 1 </mn> </mrow> <mrow> <mn> 3 </mn> </mrow> </mfrac> <mo> + </mo> <mn> 7 </mn> <mfrac> <mrow> <mn> 3 </mn> </mrow> <mrow> <mn> 7 </mn> </mrow> </mfrac> <mo> &#x2243; <!-- asymptotically equal to --> </mo> <mn> 12.762 </mn> </math>

So please explain what you mean by "Math Node Complete Overview", so that we better understand the behavior you are facing. Best regards,

Dear Olivier, thank you for your reply. I'm sorry I'm not good at English.

>First, which MyScript product are you using? The iink cloud, or the iink native (on device)?

I am using the iink native (on-device) SDK. I confirmed the recognition result of mixed fractions in JIIX format. The result of entering 「1 2/3 + 1/3」 is shown below. Please tell me about "type": "mixed" in the result. Why doesn't "type": "mixed" exist in the JIIX format reference? (JIIX format reference | MyScript Developer)

(Excerpt of results)
"expressions": [
  { "type": "+", "id": "math/1664",
    "operands": [
      { "type": "mixed", "id": "math/1660",
        "operands": [
          { "type": "number", "id": "math/1656", "label": "1", "value": 1 },
          { "type": "fraction", "id": "math/1659",
            "operands": [
              { "type": "number", "id": "math/1657", "label": "2", "value": 2 },
              { "type": "number", "id": "math/1658", "label": "3", "value": 3 }
            ] }
        ] },
      {
(Omitted thereafter)
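The JIIX excerpt above is just a nested expression tree, so the undocumented "mixed" node is easy to handle in client code. Below is a small Python sketch (our illustration, not MyScript sample code) that evaluates such a tree, treating "mixed" as a whole number plus a fractional part; only the node types visible in the excerpt are covered.

```python
from fractions import Fraction

def eval_jiix(node):
    """Evaluate a JIIX math expression node (only the types seen above)."""
    t = node["type"]
    if t == "number":
        return Fraction(node["label"])
    if t == "fraction":
        num, den = (eval_jiix(op) for op in node["operands"])
        return num / den
    if t == "mixed":                       # whole part + fractional part
        whole, frac = (eval_jiix(op) for op in node["operands"])
        return whole + frac
    if t == "+":
        return sum(eval_jiix(op) for op in node["operands"])
    raise ValueError(f"unhandled JIIX node type: {t}")

# The "1 2/3 + 1/3" tree from the post, abbreviated to its essential fields:
tree = {"type": "+", "operands": [
    {"type": "mixed", "operands": [
        {"type": "number", "label": "1"},
        {"type": "fraction", "operands": [
            {"type": "number", "label": "2"},
            {"type": "number", "label": "3"}]}]},
    {"type": "fraction", "operands": [
        {"type": "number", "label": "1"},
        {"type": "number", "label": "3"}]}]}
print(eval_jiix(tree))  # 2, i.e. 1 2/3 + 1/3
```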
{"url":"https://developer-support.myscript.com/support/discussions/topics/16000031776","timestamp":"2024-11-10T18:28:00Z","content_type":"text/html","content_length":"76243","record_id":"<urn:uuid:169ceb49-2520-433c-9b80-03ee899e0786>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00055.warc.gz"}
LinearTurboFold: Linear-Time Global Prediction of Conserved Structures for RNA Homologs with Applications to SARS-CoV-2

The constant emergence of COVID-19 variants reduces the effectiveness of existing vaccines and test kits. Therefore, it is critical to identify conserved structures in SARS-CoV-2 genomes as potential targets for variant-proof diagnostics and therapeutics. However, the algorithms to predict these conserved structures, which simultaneously fold and align multiple RNA homologs, scale at best cubically with sequence length, and are thus infeasible for coronaviruses, which possess the longest genomes (∼30,000 nt) among RNA viruses. As a result, existing efforts on modeling SARS-CoV-2 structures resort to single sequence folding as well as local folding methods with short window sizes, which inevitably neglect long-range interactions that are crucial in RNA functions. Here we present LinearTurboFold, an efficient algorithm for folding RNA homologs that scales linearly with sequence length, enabling unprecedented global structural analysis on SARS-CoV-2. Surprisingly, on a group of SARS-CoV-2 and SARS-related genomes, LinearTurboFold’s purely in silico prediction not only is close to experimentally-guided models for local structures, but also goes far beyond them by capturing the end-to-end pairs between 5’ and 3’ UTRs (∼29,800 nt apart) that match perfectly with a purely experimental work. Furthermore, LinearTurboFold identifies novel conserved structures and conserved accessible regions as potential targets for designing efficient and mutation-insensitive small-molecule drugs, antisense oligonucleotides, siRNAs, CRISPR-Cas13 guide RNAs and RT-PCR primers. LinearTurboFold is a general technique that can also be applied to other RNA viruses and full-length genome studies, and will be a useful tool in fighting the current and future pandemics.

Significance Statement

Conserved RNA structures are critical for designing diagnostic and therapeutic tools for many diseases including COVID-19. However, existing algorithms are much too slow to model the global structures of full-length RNA viral genomes. We present LinearTurboFold, a linear-time algorithm that is orders of magnitude faster, making it the first method to simultaneously fold and align whole genomes of SARS-CoV-2 variants, the longest known RNA virus (∼30 kilobases). Our work enables unprecedented global structural analysis and captures long-range interactions that are out of reach for existing algorithms but crucial for RNA functions. LinearTurboFold is a general technique for full-length genome studies and can help fight the current and future pandemics.

Ribonucleic acid (RNA) plays important roles in many cellular processes.^1, 2 To maintain their functions, secondary structures of RNA homologs are conserved across evolution.^3, 4, 5 These conserved structures provide critical targets for diagnostics and treatments. Thus, there is a need for developing fast and accurate computational methods to identify structurally conserved regions. Commonly, conserved structures involve compensatory base pair changes, where two positions in primary sequences mutate across evolution and still conserve a base pair, for instance, an AU or a CG pair replaces a GC pair in homologous sequences. These compensatory changes provide strong evidence for evolutionarily conserved structures.^6, 7, 8, 9, 10 Meanwhile, they make it harder to align sequences when structures are unknown.
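As a concrete illustration of the covariation signal just described (our toy example, not from the paper): given two alignment columns that are supposed to pair, one can check whether every homolog keeps the pair complementary even when the nucleotides themselves differ.

```python
# Toy illustration of compensatory (covarying) base pairs: positions i and j
# stay Watson-Crick/wobble complementary across homologs even as they mutate.
PAIRS = {("A","U"), ("U","A"), ("G","C"), ("C","G"), ("G","U"), ("U","G")}

def is_conserved_pair(alignment, i, j):
    """True if columns i, j can pair in every aligned sequence."""
    return all((seq[i], seq[j]) in PAIRS for seq in alignment)

alignment = ["GGCAAGCC",   # pair (1, 6): G-C
             "GACAAGUC",   # ...mutated to A-U: compensatory change
             "GUCAAGAC"]   # ...mutated to U-A: compensatory change
print(is_conserved_pair(alignment, 1, 6))  # True: the structure is conserved
```

Detecting such pairs, of course, presupposes a correct alignment, while producing that alignment is hardest exactly when structures are unknown; this is the chicken-and-egg problem the methods below address.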
To solve this issue, Sankoff proposed a dynamic programming algorithm that simultaneously predicts structures and a structural alignment for two or more sequences.^11 The major limitation of this approach is that the algorithm runs in O(n^{3k}) time for k sequences of average length n. Several software packages provide implementations of the Sankoff algorithm^12, 13, 14, 15, 16, 17 that use simplifications to reduce runtime.^* As an alternative, TurboFold II,^18 an extension of TurboFold,^19 provides a more computationally efficient method to align and fold sequences. Taking multiple unaligned sequences as input, TurboFold II iteratively refines alignments and structure predictions so that they conform more closely to each other and converge on conserved structures. TurboFold II is significantly more accurate than other methods^12, 14, 20, 21, 22 when tested on RNA families with known structures and alignments. However, the cubic runtime and quadratic memory usage of TurboFold II prevent it from scaling to longer sequences such as full-length SARS-CoV-2 genomes, which contain ∼30,000 nucleotides; in fact, no joint-align-and-fold methods can scale to these genomes, which are the longest among RNA viruses. As a (not very principled) workaround, most existing efforts for modeling SARS-CoV-2 structures^24, 25, 26, 27, 28, 29 resort to local folding methods^30, 31 with sliding windows plus a limited pairing distance, abandoning all long-range interactions, and only consider one SARS-CoV-2 genome (Fig. 1B–C), ignoring signals available in multiple homologous sequences.

To address this challenge, we designed a linearized version of TurboFold II, LinearTurboFold (Fig. 1A), which is a global homologous folding algorithm that scales linearly with sequence length. This linear runtime makes it the first joint-fold-and-align algorithm to scale to full-length coronavirus genomes without any constraints on window size or pairing distance, taking about 13 hours to analyze a group of 25 SARS-CoV homologs. It also leads to significant improvement on secondary structure prediction accuracy as well as an alignment accuracy comparable to or higher than all benchmarks. Over a group of 25 SARS-CoV-2 and SARS-related homologous genomes, LinearTurboFold predictions are close to the canonical structures^32 and to structures modeled with the aid of experimental data^24, 25, 27 for several well-studied regions. Thanks to global rather than local folding, LinearTurboFold discovers a long-range interaction involving the 5’ and 3’ UTRs (∼29,800 nt apart), which is consistent with recent purely experimental work,^28 and yet is out of reach for the local folding methods used by existing studies (Fig. 1B–C). In short, our in silico method of folding multiple homologs can achieve results similar to, and sometimes more accurate than, experimentally-guided models for one genome. Moreover, LinearTurboFold identifies conserved structures supported by compensatory mutations, which are potential targets for small-molecule drugs^33 and antisense oligonucleotides (ASOs).^26 We further identify regions that are (a) sequence-level conserved, (b) at least 15 nt long, and (c) accessible (i.e., likely to be completely unpaired) as potential targets for ASOs,^34 small interfering RNAs (siRNAs),^35 CRISPR-Cas13 guide RNAs (gRNAs)^36 and reverse transcription polymerase chain reaction (RT-PCR) primers.^37 LinearTurboFold is a general technique that can also be applied to other RNA viruses (e.g., influenza, Ebola, HIV, Zika, etc.)
and full-length genome studies.

The framework of LinearTurboFold has two major aspects (Fig. 1A): linearized structure-aware pairwise alignment estimation (module 1) and linearized homolog-aware structure prediction (module 2). LinearTurboFold iteratively refines alignments and structure predictions: specifically, it updates pairwise alignment probabilities by incorporating predicted base-pairing probabilities (from module 2) to form structural alignments, and modifies base-pairing probabilities for each sequence by integrating the structural information from homologous sequences via the estimated alignment probabilities (from module 1) to detect conserved structures. After several iterations, LinearTurboFold generates the final multiple sequence alignment (MSA) based on the latest pairwise alignment probabilities (module 3) and predicts secondary structures using the latest pairing probabilities (module 4).

LinearTurboFold achieves linear runtime with respect to sequence length through two major linearized modules: our recent work LinearPartition^38 (Fig. 1A module 2), which approximates the RNA partition function^39 and base-pairing probabilities in linear time, and a novel algorithm, LinearAlignment (module 1). LinearAlignment aligns two sequences by a Hidden Markov Model (HMM) in linear time by applying the same beam search heuristic^40 used by LinearPartition. Finally, LinearTurboFold assembles the secondary structure from the final base-pairing probabilities using an accurate and linear-time method named ThreshKnot^41 (module 4). LinearTurboFold also integrates a linear-time stochastic sampling algorithm named LinearSampling^42 (module 5), which can independently sample structures according to the homolog-aware partition functions and then calculate the probability of regions being unpaired, an important property in, for example, siRNA sequence design.^35 Therefore, the overall end-to-end runtime of LinearTurboFold scales linearly with sequence length (see Methods §1–7 for more details).

Scalability and Accuracy

To evaluate the efficiency of LinearTurboFold against sequence length, we collected a dataset consisting of seven families of RNAs with sequence lengths ranging from 210 nt to 30,000 nt, including five families from the RNAstralign dataset plus 23S ribosomal RNA, HIV genomes and SARS-CoV genomes; the calculation for each family uses five homologous sequences (see Methods §8 for more details). Fig. 2A compares the running times of LinearTurboFold with TurboFold II and two Sankoff-style simultaneous folding and alignment algorithms, LocARNA and MXSCARNA. Clearly, LinearTurboFold scales linearly with sequence length n, and is substantially faster than the other algorithms, which scale superlinearly. The linearization in LinearTurboFold brought orders-of-magnitude speedups over the cubic-time TurboFold II, taking only 12 minutes on the HIV family (average length 9,686 nt) while TurboFold II takes 3.1 days (372× speedup). More importantly, LinearTurboFold takes only 40 minutes on five SARS-CoV sequences while all the other benchmarks fail to scale. Regarding memory usage (Fig. 2B), LinearTurboFold uses memory that grows linearly with sequence length, while the other benchmarks use quadratic or more memory. In Fig. 2C–D, we also report the runtime and memory usage as functions of the number of homologs (k = 5–20), using sets of 16S rRNAs about 1,500 nt in length.
The apparent complexity of LinearTurboFold with respect to group size is higher than that of TurboFold II because the cubic-time partition function calculation, which dominates the runtime of TurboFold II, has been linearized in LinearTurboFold by LinearPartition (Fig. S10C).

We next compare the accuracies of predicted secondary structures and MSAs between LinearTurboFold and several benchmark methods (see Methods §9). Besides the Sankoff-style LocARNA and MXSCARNA, we also consider three types of negative controls: (a) single sequence folding (partition function-based): Vienna RNAfold^31 (-p mode) and LinearPartition; (b) sequence-only alignment: MAFFT^21 and LinearAlignment (a standalone version of the alignment method developed for this work, without the structural information used inside LinearTurboFold); and (c) an align-then-fold method that predicts consensus structures from MSAs (Fig. S6): MAFFT + RNAalifold.^20 For secondary structure prediction, LinearTurboFold, TurboFold II and LocARNA achieve higher F1 scores than the single sequence folding methods (Vienna RNAfold and LinearPartition) (Fig. 2E), which demonstrates that folding with homology information performs better than folding sequences separately. Overall, LinearTurboFold performs significantly better than all the other benchmarks on structure prediction. For the accuracy of MSAs (Fig. 2F), the structural alignments from LinearTurboFold obtain higher accuracies than the sequence-only alignments (LinearAlignment and MAFFT) on all four families, especially for families with low sequence identity. On average, LinearTurboFold performs comparably with TurboFold II and significantly better than the other benchmarks on alignments. We also note that the structure prediction accuracy of the align-then-fold approach (MAFFT + RNAalifold) depends heavily on the alignment accuracy, and is the worst when the sequence identity is low (e.g., SRP RNA) and the best when the sequence identity is high (e.g., 16S rRNA) (Fig. 2E–F).

Highly Conserved Structures in SARS-CoV-2 and SARS-related Betacoronaviruses

RNA sequences with conserved secondary structures play vital biological roles and provide potential targets. The current COVID-19 outbreak creates an urgent need to identify potential targets for diagnostics and therapeutics. Given its strong scalability and high accuracy, we used LinearTurboFold on a group of full-length SARS-CoV-2 and SARS-related (SARSr) genomes to obtain global structures and identify highly conserved structural regions. We used a greedy algorithm to select the 16 most diverse genomes from all the valid SARS-CoV-2 genomes submitted to the Global Initiative on Sharing Avian Influenza Data (GISAID)^43 up to December 2020 (Methods §11). We further extended the group by adding 9 SARS-related homologous genomes (5 human SARS-CoV-1 and 4 bat coronaviruses).^44 In total, we built a dataset of 25 full-length genomes consisting of 16 SARS-CoV-2 and 9 SARS-related sequences (Tab. S2). The average pairwise sequence identities of the 16 SARS-CoV-2 genomes and of all 25 genomes are 99.9% and 89.6%, respectively. LinearTurboFold takes about 13 hours and 43 GB on the 25 genomes.

To evaluate the reliability of LinearTurboFold predictions, we first compare them with Huston et al.’s SHAPE-guided models^24 for regions with well-characterized structures across betacoronaviruses. For the extended 5’ and 3’ untranslated regions (UTRs), LinearTurboFold’s predictions are close to the SHAPE-guided structures (Fig. 3A–B),
i.e., both identify the stem-loops (SLs) 1–2 and 4–7 in the extended 5’ UTR, and the bulged stem-loop (BSL), SL1, and a long bulge stem for the hypervariable region (HVR) including the stem-loop II-like motif (S2M) in the 3’ UTR. Interestingly, in our model, the high unpaired probability of the stem in SL4b indicates the possibility of it being single-stranded as an alternative structure, which is supported by experimental studies.^26, 25 In addition, the compensatory mutations LinearTurboFold found in the UTRs strongly support the evolutionary conservation of these structures (Fig. 3A).

The most important difference between LinearTurboFold’s prediction and Huston et al.’s experimentally-guided model is that LinearTurboFold discovers an end-to-end interaction (29.8 kilobases apart) between the 5’ UTR (SL3, 60–82 nt) and the 3’ UTR (final region, 29845–29868 nt), which fold locally by themselves in Huston et al.’s model. Interestingly, this 5’-3’ interaction matches exactly with the one discovered by the purely experimental work of Ziv et al.^23 using the COMRADES technique to capture long-range base-pairing interactions (Fig. 3C). Such end-to-end interactions have been well established by theoretical and experimental studies^45, 46, 47 to be common in natural RNAs, but they are far beyond the reach of the local folding methods used in existing studies of SARS-CoV-2 secondary structures.^24, 25, 27, 28 By contrast, LinearTurboFold predicts secondary structures globally without any limit on window size or base-pairing distance, enabling it to discover long-distance interactions across the whole genome. The similarity between our predictions and the experimental work shows that our in silico method of folding multiple homologs can achieve results similar to, if not more accurate than, experimentally-guided single-genome predictions. We also observed that LinearPartition, as a single sequence folding method, can also predict a long-range interaction between the 5’ and 3’ UTRs, but it involves SL2 instead of SL3 of the 5’ UTR (Fig. 3A), which indicates that the homologous information helps LinearTurboFold adjust the positions of base pairs to be conserved. Additionally, the align-then-fold approach (MAFFT + RNAalifold) fails to predict such long-range interactions (Fig. S11B).

The frameshifting stimulation element (FSE) is another well-characterized region. For an extended FSE region, the LinearTurboFold prediction consists of two substructures (Fig. 4A): the 5’ part includes an attenuator hairpin and a stem, which are connected by a long internal loop (16 nt) including the slippery site, and the 3’ part includes three stem-loops. We observe that our predicted structure of the 5’ part is consistent with experimentally-guided models^24, 25, 28 (Fig. 4B–D). In the attenuator hairpin, the small internal loop motif (UU) was previously targeted by a small-molecule binder that stabilizes the folded state of the attenuator hairpin and impairs frameshifting.^33 For the long internal loop including the slippery site, we will show in the next section that it is both highly accessible and conserved (Fig. 5), which makes it a perfect candidate for drug design. For the 3’ region of the FSE, LinearTurboFold successfully predicts stems 1–2 (but misses stem 3) of the canonical three-stem pseudoknot^32 (Fig. 4E). Our prediction is closer to the canonical structure than the experimentally-guided models^24, 25, 28 (Fig. 4B–D); one such model (Fig. 4B)
identified the pseudoknot (stem 3) but with an open stem 2. Note that all these experimentally-guided models for the FSE region were estimated for specific local regions. As a result, the models are sensitive to the context and the region boundaries^28, 24, 48 (see Fig. S12D–F for alternative structures of Fig. 4B–D with different regions). LinearTurboFold, by contrast, does not suffer from this problem by virtue of global folding without local windows. Besides SARS-CoV-2, we notice that the estimated structure of the SARS-CoV-1 reference sequence (Fig. 4F) from LinearTurboFold is similar to that of SARS-CoV-2 (Fig. 4A), which is consistent with the observation that the structure of the FSE region is highly conserved among betacoronaviruses.^32 Finally, as negative controls, both the single sequence folding algorithm (LinearPartition, Fig. 4G) and the align-then-fold method (RNAalifold, Fig. S12G) predict quite different structures from the LinearTurboFold prediction (Fig. 4A) (39%/61% of pairs from the LinearTurboFold model are not found by LinearPartition/RNAalifold, respectively).

In addition to the well-studied UTR and FSE regions, LinearTurboFold discovers 50 regions whose structures are identical across the 25 genomes, 26 of which are novel compared to previous studies^29, 24 (Fig. 4H and Tab. S4). These novel structures are potential targets for small-molecule drugs^33 and antisense oligonucleotides.^26, 49 LinearTurboFold also recovers fully conserved base pairs with compensatory mutations (Tab. S3), which imply highly conserved structural regions whose functions might not have been explored. We also provide the whole multiple sequence alignment and the predicted structures for the 25 genomes from LinearTurboFold (see Fig. S13 for the format and link).

Highly Accessible and Conserved Regions in SARS-CoV-2 and SARS-related Betacoronaviruses

Studies show that siRNA silencing efficiency, ASO inhibitory efficacy, CRISPR-Cas13 knockdown efficiency, and RT-PCR primer binding efficiency all correlate with the target region’s accessibility,^37, 35, 36, 50 which is the probability of the target site being fully unpaired. However, most existing work for designing siRNAs, ASOs, CRISPR-Cas13 gRNAs, and RT-PCR primers does not take this feature into consideration^51, 52 (Tab. S5). Here LinearTurboFold is able to provide more principled design candidates by identifying accessible regions of the target genome. In addition to accessibility, the emerging variants around the world reduce the effectiveness of existing vaccines and test kits (Tab. S5), which indicates that sequence conservation is another critical aspect of therapeutic and diagnostic design. LinearTurboFold, being a tool for both structural alignment and homologous folding, can identify regions that are both (sequence-wise) conserved and (structurally) accessible, and it takes advantage of not only SARS-CoV-2 variants but also homologous sequences, e.g., SARS-CoV-1 and bat coronavirus genomes, to identify conserved regions from historical and evolutionary perspectives.

To find unstructured regions, Rangan et al.^29 imposed a threshold on the unpaired probability of each position, which is a crude approximation because the probabilities are not independent of each other. By contrast, the widely-used stochastic sampling algorithm^53, 42 builds a representative ensemble of structures by sampling independent secondary structures according to their probabilities in the Boltzmann distribution.
Thus the accessibility of a region can be approximated as the fraction of sampled structures in which the region is single-stranded. LinearTurboFold utilizes LinearSampling^42 to generate 10,000 independent structures for each genome according to the modified partition functions after the iterative refinement (Fig. 1A module 5), and calculates accessibilities for regions at least 15 nt long. We then define accessible regions as those with accessibility of at least 0.5 across all 16 SARS-CoV-2 genomes (Fig. 5A–B). We also measure the free energy needed to open a target region [i, j],^54 notated:

$$\Delta G_u[i, j] = -RT\,\bigl(\log Z_u[i, j] - \log Z\bigr) = -RT \log P_u[i, j]$$

where $Z$ is the partition function, which sums the equilibrium constants of all possible secondary structures, $Z_u[i, j]$ is the partition function over all structures in which the region $[i, j]$ is fully unpaired, $R$ is the universal gas constant and $T$ is the thermodynamic temperature. Therefore $P_u[i, j]$ is the unpaired probability of the target region and can be approximated via sampling by $s_0/s$, where $s$ is the sample size and $s_0$ is the number of samples in which the target region is single-stranded. Regions whose free energy change is close to zero need less free energy to open, and are thus more accessible for binding by siRNAs, ASOs, CRISPR-Cas13 gRNAs and RT-PCR primers.

Next, to identify regions that are highly conserved among both SARS-CoV-2 and SARS-related genomes, we require that these regions contain at most three mutated sites across the 9 SARS-related genomes relative to the SARS-CoV-2 reference sequence, because historically conserved sites are also unlikely to change in the future,^55 and that the average sequence identity with the reference sequence over a large SARS-CoV-2 dataset is at least 0.999 (here we use a dataset of ∼2M SARS-CoV-2 genomes submitted to GISAID up to June 30, 2021^†; see Methods §11). Finally, we identified 33 accessible and conserved regions (Fig. 5G and Tab. S6), which are not only structurally accessible among SARS-CoV-2 genomes but also highly conserved among SARS-CoV-2 and SARS-related genomes (Fig. 5C). Because specificity is also a key factor influencing siRNA efficiency,^56 we ran BLAST for these regions against the human transcript dataset (Tab. S6). We also list the GC content of each region.

Among these regions, region 16 corresponds to the internal loop containing the slippery site in the extended FSE region, and it is conserved at both the structural and sequence levels (Fig. 5D and 5H). Besides the SARS-CoV-2 genomes, SARS-related genomes such as the SARS-CoV-1 reference sequence (NC_004718.3) and a bat coronavirus (BCoV, MG772934.1) also form similar structures around the slippery site (Fig. 5A). By removing the constraint of conservation on the SARS-related genomes, we identified 38 additional candidate regions (Tab. S7) that are accessible but only highly conserved on SARS-CoV-2 variants. We also designed a negative control by analyzing the SARS-CoV-2 reference sequence alone using LinearSampling, which can also predict accessible regions. However, these regions are not structurally conserved among the other 15 SARS-CoV-2 genomes, resulting in vastly different accessibilities, except for one region in the M gene (Tab. S8). The reason for this difference is that, even with a high sequence identity (over 99.9%), single sequence folding algorithms still predict greatly dissimilar structures for the SARS-CoV-2 genomes (Fig. 5E–F).
Both regions (in the nsp11 and N genes) are fully conserved among the 16 SARS-CoV-2 genomes, yet they still fold into vastly different structures due to mutations outside the regions; as a result, the accessibilities are either low (nsp11) or in a wide range (N) (Fig. 5D). By contrast, because LinearTurboFold folds each sequence with base-pairing proclivities inferred from all the homologous sequences, its structure predictions are more consistent with one another and can thus detect conserved structures (Fig. 5A–B).

The constant emergence of new SARS-CoV-2 variants is reducing the effectiveness of existing vaccines and test kits. To cope with this issue, there is an urgent need to identify conserved structures as promising targets for therapeutics and diagnostics that would work in spite of current and future mutations. Here we presented LinearTurboFold, an end-to-end linear-time algorithm for structural alignment and conserved structure prediction of RNA homologs, which is the first joint-fold-and-align algorithm to scale to full-length SARS-CoV-2 genomes without imposing any constraints on base-pairing distance. We also demonstrate that LinearTurboFold leads to significant improvement on secondary structure prediction accuracy as well as an alignment accuracy comparable to or higher than all benchmarks. Unlike existing work on SARS-CoV-2 using local folding and single-sequence folding workarounds, LinearTurboFold enables unprecedented global structural analysis on SARS-CoV-2 genomes; in particular, it can capture long-range interactions, especially the one between the 5’ and 3’ UTRs across the whole genome, which matches perfectly with a recent purely experimental work. Over a group of SARS-CoV-2 and SARS-related homologs, LinearTurboFold identifies not only conserved structures supported by compensatory mutations and experimental studies, but also accessible and conserved regions as vital targets for designing efficient small-molecule drugs, siRNAs, ASOs, CRISPR-Cas13 gRNAs and RT-PCR primers. LinearTurboFold is widely applicable to the analysis of other RNA viruses (influenza, Ebola, HIV, Zika, etc.) and to full-length genome analysis.

Detailed descriptions of our algorithms, datasets, and evaluation metrics are available in the online version of the paper.

§1 Pairwise Hidden Markov Model

We use a pairwise Hidden Markov Model (pair-HMM) to align two sequences.^57, 58 The model includes three actions ($h$): aligning two nucleotides from the two sequences (ALN), inserting a nucleotide in the first sequence without a corresponding nucleotide in the other sequence (INS1), and inserting a nucleotide in the second sequence without a corresponding nucleotide in the first sequence (INS2). We then define $\mathcal{A}(x, y)$ as the set of all possible alignments of the two sequences, and an alignment $a \in \mathcal{A}(x, y)$ as a sequence of $m + 2$ steps $(h, i, j)$, where $(h, i, j)$ denotes an alignment step at the position pair $(i, j)$ by the action $h$. Thus, for the $l$th step $a_l = (h_l, i_l, j_l) \in a$, the values of $i_l$ and $j_l$ depend on the action $h_l$ and the positions $i_{l-1}$ and $j_{l-1}$ of $a_{l-1}$:

$$(i_l, j_l) = \begin{cases} (i_{l-1}+1,\; j_{l-1}+1) & \text{if } h_l = \text{ALN} \\ (i_{l-1}+1,\; j_{l-1}) & \text{if } h_l = \text{INS1} \\ (i_{l-1},\; j_{l-1}+1) & \text{if } h_l = \text{INS2} \end{cases}$$

with $(\text{ALN}, 0, 0)$ as the first step and $(\text{ALN}, |x| + 1, |y| + 1)$ as the last one. For two sequences {ACAAGU, AACUG}, one possible alignment {−ACAAGU, AAC−−UG} can be specified as {(ALN, 0, 0) → (INS2, 0, 1) → (ALN, 1, 2) → (ALN, 2, 3) → (INS1, 3, 3) → (INS1, 4, 3) → (ALN, 5, 4) → (ALN, 6, 5) → (ALN, 7, 6)}, where a gap symbol (−) represents a nucleotide insertion in the other sequence at the corresponding position (Fig. S8).
S8). The action h[l] in each step (h[l], i[l], j[l]) corresponds to a line segment starting from the previous node (i[l]−[1], j[l]−[1]) and stopping at the node (i[l], j[l]). Thus the line segment is horizontal, vertical or diagonal towards the top-right corner when h[l] is INS1, INS2 or ALN, respectively (Fig. S8). We initialize the first step with the state ALN of probability 1, thus p[π](ALN) = 1. p[t](h[2] | h[1]) is the transition probability from the state h[1] to h[2], and p[e]((c[1], | c[2]) h[1]) is the probability of the state h[1] emitting a character pair (c[1], c[2]) with values from {A, G, C, U, −}. Both the emission and transition probabilities were taken from TurboFold II. The function e() yields a character pair based on a[l] and the nucleotides of two sequences: where x[i] and y[i] are the ith and jth nucleotides of sequences x and y, respectively. Note that the first step a[0] = (ALN, 0, 0) and the last a[m+1] = (ALN, |x| + 1, |y| + 1) do not have emissions. We denote forward probability encompassing the probability of the partial alignments of x and y up to positions i and j, and all the alignments that go through the step (h, i, j): where a[: k] indicates the partial alignments from the starting node up to the kth step and a[k] = (h, i, j). For instance, and corresponds to the region circled by the blue dashed lines (Fig. S8B, C and D). Similarly, the backward probability assembles the probability of partial alignments a[k + 1 :] from the (k + 1)th step up to the end one: For example, and are the regions circled by the yellow dashed line (Fig. S8B, C and D). Thus, the probability of observing two sequences p(x, y) is or . §2 Posterior Co-incidence Probability Computation Nucleotide positions i and j in two sequences x and y are said to be co-incident (notated as i ∼ j) in an alignment a if the alignment path goes through the node (i, j).^57 Since the node (i, j) is reachable by three actions ℋ = {ALN, INS1, INS2}, the co-incidence probability for a position pair (i, j) given two sequences is: where p(x, y, a) is the probability of two sequences with the alignment a, and p(x, y) is the probability of observing two sequences, which is the sum of probability of all the possible alignments: The co-incidence probability for positions i and j (Equation 1) can be computed by: §3 LinearAlignment Unlike a previous method^57 that fills out all the nodes in the alignment matrix by columns (Fig. S8), LinearAlignment scans the matrix based on the step count s, which is the sum value of i and j (s = i + j) for the partial alignments of x[[1,i]] and y[[1,j]]. As shown in the pseudocode (Fig. S9), the forward phase starts from the node (0, 0) in the state ALN of probability 1, then iterates the step count s from 0 to |x| + |y| −1. For each step count s with a specific state h from ℋ, we first collect all the nodes (i, j) with the step count s with existing, which means the position pair (i, j) has been visited via the state h before. Then each node makes transitions to next nodes by there states, and updates the corresponding forward probabilities and , respectively. The current alignment algorithm is still an exhaustive-search algorithm and costs quadratic time and space for all the |x| × |y| nodes. To reduce the runtime, LinearAlignment uses the beam search heuristic algorithm^40 and keeps a limited number of promising nodes at each step. 
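To make the step-count ordering and beam pruning concrete, here is a minimal Python sketch of the forward pass described in §1 through §3. It is an illustration only: the transition and emission tables (p_trans, p_emit) are assumed inputs rather than the TurboFold II parameters, and the non-emitting end step is omitted.

    from collections import defaultdict

    STATES = ("ALN", "INS1", "INS2")
    MOVES = {"ALN": (1, 1), "INS1": (1, 0), "INS2": (0, 1)}

    def forward_beam(x, y, p_trans, p_emit, b1=100):
        # alpha[(h, i, j)] accumulates the forward probability of all partial
        # alignments that reach node (i, j) via state h.
        alpha = defaultdict(float)
        alpha[("ALN", 0, 0)] = 1.0
        buckets = defaultdict(list)          # B(s, h): nodes with step count s
        buckets[(0, "ALN")].append((0, 0))
        for s in range(0, len(x) + len(y) + 1):
            for h in STATES:
                nodes = buckets.pop((s, h), [])
                # beam pruning: keep only the top-b1 nodes of B(s, h)
                nodes.sort(key=lambda n: -alpha[(h, n[0], n[1])])
                for (i, j) in nodes[:b1]:
                    for h2, (di, dj) in MOVES.items():
                        i2, j2 = i + di, j + dj
                        if i2 > len(x) or j2 > len(y):
                            continue
                        c1 = x[i2 - 1] if di else "-"   # emitted character pair
                        c2 = y[j2 - 1] if dj else "-"
                        p = p_trans[h][h2] * p_emit[h2][(c1, c2)]
                        if (i2, j2) not in buckets[(i2 + j2, h2)]:
                            buckets[(i2 + j2, h2)].append((i2, j2))
                        alpha[(h2, i2, j2)] += alpha[(h, i, j)] * p
        return alpha

Processing nodes in increasing step count is safe because an ALN transition raises s by 2 and an insertion raises it by 1, so every contribution into a node at step s comes from steps already processed.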
For each step count s with a state h, LinearAlignment first applies the beam search method over B(s, h), which is the collection of all the nodes (i, j) with step count s for which α(h, i, j) exists (Fig. S9 line 6). The algorithm only saves the top b[1] nodes with the highest forward scores in B(s, h), and these are subsequently allowed to make transitions to the next states. Here b[1] is a user-specified beam size with a default value of 100. In total, O(b[1]n) nodes survive because s ranges over |x| + |y| values and each step count keeps at most b[1] nodes. For simplicity, we illustrate the topological order and the beam search method with alignment examples (Fig. S8A), while the forward-backward algorithm adopts the same idea by summing the probabilities of all possible alignments. After the forward phase, the backward phase (Fig. S9) computes the co-incidence probabilities in linear time automatically, because only a linear number of nodes in B(s, h) are stored. Thus, by pruning low-scoring candidates at each step of the forward algorithm, we reduce the runtime from O(n^2) to O(b[1]n) for aligning two sequences. For k input homologous sequences, LinearTurboFold computes posterior co-incidence probabilities for each pair of sequences by LinearAlignment, which costs O(k^2 b[1]n) runtime in total.

§4 Match Scores Computation and Modified LinearAlignment

To encourage pairwise alignments that conform with the estimated secondary structures, LinearTurboFold predicts structural alignments by incorporating the secondary structural conformation. PMcomp^59 first proposed the match score to measure the structural similarity of position pairs between a pair of sequences, and TurboFold II adopts it as a prior. Based on the base pair probabilities P[x](i, j) estimated from the partition function for a sequence x, a position i can pair with a base upstream or downstream, or remain unpaired, with corresponding probabilities P[x,>](i) = ∑[j<i] P[x](i, j), P[x,<](i) = ∑[j>i] P[x](i, j) and P[x,o](i) = 1 − P[x,>](i) − P[x,<](i), respectively. The match score m[x,y](i, j) for two positions i and j from two sequences x and y is computed from the probabilities of these three structural propensities from the last iteration (t − 1), where α[1], α[2] and α[3] are weight parameters trained in TurboFold II. The forward-backward phases integrate the match score as a prior when aligning two nucleotides (Fig. S9, lines 10 and 12). TurboFold II separately pre-computes match scores for all the O(n^2) position pairs of each pair of sequences before the HMM alignment calculation. However, only a linear number of pairs, O(b[1]n), survive after applying the beam pruning in LinearAlignment. To avoid redundant time and space usage, LinearTurboFold calculates the corresponding match scores for co-incident pairs only when they are first visited in LinearAlignment. Overall, for k homologous sequences, LinearTurboFold reduces the runtime of the whole pairwise posterior co-incidence probability computation module from O(k^2n^2) to O(k^2b[1]n) by applying the beam search heuristic to the pairwise HMM alignment and only calculating match scores for the position pairs that are needed.

§5 Extrinsic Information Calculation

To update the partition function for each sequence with structural information from its homologs, TurboFold^19 introduces extrinsic information to model the proclivity for base pairing induced by the other sequences in the input set 𝒮.
The extrinsic information e[x](i, j) for a base pair (i, j) in the sequence x maps the estimated base pairing probabilities of the other sequences onto the target sequence via the co-incident nucleotides between each pair of sequences, where P[y](k, l) is the base pair probability for a base pair (k, l) in the sequence y from the (t − 1)th iteration, and the posterior co-incidence probabilities for the position pairs (i, k) and (j, l) are taken from the tth iteration. The extrinsic information first sums the base pair probabilities of all alignable pairs from one other sequence, weighted by the co-incidence probabilities, and then iterates over all the other sequences. s[x,y] is the sequence identity of sequences x and y; sequences with a low identity contribute more to the extrinsic information than sequences of higher identity. The sequence identity is defined as the fraction of nucleotides that are aligned and identical in the alignment.

§6 LinearPartition for Base Pairing Probability Estimation with Extrinsic Information

The classical partition function algorithm scales cubically with sequence length, which limits its use for longer sequences. To address this bottleneck, our recent LinearPartition^38 algorithm approximates the partition function and the base pairing probability matrix computation in linear time. LinearPartition is significantly faster, and correlates better with the ground truth structures than the traditional cubic partition function calculation. Thus LinearTurboFold uses LinearPartition to predict base pair probabilities instead of the traditional O(n^3)-time partition function. TurboFold introduces the extrinsic information into the partition function as a pseudo-free energy term for each base pair (i, j). Similarly, in LinearPartition, for each span [i, j], which is the subsequence x[i]…x[j], with associated partition function Q(i, j), the partition function is modified by multiplying in the extrinsic information if (x[i], x[j]) is an allowed pair, where λ denotes the contribution of the extrinsic information relative to the intrinsic information. Specifically, at each step j, among all possible spans [i, j] where x[i] and x[j] are paired, we replace the original partition function Q(i, j) with the product of Q(i, j) and the extrinsic information term. LinearTurboFold then applies the beam pruning heuristic over the modified partition function instead of the original one. Similarly, TurboFold II obtains the extrinsic information for all O(n^2) base pairs before the partition function calculation of each sequence, while only a linear number of base pairs survive in LinearPartition. Thus, LinearTurboFold only requires the extrinsic information for those promising base pairs that are visited in LinearPartition. Overall, for k homologous sequences, LinearTurboFold reduces the runtime of base pair probability estimation for each sequence from O(kn^3 + k^2n^2) to linear in sequence length by applying the beam search heuristic to the partition function calculation and only calculating extrinsic information for the saved base pairs.

§7 MSA Generation and Secondary Structure Prediction

After several iterations, TurboFold II builds the multiple sequence alignment over the pairwise posterior co-incidence probabilities, using a probabilistic consistency transformation, generating a guide tree and performing progressive alignment.^22 The whole procedure is accelerated by virtue of sparse matrices, discarding alignment pairs with probability smaller than a threshold (0.01 by default).
Since LinearAlignment uses the beam search method and only saves a linear number of co-incident pairs, the MSA generation in LinearTurboFold costs linear runtime in the sequence length. Estimated base pair probabilities are fed into downstream methods to predict secondary structures. To maintain the end-to-end linear-time property, LinearTurboFold uses ThreshKnot,^41 a thresholded version of ProbKnot^60 that only considers base pairs with probability exceeding a threshold θ (θ = 0.3 by default). We evaluated the performance of ThreshKnot and MEA with different hyperparameters (θ and γ). On a sampled RNAStrAlign training set, the ThreshKnot curve is closer to the upper right-hand corner than that of MEA, which indicates that ThreshKnot always has a higher Sensitivity than MEA at a given PPV (Fig. S10B).
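As a rough illustration of the thresholded best-partner idea behind ThreshKnot (a sketch, not the authors' implementation), the following keeps a base pair (i, j) only if its probability reaches θ and it is the highest-probability partner for both of its endpoints:

    def threshknot(bpp, theta=0.3):
        # bpp: dict mapping position pairs (i, j) to base pair probabilities
        best = {}                      # position -> (partner pair, probability)
        for (i, j), p in bpp.items():
            if p < theta:
                continue               # discard pairs below the threshold
            for k in (i, j):
                if k not in best or p > best[k][1]:
                    best[k] = ((i, j), p)
        # keep pairs that are the best partner at both of their endpoints
        return {pair for pair, _ in best.values()
                if best[pair[0]][0] == pair and best[pair[1]][0] == pair}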
§8 Efficiency and Scalability

Datasets. Four datasets were built and used for measuring efficiency and scalability. To evaluate the efficiency and scalability of LinearTurboFold with sequence length, we collected groups of homologous RNA sequences with sequence lengths ranging from 200 nt to 29,903 nt at a fixed group size of 5. Sequences were sampled from the RNAStrAlign dataset,^18 the Comparative RNA Web (CRW) Site,^61 the Los Alamos HIV database (http://www.hiv.lanl.gov/) and the SARS-related betacoronaviruses (SARS-related).^44 RNAStrAlign, aggregated and released with TurboFold II, is an RNA alignment and structure database. Sequences in RNAStrAlign are categorized into families, i.e., sets of homologs, and some families are further split into subfamilies. Each subfamily or family includes a multiple sequence alignment and ground truth structures for all the sequences. 20 groups of five homologs were randomly chosen from the small subunit ribosomal RNA (Alphaproteobacteria subfamily), SRP RNA (Protozoan subfamily), RNase P RNA (bacterial type A subfamily) and telomerase RNA families. For longer sequences, we sampled five groups of 23S rRNA (sequence lengths from 2,700 nt to 2,926 nt) from the CRW Site, HIV-1 genetic sequences (9,597 nt to 9,738 nt) from the Los Alamos HIV database, and SARS-related sequences (29,484 nt to 29,903 nt). All the sequences in one group belong to the same subfamily or subtype. We sampled five groups for each family and obtained 35 groups in total. Due to runtime and memory limitations, we did not run TurboFold II on the SARS-CoV-2 groups (Fig. 2, A and D). To assess the runtime and memory usage of LinearTurboFold with group size, we fixed the sequence length at around 1,500 nt and sampled groups of sequences from the small subunit ribosomal RNA (Alphaproteobacteria subfamily) with group sizes 5, 10, 15 and 20 (Fig. 2, B and F). We used a Linux machine (CentOS 7.7.1908) with a 2.30 GHz Intel Xeon E5-2695 v3 CPU and 755 GB memory, and gcc 4.8.5, for benchmarks. We built a test set from the RNAStrAlign dataset to measure and compare the performance of LinearTurboFold and other methods: 60 groups of input sequences, each consisting of five homologous sequences, were randomly selected from the small subunit ribosomal RNA (rRNA) (Alphaproteobacteria subfamily), SRP RNA (Protozoan subfamily), RNase P RNA (bacterial type A subfamily) and telomerase RNA families of the RNAStrAlign dataset. We removed sequences shorter than 1,200 nt for the small subunit rRNA to filter out subdomains, and removed sequences shorter than 200 nt for SRP RNA, following the TurboFold II paper, to filter out less reliable sequences. We resampled the test set five times and report the average PPV, Sensitivity and F1 scores over the five samples (Fig. 2, C and F). An RNAStrAlign training set was built to compare the accuracies of MEA and ThreshKnot: 40 groups of 3, 5 and 7 homologs were randomly sampled from the 5S ribosomal RNA (Eubacteria subfamily), group I intron (IC1 subfamily), tmRNA, and tRNA families of the RNAStrAlign dataset. We chose θ = 0.1, 0.2, 0.3, 0.4 and 0.5 for ThreshKnot, and γ = 1, 1.5, 2, 2.5, 3, 3.5, 4, 8 and 16 for MEA, and report the average secondary structure prediction accuracies (PPV and Sensitivity) across all training families (Fig. S10B).

§9 Benchmarks

The Sankoff algorithm^11 uses dynamic programming to simultaneously fold and align two or more sequences, requiring O(n^3k) time and O(n^2k) space for k input sequences of average length n. Both LocARNA^12 and MXSCARNA^14 are Sankoff-style algorithms. LocARNA (local alignment of RNA) costs O(n^2(n^2 + k^2)) time and O(n^2 + k^2) space by restricting the alignable regions. MXSCARNA progressively aligns multiple sequences as an extension of the pairwise alignment algorithm SCARNA,^62 with improved score functions. SCARNA first aligns stem fragment candidates, then removes inconsistent matchings in a post-processing step to generate the sequence alignment. MXSCARNA reduces the runtime to O(k^3n^2) and the space to O(k^2n^2) with a limited search space for folding and alignment. Both MXSCARNA and LocARNA use pre-computed base pair probabilities for each sequence as structural input. All benchmarks use the default options and hyperparameters when run on the RNAStrAlign test set. TurboFold II iterates three times, then predicts secondary structures by MEA (γ = 1). LinearTurboFold also runs three iterations with default beam sizes (b[1] = b[2] = 100) in LinearAlignment and LinearPartition, then predicts structures with ThreshKnot (θ = 0.3).

§10 Significance Test

We use a paired, two-tailed permutation test^63 to measure significant differences. Following common practice, the repetition number is 10,000 and the significance threshold α is 0.05.

§11 SARS-CoV-2 Datasets

We used two large SARS-CoV-2 datasets. The first dataset is used to draw a representative sample of the most diverse SARS-CoV-2 genomes. We downloaded all the genomes submitted to GISAID^43 by December 29, 2020 (downloaded on December 29, 2020), filtered out low-quality genomes (more than 5% unknown characters and degenerate bases, shorter than 29,500 nt, or with framing errors in the coding region), and also discarded genomes with more than 600 mutations compared with the SARS-CoV-2 reference sequence (NC_045512.2).^64 After preprocessing, this dataset includes about 258,000 genomes. To identify a representative group of samples with more variable mutations, we designed a greedy algorithm to select the 16 most diverse genomes among those found at least twice in the 258,000 genomes. The general idea of the greedy algorithm is to choose genomes one by one, each contributing the most new mutations compared with the already selected samples, which initially consist of only the reference sequence.
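A small sketch of the greedy selection just described, assuming each candidate genome has been reduced to a set of mutation labels relative to the reference (the data layout is invented for illustration):

    def select_diverse(mutations, n=16):
        # mutations: dict mapping genome id -> set of mutations vs. the reference
        covered = set()     # the selected set starts as the reference alone,
        selected = []       # which carries no mutations relative to itself
        pool = dict(mutations)
        while len(selected) < n and pool:
            # pick the genome contributing the most not-yet-seen mutations
            gid = max(pool, key=lambda g: len(pool[g] - covered))
            covered |= pool.pop(gid)
            selected.append(gid)
        return selected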
The second, larger, dataset is used to evaluate the conservation of regions with respect to more up-to-date variants. We downloaded all the genomes submitted to GISAID by June 30, 2021 (downloaded on July 25, 2021), and applied the same preprocessing as for the first dataset. This resulted in a dataset of ∼2M genomes, which was used to evaluate conservation in Figure 5 and Tables S5, S6 and S7.

Supporting Information

* Besides these joint-fold-and-align algorithms, there exist two alternative approaches to homologous folding: align-then-fold and fold-then-align; see Fig. S6 for details.
† The average sequence identity is 0.9987 on that ∼2M dataset (downloaded on July 25, 2021).
{"url":"https://www.biorxiv.org/content/10.1101/2020.11.23.393488v3.full","timestamp":"2024-11-07T08:03:06Z","content_type":"application/xhtml+xml","content_length":"417445","record_id":"<urn:uuid:bed7a15f-703c-48dc-9fd8-e840e6d115d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00222.warc.gz"}
TRAVELING SALESMAN PROBLEM ON A HYPERCUBIC, MIMD COMPUTER

A parallel implementation of an algorithm devised for solving the traveling salesman problem is presented. The algorithm is simulated annealing, and is implemented on a hypercubic, MIMD computer of 64 processing nodes. The parallel algorithm is discussed and performance figures are given. Efficiencies greater than 90% have been achieved.

Original language: English (US)
Title of host publication: Proceedings of the International Conference on Parallel Processing
Editors: Douglas DeGroot
Publisher: IEEE
Pages: 6-10
Number of pages: 5
ISBN (Print): 0818606371
State: Published - 1985
Publication series: Proceedings of the International Conference on Parallel Processing (ISSN 0190-3918)
All Science Journal Classification (ASJC) codes: Hardware and Architecture
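For orientation only, here is a serial Python sketch of simulated annealing on the TSP with 2-opt moves; the paper's actual contribution, the parallel decomposition across 64 hypercube nodes, is not reproduced here, and the annealing schedule parameters are arbitrary.

    import math, random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                   for i in range(len(tour)))

    def anneal(dist, t0=10.0, cooling=0.999, steps=200_000):
        n = len(dist)
        tour = list(range(n))
        random.shuffle(tour)
        cost, t = tour_length(tour, dist), t0
        for _ in range(steps):
            i, j = sorted(random.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
            delta = tour_length(cand, dist) - cost
            # accept improvements always, worsenings with Boltzmann probability
            if delta < 0 or random.random() < math.exp(-delta / t):
                tour, cost = cand, cost + delta
            t *= cooling
        return tour, cost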
{"url":"https://collaborate.princeton.edu/en/publications/traveling-saleman-problem-on-a-hypercubic-mimd-computer","timestamp":"2024-11-09T16:34:01Z","content_type":"text/html","content_length":"46394","record_id":"<urn:uuid:a0e2b71e-6586-4c73-932a-0a3259057156>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00752.warc.gz"}
Virtual Model Development of the Load Application System of a Wind Turbine Nacelle Test Bench for Hybrid Test Applications

The existing nacelle testing methods require continuous improvements to satisfy the ever-increasing demands of testing modern wind turbines. One way to achieve this goal is to use advanced simulation techniques to undertake hybrid testing, in which experiments and simulations are combined to push the boundaries of nacelle testing even further. Doing so requires the development of a virtual model of the test bench featuring the true test bench dynamics and functionalities. This contribution presents the development of a virtual model of the complete nontorque load application system of a nacelle test bench at Fraunhofer IWES. The model development methodology is explained and the impact of different levels of modeling depth of the hydraulic system model is investigated. It is concluded that modeling of friction and valve dynamics is necessary, as they have a significant influence on the generated loads. These findings can help in the development of virtual models of nacelle test benches and pave the way for performing hybrid testing of wind turbine nacelles.

Issue Section: Research Papers
Topics: Actuators, Fluids, Modeling, Stress, Valves, Wind turbines, Friction, Simulation, Model development, Cylinders, Pipes, Hydraulic circuits, Dynamics (Mechanics), Pressure, Hydraulic drive systems, Control equipment

1 Introduction

Modern wind turbine systems are designed to generate and supply electricity for an operational period of 20 years or more [1]. However, past years have shown a high rate of failures in both offshore and onshore wind turbine systems [2,3]. Such frequent failures cause long downtimes requiring costly repairs, which dominate the overall operation and maintenance (O&M) costs [4,5]. This has raised the need for finding ways of developing more reliable wind turbine systems. One major way of developing reliable wind turbines is extensive system-level testing and experimental validation [6]. In recent years, nacelle test benches have become an attractive method for testing wind turbine drivetrains compared to the conventional way of field testing. The most advanced nacelle test benches, like the Dynamic Nacelle Testing Laboratory (DyNaLab) at Fraunhofer IWES, incorporate sophisticated hardware-in-the-loop (HIL) schemes to enable the emulation of field-like loads on a nacelle in a controlled testing environment [7] (Fig. 1). They offer faster and reproducible test campaigns at a lower cost compared with field testing. However, wind turbine technologies are evolving rapidly and the operational capacities of wind turbines are breaking new records every year [8]. Consequently, the existing test benches must keep up with the growing demands of testing modern wind turbine nacelles. Upgrading the test bench operational capacity might be the obvious solution, but it is not necessarily the most feasible and realistic one. An alternative solution for complementing test bench capabilities is needed that is both economically feasible and applicable to all existing test benches. This is where simulation techniques come into play. The advancements in simulation technology have allowed the development of complete virtual twins of the actual test setup and opened the doors for "hybrid testing" of wind turbine nacelles [9].
Combining the simulation results with the experimental measurements makes it possible to investigate the complex interactions between the device under test (DUT) and the test bench effectively. The virtual models of the RWTH Aachen University test benches [10,11] and the Clemson University test bench [12] have already demonstrated some of these advantages. These models have primarily been used for detailed analysis of the nacelle DUT. The ongoing VirtGondel research project [13] at Fraunhofer IWES aims to advance this technology further by using the virtual test bench model to augment the physical testing and provide a parallel virtual environment with load ranges beyond the test bench capacity. The virtual test bench model will also be utilized to further develop the existing methods for nacelle testing. However, the development of such high-fidelity virtual models of a nacelle test bench for hybrid test applications is challenging. The DyNaLab test bench features a hexapod system for the application of nontorque loads, which comprises several servohydraulic actuators powered via a large hydraulic circuit. The selection of the required modeling fidelity that represents all the relevant system dynamics with optimal computation times, for a system of such unique topology, size, and operational capacity, is by no means trivial. Moreover, there is very limited information available in the literature for developing virtual models of a test bench with a similar topology. The test bench model in Ref. [11] considers a simplified actuator dynamic model, whereas no details are given on the load application system model of the test bench in Ref. [10]. The test bench model in Ref. [12] considers a detailed model of the valve dynamics for studying the test bench control system and considers the cylinder pressure dynamics, which are presented in Refs. [14,15]. However, the recommended modeling fidelity is still not evident from these works, as this was not their focus. This leaves the question of required modeling fidelity still open. For complete virtual testing, it is important to model all the relevant test bench dynamics that can influence the global system response. Therefore, a detailed study is needed to understand the important system characteristics and the required modeling fidelity. This contribution presents the development of a virtual model of the complete nontorque load application system (LAS) of the DyNaLab test bench. Multibody simulation (MBS) is utilized for the modeling of the mechanical components of the LAS. The complete hydraulic system of the test bench is modeled using bond graph methods. Both system models are dynamically coupled via cosimulation in Simulink, which also features the LAS force control scheme using PI control. This allows implementation of the loads on the DUT in the same manner as on the actual bench. The model development methodology is explained, the influence of different levels of modeling depth of the system model is investigated, and the model fidelity most relevant for capturing the system dynamics is highlighted. The findings will aid the development of virtual models of test benches similar to the DyNaLab and shall open further doors for performing hybrid testing of wind turbine nacelles. A detailed description of the test bench LAS is given in Sec. 2. The modeling methodology is explained in Sec. 3. The implemented cosimulation framework for the test bench LAS is described in Sec. 4.
The case studies performed and a discussion of the results are provided in Secs. 5 and 6. The paper ends with an outlook for future work in Sec. 7 and a conclusion in Sec. 8.

2 DyNaLab Load Application System

The DyNaLab offers electrical and mechanical tests for a wind turbine nacelle of up to 10 MW power. With its direct drive and hexapod LAS, the test bench can apply loads on the nacelle DUT in six degrees-of-freedom (DOF) to emulate the wind loads. The LAS consists of a particular configuration of a 6-DOF Stewart-Gough platform, which is driven by means of six servohydraulic cylinders arranged in a hexagonal configuration. Figure 2 shows the force control scheme of the LAS. The LAS can apply up to 20 MNm bending moment and 2 MN thrust and shear forces. The internal control of the LAS transfers the desired loads at the load application point (LAP) to the individual cylinders. This transformation is governed by an inverse kinematic approach that maps the desired LAP loads in the task space into the required individual actuator loads in the joint space. Each actuator has a dedicated servovalve controller that receives the required actuator force set points to control the servovalve. Pressure sensors in each cylinder measure the generated cylinder forces, which serve as the feedback signal to the actuator controller. In this manner, each actuator has a closed loop in its respective joint space. This enables the application of nontorque loads in five DOFs in a controlled fashion. Each hydraulic actuator is connected to a comprehensive hydraulic circuit that ensures stable delivery of hydraulic fluid over the entire operating range of the test bench. Figure 3 shows a simplified representation of the test bench hydraulic circuit. A set of six motor-powered displacement pumps supplies pressurized fluid from the reservoir tank to the high-pressure line. A series of accumulators is connected to the pressure lines to minimize pressure ripples. The pressure line also contains several relief valves to ensure safe pressure limits. The return line delivers the low-pressure fluid through heat exchangers into the reservoir tank.

3 System Modeling

3.1 Modeling of the Hydraulic Sub-Systems. Elements of the hydraulic system of the test bench have been modeled using the bond graph method in 20-sim software [16]. The bond graph method is an energy-based modeling approach that provides a practical way of fully coupling multidomain systems with various power conversions. In the bond graph method, effort and flow variables, also known as power variables, are used to describe how the systems interact and exchange energy. This modeling approach categorizes each element of a physical system based on its ability to supply, store, transform, or dissipate energy. Kinetic and potential energy storage are represented by inertia (I) and capacitance (C) elements, respectively. Energy dissipation is represented by resistance (R) elements, while lossless energy transformation is depicted by transformer (TF) elements. These system components are interconnected with power bonds (denoted with a half arrow), representing the energy exchange; the sense of the half arrow gives the direction of the power. The causality of effort and flow variables is shown with a vertical stroke, which determines whether these variables are considered input or output in the respective bond graph elements.
Elements under the same effort are associated with parallel junctions (0-junctions), whereas elements under the same flow are associated with series junctions (1-junctions). The LAS is powered by a hydraulic system composed of several key subsystems such as the hydraulic supply unit, pipes, accumulators, and servocontrolled actuators. Modeling these subsystems using the bond graph approach gives the advantage of having complete control over the causal relationships between the component models. This helps in identifying and preventing algebraic loops in the system model. Furthermore, defining the coupling interface and the associated input and output variables between the hydraulic system model and the MBS model during cosimulation becomes straightforward, as these variables are simply the power variables of the bond graph model at any interface. The constitutive equations for modeling these subsystems have been gathered from Ref. [17] and are described in Secs. 3.1.1–3.1.5.

3.1.1 Hydraulic Fluid Supply. The test bench pump station supplies the hydraulic fluid to the circuit. It consists of a series of displacement pumps powered by an electric motor that drives the pumps to supply fluid to the manifold unit connected to the high-pressure line. Figure 4 shows the bond graph model of the fluid supply system and pressure relief valve. The motor is modeled as a modulated source of flow with the rotation speed as the modulation signal. The main tank supplying the fluid is an effort source. The pump is modeled as a transformer element that converts motor power (speed and torque) into fluid power (pressure and flowrate) according to $\dot{V} = d\,\omega$ (Eq. (1)), where $\dot{V}$ is the theoretical pump delivery as a function of the pump displacement d and the motor speed ω. The relief valve limits the pressure in the system to a limit value: it opens fully when the line pressure exceeds the limit pressure and is fully closed when the line pressure is below the limit pressure. This is modeled as a modulated R element with the line pressure as the modulation signal.

3.1.2 Fluid Pipe System. The pipe model allows the consideration of fluid inertia, fluid compressibility, line resistance, and the elasticity of the line material. Figure 5 shows the bond graph model of the pipe. The pipe and fluid capacitance is modeled as a C element of the standard lumped form $C = A\,L\,(1/\beta + D/(e\,E))$ (Eq. (2)), where A is the pipe area, L is the total pipe length, β represents the fluid bulk modulus, D is the pipe diameter, e is the pipe wall thickness, and E is the elastic modulus of the pipe material. The fluid inertia is lumped into a single I element that accounts for the total inertia of the fluid in the pipe and is defined as $I = \rho L / A$, where ρ is the fluid density. The line resistance is modeled as an R element with the Hagen–Poiseuille relation between pressure drop and flowrate for a laminar flow regime, $\Delta p = (128\,\mu L / (\pi D^4))\,Q$, where μ is the dynamic viscosity of the fluid.

3.1.3 Accumulator. Accumulators are storage elements that store flow part of the time for delivery during sudden pressure drops. In this manner, they reduce ripples and pressure transients in the pressure line. The test bench accumulators are of the bladder type with precharged gas. Such a device can be modeled as a flow-storing element. The constitutive relation of this element, assuming an isentropic process, is $p\,V^k = P_o\,V_o^k$, where k is the specific heat ratio for an isentropic process and $P_o$ and $V_o$ are the nominal pressure and volume of the accumulator.
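To give the lumped parameters of Secs. 3.1.2 and 3.1.3 a concrete feel, here is a small numeric sketch; every value below is an invented example, not a DyNaLab parameter.

    import math

    D, e, E = 0.05, 5e-3, 210e9      # pipe diameter, wall thickness, steel modulus
    A = math.pi * D**2 / 4           # pipe cross-sectional area [m^2]
    L = 20.0                         # total pipe length [m]
    beta = 1.6e9                     # fluid bulk modulus [Pa]
    rho, mu = 870.0, 0.03            # fluid density [kg/m^3], viscosity [Pa s]

    C = A * L * (1/beta + D/(e*E))   # fluid-plus-wall capacitance, Eq. (2)
    I = rho * L / A                  # lumped fluid inertia
    R = 128 * mu * L / (math.pi * D**4)   # laminar line resistance

    # isentropic accumulator: p V^k = Po Vo^k, so V(p) = Vo (Po/p)**(1/k)
    k, Po, Vo = 1.4, 100e5, 0.05
    print(C, I, R, Vo * (Po / 200e5)**(1/k))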
3.1.4 4/3 Proportional Valve. The actuators are controlled by 4/3 proportional spool valves with individual PI controllers. These are modeled as a circuit of modulated R elements with the spool position as the modulation signal. Figure 6 shows the bond graph model of the spool valve. Each element allows flow when active, with the pressure drop following the standard turbulent orifice relation $Q = C_d\,A(x_{spool})\sqrt{2\Delta p/\rho}$, where x[spool] is the spool displacement, which is modeled by a second-order transfer function characterized by the valve response frequency and damping.

3.1.5 Hydraulic Actuator. The actuators convert hydraulic power into mechanical power. Figure 7 shows the bond graph model of the hydraulic actuator. It is modeled by two transformer elements representing the conversion from fluid power into mechanical power for each cylinder chamber. Each chamber volume is modeled by a C element and the piston inertia by an I element. Actuator leakage reduces the effective flow rates and leads to power losses. The internal leakage between the two chambers and the external leakage along the piston rod can be modeled as an R element with the laminar annular-gap relation $Q = \pi d\,\delta^3 \Delta p / (12\,\mu\,l_n)$, where d is the piston diameter, δ is the seal clearance and l[n] is the seal contact length. Two modulated effort sources are linked to the piston side: the first applies the bumper force on the piston and the second applies the forces from external loads on the piston. The piston seal friction has also been considered. The steady-state friction model [18] and the LuGre model [19] are widely employed for modeling actuator friction. The steady-state friction model combines the Coulomb friction, viscous friction, and static friction and is described by $F_r = (F_c + (F_s - F_c)\,e^{-(\nu/\nu_s)^n})\,\mathrm{sgn}(\nu) + \sigma_2\nu$ (Eq. (10)), where F[r] is the friction force, F[c] is the Coulomb friction force, and F[s] is the static friction force. ν[s] is the Stribeck velocity, with n an exponent that affects the slope of the Stribeck curve; the velocity between the sliding surfaces is denoted by ν, and the last term is the viscous contribution. The LuGre model, a dynamic friction model, is governed by the set of equations $\dot{z} = \nu - \sigma_0\,|\nu|\,z / g(\nu)$ and $F_r = \sigma_0 z + \sigma_1 \dot{z} + \sigma_2 \nu$ (Eq. (13)), where $g(\nu)$ is the Stribeck function that expresses the Coulomb friction and the Stribeck effect. σ[0], σ[1], and σ[2] represent the average stiffness coefficient of the bristles, the average damping coefficient of the bristles, and the viscous friction coefficient, respectively, and z represents the mean deflection of the elastic bristles. The first and second terms of Eq. (13) represent the friction force arising from the elastic bending of the bristles, while the third term represents viscous friction. Both friction models, Eqs. (10) and (13), can be modeled as an R element in the bond graph model of the cylinder. However, in the presented work, the friction is modeled in the MBS system in the translational joints of each actuator. This has two main advantages: first, the normal force is directly calculated in the MBS system; second, the more robust HHT integrator of MSC Adams can be used to solve the differential equations.
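A minimal explicit-Euler integration of the LuGre equations as written above; the parameter values are placeholders rather than identified test bench values.

    import math

    def lugre_step(v, z, dt, Fc=500.0, Fs=700.0, vs=0.01,
                   sigma0=1e5, sigma1=300.0, sigma2=50.0):
        # Stribeck function g(v): Coulomb friction plus the Stribeck effect
        g = Fc + (Fs - Fc) * math.exp(-(v / vs) ** 2)
        dz = v - sigma0 * abs(v) * z / g            # bristle deflection rate
        z = z + dz * dt
        F = sigma0 * z + sigma1 * dz + sigma2 * v   # friction force, Eq. (13)
        return F, z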
3.2 Test Bench Hydraulic Circuit Model. The actual hydraulic circuit of the test bench features several components that exist for safety and for maintaining steady operation. By assuming that the entire system operates fault-free and ignoring thermal effects, several components such as filters, heat exchangers, check valves, and emergency drains can be ignored. Moreover, the goal of the virtual model is the realistic simulation of the LAP loads. Therefore, only those aspects of the hydraulic circuit have been considered that are required for a realistic generation of the LAP forces under the normal mode of operation. Figure 8 shows the selected abstraction of the test bench hydraulic circuit along with the equivalent bond graph model. The individual accumulators are lumped together into an equivalent model representing the combined volume of all the accumulators. The pipes are modeled as a lumped model, as explained in Sec. 3.1.2. The fluid supply system assembly to which the hydraulic actuator components are connected closely resembles the actual test bench system. The hydraulic circuit model has defined interfaces that allow it to connect to external models; these are located at the spool valve input signal, the piston force input and the piston velocity output.

3.3 Test Bench Multibody Simulation Model. A high-fidelity MBS model of the test bench has been developed that covers all the components of the test bench drive system and LAS. Figure 9 shows the complete MBS model of the test bench and the calibration unit, modeled using MSC Adams software [20]. The calibration unit is a steel reaction structure that serves as a DUT. The DUT structure with platform, flange adaptors, coupling, and motor rotor are modeled as flexible bodies. These bodies are created by modal reduction of their respective FE models using the component mode synthesis (CMS) method [21]. All components in the model are connected with each other and with the ground using constraints according to their configuration in the actual system. Table 1 provides information on the MBS model fidelity. The choice of model fidelity for the test bench system and the DUT was mainly driven by the need to achieve the maximum possible accuracy without raising computational costs. More details on the test bench MBS model are provided in [22,23].

Table 1
Components: Fidelity
Motors: Rigid spring-mass system
Drive shaft: Flexible body
Torque limiter: Flexible body
Coupling flanges: Flexible body
Coupling links: Rigid body
Bearings: Bushing with 6×6 stiffness matrix
Interface flange: Flexible body
Hexapod: Rigid body
Actuator: Rigid body
Calibration unit: Flexible body
Platform: Flexible body

4 Co-Simulation Framework

The high-fidelity test bench MBS model, the hydraulic system model, and the actuator controllers are coupled via cosimulation. This allows emulation of the actual LAS of the DyNaLab test bench. Simulink serves as the cosimulation interface. The bond graph model of the hydraulic circuit developed in 20-sim is exported to Simulink as a functional mockup unit (FMU). The multibody simulation model of the test bench is imported as an Adams-Simulink block. The hydraulic system model in 20-sim is fully coupled to the MBS model in Adams at the hydraulic actuator interface (as shown in Fig. 10). The 20-sim model receives the piston reaction forces as input variables and delivers the piston velocities as output variables. The MBS model in Adams receives the piston velocities as input variables and applies them as constraints to the respective actuators. The resulting piston force reactions are delivered as output variables. Apart from these coupling variables, additional outputs are extracted from the MBS model and the 20-sim model for postprocessing of the results.
These include the LAP forces, LAP displacements, piston displacements, and cylinder chamber pressures. Both the hydraulic model in 20-sim and the MBS model in Adams communicate by passing the input and output variables back and forth with a communication interval that corresponds to the simulation step size. The hydraulic model in 20-sim uses the fourth-order Runge–Kutta method for time integration with a step size of 60 μs. The Adams model uses the HHT integrator with a step size of 6 ms. Simulink provides the necessary rate transitions between the coupled models and performs the time integration using the fourth-order Runge–Kutta method with a step size of 60 μs. Each individual hydraulic actuator model in 20-sim is controlled by a dedicated PI controller modeled in Simulink. The difference between the actuator set-point force and the force calculated from the cylinder pressures is sent to the PI controller as an error signal. The PI controller attempts to reduce the error by controlling the spool positions. In this way, the force is controlled in the joint space, which translates into the desired LAP forces in the task space according to the inverse kinematics of the hexapod system. This force control scheme is similar to the one implemented on the actual test bench.
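The exchange pattern of Sec. 4 can be caricatured in a few lines of Python; hydraulic_step and mbs_step stand in for the 20-sim FMU and the Adams block, and the controller gains are arbitrary stand-ins.

    class PI:
        def __init__(self, kp, ki):
            self.kp, self.ki, self.acc = kp, ki, 0.0
        def update(self, err, dt):
            self.acc += err * dt
            return self.kp * err + self.ki * self.acc

    def cosimulate(hydraulic_step, mbs_step, f_set, dt=60e-6, t_end=1.0):
        ctrl, force, t, log = PI(kp=1e-6, ki=1e-4), 0.0, 0.0, []
        while t < t_end:
            u = ctrl.update(f_set(t) - force, dt)   # PI on the force error
            vel = hydraulic_step(u, force, dt)      # hydraulic side: velocity out
            force = mbs_step(vel, dt)               # MBS side: reaction force out
            log.append((t, force))
            t += dt
        return log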
5 Case Studies

Several case studies are performed with different model variations, which are listed in Table 2. In this way, the modeling features that are most relevant for force generation at the LAP can be identified. The base model represents an idealized hydraulic circuit that ignores the fluid inertial effects in the pressure line, the spool valve dynamics, and the frictional losses in the hydraulic cylinders and pipes. This is used as a reference, and all subsequent modeling details are added individually to the base model for comparison. Figure 11 shows the load profile considered for all the cases. The test load profile involves stepped loading in the beginning and sinusoidal loading toward the end. This makes it possible to understand the system response in both static and dynamic load regimes. The tests are performed for three loading directions, corresponding to the thrust forces ($Fx$), the yaw forces ($Fy$), and the pitch forces ($Fz$) applied at the LAP. Though less realistic than actual field loads, such unidirectional load scenarios reveal key system characteristics that might otherwise be difficult to identify in the case of mixed loads.

Table 2
Variation: Pipe model / Cylinder friction / Cylinder leakage / Valve dynamics
Base model: × / × / × / ×
Case 1: ✓ / × / × / ×
Case 2: × / ✓ / × / ×
Case 3: × / × / ✓ / ×
Case 4: × / × / × / ✓

Figure 12 shows the influence of modeling the pipe in the hydraulic circuit. Three variations of pipe models have been used: the first considers the fluid inertia (green); the second considers the line friction (yellow); and the third combines the fluid inertial effects and line friction (red). The first two pipe model variations focus on the individual effects of fluid inertia and pipe friction, whereas the third focuses on their combined effect on the system response. In all variations, the pipe capacitance has been modeled according to Eq. (2). It can be observed that, with the exception of the transition regions, the deviations from the base model are insignificant.

Figure 13 shows the influence of modeling the cylinder friction. Noticeable changes are observed in the static load case regions, as the models with friction result in a lower LAP force in the axial direction compared to the base model. For any given force set point of an actuator, the cylinder forces that are fed back to the PI controller are calculated from the pressure difference of the cylinder chambers in the joint space. The presence of friction raises this differential force. As a result, the cylinder forces reach the set-point values while the actual force applied at the piston end is less than the set-point force (by the amount of the frictional force in the system), so the PI controller assumes that the set point has been achieved by assessing the pressure differential force. This consequently leads to the difference in the generated LAP forces in the task space. The modeled friction has a maximum value of 0.5% of the nominal actuator force, which has caused up to 12% deviations in the LAP forces. The deviations are greatest in the axial load case and are very minor for the remaining two directions. This is possibly due to the larger movement of the pistons in the axial load case compared to the pitch and yaw load cases; the friction models return higher frictional forces for larger piston movements. Figure 14 shows the influence of modeling cylinder leakages. Deviations in the static load regions can be observed, as the model with the highest leakage rate tends to show a continuous drop in force levels. At any stationary force level, the leakage causes pressure drops that lead to a drop in the actuator forces. The PI controller continuously attempts to maintain the force levels and therefore, due to the active controller actions, the resulting forces are noisier. Figure 15 compares the models with varying levels of valve dynamics. The results show significant deviations in the dynamic load regions and some noticeable noise in the transient regions of the step loads. The PI controller actions can possibly influence the valve response frequencies, which can lead to deviations in the LAP forces. The results have shown that changes in the valve response frequency of as much as 5% can cause up to 20% variations in the dynamic LAP forces. From the presented results of the case studies, it is evident that certain elements of the hydraulic circuit have a more significant impact on the resulting LAP forces than others. The cylinder friction and valve dynamics, when modeled, can significantly affect the model behavior. The cylinder leakage produces minor changes in the constant-force regions, whereas the modeling of the pipe produces negligible changes in the results. Furthermore, in almost all cases, the $Fy$ and $Fz$ hub forces showed more noise than the $Fx$ hub forces. One possible reason could be the higher stiffness of the DUT structure in the Y and Z directions compared to the X direction. This higher stiffness leads to higher-frequency pressure fluctuations in the hydraulic actuator model, which causes the actuator PI controller to react more aggressively, eventually leading to more noise in the hub forces.

6 Model Validation

Based on the findings of the case studies, features such as cylinder friction and valve dynamics have been incorporated into the base model. A preliminary validation of the virtual model is performed by comparing the hub forces from simulations with experimental results for a sinusoidal load case.
In the experimental setup (shown in Fig. 9), multiple load cells are installed between the DUT and the interface flange of the hexapod. This allows a direct measurement of the forces at the load application point between the DUT and the hexapod unit during experiments. Figure 16 compares the hub forces simulated by the virtual model with those determined during the experiments via the load cells. The virtual model can reproduce the dynamic axial forces with decent accuracy, having less than 4% deviation from the experimentally measured axial forces. However, the model appears to overestimate the generated $Fy$ and $Fz$ forces compared to the experimental measurements of the corresponding loads. These deviations could be linked to several causes, such as uncertainties in the model parameters of the hydraulic system, uncertainties in the MBS model, and uncertainties in the controller model. The actuator controller used in the virtual model is an abstraction of the actual controller on the test bench; therefore, the modeled controller can differ in dynamic behavior from the actual controller. Further investigations are needed to optimize the PI parameters so as to make the controller more robust for all load directions.

7 Future Work

Although several aspects were covered in the presented case studies, some modeling features that might require further investigation include cylinder expansion, seal clearance variations, fluid viscosity variations, and spool valve friction and hysteresis. In physical systems, the cylinders expand at higher pressures, leading to changes in the volume and seal clearance. This could influence the pressure dynamics and the leakage rates, leading to fluctuations in the actuator forces. The spool valves also exhibit some friction and hysteresis effects that might influence the actuator load response. The viscosity of the fluid changes with variations in temperature; changes in fluid viscosity can lead to changes in friction and leakage, which can influence the loads applied by the actuators. In the presented work, only unidirectional load cases were considered to investigate the dynamic response of the LAS. Future work will involve further investigations using different types of mixed load cases that closely resemble the loads typically encountered by a wind turbine during field operation. Such investigations can help in estimating the capability of the test bench to reproduce field-relevant dynamic wind loads. Furthermore, aspects concerning the behavior of the actuator PI controller require detailed investigation, which was not the focus of this paper. The controller can, however, have a strong influence on the generated hub forces, both in terms of response time and deviation from the target set points. Further investigations are required to highlight the key aspects of developing actuator controller models that are both robust and representative of the controllers on the actual test bench.

8 Conclusion

This contribution has provided insights into the modeling of the complete load application system of a multimegawatt wind turbine nacelle test bench. The important modeling aspects of the hydraulic circuit were revealed by a series of case studies with different modeling variations. The results showed that modeling the actuator friction and the valve dynamics has a significant influence on the loads generated at the load application point.
Introducing friction with stick-slip effects with a maximum value of 0.5% of the nominal actuator force can lead to up to 12% deviations in the LAP forces. Variations in the valve dynamic response of as little as 5% can lead to up to 20% variations in the dynamic LAP forces. It is therefore recommended to include these features in the system model. These findings are relevant for modeling the load application systems of similar types of nacelle test bench systems.

Acknowledgment

The authors would like to thank the involved colleagues at Fraunhofer IWES for their contributions to the VirtGondel project. The funding of the VirtGondel project by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) (No. 03EE2018) is kindly acknowledged.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

[1] IEC, 2019, "IEC 61400-1: Wind Energy Generation Systems, Part 1: Design Requirements," International Electrotechnical Commission, Geneva, Switzerland.
[2] "Wind Turbine Reliability: A Comprehensive Review Towards Effective Condition Monitoring Development," Appl. Energy.
[3] de Oliveira, T. N., and de A. Monteiro, J. R. B., "Wind Turbine Failures Review and Trends," J. Control, Autom. Electr. Syst.
[4] "Wind Turbine Downtime and Its Importance for Offshore Deployment," Wind Energy.
[5] "Failure Rate, Repair Time and Unscheduled O&M Cost Analysis of Offshore Wind Turbines," Wind Energy.
[6] "Gearbox Reliability Collaborative Phase 3 Gearbox 3 Test," National Renewable Energy Laboratory, Report No. NREL/TP-5000-67612.
[7] "Evaluation of a Hardware-in-the-Loop Test Setup Using Mechanical Measurements With a DFIG Wind Turbine Nacelle," J. Phys.: Conf. Ser.
[8] "Future of Wind: Deployment, Investment, Technology, Grid Integration and Socio-Economic Aspects," The International Renewable Energy Agency, Report.
[9] "On a New Methodology for Testing Full Load Responses of Wind Turbine Drivetrains on a Test Bench," Forschung im Ingenieurwesen.
[10] "Full Scale System Simulation of a 2.7 MW Wind Turbine on a System Test Bench," Conference for Wind Power Drives (CWD 2017), Aachen, Mar. 7-8.
[11] "Dynamic Simulation of Full-Scale Wind Turbine Nacelle System Test Benches," Ph.D. thesis, RWTH Aachen University, Aachen, Germany.
[12] "On the Multi-Body Modeling and Validation of a Full Scale Wind Turbine Nacelle Test Bench," Paper No. DSCC2018-9100.
[13] Fraunhofer, "VirtGondel: Development and Validation of a Virtual Representation of the Nacelle Test Bench for the Elaboration of Advanced Test Methods and More Efficient Test Campaigns," accessed Sept. 6, 2022.
[14] "Hydraulic Spool Valve Modeling for System Level Analysis," American Control Conference (ACC), Portland, OR, June 4-6.
[15] "Sliding Mode Control of a Hydraulically Actuated Load Application Unit With Application to Wind Turbine Drivetrain Testing," IEEE Trans. Control Syst. Technol.
[16] Controllab Products, "20-sim," Enschede, The Netherlands.
[17] Fluid Power Engineering, New York.
[18] Control of Machines With Friction, Springer US, Boston, MA.
[19] Canudas de Wit, C., Olsson, H., Åström, K. J., and Lischinsky, P., "A New Model for Control of Systems With Friction," IEEE Trans. Autom. Control.
[20] MSC Software Corporation, "Adams: The Multibody Dynamics Simulation Solution," accessed Sept. 6, 2022.
[21] Craig, R. R., and Bampton, M. C. C., "Coupling of Substructures for Dynamic Analyses," AIAA J.
[22] "Implementation and Experimental Validation of a Dynamic Model of a 10 MW Nacelle Test Bench Load Application System," J. Phys.: Conf. Ser.
[23] "Virtual Framework for the Torque Load Application System of a 10 MW Testbench for Nacelles of Wind Turbines," Proceedings of SIRM, the 14th International Conference on Dynamics of Rotating Machines.

Copyright © 2024 by ASME; reuse license CC-BY 4.0
{"url":"https://appliedmechanics.asmedigitalcollection.asme.org/dynamicsystems/article/146/2/021002/1169661/Virtual-Model-Development-of-the-Load-Application","timestamp":"2024-11-08T07:47:48Z","content_type":"text/html","content_length":"322271","record_id":"<urn:uuid:7ed803a2-35df-44e3-b352-eb35f88b9ff6>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00097.warc.gz"}
Impulse response of the Hilbert transform

hi there, I was going through a paper on using the Hilbert transform for edge detection in image processing. It said there that the Hilbert transform works better than differentiation for edge detection because it has a longer impulse response, which helps reduce the effect of noise. I am new to the subject and don't understand what exactly a longer impulse response means, or how the impulse response of a system determines its susceptibility to noise. And also, what is the impulse response of the Hilbert transform? I think it's [-j.sgn(f)] (correct me if I am wrong), but does a longer impulse response mean that it covers a larger band of frequencies?

thanks in advance
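Not from the original thread, but a quick numpy sketch of the trade-off being asked about: -j·sgn(f) is the ideal transformer's frequency response; a common discrete-time impulse response is h[n] = 2/(pi·n) for odd n (zero otherwise), and truncating h to more taps (a longer impulse response) keeps the magnitude response near 1 over a wider band.

    import numpy as np

    def hilbert_fir(num_taps):
        n = np.arange(-(num_taps // 2), num_taps // 2 + 1)
        h = np.zeros(len(n))
        odd = (n % 2 != 0)
        h[odd] = 2.0 / (np.pi * n[odd])    # ideal response, truncated
        return h

    for taps in (15, 63, 255):
        H = np.fft.rfft(hilbert_fir(taps), 4096)
        f = 0.02                            # low normalized frequency
        # longer filters stay closer to |H| = 1 at this low frequency
        print(taps, "taps:", round(abs(H[int(f * 4096)]), 3))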
{"url":"https://www.dsprelated.com/showthread/comp.dsp/110875-1.php","timestamp":"2024-11-03T23:29:49Z","content_type":"text/html","content_length":"58500","record_id":"<urn:uuid:e7db14e3-ac6d-40ed-afb0-023c02b22e39>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00584.warc.gz"}
Cosmology | Stanford Institute for Theoretical Physics

Since its discovery by A. Linde and others, cosmic inflation -- exponential expansion of the universe driven by the potential energy contained in an `inflaton' field -- has become a successful paradigm of early universe cosmology and the origin of structure in the universe. At the same time, it leads to great theoretical problems which remain unsolved. This is a paradigm in search of a theory, and SITP members (including Dimopoulos, Kachru, Kallosh, Linde, Senatore, and Silverstein) have led a major upgrade of our understanding of the dynamics of inflation, taking into account the sensitivity of inflationary theory to quantum gravity that follows from the enormous expansion of the universe and range of the inflaton field during the process. At the same time, SITP theorists discovered an elegant characterization of observables that are captured by low energy quantum fields, and determined precisely how they are constrained by possible symmetries of nature, including a special candidate known as supersymmetry.

More About Cosmology

Among the major discoveries by SITP members are methods for stabilization of the extra dimensions of string theory to produce accelerated expansion in line with the observed late-universe cosmological constant, several canonical early-universe inflationary mechanisms, and a low energy effective theory of the quantum fluctuations produced during inflation. This includes the recent discovery that string theory naturally produces inflation at large field range (large as compared to the Planck scale of quantum gravity) along ubiquitous highly symmetric `axion' directions in field space, via a mathematical structure known as monodromy -- a fancy version of a spiral staircase. Microwave background experiments are actively testing the signatures of this and several other inflationary mechanisms discovered at Stanford, an unprecedented interface between quantum gravity research and data.

Despite the success of inflation as a theory of the origin of structure, it presents big conceptual challenges. Several SITP members (including Linde, Senatore, Shenker, Silverstein, and Susskind) pursue the difficult problem of deriving a more complete framework for inflationary cosmology. One set of approaches involves upgrading the AdS/CFT correspondence to cosmological backgrounds, taking into account the basic structure of the string landscape. Several interesting lessons have emerged, including remnants of lower-dimensional gravity surviving at least temporarily in the dual description, along with a pair of quantum field theory sectors. This has also led to potential observational predictions -- negative spatial curvature and signatures of exotic bubble collisions -- which apply if the early-universe inflationary expansion is minimal. A recent SITP paper (done in collaboration with a gravitational expert in KIPAC) has discovered, on the other hand, that the onset of inflation is quite robust even in the presence of large variations in the initial conditions, given sufficient field range. There is clearly much more to learn in this direction.

Accelerated expansion of the universe is implied by observations, and theoretically cosmological backgrounds massively dominate among the solutions of string theory (compared to the more extensively studied anti de Sitter and flat spacetimes).
Current and near-future cosmic microwave background and large-scale structure measurements provide sensitive observational probes of early universe physics (as well as much interesting astrophysics). The subject is full of interesting and important challenges. As a result, this area will remain a major component of SITP research for the foreseeable future.

Related Events

None of us were consulted when the universe was created. And yet it is tempting to ask not only how the universe evolves, but also why, and could it be different? Our universe weighs more than 10^50 tons. Could it be created…

I will argue that if the density of dark matter in the early universe is dominated by subhorizon, non-relativistic field modes, then there is a relatively model-independent bound on the mass of dark matter particles [m > 10^(-18) eV]. The…

Cosmological observations show that on the largest scales accessible to our telescopes, the universe is very uniform, and the same laws of physics operate in all the parts of it that we can see. As Andrei Linde, 2014 Kavli Prize Laureate in…

Big Bang Nucleosynthesis (BBN) is a powerful tool for probing both new physics and LCDM, and complements analyses utilizing the Cosmic Microwave Background (CMB) and results from particle experiments. I will provide two examples of BBN…

Models of dark sectors with a mass threshold can have important cosmological signatures. When a relativistic species becomes non-relativistic before recombination and is then depopulated in equilibrium, measurable effects on the CMB arise as…

The matter power spectrum on small scales (< 1 Mpc) is very weakly constrained so far. While inflation predicts a nearly scale-invariant primordial power spectrum down to very small scales, many new physics scenarios can lead to significantly…

Professor Eva Silverstein of the Stanford Institute for Theoretical Physics (SITP) discusses the physics of horizons, black holes, and string theory. Black hole and cosmological horizons -- from which nothing can escape according to classical gravity -- play a crucial role in physics. They are central to our understanding of the origin of structure in the universe, but also lead to fascinating…

The process of Big Bang Nucleosynthesis (BBN) is a crucial test of cosmology. In this talk, I will describe a new code for predicting the primordial elemental abundance due to BBN. This code takes advantage of JAX, a machine learning framework,…
{"url":"https://sitp.stanford.edu/research/cosmology","timestamp":"2024-11-11T16:01:15Z","content_type":"text/html","content_length":"70207","record_id":"<urn:uuid:fd36b242-6978-42b5-b564-aea92c0af05a>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00432.warc.gz"}
Stochastic Process Characteristics

What Is a Stochastic Process?

A time series y[t] is a collection of observations on a variable indexed sequentially over several time points t = 1, 2,...,T. Time series observations y[1], y[2],...,y[T] are inherently dependent. From a statistical modeling perspective, this means it is inappropriate to treat a time series as a random sample of independent observations. The goal of statistical modeling is finding a compact representation of the data-generating process for your data. The statistical building block of econometric time series modeling is the stochastic process. Heuristically, a stochastic process is a joint probability distribution for a collection of random variables. By modeling the observed time series y[t] as a realization from a stochastic process $y=\left\{{y}_{t};t=1,...,T\right\}$, it is possible to accommodate the high-dimensional and dependent nature of the data. The set of observation times T can be discrete or continuous. Figure 1-1, Monthly Average CO2, displays the monthly average CO[2] concentration (ppm) recorded by the Mauna Loa Observatory in Hawaii from 1980 to 2012 [3].

Figure 1-1, Monthly Average CO2

Stationary Processes

Stochastic processes are weakly stationary or covariance stationary (or simply, stationary) if their first two moments are finite and constant over time. Specifically, if y[t] is a stationary stochastic process, then for all t:
• E(y[t]) = μ < ∞.
• V(y[t]) = ${\sigma }^{2}$ < ∞.
• Cov(y[t], y[t–h]) = γ[h] for all lags $h\ne 0$.

Does a plot of your stochastic process seem to increase or decrease without bound? The answer to this question indicates whether the stochastic process is stationary. "Yes" indicates that the stochastic process might be nonstationary. In Figure 1-1, Monthly Average CO2, the concentration of CO[2] is increasing without bound, which indicates a nonstationary stochastic process.

Linear Time Series Model

Wold's theorem [2] states that you can write all weakly stationary stochastic processes in the general linear form ${y}_{t}=\mu +\sum _{i=1}^{\infty }{\psi }_{i}{\epsilon }_{t-i}+{\epsilon }_{t}.$ Here, ${\epsilon }_{t}$ denotes a sequence of uncorrelated (but not necessarily independent) random variables from a well-defined probability distribution with mean zero. It is often called the innovation process because it captures all new information in the system at time t.

Unit Root Process

A linear time series model is a unit root process if the solution set to its characteristic equation contains a root that is on the unit circle (i.e., has an absolute value of one). Subsequently, the expected value, variance, or covariance of the elements of the stochastic process grows with time, and therefore is nonstationary. If your series has a unit root, then differencing it might make it stationary. For example, consider the linear time series model ${y}_{t}={y}_{t-1}+{\epsilon }_{t},$ where ${\epsilon }_{t}$ is a white noise sequence of innovations with variance σ^2 (this is called the random walk). The characteristic equation of this model is $z-1=0,$ which has a root of one. If the initial observation y[0] is fixed, then you can write the model as ${y}_{t}={y}_{0}+\sum _{i=1}^{t}{\epsilon }_{i}.$ Its expected value is y[0], which is independent of time. However, the variance of the series is tσ^2, which grows with time, making the series unstable. Take the first difference to transform the series and the model becomes ${d}_{t}={y}_{t}-{y}_{t-1}={\epsilon }_{t}$.
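A quick simulation sketch of these two facts (the random walk's variance grows like t*sigma^2, while its first difference is stationary); this is illustrative NumPy code, not part of the toolbox documentation:

import numpy as np

rng = np.random.default_rng(0)
T, n_paths, sigma = 500, 2000, 1.0

# Simulate many independent random walks y_t = y_{t-1} + eps_t with y_0 = 0.
eps = rng.normal(0.0, sigma, size=(n_paths, T))
y = np.cumsum(eps, axis=1)

# The cross-path variance grows roughly linearly in t (t*sigma^2)...
print(np.var(y[:, 9]), np.var(y[:, 99]), np.var(y[:, 499]))   # approx. 10, 100, 500

# ...while the differenced series d_t = y_t - y_{t-1} = eps_t has constant variance.
d = np.diff(y, axis=1)
print(np.var(d[:, 9]), np.var(d[:, 99]), np.var(d[:, 498]))   # each approx. sigma^2 = 1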
The characteristic equation for this series is $z=0$, so it does not have a unit root. Note that
• $E\left({d}_{t}\right)=0,$ which is independent of time,
• $V\left({d}_{t}\right)={\sigma }^{2},$ which is independent of time, and
• $Cov\left({d}_{t},{d}_{t-s}\right)=0,$ which is independent of time for all integers 0 < s < t.

Figure 1-1, Monthly Average CO2, appears nonstationary. What happens if you plot the first difference d[t] = y[t] – y[t–1] of this series? Figure 1-2, Monthly Difference in CO2, displays the d[t]. Ignoring the fluctuations, the stochastic process does not seem to increase or decrease in general. You can conclude that d[t] is stationary, and that y[t] is unit root nonstationary. For details, see Differencing.

Figure 1-2, Monthly Difference in CO2

Lag Operator Notation

The lag operator L operates on a time series y[t] such that ${L}^{i}{y}_{t}={y}_{t-i}$. An mth-degree lag polynomial of coefficients b[1], b[2],...,b[m] is defined as $B\left(L\right)=\left(1+{b}_{1}L+{b}_{2}{L}^{2}+\dots +{b}_{m}{L}^{m}\right).$ In lag operator notation, you can write the general linear model using an infinite-degree polynomial $\psi \left(L\right)=\left(1+{\psi }_{1}L+{\psi }_{2}{L}^{2}+\dots \right),$ that is, ${y}_{t}=\mu +\psi \left(L\right){\epsilon }_{t}.$ You cannot estimate a model that has an infinite-degree polynomial of coefficients with a finite amount of data. However, if $\psi \left(L\right)$ is a rational polynomial (or approximately rational), you can write it (at least approximately) as the quotient of two finite-degree polynomials. Define the q-degree polynomial $\theta \left(L\right)=\left(1+{\theta }_{1}L+{\theta }_{2}{L}^{2}+\dots +{\theta }_{q}{L}^{q}\right)$ and the p-degree polynomial $\varphi \left(L\right)=\left(1+{\varphi }_{1}L+{\varphi }_{2}{L}^{2}+\dots +{\varphi }_{p}{L}^{p}\right)$. If $\psi \left(L\right)$ is rational, then $\psi \left(L\right)=\frac{\theta \left(L\right)}{\varphi \left(L\right)}.$ Thus, by Wold's theorem, you can model (or closely approximate) every stationary stochastic process as ${y}_{t}=\mu +\frac{\theta \left(L\right)}{\varphi \left(L\right)}{\epsilon }_{t},$ which has p + q coefficients (a finite number).

Characteristic Equation

A degree p characteristic polynomial of the linear time series model ${y}_{t}={\varphi }_{1}{y}_{t-1}+{\varphi }_{2}{y}_{t-2}+...+{\varphi }_{p}{y}_{t-p}+{\epsilon }_{t}$ is $\varphi \left(a\right)={a}^{p}-{\varphi }_{1}{a}^{p-1}-{\varphi }_{2}{a}^{p-2}-...-{\varphi }_{p}.$ It is another way to assess that a series is a stationary process. For example, the characteristic equation of ${y}_{t}=0.5{y}_{t-1}-0.02{y}_{t-2}+{\epsilon }_{t}$ is $\varphi \left(a\right)={a}^{2}-0.5a+0.02.$ The roots of the homogeneous characteristic equation $\varphi \left(a\right)=0$ (called the characteristic roots) determine whether the linear time series is stationary. If every root of $\varphi \left(a\right)$ lies inside the unit circle, then the process is stationary. Roots lie within the unit circle if they have an absolute value less than one. It is a unit root process if one or more roots lie on the unit circle (i.e., have an absolute value of one). Continuing the example, the characteristic roots of $\varphi \left(a\right)=0$ are $a=\left\{0.4562,0.0438\right\}.$ Since the absolute values of these roots are less than one, the linear time series model is stationary.

[1] Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. Time Series Analysis: Forecasting and Control. 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1994.
[2] Wold, H.
A Study in the Analysis of Stationary Time Series. Uppsala, Sweden: Almqvist & Wiksell, 1938.
{"url":"https://se.mathworks.com/help/econ/stationary-stochastic-process.html","timestamp":"2024-11-03T22:44:12Z","content_type":"text/html","content_length":"88945","record_id":"<urn:uuid:8fb1fc95-9d28-4663-9a2b-158dfea4a760>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00679.warc.gz"}
How much power is produced if a voltage of 6 V is applied to a circuit with a resistance of 16 Ω? | HIX Tutor

How much power is produced if a voltage of 6 V is applied to a circuit with a resistance of 16 Ω?

Answer

The power P dissipated across a resistor of resistance R due to an applied voltage V is P = V^2/R. Given V = 6 V and R = 16 Ω:

P = 6^2/16 = 2.25 W
{"url":"https://tutor.hix.ai/question/how-much-power-is-produced-if-a-voltage-of-6-v-is-applied-to-a-circuit-with-a-re-3-8f9af8c6ee","timestamp":"2024-11-07T22:47:25Z","content_type":"text/html","content_length":"583749","record_id":"<urn:uuid:36f0a52a-8aa0-47c7-aaff-3571567e54b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00872.warc.gz"}
Google Interview Questions Deconstructed: The Knight's Dialer (Impossibly Fast Edition)

Join our discord to discuss these problems with the author and the community!

This is the second post in my series where I lay out my favorite interview questions I used to ask at Google until they were leaked and banned. This post is a continuation of the first one, so if you haven't taken a look yet, I recommend you read it first and come back. If you don't feel like it, I'll still do my best to make this post sensible, but I still recommend reading the first one for some background.

First, the obligatory disclaimer: while interviewing candidates is one of my professional responsibilities, this blog represents my personal observations, my personal anecdotes, and my personal opinions. Please don't mistake this for any sort of official statement by or about Google, Alphabet, or any other person or organization.

Apologies for the delay, by the way. In the time since I published the first part of this series, I've gone through a number of (very positive) changes in my life, and as a result writing sort of fell by the wayside for a while. I'll share what I can as things become public.

This post goes way above and beyond what I would expect to see during a job interview. I've personally never seen anyone produce this solution, and I only know it exists because my colleague mentioned that the best candidate he had ever seen had blasted through the simpler solutions and spent the rest of the interview trying to develop this one. Even that candidate failed, and I only arrived at this solution after weeks of on-again, off-again pondering. I'm sharing this with you for your curiosity and because I think it's a cool intersection of mathematics and programming.

With that out of the way, allow me to reintroduce the question:

The Question

Imagine you place a knight chess piece on a phone dial pad. This chess piece moves in an uppercase "L" shape: two steps horizontally followed by one vertically, or one step horizontally followed by two vertically.

Suppose you dial keys on the keypad using only hops a knight can make. Every time the knight lands on a key, we dial that key and make another hop. The starting position counts as being dialed. How many distinct numbers can you dial in N hops from a particular starting position?

At the end of the previous post, we had developed a solution that solves this problem in linear time (as a function of the number of hops we'd like to make), and requires constant space. This is pretty good. I used to give a "Strong Hire" to candidates who were able to develop and implement the final solution from that post. However, it turns out we can do better if we use a little math…

Adjacency Lists

The crucial insight of the solutions in the previous post involved framing the number pad as a graph in which each key is a node and the knight's possible next hops from a key are that node's neighbors. In code, this can be represented as follows (a sketch appears at the end of this section). This is a fine representation for a number of reasons. First off, it's compact: we represent only the nodes and edges that exist in the graph (I include number 5 for completeness, but we can remove it without any repercussions). Second off, it's efficient to access: we can get the set of neighbors in constant time via a map lookup, and we can iterate over all neighbors of a particular node in time linear in the number of neighbors by iterating over the result of that lookup.
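A plausible sketch of the neighbors map just described (the exact snippet from the original post may differ):

NEIGHBORS_MAP = {
    0: (4, 6),
    1: (6, 8),
    2: (7, 9),
    3: (4, 8),
    4: (0, 3, 9),
    5: tuple(),  # no knight move reaches or leaves 5; kept for completeness
    6: (0, 1, 7),
    7: (2, 6),
    8: (1, 3),
    9: (2, 4),
}

Looking up NEIGHBORS_MAP[key] is a constant-time map access, and iterating over the returned tuple is linear in the number of neighbors, as noted above.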
We can also easily modify this structure to determine the existence of an edge in constant time by using a sets instead of tuples. This data structure is known as an adjacency list, named after the explicit listing of adjacent nodes to represent edges. This representation is by far the most common method of representing graphs, chiefly because of its linear-in-nodes-and-edges space complexity as well as its time-optimal access patterns. Most computer scientists would look at this representation and say “pack it up, that’s about as good as it gets.” Mathematicians, on the other hand, would not be so happy. Yes, it’s compact and fast to operate on, but mathematicians are (by and large) not in the business of pragmatic ease of use like most computer scientists and engineers. A computer scientist might look at this graph data structure and say “how does this help me design efficient algorithms?” whereas a mathematician might look at it and say “how does this representation allow me to use the rest of my theoretical toolkit?” With that question in mind, the mathematician might be disappointed by this representation. Personally, this representation of a graph rhymes with nothing I’ve encountered during my mathematical education. It’s useful for writing algorithms, but that’s pretty much it. Graphs as Matrices There is another, more fruitful, way to represent a graph, though. You’ll notice a graph is all about relationships between nodes. In the case of an adjacency list, we relate each node with the nodes it’s connected to. Why not instead focus on pairs of nodes? Instead of asking “what nodes are connected to one another with an edge,” you can ask “given a pair of nodes, is there an edge that connects them?” If this seems like a sort of “six of one, half dozen of another” situation, it is. But the second formulation is magical because it calls into focus something that’s invisible in the adjacency list representation: suddenly we’re very interested in pairs of nodes that don’t have edges. Rather than starting with nodes and computing only the relevant pairs, we start with all possible pairs, and decide whether or not they are relevant later. We can reframe the adjacency list as follows. Note for each pair (A, B), NEIGHBORS_MAP[A][B] will be 1 if that pair represents an edge in the graph and 0 otherwise: Why would we do this? Certainly not to create a more efficient data structure. Our space complexity has gone from being proportional to the number of edges to the number of possible edges, which means N squared, where N is the number of nodes. Iterating over neighbors also just got more expensive: for a given node we get a bunch of irrelevant zeros that we have to filter through. A mathematician, on the other hand, just got interested. Anyone beyond the junior year of a mathematics undergrad should look at this and immediately say “that’s a matrix!” (For the sake of brevity, I’ll assume here that you know enough about linear algebra and matrix multiplication to follow along with this post. If you don’t, you can find a great introduction here.) The wonderful thing about matrices is that they support an algebra. Matrices can be added, subtracted, and multiplied with one another, according to some simple rules. What this particular representation lacks in compactness, it more than makes up for in abstract ease of manipulation. An Aside A slight digression: “okay cool”, you might say, “we’ve represented the graph as a matrix. And that matrix can be multiplied by another matrix. 
What does this have to do with the graph? Who cares?" This is a much more valid question than you may realize, and the answer is "nothing, yet." Undergrads are my intended audience, so I feel obligated to put you in the right frame of mind before I continue, because I'm afraid this might otherwise be more discouraging than enlightening. After you finish reading the logic presented in the rest of this post, you may be tempted to ask yourself "how the hell was I supposed to come up with that?" I certainly had that reaction time and time again while reading proofs and textbooks. The short answer is: you're not. At least not immediately. The more proofs and theorems you learn, the more you'll find you're able to spot patterns and apply your knowledge. I suggest treating this post as just another tidbit to know and hopefully apply later.

Down to Business

Alright, now that that's out of the way, let's get down to the solution. First we'll explore the structure of this matrix a little. (Note all indices are offsets from zero. This is a departure from mathematical tradition, but this is a CS-oriented post, so let's go with it.)

In this matrix, each row represents the destinations accessible from each key: row 0 has a 1 in position 4 to show you can hop from 0 to 4. It has a 0 in position 9 to show you can't hop from 0 to 9. The columns also have a meaning. While the rows represent where you can go from the corresponding position, the columns represent how you can get to each position. If you look closely, you'll notice that the rows and columns look strikingly similar: the i-th position in each row is the same as the i-th position in each column. This is because this graph is undirected: each edge can be traversed in both directions. As a result, the entire matrix can be flipped along its main diagonal and emerge unchanged (the main diagonal is formed by the positions where the row and column numbers are equal).

Now that we've introduced representing the graph as a matrix, it's no longer an algorithmic object but an algebraic one. The particular algebraic operation we'll be concerned with is matrix-vector multiplication. What happens when we multiply this matrix by a vector? Recall that the formula for multiplying an R row by C column matrix A with a C-length column vector v (short for a matrix with C rows and 1 column) gives a vector whose i-th entry is A[i][0]·v[0] + A[i][1]·v[1] + … + A[i][C-1]·v[C-1]. In words, this means that the resulting vector can be computed by taking each row, multiplying each element of that row by the corresponding element in the vector, and adding the component values together. The results are then placed in a vertical, C-by-one matrix, or a C-length vector for short.

This may seem uninteresting at first glance, but that algebraic relation up there is actually the crux of this entire solution. Consider what it means. Each row represents the numbers you can reach from that row's corresponding key. With this in mind, matrix multiplication is no longer an abstract algebraic operation, it's a means of summing values corresponding to destinations from a given key on the dialpad.

To make the implications clear, recall the recurrence relation from my previous post (its figure caption: "Remember T represents the number of distinct sequences you can dial from key K in N hops"). This is nothing more than a weighted sum of values corresponding to destinations from a given key on a dialpad!
This framing ignores edges that aren't in the graph by not even considering them in the iteration, whereas the matrix-oriented one includes them, but only as multiplications by zero that don't affect the final sum. The two statements are equivalent!

So then what is the meaning of the vector v in all this? So far we've been talking almost entirely about the matrix, and we've mostly ignored the vector. We can choose any v we want, but we want to choose one that will be meaningful in this calculation. The recurrence relation provides us with a hint: in that case, we start with T(K, 0), which is always 1 because in zero hops we can only dial the starting key. Let's see what happens with a v where all the entries are 1: multiplying the transition matrix by the 1 vector gives us a vector where each element corresponds to the count of numbers that can be dialed in 1 hop. Multiplying again: now each element in the resulting vector equals the count of numbers that can be dialed from the corresponding key in 2 hops!

We've just developed a new linear-time solution to the Knight's Dialer Problem. In particular: start with the all-ones vector and multiply it by A once per hop; after N multiplications, the entry corresponding to your starting key is the answer.

Logarithmic Time

But this solution is still linear. We need to multiply A by the vector v again and again and again, N times. If anything, this solution is actually slower than the dynamic programming solution we developed in the previous post because this one requires unnecessarily multiplying by zero a whole bunch of times. There is, however, another algebraic property we can use: matrices can be multiplied, and anything that can be multiplied can be exponentiated (to an integer power). Our solution becomes: compute A^N, then multiply it by the all-ones vector. Again, I won't be defining matrix multiplication, so if you need a refresher, take a look at this post.

How do we compute A^N? Naturally, one way is to repeatedly multiply A by itself. However, this is somehow even more wasteful than multiplying by the vector: rather than multiplying one vector by A again and again, we multiply all the columns of A again and again. There is a better way: exponentiation by squaring.

As you probably know, every number has a binary representation. If you've been studying computer science you already know that this is the preferred way of representing a number in hardware. In particular, every number can be represented as a sequence of bits: N = b_k·2^k + b_{k-1}·2^{k-1} + … + b_1·2^1 + b_0·2^0, where k is the position of the largest nonzero bit. For example, 49 in binary is "110001," or 2^5 + 2^4 + 2^0 = 32 + 16 + 1.

Something interesting happens when we perform this expansion for N in our matrix exponentiation solution: A^N = A^(b_k·2^k + … + b_0·2^0) = A^(b_k·2^k) · … · A^(b_0·2^0). (Recall that addition in the exponent translates to multiplication underneath it.) This results in a total of k matrix multiplications. How does k relate to N? k is equal to the number of bits required to represent N, which as you may already know grows as log2(N). Instead of requiring a number of multiplications that grows linearly in N, we only need a logarithmic number of matrix multiplications! This hinges on a few useful facts:

• A to the power of zero is the identity matrix. Multiplying any matrix by the identity matrix results in the original matrix. As a result, if any bit is zero, we'll end up multiplying by the identity matrix, and it'll be as though we ignored it.
• We can compute A to any power of two by squaring the result again and again. A squared is A times A. A to the 4th is A squared times A squared, etc.

This is it! We now have a logarithmic solution.
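A minimal end-to-end sketch of the logarithmic solution described above (the names and structure are mine; the original post's code may differ):

# Adjacency matrix: ADJACENCY[a][b] == 1 iff the knight can hop from key a to key b.
ADJACENCY = [
    [0, 0, 0, 0, 1, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0, 0, 0, 1, 0],
    [1, 0, 0, 1, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 1, 0, 0, 0, 0, 0],
]

def matrix_multiply(A, B):
    # Plain O(n^3) multiplication; Python ints never overflow.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matrix_power(A, exponent):
    # Exponentiation by squaring: O(log exponent) matrix multiplications.
    n = len(A)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while exponent > 0:
        if exponent & 1:             # this bit of the exponent is set
            result = matrix_multiply(result, A)
        A = matrix_multiply(A, A)    # square for the next bit
        exponent >>= 1
    return result

def count_sequences(start_position, num_hops):
    # (A^N v)[start], with v the all-ones vector, is just the row sum.
    power = matrix_power(ADJACENCY, num_hops)
    return sum(power[start_position])

# From key 1 in 2 hops: 1-6-0, 1-6-1, 1-6-7, 1-8-1, 1-8-3, so 5 numbers.
print(count_sequences(1, 2))   # 5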
While this solution requires a little more code than the previous ones on account of the definition of matrix multiplication, it's still quite compact (see the sketch above).

Wrapping Up

On the face of it, this solution seems awesome. It features logarithmic time complexity and constant space complexity. You might think it really doesn't get any better than that, and for this particular problem you would be right. However, this matrix exponentiation-based approach has one glaring drawback: we need to represent the entire graph as a (potentially very sparse) matrix. This implies we'll have to explicitly store a value for every possible pair of nodes, which requires space quadratic in the number of nodes. For a 10-node graph like this one, that isn't a problem, but for more realistic graphs which might have thousands if not millions of nodes, it becomes hopelessly infeasible. What's worse, the matrix multiplication I gave up there is actually cubic in the number of rows (for square matrices). The best-known matrix multiplication algorithms like Strassen or Coppersmith–Winograd have sub-cubic runtimes, but either require extreme memory overhead or feature constant factors that negate the effects for matrices of reasonable size. A cubic-time matrix multiplication starts to become unreasonable with graphs with sizes around the ten thousand range.

At the end of the day, none of these limitations really matter in my mind. Let's be honest: how often are you going to be computing this on any realistic graph? Feel free to correct me in the comments, but I personally can't think of any practical application of this algorithm. The main purpose of this problem is to evaluate candidates on their algorithm design chops and coding skills. If a candidate makes it anywhere near the things I discussed in this post, they're probably a lot more qualified for the Google SWE job than I am…

If you liked this post, applaud or leave a response! I'm writing this series to educate and inspire people, and nothing makes me feel quite as good as receiving feedback. Also, if this is the sort of stuff you enjoy, and if you're all the way down here there's a good chance it is, give me a follow! There's a lot more where this came from. If you want to ask questions or discuss with like-minded people, join our discord! Also, you can find the runnable code for this and the previous post here.

Next Steps

After this question was banned I felt like I wanted to start asking a more straightforward programming question. I searched far and wide for a question that was simple to state, had a simple solution, allowed for many levels of followup questions, and had an obvious tie to Google's products. I found one. If that sounds like something you'd like to read about, stay tuned…

Update: Now that we're up and running, here is a listing of all the other posts in this series:
{"url":"https://alexgolec.medium.com/google-interview-questions-deconstructed-the-knights-dialer-impossibly-fast-edition-c288da1685b8?source=user_profile_page---------2-------------fc349a0617c0---------------","timestamp":"2024-11-03T16:03:25Z","content_type":"text/html","content_length":"198475","record_id":"<urn:uuid:7825d631-108f-428b-9bc5-67df9edc2997>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00622.warc.gz"}
Monte Carlo simulation

A Monte Carlo simulation is a stochastic method which relies on repeated random experiments. This method is mainly used if it is impossible or infeasible to compute an exact result. The calculated observables are statistical averages, whose variance strongly depends on the number of performed random samplings. We use Monte Carlo simulations to model and predict the physical properties of our sample systems. Here, the energy and the spatially resolved magnetization of the simulated system are the most important observables for us.

In a Monte Carlo simulation, random states are created according to a predefined probability distribution. Depending on the problem, a suitable probability function has to be chosen, because it determines the trajectory the system traverses in phase space. An important characteristic of this method is that each newly created state depends only on the current state and not on the longer history; the system has no memory. The repeated sampling of such states is called a Markov chain. By creating a Markov chain, the discrete Monte Carlo algorithm satisfies the condition of detailed balance, which ensures that the system relaxes into the canonical equilibrium. As many different experiments are available in our group, we can directly compare the results from simulations with real systems. This combination of experimental and theoretical approaches allows further insight into the physical properties of the system under investigation. For further information and mathematical background of Monte Carlo simulations we refer to the literature [1].

To increase the density of future magnetic storage devices, the size of the magnetic bits, the carriers of the magnetic information, needs to be decreased constantly. In our group we do not concentrate on the technical realization of shrinking those bits, but on finding the physical limits and determining the technical limits where the bit can still be controlled. The smaller the bit, the more easily it is excited thermally. Hence, the smaller the bit, the smaller the temperature at which the magnetic information is lost. This phenomenon is called superparamagnetism and is one of the biggest challenges for future storage technology. To define and understand the limits of superparamagnetism, we study the temperature region of magnetic stability in dependence of the bit size [2]; see figure 1. From Monte Carlo simulation we not only gain insight into the temperature dependent switching frequency, but also into the switching mechanism. By comparing experiment and simulation, we could show that even very small bits, consisting of less than 100 atoms, switch via nucleation and propagation of domain walls and not, as previously assumed, via coherent rotation [3].

Figure 1: By fitting the correlation function G(r), one can determine the critical temperatures.

In another study we propose a theoretical concept of domain wall manipulation by means of the tip of a spin-polarized scanning tunneling microscope and check our concept by performing Monte Carlo simulations. The domain wall is driven by a spin-polarized current induced by the magnetic tip placed above the magnetic nanowire and then moved along its long axis with a current flowing through the vacuum barrier. The angular momentum coming from the spin-polarized current exerts a torque on the magnetic moments underneath the tip and leads to a displacement of the domain wall. By analyzing time-dependent Monte Carlo configurations, we can study the kinematics of the domain wall motion.
Hence, we can observe how the system relaxes into thermal equilibrium [4]. Figure 2 shows an animation of a successful domain wall manipulation.

Figure 2: Animation of a successful domain wall manipulation.
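To make the Metropolis-type sampling described above concrete, here is a generic single-spin-flip sketch for a one-dimensional Ising chain (a textbook illustration, not our group's actual simulation code):

import math, random

def metropolis_sweep(spins, J, T, rng=random):
    # One sweep: propose flipping randomly chosen spins, accepting with the
    # Metropolis probability min(1, exp(-dE / T)) (k_B = 1). This satisfies
    # detailed balance, so the chain relaxes to the canonical equilibrium.
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        left, right = spins[(i - 1) % n], spins[(i + 1) % n]
        dE = 2.0 * J * spins[i] * (left + right)  # energy cost of flipping spin i
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]
    return spins

# Usage: a chain of 100 spins, ferromagnetic coupling J = 1, temperature T = 2.
spins = [random.choice((-1, 1)) for _ in range(100)]
for _ in range(1000):
    metropolis_sweep(spins, J=1.0, T=2.0)
print(sum(spins) / len(spins))  # magnetization per spin after relaxation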
{"url":"http://www.nanoscience.de/HTML/methods/monte_carlo_simulation.html","timestamp":"2024-11-08T05:26:17Z","content_type":"text/html","content_length":"15344","record_id":"<urn:uuid:4137ff30-e14c-4224-a6bb-ac777d3aa5ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00631.warc.gz"}
Admission control with advance reservations in simple networks

In the admission control problem we are given a network and a set of connection requests, each of which is associated with a path, a time interval, a bandwidth requirement, and a weight. A feasible schedule is a set of connection requests such that at any given time, the total bandwidth requirement on every link in the network is at most 1. Our goal is to find a feasible schedule with maximum total weight. We consider the admission control problem in two simple topologies: the line and the tree. We present a 12c-approximation algorithm for the line topology, where c is the maximum number of requests on a link at some time instance. This result implies a 12c-approximation algorithm for the rectangle packing problem, where c is the maximum number of rectangles that simultaneously cover a point in the plane. We also present an O(log t)-approximation algorithm for the tree topology, where t is the size of the tree. We consider the loss minimization version of the admission control problem, in which the goal is to minimize the weight of unscheduled requests. We present a c-approximation algorithm for the loss minimization problem in the tree topology. This result is based on an approximation algorithm for a generalization of set cover, in which each element has a covering requirement and each set has a covering potential. The approximation ratio of this algorithm is Δ, where Δ is the maximum number of sets that contain the same element.

• Admission control
• Approximation algorithms
• Axis parallel rectangles
• Local ratio
• Scheduling
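As a concrete illustration of the problem's simplest special case (a single link, with every request demanding the full bandwidth), the task reduces to weighted interval scheduling, which is solvable exactly by dynamic programming. This sketch is illustrative only, not an algorithm from the paper:

import bisect

def max_weight_schedule(requests):
    # requests: list of (start, end, weight). On a single unit-capacity link
    # with unit demands, a feasible schedule is a set of disjoint intervals,
    # so we pick a maximum-weight set of pairwise-disjoint intervals.
    requests = sorted(requests, key=lambda r: r[1])  # sort by end time
    ends = [r[1] for r in requests]
    best = [0] * (len(requests) + 1)
    for i, (s, e, w) in enumerate(requests):
        # p = number of earlier requests ending no later than s (compatible prefix)
        p = bisect.bisect_right(ends, s, 0, i)
        best[i + 1] = max(best[i], best[p] + w)
    return best[-1]

print(max_weight_schedule([(0, 3, 2), (2, 5, 4), (4, 7, 4), (1, 8, 7)]))  # 7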
{"url":"https://cris.biu.ac.il/en/publications/admission-control-with-advance-reservations-in-simple-networks-4","timestamp":"2024-11-06T13:49:27Z","content_type":"text/html","content_length":"56177","record_id":"<urn:uuid:583f8272-013b-4f5b-bc97-49499f7b63cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00742.warc.gz"}
Stochastic Gradient Langevin Dynamics

As students who didn't have much experience with ML, we started off our journey to implementing SGLD by first familiarising ourselves with the basics of ML. As a first step, we implemented Linear and Logistic Regression models in Python from scratch. In doing so, we developed a good understanding of the Stochastic Gradient Descent algorithm, and from here on we proceeded to try to parallelise and distribute it on the server provided. After doing so successfully, we moved on to Bayesian machine learning and began exploring the Naive Bayes classifier as well as Bayesian Linear Regression.
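For reference, the SGLD update (Welling and Teh, 2011) is an SGD step with Gaussian noise injected at each iteration. A minimal NumPy sketch with a generic gradient function (the names here are ours, assumed for illustration):

import numpy as np

def sgld_step(theta, grad_log_post, step_size, rng):
    # theta_{t+1} = theta_t + (eps / 2) * grad log p(theta | data) + N(0, eps),
    # where the gradient is typically estimated on a minibatch.
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + 0.5 * step_size * grad_log_post(theta) + noise

# Usage sketch: sample from a 1D standard normal target, grad log p = -theta.
rng = np.random.default_rng(0)
theta = np.zeros(1)
samples = []
for t in range(10000):
    theta = sgld_step(theta, lambda th: -th, step_size=0.01, rng=rng)
    samples.append(theta[0])
print(np.mean(samples), np.var(samples))  # approximately 0 and 1 after burn-in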
{"url":"https://dcll-research.iiitd.edu.in/Stochastic-Gradient-Langevin-Dynamics.html","timestamp":"2024-11-14T17:09:08Z","content_type":"text/html","content_length":"96681","record_id":"<urn:uuid:ef01aa6a-bba2-4cd7-bba6-8eab2c3e0599>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00437.warc.gz"}
Draw a circle of radius 4 cm. Draw any two of its chords. Construct the perpendicular bisectors of these chords. Where do they meet?

NCERT Solutions for Class 6 Maths Chapter 14 Exercise 14.5 Question 8

Following are the steps to construct the perpendicular bisectors of two chords of a circle.

Step 1: Draw a circle of radius 4 cm with center O.
Step 2: Draw any two chords PQ and XY of the circle.
Step 3: Taking P and Q as centers and a radius of more than half of PQ, mark arcs that intersect each other at points A and B.
Step 4: Join AB to construct the perpendicular bisector of the chord PQ.
Step 5: Taking X and Y as centers and a radius of more than half of XY, mark arcs that intersect each other at points C and D.
Step 6: Join CD to construct the perpendicular bisector of XY.

Thus, we see that the two perpendicular bisectors AB and CD of the chords PQ and XY respectively meet at the center O.
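As a quick numeric check of the fact used here (the perpendicular bisector of any chord passes through the center), a short sketch; the chord endpoints are arbitrary choices:

import numpy as np

rng = np.random.default_rng(1)
center, r = np.array([0.0, 0.0]), 4.0

for _ in range(3):
    # Pick an arbitrary chord: two random points on the circle.
    a1, a2 = rng.uniform(0, 2 * np.pi, size=2)
    P = center + r * np.array([np.cos(a1), np.sin(a1)])
    Q = center + r * np.array([np.cos(a2), np.sin(a2)])
    midpoint = (P + Q) / 2
    chord = Q - P
    # The center lies on the perpendicular bisector iff (center - midpoint) is
    # perpendicular to the chord, i.e. their dot product is zero.
    print(np.isclose(np.dot(center - midpoint, chord), 0.0))  # True each time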
{"url":"https://www.cuemath.com/ncert-solutions/draw-a-circle-of-radius-4-cm-draw-any-two-of-its-chords-construct-the-perpendicular-bisectors-of-these-chords-where-do-they-meet/","timestamp":"2024-11-14T10:35:38Z","content_type":"text/html","content_length":"226989","record_id":"<urn:uuid:573338f7-f4b4-4f02-a876-a28d005bbefb>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00732.warc.gz"}
MS-E1992 - How to lie with statistics?, 26.10.2020-02.12.2020 Topic outline How to lie with statistics? (5cr) This is an advanced course in statistics. The course is aimed at doctoral students and master's students interested in statistics. Maturity in performing statistical analysis is needed and thus students should have taken at least one master's level statistics course before attending this course. There are no other prerequisites. During this course, students will talk about typical problems and faults in sample selection, choices of location measure, graphical presentation of data, forming questionnaires, statistical testing, regression analysis, and clinical trials. Students are assumed to be familiar with these methods before attending the course. The focus will be on examples about using these methods wrongly --- either accidentally or on purpose --- and on improving statistical analyses. Intended learning outcomes The objectives are to learn to evaluate statistical analyses critically, to learn to avoid typical pitfalls in simple statistical analyses and to learn to improve presentation of the results obtained in statistical analyses. The objective is not to learn to lie with statistics, but to learn to spot if there is something fishy in a statistical analysis. The ultimate goal is to learn to tell the truth with statistics. Lectures and assignments The course consists of 12 lectures, lecture assignments, project work and study journal. Lectures are on Mondays and on Wednesdays from 10.15 to 12.00. The lectures are given in zoom. Please note that the lectures are not recorded. Students are expected to attend the zoom lectures. Majority of the lectures, instead of traditional lecturing, consists of discussions. Students will find problematic data examples themselves and their findings and ideas for improving data analyses are discussed during the lectures. Students will also learn to defend their ideas and discoveries by conducting their project works where statistical analyses are used in justifying opinions and claims. Students will also write a study journal. In the study journals students may write down notes about their thoughts and reactions to what has been discussed. Writing and submitting a study journal on time is compulsory for completing the course! PLEASE FIND BELOW THE ZOOM LINKS FOR THE LECTURES. JUST SCROLL DOWN THIS PAGE. Lecture topics Lecture 1: Introduction --- We talk about the project works and about all the lecture assignment and about common errors and problems that are related to the lecture assignment topics. Lecture 2: Getting ready for the project works Lecture 3: Selecting the sample Lecture 4: Measures of location Lecture 5: Graphics Lecture 6: Questionnaires Lecture 7: Testing Lecture 8: Regression analysis Lecture 9: Statistics related to the current pandemic Lecture 10: Miscellaneous Lecture 11: Project work presentations Lecture 12: Summary Lecture assignments There is an assignment related to almost every lecture. Submit your assignments on time! Late submission is not possible! For lecture 2, every student has to come up with at least two possible project work topics. On lecture 2, we will discuss about the topics and every student selects his/her topic. Project work presentations take place on Lecture 11 so there is plenty of time to prepare for that. For Lecture 3, every student has to find one real data example or invent two examples that illustrate the problems related to biased sample. 
For Lecture 4, every student has to find one real data example or simulate two examples, where different location measures tell completely different stories. For Lecture 5, every student has to find one real data example or simulate two examples about misleading graphical presentation. For Lecture 6, every student has to find one real data example or write two examples of badly worded questionnaire questions or answer choices. For Lecture 7, every student has to find one real data example or simulate two examples, where results of statistical testing are false or misleading. For Lecture 8, every student has to find one real data example or simulate two examples, where regression analysis gives misleading results. For Lecture 9, every student has to give one example related to misleading interpretation, analysis or comparison of data that is related to COVID-19 pandemic or discuss two possible problems related to the topic. For Lecture 10, every student has to find one real data example or simulate two examples about false statistical analyses. Examples and ways to improve statistical analyses are discussed during the lectures. Study journal In order to complete the course, students have to keep a study journal (approximately 1/2 pages per lecture). Study journal must be submitted on time! Writing and submitting the study journal on time is compulsory for completing the course! The assessment is based on the lecture assignments, compulsory study journal and the project work. Writing and submitting the study journal on time is compulsory for completing the course! Final grade of the course is given by grade = 5 - 0.5ms - 0.5ma - 1md - 1ij, where ms is the number of the student's missed lectures, ma is the number of the student's missed lecture assignments, md is 1 if the student does not present his/her project work (and 0 if the student does present his/her project work), and ij is 1 if the student's study journal is incomplete (and 0 if the study journal is complete). The grades are rounded up to the closest integer. For example, grade 5 may be obtained by full attendance, completing all but one lecture assignments, submitting a complete study journal on time and presenting the project work. Grade 3 may be obtained by full attendance, completed lecture assignments, and submitting an incomplete study journal on time. Grade 1 may be obtained by attending all but 2 lectures, completing all but 2 lecture assignments, and submitting an incomplete study journal on time. Majority of students' workload will come from independent assignments. Lecture assignments will take on average 7*8 = 56 hours to complete. That includes finding representative data examples and observing problems in them. Writing the study journal takes on average 20-25 h as total. Project work will take on average about 15-20 h. Attending the lectures takes as total 24 h. Learning materials Main materials for this course are the examples found by the students. The book "How to lie with Statistics" written by Irving Geis may also be used as study material, but there is no need for the students to purchase this book for the course.
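To illustrate the kind of example the Lecture 4 assignment asks for (location measures telling completely different stories), here is a small simulated sketch; it is an illustration only, not part of the course materials:

import numpy as np

rng = np.random.default_rng(42)
# Simulated "salaries": most people earn modest amounts, a few earn a lot.
salaries = np.concatenate([
    rng.normal(30_000, 5_000, size=990),      # 990 ordinary earners
    rng.normal(2_000_000, 500_000, size=10),  # 10 very high earners
])

print(f"mean:   {np.mean(salaries):,.0f}")    # roughly 50,000, pulled up by outliers
print(f"median: {np.median(salaries):,.0f}")  # roughly 30,000, what most people earn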
{"url":"https://mycourses.aalto.fi/course/view.php?id=30376","timestamp":"2024-11-10T19:32:18Z","content_type":"text/html","content_length":"118387","record_id":"<urn:uuid:867c77e8-4d90-43f3-9725-98dbb4121e4f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00774.warc.gz"}
Modelling Measles in 20th Century US

A while ago some people at the Wall Street Journal published a number of heatmaps of incidence of infectious diseases in states of the US over the 20th century. This started a bit of a trend online where people remade the plots with their own sensibilities of data presentation or aesthetics. After the fifth or so heatmap I got annoyed and made a non-heatmap version.

Got tired of that measles heat map… here's my time series version. pic.twitter.com/n621sMkquJ — valentine svensson (@vallens) April 13, 2015

I considered the states as individual data points, just as others had treated the weeks of the years in the other visualizations. Since the point was to show a trend over the century I didn't see the point of stratifying over states. (The plot also appeared on Washington Post's Wonkblog)

The data is from the Tycho project, an effort to digitize public health data from historical records to enable trend analysis. I thought it would be interesting to look at some other features of the data beside the global downward trend, and in particular how the incidence rate varies over time. In total there are about 150 000 data points, which I figured would also be enough to tell differences between states when considering the seasonal trend over a year as well.

We make a linear model by fitting a spline for the yearly trend, and a cosine curve for the seasonal change in measles incidence over a year. We also choose to be even more specific by finding different seasonal parameters for each state in the US.

\(y = \sum_{i=0}^{n} c_i \cdot B_i(x_{\text{year}}) + \alpha_{\text{state}} \cdot \cos\left(\frac{2 \pi}{52} (x_{\text{week}} - 1)\right) + \beta_{\text{state}} \cdot \sin\left(\frac{2 \pi}{52} (x_{\text{week}} - 1)\right)\)

This way we will capture the global trend over the century, as well as a seasonal component which varies with the week of the year for every state. The global trend curve looks very similar to the old visualizations.

Every faint dot in the plot is a (state, week) pair. There are up to 2 600 of these in every year, so seeing them all at once is tricky. But we can still see the trend. As a zoomed in example, let us now focus on one state and investigate a seasonal component. It is fairly clear both from looking at the data and the fit of the cosine curve how the measles incidence (which is defined as number of cases per 100 000 people) changes over the year, peaking in early April.

As an alternative representation of the same plot, we can put it in polar coordinates to emphasise the yearly periodicity. We fitted the model such that we will have a different seasonal component for each state. To visualize them all at once we take the seasonal component for each state, assign a color value based on its level on a given week, and put this on a map. We do this for every week and animate this over the year, resulting in the video below. In the video green corresponds to high incidence and blue to low.

There is an appearance of a wave starting from some states and going outwards towards the coasts. This means some states have different peak times of the measles incidence than others. We can find the peak time of a state's seasonal component by doing some simple trigonometry.
\(\rho \cos\left(\frac{2 \pi t}{52} - \phi\right) = \rho \cos(\phi) \cos\left(\frac{2 \pi t}{52}\right) + \rho \sin(\phi) \sin\left(\frac{2 \pi t}{52}\right) = \alpha \cos\left(\frac{2 \pi t}{52}\right) + \beta \sin\left(\frac{2 \pi t}{52}\right)\)

\(\alpha = \rho \cos(\phi), \ \beta = \rho \sin(\phi)\)

\(\rho^2 = \alpha^2 + \beta^2\)

\(\phi = \arccos\left(\frac{\alpha}{\rho}\right)\)

So \(\phi\) will (after scaling) give us the peak time for a state (meaning the offset of the cosine curve). After extracting the offset for each state, we make a new map illustrating the peak times of measles incidence over the year. The color scale ranges from week 10 (white) in early March to week 14 (black) in early April. We didn't provide the model with locations of the states, but the peak times do seem to cluster.

I also want to show how I did this.

%pylab inline

# Parsing and modelling
import pandas as pd
import seaborn as sns
import statsmodels.formula.api as smf

# Helps with datetime parsing and formatting
from isoweek import Week
import calendar

# For plotting the spline
from patsy.splines import BS
from scipy import interpolate

# For creating US maps
import us
from matplotlib.colors import rgb2hex
from bs4 import BeautifulSoup

The data we use from the Tycho project comes in a csv which we parse with Pandas.

measles = pd.read_csv('MEASLES_Incidence_1928-2003_20150413093130.csv', skiprows=2)
measles.index = pd.DatetimeIndex([Week(*t).monday() for t in zip(measles.YEAR, measles.WEEK)])
measles = measles.drop(['YEAR', 'WEEK'], 1)
measles = measles.convert_objects(convert_numeric=True)

Now we have tabular data of this form:

measles.iloc[:5, :5]

This is a large matrix of states vs time points with incidence in each cell. For making a linear model we would prefer to have it in a form where each row is an observation, and the columns give the variables used.

year = []
week = []
state = []
incidence = []
for i in measles.index:
    for c in measles.columns:
        year.append(i.year)
        week.append(i.week)
        state.append(c)
        incidence.append(np.log10(measles.ix[i, c] + 1))

data = pd.DataFrame({'year': year, 'week': week, 'state': state, 'incidence': incidence})
data.iloc[:5, :5]

We can now use statsmodels to define and fit the linear model.

df = 12
d = 3
model = smf.ols('incidence ~ bs(year, df=df, degree=d, include_intercept=True) \
                + np.cos(2 * np.pi / 52 * (week - 1)) : C(state) \
                + np.sin(2 * np.pi / 52 * (week - 1)) : C(state) \
                - 1', data=data).fit()

To be able to use the parameters fitted for the spline, a bit of plumbing is needed. We let patsy and statsmodels pick the knots for the spline, for which the coefficients are fitted by the OLS model. One could calculate these knots directly from the data. But it is simpler to grab the function for picking the knots from patsy. Then we extract the spline coefficients from the model.

my_bs = BS()
my_bs.memorize_chunk(data.year, df=df, degree=d, include_intercept=False)
my_bs.memorize_finish()

coeffs = []
conf_int = model.conf_int(alpha=0.05)
for i in range(df):
    parameter_name = 'bs(year, df=df, degree=d, include_intercept=True)[{}]'.format(i)
    coeffs.append(model.params[parameter_name])

knots = my_bs._all_knots
tck = (np.array(knots), np.array(coeffs), 3)

The interpolate function from scipy uses triples of knots, coefficients and the degree of the basis polynomials as a parameter, which is used to evaluate any point in the domain of the spline.

x = np.arange(1928, 2004)
y = interpolate.splev(x, tck)

Once we have done this, we can create the first figure.
figsize(8, 6) # For legend with high alpha dots plt.scatter(-1, -1, c='k', edgecolor='w', label='Observations') plt.scatter(data.year, data.incidence, alpha=0.01, edgecolor='w', c='k'); plt.plot(x, y, c='w', lw=5); plt.plot(x, y, c='r', lw=3, label='Model fit'); plt.xlim(x.min() - 1, x.max() + 1); plt.xlabel('Year') plt.ylim(0, 2.); loc, lab = plt.yticks() lab = [np.round(10 ** l) for l in loc] plt.yticks(loc, lab); plt.ylabel('Measles incidence'); plt.legend(scatterpoints=5) plt.title('U.S. Wide trend'); plt.tight_layout(); plt.savefig('per_year.png'); Next we extract the periodic component parameters for a given state and define a function of the week of the year. state = 'CALIFORNIA' alpha = model.params['np.cos(2 * np.pi / 52 * (week - 1)):C(state)[{}]'.format(state)] beta = model.params['np.sin(2 * np.pi / 52 * (week - 1)):C(state)[{}]'.format(state)] def periodic(week): return alpha * np.cos(2 * np.pi / 52 * (week - 1)) + \ beta * np.sin(2 * np.pi / 52 * (week - 1)) And using this we can create the second plot figsize(8, 6) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) plt.scatter(data.query('state == "{}"'.format(state)).week, data.query('state == "{}"'.format(state)).incidence, c=data.query('state == "{}"'.format(state)).year, alpha=0.99, s=30, cmap=cm.Greys_r, edgecolor='w', label='Observations', color='k') x = np.linspace(1, 52, 100) year = 1950 yy = np.maximum(periodic(x) + y[year - 1928], 0) plt.plot(x, yy, lw=5, c='w') + \ plt.plot(x, yy, lw=3, c='r', label='Model fit'); plt.xlim(x.min() - 1, x.max() + 1); plt.ylim(0, 2); loc, lab = plt.yticks() lab = [np.round(10 ** l) for l in loc] plt.yticks(loc, lab); plt.ylabel('Measles incidence'); xtloc = 1.5 + 52. / 12 * np.arange(0, 13) plt.xticks(xtloc, calendar.month_name[1:], rotation=30); plt.xlabel('Time of year'); plt.title(state) plt.legend(scatterpoints=5) plt.colorbar(label='Year'); plt.tight_layout(); plt.savefig('ca_per_week.png'); To create the polar version of the same plot we don’t need to change much, just define the axis to be polar, and transform the x range from 1 to 52 to be between 0 and $2\pi$. Though we do also flip the orientation and rotate the 0 of the polar coordinates since having January 1st on top feels more intuitive, as well as clockwise direction. fig = plt.figure() ax = fig.add_subplot(111, projection='polar') ww = data.query('state == "{}"'.format(state)).week plt.scatter(np.pi / 2.0 - 2 * np.pi * ww / 52, data.query('state == "{}"'.format (state)).incidence, c=data.query('state == "{}"'.format(state)).year, alpha=0.99, s=30, cmap=cm.Greys_r, edgecolor='white', color='k', label='Observations') x = np.linspace(1, 52, 100) year = 1950 yy = np.maximum(periodic(x) + y[year - 1928], 0) xx = np.pi / 2.0 - 2 * np.pi * x / 52 plt.plot(xx, yy, lw=5, c='w'); plt.plot(xx, yy, lw=3, c='r', label='Model fit'); plt.ylim(0, 2); month_locs = np.pi / 2.0 + 2 * np.pi / 24 - 2 * np.pi / 12 * np.arange(13) ax.set_xticks(month_locs); ax.set_xticklabels(calendar.month_name); ax.set_yticks([0.5, 1.0, 1.5, 2.0]); ax.set_yticklabels([''] * 4); ax.set_title(state); plt.legend(scatterpoints=5, loc='upper center') plt.colorbar(label='Year'); plt.tight_layout(); plt.savefig('ca_per_week_polar.png'); Now, the US state map (also known as a choropleth). This was WAY more complicated than I tought it would be! I imagined there would be some simple functions in matplotlib or similar package. But the ones I tried required map files I wasn’t familiar with and some of them I couldn’t get running. 
In particular I think GeoPandas is interesting, but I could not get it working. After trying a bunch of packages and approaches, at some point I ended up with an SVG file with defined state paths. I’m not completely sure were it came from, it might have been from simplemapplot. Just to be sure, I put the SVG file up on a gist. Before going on to making the visualization, let’s put all the states’ seasonal functions in a dictionary so we can easily use them. def make_periodic(state): alpha = model.params['np.cos(2 * np.pi / 52 * (week - 1)):C(state)[{}]'.format(state)] beta = model.params['np.sin(2 * np.pi / 52 * (week - 1)):C(state)[{}]'.format(state)] def periodic(week): return alpha * np.cos(2 * np.pi / 52 * (week - 1)) + \ beta * np.sin(2 * np.pi / 52 * (week - 1)) return periodic periodics = {} for state in data.state.unique(): periodics[state] = make_periodic(state) Ok, so the approach that we take is to use BeautifulSoup to parse and edit the SVG file. svg = open('output_state_map.svg', 'r').read() soup = BeautifulSoup(svg) The way to change the colors is to change the style attributes of each state element in the SVG. The inspiration for this strategy was this IPython notebook where the same is done on an Iowa map. First we set up a baseline path style which we will append the color to. path_style = "font-size:12px;fill-rule:nonzero;stroke:#000000;" + \ "stroke-width:1;stroke-linecap:butt;stroke-linejoin:bevel;" + \ "stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:none;" + \ So what we do with this is to loop over the weeks, and for each week color all the states according to their seasonal incidence functions. The colors are converted to hex codes and assigned to the corresponding state’s path. We save one SVG per week. for w in range(1, 53): all_states = soup.findAll(attrs=) for p in all_states: state = us.states.lookup(p['id']).name.upper() p['style'] = path_style + rgb2hex(cm.winter(periodics[state](w) + 0.4)) h = soup.find(attrs={'id': 'hdlne'}) mnd = Week(1990, w).monday() h.string = calendar.month_name[mnd.month] fo = open("/tmp/state_map_.svg".format(w), "wb") fo.write(soup.prettify()); fo.close() The modified SVG files are saved as if they are HTML files by BeatifulSoup. The files will still be viewable in web browsers. But for using conversion tools or vector graphics editing tools these need to be in proper SVG format. This is farily easy to fix, one just need to remove the <body> and <html> tags from the file. %%bash for f in /tmp/state_map_*.svg; do grep -v "body\|html" $f > $f.tmp mv $f.tmp $f done Once this has beed fixed, we can create an animated gif using the convert command from ImageMagick. %%bash convert -delay 6 -loop 0 /tmp/state_map_??.svg animated_state_map.gif Because having a constantly animating gif on the page was extremely annoying, I converted it to a video using ffmpeg. %%bash ffmpeg -f gif -i animated_state_map.gif animated_state_map.mp4 The final bit is making the map with the seasonal peak times. We simple solve the equations described above for each state to find te offset of the cosine function. state_peak = {} for state in data.state.unique(): alpha = model.params['np.cos(2 * np.pi / 52 * (week - 1)):C(state)[{}]'.format(state)] beta = model.params['np.sin(2 * np.pi / 52 * (week - 1)):C (state)[{}]'.format(state)] rho = np.sqrt(alpha ** 2 + beta ** 2) theta = np.arccos(alpha / rho) peak_week = theta / (2 * np.pi) * 52 state_peak[state] = peak_week state_peak = pd.Series(state_peak) Just like above we color the states by the value. 
svg = open('output_state_map.svg', 'r').read()
soup = BeautifulSoup(svg)

h = soup.find(attrs={'id': 'hdlne'})
h.string = 'Peak incidence'
h['x'] = 600

# Same assumed 'state' class selector as above.
all_states = soup.findAll(attrs={'class': 'state'})
for p in all_states:
    state = us.states.lookup(p['id']).name.upper()
    peak = state_peak[state]
    col = rgb2hex(cm.Greys((peak - state_peak.min()) / (state_peak.max() - state_peak.min())))
    p['style'] = path_style + col

fo = open("measles_spread.svg", "w")
fo.write(soup.prettify());
fo.close()

To add the color bar I edited the resulting SVG file in Adobe Illustrator.
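A caveat on the peak-week computation: np.arccos only returns angles in [0, π], so it discards the sign of beta and can place a peak in the wrong half of the year for some states. A quadrant-aware alternative, sketched here with the same alpha and beta as above, is np.arctan2:

theta = np.arctan2(beta, alpha)               # in (-pi, pi], keeps the quadrant
peak_week = (theta / (2 * np.pi) * 52) % 52   # wrap negative angles back into [0, 52)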
How to use the MAKEARRAY function

What is the MAKEARRAY function? The MAKEARRAY function returns an array with a specific number of rows and columns, calculated by applying a LAMBDA function; the result populates each container in the array.

1. Introduction

What is an array in Excel? An array in Excel is a collection of values arranged in rows and columns. It can be thought of as a table or a grid of data. There are two types of arrays. One-dimensional arrays: a single row or column of data. Two-dimensional arrays: data organized in both rows and columns.

There are two types of array formulas: the first type returns a single value and the second type returns multiple values. An array formula is a formula that can perform multiple calculations on one or more sets of values.

Excel 365 subscribers have access to dynamic array formulas, a powerful feature that automatically adjusts its output range. These formulas populate the initial target cell and expand into neighboring cells as needed, adapting their size based on the formula's result. This automatic expansion and contraction of the output range is the key characteristic that gives them the name "dynamic" array formulas. The process of extending results into adjacent cells is known as "spilling".

Excel processes arrays in RAM, allowing for rapid computations. However, when array sizes exceed available memory, Windows may resort to using virtual memory on the hard drive or SSD. This fallback to disk storage significantly slows down calculations, as accessing data from these devices is much slower than from RAM.

What is the LAMBDA function? MAKEARRAY uses LAMBDA as its third argument to define how each cell in the array should be calculated. This combination allows you to create complex, dynamic arrays based on row and column positions. The LAMBDA function is required in the MAKEARRAY function; you can't leave it out.

2. Syntax

MAKEARRAY(rows, cols, lambda(row, col, calculation))

rows: The number of rows in the array to be created. Must be larger than 0 (zero).
cols: The number of columns in the array to be created. Must be larger than 0 (zero).
row: Required. A number representing the row in the array; the number changes from cell to cell.
col: Required. A number representing the column in the array; the number changes from cell to cell.

3. Example 1

This basic example creates an array with five rows and five columns and 25 containers in total. Each container is populated with the value "Yes!".

Formula in cell B2:

=MAKEARRAY(5, 5, LAMBDA(r, c, "Yes!"))

This formula in cell B2 is a dynamic Excel 365 formula that spills values to cells below and to the right automatically. This is a feature new to Excel 365. This simple Excel 365 formula is the only technique that I know of that can create any array and populate all containers with the same value. Perhaps you know a different way? Please comment.

Explaining the formula

Step 1 - Define the LAMBDA function

The LAMBDA function builds custom functions without VBA, macros or JavaScript. Function syntax: LAMBDA([parameter1, parameter2, …,] calculation)

This is the LAMBDA function that defines what will be in each cell of the array. r and c are parameters representing the current row and column, respectively. The value "Yes!" is the output for every cell, regardless of its position. The LAMBDA function doesn't use the r and c parameters in this case, so the output is the same for every cell.

Step 2 - Create the array

This creates a 5x5 array (5 rows and 5 columns).
This is a simple example, but it demonstrates how MAKEARRAY can quickly generate an array of any size and populate it with anything.

4. Example 2

This example demonstrates how to flip or reverse values both horizontally and vertically using the MAKEARRAY function, see the blue cell range (B9:E14). The original source data is in a green cell range (B2:E7). The third cell range, colored yellow, has values rearranged by the TRANSPOSE function in order to show the difference between transposing values and flipping/reversing values.

The difference lies in the arrangement of values. In cells B9:E14, the data is reorganized from the bottom-right corner to the top-left corner of the original dataset. In contrast, cells B16:G19 display a straightforward transformation where the original rows are converted into columns, and the original columns become rows, maintaining the sequence of data but altering its orientation.

Excel 365 formula in cell B9:

=MAKEARRAY(ROWS(B2:E7), COLUMNS(B2:E7), LAMBDA(r, c, INDEX(B2:E7, ROWS(B2:E7) - r + 1, COLUMNS(B2:E7) - c + 1)))

This formula spills values to cells below and to the right as far as needed. Reorganizing data from the bottom-right corner to the top-left corner can be useful in several scenarios:

• If the original data is in chronological order, this transformation could quickly give you the most recent data first.
• In task lists or project management, if higher-priority items are at the bottom, this reorganization brings them to the top.
• In some sports leagues teams at the bottom of the table are relegated. This transformation could highlight those teams.
• For analyzing trends in reverse, such as looking at the most recent quarterly results first.
• If older stock is at the top of the list, this brings newer items to the forefront.

This specific transformation is less common than simple transposition or sorting.

Explaining the formula

Step 1 - Count rows in the given cell range

The ROWS function calculates the number of rows in a cell range. Function syntax: ROWS(array)

r is a variable; it starts from one and increments up to the number of rows in B2:E7. Subtracting it from the row count and adding one makes the formula start from the bottom and not from the top.

Step 2 - Count columns in the given cell range

The COLUMNS function calculates the number of columns in a cell range. Function syntax: COLUMNS(array)

Step 3 - Get value

The INDEX function returns a value or reference from a cell range or array; you specify which value based on a row and column number. Function syntax: INDEX(array, [row_num], [column_num])

Step 4 - Build the LAMBDA function

The LAMBDA function builds custom functions without VBA, macros or JavaScript. Function syntax: LAMBDA([parameter1, parameter2, …,] calculation)

Step 5 - Create and populate the array

3 Responses to "How to use the MAKEARRAY function"

1. An alternative to obtain the same result without MAKEARRAY:

= LET(
    r, ROWS(M),
    c, COLUMNS(M),
    INDEX(
      M,
      SEQUENCE(r, 1, r, -1),
      SEQUENCE(1, c, c, -1)
    )
  )

where M is the name of the range of cells containing the matrix.

2. An alternative to obtain the same result is to pre- and post-multiply the original matrix by exchange matrices (which have ones in the antidiagonal and zeros elsewhere). Those exchange matrices can be constructed using MAKEARRAY.
Specifically, the original matrix M has to be pre-multiplied and post-multiplied by exchange matrices of the matching sizes. Exchange matrix: https://en.wikipedia.org/wiki/Exchange_matrix

Rodolfo, thank you for your comments!
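For reference, here is one way such an exchange matrix could be built with MAKEARRAY (a sketch; the size 4 is just an example, and this formula is not from the comment thread above):

=MAKEARRAY(4, 4, LAMBDA(r, c, IF(r + c = 5, 1, 0)))

Each cell is 1 exactly when it lies on the antidiagonal, that is, when the row number plus the column number equals n + 1 (here 5), and 0 otherwise.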
Jason Sachs ● December 4, 2013

Happy Thanksgiving! Maybe the memory of eating too much turkey is fresh in your mind. If so, this would be a good time to talk about overflow. In the world of floating-point arithmetic, overflow is possible but not particularly common. You can get it when numbers become too large; IEEE double-precision floating-point numbers support a range of just under 2^1024, and if you go beyond that you have problems: for k in [10, 100, 1000, 1020, 1023, 1023.9, 1023.9999, 1024]: try: ...

Jason Sachs ● September 7, 2013 ● 6 comments

When I posted an article on estimating velocity from a position encoder, I got a number of responses. A few of them were of the form "Well, it's an interesting article, but at slow speeds why can't you just take the time between the encoder edges, and then...." My point was that there are lots of people out there who take this approach, and don't take into account that the time between encoder edges varies due to manufacturing errors in the encoder. For some reason this is a hard concept...

Author's note: This article was originally called Adventures in Signal Processing with Python (MATLAB? We don't need no stinkin' MATLAB!) — the allusion to The Treasure of the Sierra Madre has been removed, in deference to being a good neighbor to The MathWorks. While I don't make it a secret of my dislike of many aspects of MATLAB — which I mention later in this article — I do hope they can improve their software and reduce the price. Please note this...

My coworkers and I recently needed a new oscilloscope. I thought I would share some of the features I look for when purchasing one. When I was in college in the early 1990's, our oscilloscopes looked like this: Now the cathode ray tubes have almost all been replaced by digital storage scopes with color LCD screens, and they look like these: Oscilloscopes are basically just fancy expensive boxes for graphing voltage vs. time. They span a wide range of features and prices:...

Other articles in this series: This article is mainly an excuse to scribble down some cryptic-looking mathematics — Don't panic! Close your eyes and scroll down if you feel nauseous — and...

Jason Sachs ● April 18, 2018

Last time we looked at some techniques using LFSR output for system identification, making use of the peculiar autocorrelation properties of pseudorandom bit sequences (PRBS) derived from an LFSR. This time we're going to jump back to the field of communications, to look at an invention called Gold codes and why a single maximum-length PRBS isn't enough to save the world using spread-spectrum technology. We have to cover two little side discussions before we can get into Gold...

Last time we looked at spread-spectrum techniques using the output bit sequence of an LFSR as a pseudorandom bit sequence (PRBS). The main benefit we explored was increasing signal-to-noise ratio (SNR) relative to other disturbance signals in a communication system.
This time we're going to use a PRBS from LFSR output to do something completely different: system identification. We'll show two different methods of active system identification, one using sine waves and the other...

Jason Sachs ● June 12, 2018

Last time, we talked about Gold codes, a specially-constructed set of pseudorandom bit sequences (PRBS) with low mutual cross-correlation, which are used in many spread-spectrum communications systems, including the Global Positioning System. This time we are wading into the field of error detection and correction, in particular CRCs and Hamming codes. Ernie, You Have a Banana in Your Ear I have had a really really tough time writing this article. I like the...

Jason Sachs ● December 29, 2017 ● 1 comment

Last time we looked at the use of LFSRs for pseudorandom number generation, or PRNG, and saw two things:

• the use of LFSR state for PRNG has undesirable serial correlation and frequency-domain properties
• the use of single bits of LFSR output has good frequency-domain properties, and its autocorrelation values are so close to zero that they are actually better than a statistically random bit

The unusually-good correlation properties...
Random Number Generation in R - Tele Blue Soft Random number generation is a fundamental concept in many fields, including statistics, computer science, and engineering. The ability to generate random numbers that follow specific distributions is essential for accurate simulations, robust algorithm testing, and effective risk assessment. This article will explore how to generate random numbers in R, focusing on four key distributions: Chi-Square, Exponential, Logistic, and Normal. We’ll also discuss the importance of these distributions and provide practical R code examples. Why Random Number Generation Matters Before diving into the specifics of random number generation, let’s understand why it is important: 1. Simulation Accuracy: Simulations are widely used in scientific research, financial modeling, and engineering. The accuracy of a simulation depends on how well the random numbers used mimic the behavior of real-world phenomena. By using random numbers that follow the correct distribution, simulations yield more reliable and realistic results. 2. Algorithm Testing: In computer science, algorithms are often tested with random inputs to evaluate their performance and robustness. The effectiveness of these tests increases when the random inputs follow a distribution that reflects real-world data. This ensures that the algorithm performs well not only in theory but also in practice. 3. Risk Assessment: Industries like finance and insurance rely heavily on risk assessment models. These models often require the generation of random numbers that represent potential future events. Accurate risk assessment depends on using random numbers that follow distributions modeling these events, leading to better predictions and decision-making. Understanding these concepts is crucial for designing experiments and systems that are both efficient and reflective of real-world conditions. With that in mind, let’s explore how to generate random numbers in R according to various distributions. Random Number Generation in R R provides several functions for generating random numbers from different distributions. The basic syntax for these functions is: rdistribution(n, parameters) Where rdistribution is the specific function for the desired distribution, n is the number of random numbers to generate, and parameters are the additional arguments that define the distribution (e.g., degrees of freedom, rate, mean, etc.). 1. Chi-Square Distribution The Chi-Square distribution is commonly used in hypothesis testing and in constructing confidence intervals for variance. To generate random numbers from a Chi-Square distribution, you use the rchisq () function in R. # Generating 1000 random numbers from a Chi-Square distribution with 5 degrees of freedom chi_square_data <- rchisq(1000, df = 5) # Plotting the distribution hist(chi_square_data, breaks = 50, col = "blue", main = "Chi Square Distribution", xlab = "Value", ylab = "Frequency") In this example, we generate 1,000 random numbers from a Chi-Square distribution with 5 degrees of freedom and then plot a histogram to visualize the distribution. 2. Exponential Distribution The Exponential distribution is often used to model the time between events in a Poisson process, such as the time between arrivals of customers at a service point. To generate random numbers from an Exponential distribution, you use the rexp() function. 
# Generating 1000 random numbers from an Exponential distribution with a rate of 1
exp_data <- rexp(1000, rate = 1)

# Plotting the distribution
hist(exp_data, breaks = 50, col = "blue", main = "Exponential Distribution", xlab = "Value", ylab = "Frequency")

In this example, 1,000 random numbers are generated from an Exponential distribution with a rate parameter of 1. The histogram helps visualize the typical "decay" shape of the Exponential distribution.

3. Logistic Distribution

The Logistic distribution is similar to the Normal distribution but has heavier tails. It is often used in logistic regression and other statistical models. To generate random numbers from a Logistic distribution, you use the rlogis() function.

# Generating 1000 random numbers from a Logistic distribution with mean 0 and scale 1
logistic_data <- rlogis(1000, location = 0, scale = 1)

# Plotting the distribution
hist(logistic_data, breaks = 50, col = "blue", main = "Logistic Distribution", xlab = "Value", ylab = "Frequency")

Here, we generate 1,000 random numbers from a Logistic distribution with a mean of 0 and a scale parameter of 1. The histogram shows a bell-shaped curve similar to the Normal distribution but with slightly fatter tails.

4. Normal Distribution

The Normal distribution is one of the most important distributions in statistics, often referred to as the "bell curve." It is widely used in natural and social sciences to represent real-valued random variables with unknown distributions. The rnorm() function generates random numbers from a Normal distribution.

# Generating 1000 random numbers from a Normal distribution with mean 0 and standard deviation 1
normal_data <- rnorm(1000, mean = 0, sd = 1)

# Plotting the distribution
hist(normal_data, breaks = 50, col = "blue", main = "Normal Distribution", xlab = "Value", ylab = "Frequency")

This example generates 1,000 random numbers from a Normal distribution with a mean of 0 and a standard deviation of 1. The histogram displays the characteristic bell-shaped curve of the Normal distribution.

Understanding how to generate random numbers according to specific distributions is crucial for accurate simulations, effective algorithm testing, and robust risk assessments. R provides a powerful set of tools for generating these numbers, allowing you to model a wide range of real-world phenomena. By mastering these techniques, you can ensure that your experiments and models are based on realistic and reliable foundations, leading to better predictions and more informed decisions. Whether you're working in finance, engineering, or research, the ability to generate and work with random numbers is a skill that will serve you well across many applications. So, get started with R, explore different distributions, and see how they can enhance your work.
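One small addition worth knowing, not covered above: all of these generators draw from R's global random number stream, so the results change on every run unless you fix the seed with set.seed(). This matters whenever you need a simulation or test to be reproducible.

# Fixing the seed makes "random" draws repeatable across runs and machines
set.seed(42)
rnorm(5, mean = 0, sd = 1)   # the same five numbers every time

set.seed(42)
rexp(3, rate = 1)            # reset the seed to reproduce any earlier sequence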
The Stacks project

Lemma 59.70.4. Let $j : U \to X$ be an open immersion of schemes. For any abelian sheaf $\mathcal{F}$ on $U_{\acute{e}tale}$, the adjunction mappings $j^{-1}j_*\mathcal{F} \to \mathcal{F}$ and $\mathcal{F} \to j^{-1}j_!\mathcal{F}$ are isomorphisms. In fact, $j_!\mathcal{F}$ is the unique abelian sheaf on $X_{\acute{e}tale}$ whose restriction to $U$ is $\mathcal{F}$ and whose stalks at geometric points of $X \setminus U$ are zero.
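A rough sketch of why this holds (a paraphrase of the standard argument, not the tag's own proof text): since $j$ is an open immersion, any scheme $V$ étale over $U$ is also étale over $X$, and for such $V$ one has $j_*\mathcal{F}(V) = \mathcal{F}(V \times_X U) = \mathcal{F}(V)$, so $j^{-1}j_*\mathcal{F} = \mathcal{F}$; the case of $j_!$ is similar. For the uniqueness, the stalks of $j_!\mathcal{F}$ are $\mathcal{F}_{\overline{x}}$ at geometric points of $U$ and $0$ at geometric points of $X \setminus U$. Hence if $\mathcal{G}$ is any abelian sheaf with restriction $\mathcal{F}$ on $U$ and vanishing stalks outside $U$, the adjunction map $j_!j^{-1}\mathcal{G} \to \mathcal{G}$ is an isomorphism on all stalks, hence an isomorphism.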
Excel Formula for PAYE Tax Calculation In this tutorial, we will learn how to create an Excel formula that calculates the amount of PAYE (Pay As You Earn) tax each employee has to pay based on their income amount. The tax rates used in this formula are defined by the Malawi Revenue Authority. By understanding and implementing this formula, you will be able to automate the calculation of PAYE tax for employees in Excel. To calculate the PAYE tax, we will use nested IF functions in Excel. These functions allow us to check the income amount and apply the corresponding tax rate based on the defined thresholds. The formula provided takes into account the different tax rates for different income ranges. Let's break down the formula step-by-step: 1. The first IF function checks if the income amount is less than or equal to 100,000. If true, it returns 0, indicating that no tax is applicable for the first 100,000. 2. If the income amount is greater than 100,000, the second IF function is executed. It checks if the income amount is less than or equal to 450,000. If true, it calculates the tax by subtracting 100,000 from the income amount and multiplying it by the tax rate of 25%. 3. If the income amount is greater than 450,000, the third IF function is executed. It checks if the income amount is less than or equal to 2,500,000. If true, it calculates the tax by subtracting 450,000 from the income amount, multiplying it by the tax rate of 30%, and adding 85,000 (which represents the tax on the first 350,000). 4. If the income amount is greater than 2,500,000, the fourth IF function is executed. It calculates the tax by subtracting 2,500,000 from the income amount, multiplying it by the tax rate of 35%, and adding 665,000 (which represents the tax on the first 2,050,000). The resulting tax amount is returned by the formula. Let's look at an example to understand how the formula works. If an employee has an income of 200,000, the formula would calculate the tax as follows: • The income amount is greater than 100,000, so the first IF condition is false. • The income amount is less than or equal to 450,000, so the second IF condition is true. The tax is calculated as (200,000 - 100,000) * 0.25 = 25,000. Therefore, the tax amount for an income of 200,000 would be 25,000. By using this Excel formula, you can easily calculate the PAYE tax for different income amounts based on the tax rates set by the Malawi Revenue Authority. This can save you time and ensure accurate calculations for employee payroll. An Excel formula =IF(A1<=100000, 0, IF(A1<=450000, (A1-100000)*0.25, IF(A1<=2500000, (A1-450000)*0.3+85000, (A1-2500000)*0.35+665000))) Formula Explanation This formula calculates the amount of PAYE (Pay As You Earn) tax each employee has to pay based on the income amount. The tax rates are defined by the Malawi Revenue Authority. Step-by-step explanation 1. The formula uses nested IF functions to check the income amount and apply the corresponding tax rate. 2. The first IF function checks if the income amount (in cell A1) is less than or equal to 100,000. If true, it returns 0, indicating that no tax is applicable for the first 100,000. 3. If the income amount is greater than 100,000, the second IF function is executed. It checks if the income amount is less than or equal to 450,000. If true, it calculates the tax by subtracting 100,000 from the income amount and multiplying it by the tax rate of 25% (0.25). 4. If the income amount is greater than 450,000, the third IF function is executed. 
It checks if the income amount is less than or equal to 2,500,000. If true, it calculates the tax by subtracting 450,000 from the income amount, multiplying it by the tax rate of 30% (0.3), and adding 85,000 (which represents the tax on the first 350,000). 5. If the income amount is greater than 2,500,000, the fourth IF function is executed. It calculates the tax by subtracting 2,500,000 from the income amount, multiplying it by the tax rate of 35% (0.35), and adding 665,000 (which represents the tax on the first 2,050,000). 6. The resulting tax amount is returned by the formula.

For example, if an employee has an income of 200,000, the formula =IF(A1<=100000, 0, IF(A1<=450000, (A1-100000)*0.25, IF(A1<=2500000, (A1-450000)*0.3+85000, (A1-2500000)*0.35+665000))) would calculate the tax as follows:

• The income amount is greater than 100,000, so the first IF condition is false.
• The income amount is less than or equal to 450,000, so the second IF condition is true. The tax is calculated as (200,000 - 100,000) * 0.25 = 25,000.

Therefore, the tax amount for an income of 200,000 would be 25,000.
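To see how the brackets chain together, here is one more worked example, using an income figure not covered in the original text. For an income of 500,000, the first two IF conditions are false and the third is true, so the formula returns (500,000 - 450,000) * 0.3 + 85,000 = 15,000 + 85,000 = 100,000. Only the 50,000 above the 450,000 threshold is taxed at 30%; the 85,000 constant carries in the tax already due on the lower brackets.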
Nonlinear Optics

We have conducted studies of basic nonlinear optical processes. Traditionally, nonlinear optical processes have been studied using focused laser beams. The illumination of the interaction region usually comes from a solid angle Ω much smaller than 2π steradians, the solid angle subtended by a hemisphere. Most theoretical treatments of nonlinear optics consequently treat the nonlinear interaction through use of the paraxial approximation. In our study, we have for the first time studied the opposite limiting case, where the sample is excited coherently from all directions by an incoming spherical wave that subtends a solid angle of 4π steradians. Nonlinear optical processes behave very differently in this limit. There is no phase-matching requirement, because the nonlinear signal comes primarily from the focal region, which in this case has a size of approximately the wavelength of light. We find that nonlinear processes consequently become very efficient in this limit [1].

We have studied the process of adiabatic wavelength conversion [2] in a highly nonlinear material, indium tin oxide, excited at a wavelength where the real part of its dielectric permittivity vanishes, the so-called epsilon-near-zero (ENZ) condition. We find that the wavelength range over which the output wave can be tuned is much larger under ENZ conditions than had previously been observed under non-ENZ conditions.

We have also performed a theoretical study of the nonlinear propagation of THz pulses [3]. THz propagation shows effects qualitatively different from the propagation of visible light: because the wavelength of THz waves is so large, diffraction effects are dominant and the paraxial approximation is not valid. Moreover, THz nonlinearities tend to be very much stronger than nonlinearities at optical frequencies.

We have also studied the nonlinear propagation of few-cycle optical pulses [4]. A key finding is that self-focusing effects tend to be strongly suppressed in this circumstance, as a result of pulse broadening through the process of group velocity dispersion.

1. Nonlinear optics with full three-dimensional illumination, R. Penjweini, M. Weber, M. Sondermann, R. W. Boyd, and G. Leuchs, Optica 6, 878-883 (2019).
2. Broadband frequency translation through time refraction in an epsilon-near-zero material, Y. Zhou, M. Z. Alam, M. Karimi, J. Upham, O. Reshef, C. Liu, A. E. Willner and R. W. Boyd, Nature Communications 11, 2180 (2020).
3. Propagation of broadband THz pulses: effects of dispersion, diffraction and time-varying nonlinear refraction, P. Rasekh, M. Saliminabi, M. Yildirim, R. W. Boyd, J-M. Ménard, and K. Dolgaleva, Optics Express 28, 3237-3248 (2020).
4. Suppression of self-focusing for few-cycle pulses, S. A. Kozlov, A. A. Drozdov, S. Choudhary, M. A. Kniazev, and R. W. Boyd, Journal of the Optical Society of America B 36, G68-G77 (2019).
What Is The Monthly Payment For The HELOC That depends. An outstanding balance will have a minimum monthly payment, which will fluctuate each month during the draw period because the payment is based on a few different factors, including: 1. The total amount of funds withdrawn or transferred 2. The number of days in the billing cycle 3. Changes to the index rate(s) 4. Over-credit-limit amounts 5. Annual fees or other charges 6. Past due amounts on the account • Related Articles • Can A HELOC Reduce The Monthly Minimum Payments Yes, it can reduce the monthly minimum payments and as time goes on the monthly minimums will continue to decrease as the principal balance is lowered. In fact, most HELOCs only charge an interest-only payment that is typically much less than a ... • Can A Payment Be Skipped With A HELOC No, one must make at least the minimum monthly payment. • We Need Help Lowering Our HELOC Payments Due To Loss / Income / Medical Issue One can get the HELOC payment down a bit by contacting the lender and asking them to recast the loan by stating, “Can this loan be recast with the remaining principal balance, over a 30 year term, so it drops the minimum payments?” • How Much Extra Do We Add To Monthly Payments Nothing! In fact, the more one uses their HELOC like a regular bank account - the quicker the mortgage is paid off, but with zero extra payments! • Does 401(k) Proceeds Count Towards Monthly Income Yes, a 401(k) can be used for a down payment and count as income for a home loan. The 401(k) will either be calculated based on the lump sum amount and then divided by 24. The lender calculates what a lump sum figure would be when dividing it over 2 ...
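Returning to the main question above: as a rough illustration of how the interest-only math typically works (hypothetical numbers; actual lender formulas depend on the daily balance and the factors listed above), a $50,000 drawn balance at a 9% annual rate corresponds to roughly $50,000 × 0.09 / 12 ≈ $375 of interest per month, so the minimum payment for that billing cycle would be in that neighborhood, rising or falling as the balance and index rate change.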
Cusanus: Incorruptible certainty of mathematical signs

DID not Pythagoras, the first philosopher both in name and in fact, consider all investigation of truth to be by means of numbers? The Platonists and also our leading [thinkers] followed him to such an extent that our Augustine, and after him Boethius, affirmed that, assuredly, in the mind of the Creator number was the principal exemplar of the things to be created. How was Aristotle (who by refuting his predecessors wanted to appear as someone without parallel) able in the Metaphysics to teach us about the difference of species otherwise than by comparing the species to numbers? And when, regarding natural forms, he wanted to teach how the one form is in the other, he resorted of necessity to mathematical forms, saying: "Just as a triangle is in a quadrangle, so the lower [form] is in the higher [form]." I will not mention innumerable other similar examples of his. Also, when the Platonist Aurelius Augustine made an investigation regarding the quantity of the soul and its immortality, and regarding other very deep matters, he had recourse to mathematics as an aid. This pathway seemed to please our Boethius to such an extent that he repeatedly asserted that every true doctrine is contained in [the notions of] multitude and magnitude. And to speak more concisely, if you wish: was not the opinion of the Epicureans about atoms and the void - an opinion which denies God and is at variance with all truth - destroyed by the Pythagoreans and the Peripatetics only through mathematical demonstration? [I mean the demonstration] that the existence of indivisible and simple atoms - something which Epicurus took as his starting point - is not possible. Proceeding on this pathway of the ancients, I concur with them and say that since the pathway for approaching divine matters is opened to us only through symbols, we can make quite suitable use of mathematical signs because of their incorruptible certainty.
Powers of 2

To say what a whole number generator does we have to say two things clearly:

1. Which whole numbers can be used as input to the whole number generator; usually this will just be the positive whole numbers 1, 2, 3, 4, … but sometimes it will be the non-negative whole numbers 0, 1, 2, 3, 4, … (and it might even be the whole numbers -2, -1, 0, 1, 2, 3, 4, …). Whatever it is, we need to spell it out clearly.
2. Which numbers, in which order, will be the output of the whole number generator: if a whole number n is input, what will be the output?

The output for n = 0 is the whole number 1. Then, for any input, the output is 2 times the previous output. The powers of 2 generator gives us a whole number sequence that begins 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, …

Using index notation we would write: $\textrm{Po2}(n) = 2^n \textrm{ for } n = 0, 1, 2, 3, 4, \ldots$

This index notation was invented by the French mathematician and philosopher René Descartes. Remember, it is just a notation: we usually write powers of 2 using index notation but there's no special reason other than convention that we do so.

Block towers and powers of 2

A block tower built from red and blue blocks is a stack of red and blue blocks placed on a flat surface (a table, for example): 4 different block towers of height 4

The "height" of a block tower is the number of blocks used to make the tower. How many block towers of height 1 are there? How many block towers are there of heights 2, 3, 4, 5, 6? Does it make sense to talk about a block tower of height 0? If so, how many block towers of height 0 are there?

A puzzle involving powers of 2

Take a square grid, with each side of the grid having a power of 2 number of squares (2, 4, 8, 16, …). Remove a corner square of the grid: can the resulting figure – the grid less the corner square – be exactly covered with L-shaped tiles (shown below)?

Sums of powers of 2

Here's a remarkable fact: every positive whole number can be written in one, and only one, way as a sum of distinct powers of 2. Examples are:

$7 = 2^2 + 2^1 + 2^0$

$12 = 2^3 + 2^2$

How could we think about why this might be true? A major clue lies in James Tanton's 1 <- 2 exploding dots machine.

Sums of digits of powers of 2

Here's a new whole number generating machine, based on the base 10 representation of powers of 2 (which is how powers of 2 look in a 1 <- 10 exploding dots machine): we input numbers n = 1, 2, 3, 4, 5, … in order, and output the sum of the base 10 digits of $2^n$.

Looking at the first 20 outputs of this whole number generator, and at how the output looks plotted against the input n, the sums of base 10 digits of powers of 2 generally seem to follow an upward trend, but sometimes dip down before increasing again. How does this plot continue? What regularities can you find in the sums of base 10 digits of powers of 2?

There is nothing special about base 10 – it just happens to be how we usually write whole numbers. What about other bases? Will base 2 give us anything interesting? How about base 3? Is it similar to base 10 or something quite different? What about base 4? And what about base 8? If the base $b = 2^k$ is already a power of 2 (for example, 2, 4, 8, 16, 32, …), is there something regular or peculiar about the sums of base b digits of powers of 2? (What are base 16, 32, 64, … digits anyway, and why might this question about sums of digits of powers of 2 in these odd bases make any sense?)

Mersenne prime numbers

Sometimes a number 1 less than a power of 2 is prime.
Examples are $3 = 2^2 - 1$, $7 = 2^3 - 1$ and $31 = 2^5 - 1$. A prime number that is one less than a power of 2 is called a Mersenne prime, after Marin Mersenne who studied them in the early 17th century. The largest prime number known, as of January 27, 2018, is the Mersenne prime $2^{77232917}-1$, which was discovered by Jonathan Pace on December 26, 2017, working as part of the Great Internet Mersenne Prime Search (GIMPS). You can download software and participate in GIMPS: you may be the next person to discover a new Mersenne prime.

Leading digits in powers of 2

In different bases b = 2, 3, 4, 5, … we can look at the leading (= left-most) digit of powers of 2. For example, we can plot the leading digits of $2^n$ for $1 \leq n \leq 50$ in bases 2, 3, 4 and 5. There's a fairly clear pattern for bases 2 and 4. What about bases 3 and 5? Can you see a pattern in the leading digits of $2^n$ as n varies? What about other bases, such as base 10? For a given base b, how often do the digits 1, …, b-1 occur as the leading digit of $2^n$?

Distribution of digits in powers of 2

The powers of 2 grow rapidly in size and already $2^{1000}$ has 302 digits when written in base 10. When we count how often the digits 0, 1, …, 9 occur in $2^{1000}$ we see that they are roughly equally distributed. As we look at higher and higher powers of 2 it seems the digits 0, 1, …, 9 in powers of 2 are more and more evenly distributed. No-one knows yet whether this is true, or has any real idea why it might be true. The evidence from computation suggests it is true, and perhaps it has something to do with the way in which carrying when multiplying scrambles the digits, but to date no-one knows.

Software for computations

To compute large powers of 2, and to count the number of occurrences of the digits 0, 1, …, 9 in such large powers of 2, you will need to use some computational software. Mathematica is powerful software that will carry out almost any computational task. However, it is not freely available and not cheap. Fortunately, Mathics is a free, general-purpose online computer algebra system featuring Mathematica-compatible syntax and functions. Mathics is written in Python and is available either for download, or to use online. If you go the download route you will also need to install Python. Documentation for Mathics is available as a PDF file.

Here's how we might do some calculations involving powers of 2:

• Calculate $2^{100}$. First enter "2^100" in the cell with the red x, then click on the shaded "=" to the right, or press shift+return, to evaluate the cell.

We can count the number of occurrences of the digit 0 in $2^{1000}$. To learn how to get a list of the digits of a whole number in Mathematica we can just Google "getting the digits of a number Mathematica". Now we can create a list of the number of occurrences of each of the digits 0, 1, …, 9 in $2^{1000}$ and view it in table form. We can also use Mathics to plot powers of 2.
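To make the digit-sum machine concrete, here is a small sketch in Mathematica-style syntax, which should also run in Mathics since IntegerDigits, Total and Count are standard functions (the commented output was computed by hand from the first twenty powers of 2):

Table[Total[IntegerDigits[2^n]], {n, 1, 20}]
(* {2, 4, 8, 7, 5, 10, 11, 13, 8, 7, 14, 19, 20, 22, 26, 25, 14, 19, 29, 31} *)

Count[IntegerDigits[2^1000], 0]  (* how many zeros occur among the 302 digits of 2^1000 *)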
Determining the Definite Integral of a Trigonometric Function

Question Video: Determining the Definite Integral of a Trigonometric Function
Mathematics • Third Year of Secondary School

Determine ∫_(−π)^(−π/4) 8 cos 5θ dθ.

Video Transcript

Determine the integral of eight cos five θ dθ between negative π by four and negative π. Our first step here is to take out the constant eight. This leaves us with eight multiplied by the integral of cos five θ dθ. The integral of cos θ is a standard one that we should know. It is equal to sin θ, as integrating is the opposite, or inverse, of differentiating. We want to integrate cos five θ. This means we need to use another general rule. This states that the integral of cos aθ is equal to one over a multiplied by sin aθ. We differentiate the aθ to give us a. The integral of cos five θ is equal to one-fifth multiplied by sin five θ. We need to multiply this by eight and have limits of negative π by four and negative π. As with the eight at the start, we can take the constant one-fifth outside of the bracket. This gives us eight-fifths multiplied by sin five θ. Our next step is to substitute in our two limits and subtract the answers. Substituting in the upper limit gives us sin of five multiplied by negative π over four. This can be rewritten as sin of negative five π over four. Substituting in our lower limit gives us sin of five multiplied by negative π. Once again, this can be rewritten as sin of negative five π. At this point, it is worth drawing the sine curve to see if negative five π by four and negative five π correspond with any of our known angles. The sine curve has a maximum value of one and a minimum value of negative one. It has key values on the θ-, or x-axis, of π by two and π, negative π by two and negative π. If you prefer to think of these angles in degrees, it's worth remembering that π radians is equal to 180 degrees. The sine curve looks as shown in the diagram. However, at the moment, we have a slight problem, as our two angles negative five π by four and negative five π don't fit in the range. As the sine curve has a period of two π, that is, it repeats every two π radians, we can continue the graph as shown. We can see clearly from the graph that the sin of negative five π is equal to zero. Negative five π by four is shown in the diagram. By going vertically upwards to the sine curve and then horizontally along to the y-axis, we can see what value this will take. Due to the symmetry of the sine curve, sin of negative five π by four is equal to sin of π by four. π by four is equal to 45 degrees and is one of our known angles. This is equal to root two over two. The sin of 45 degrees equals root two over two. Therefore, the sin of π by four radians must also equal root two over two. The sin of negative five π by four is equal to root two over two. And the sin of negative five π is equal to zero. Root two over two minus zero is root two over two. So, we need to multiply this by eight-fifths. Multiplying the numerators gives us eight root two. And multiplying the denominators gives us 10. We have eight root two over 10. Eight and 10 have a common factor of two. So, we can divide the numerator and denominator by two. This gives us four root two over five. We can, therefore, say that the integral of eight cos five θ dθ between negative π by four and negative π is equal to four root two over five.
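For reference, the transcript's working compressed into one line:

$$\int_{-\pi}^{-\pi/4} 8\cos 5\theta \,\mathrm{d}\theta = \left[\tfrac{8}{5}\sin 5\theta\right]_{-\pi}^{-\pi/4} = \tfrac{8}{5}\left(\sin\!\left(-\tfrac{5\pi}{4}\right) - \sin\left(-5\pi\right)\right) = \tfrac{8}{5}\cdot\tfrac{\sqrt{2}}{2} = \tfrac{4\sqrt{2}}{5}.$$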
Black holes with Proca hair/Spinning Proca stars Here, the numerical data described in the paper "Black holes with synchronised Proca hair: linear clouds and fundamental non-linear solutions", arXiv:2004.09536 [gr-qc] [1], is made available for public use. This data pertains the fundamental states (n=0) of these hairy black holes and spinning Proca stars. Some data for the excited states (n=1) was previously made available here. The data is presented in the same form as the data we have previously made available here for Kerr black holes with scalar hair, described in the paper "Construction and physical properties of Kerr black holes with scalar hair", Class. Quant. Grav. 32 (2016) 144001; arXiv:1501.04319 [gr-qc] [2], which expands on the solutions first presented in the paper "Kerr black holes with scalar hair", Phys. Rev. Lett. 112 (2014) 221101; e-Print: arXiv:1403.2757. The original data for the Proca star was obtained in the paper "Proca Stars: gravitating Bose-Einstein condensates of massive spin 1 particles", Richard Brito, Vitor Cardoso, Carlos A. R. Herdeiro, Eugen Radu, Phys. Lett. B752 (2016) 291-295; arXiv:1508.05395 [gr-qc]. But in the spinning case, the solutions reported in this reference are excited states (n=1), as explained in [1]. The data for two spinning Proca stars (PSs) and for two hairy black holes (HBHs) can be found in the attachment "Data_files.zip", which contains four data files: - PS-n=0.dat - PS-n=1.dat - HBH1.dat - HBH2.dat These files contain the data for the four solutions described in the paper - figures 6,7,8 (PS-n=0,1) and figures 9, 10, 11 (HBH1 and HBH2) respectively. These solutions are: PS-n=0 : a typical n=0 Proca star belonging to the main branch of PS solutions; - the input parameters are: (r_H=0; w=0.9 ; m=1) - ADM mass=0.726; ADM angular momentum=0.75 - BH mass=0; BH angular momentum=0 - Proca field mass=0.726; Proca field angular momentum=0.75 PS-n=1 : a typical n=1 Proca star belonging to the main branch of PS solutions; - the input parameters are: (r_H=0; w=0.9 ; m=1) - ADM mass=1.456; ADM angular momentum=1.5 - BH mass=0; BH angular momentum=0 - Proca field mass=1.456; Proca field angular momentum=1.5 HBH1 : a HBH close to the existence line (n=0), thus Kerr-like - the corresponding data is given in the eq. (3.12) of [1] HBH2 : a non-Kerr like HBH - the corresponding data is given in the eq. (3.13) of [1] The data is presented in the following order, in the files (F_1,F_2,F_0,W,H_1,H_2,H_3,V are the metric and Proca functions used in the paper): X_1 theta_1 F1 F2 F0 W H1 H2 H3 V X_2 theta_1 F1 F2 F0 W H1 H2 H3 V X_261 theta_1 F1 F2 F0 W H1 H2 H3 V X_1 theta_2 F1 F2 F0 W H1 H2 H3 V X_2 theta_2 F1 F2 F0 W H1 H2 H3 V X_261 theta_2 F1 F2 F0 W H1 H2 H3 V X_1 theta_35 F1 F2 F0 W H1 H2 H3 V X_2 theta_35 F1 F2 F0 W H1 H2 H3 V X_261 theta_35 F1 F2 F0 W H1 H2 H3 V where the grid points are: X_k=(k-1)/260 (k=1,..,261) theta_k=(k-1)*Pi/34/2 (k=1,..,35) The corresponding values for \pi/2<\theta\leq\pi result from the reflection symmetry of the solutions along the equatorial plane. X=x/(1+x) is a compactified radial coordinate, 0\leq x\leq 1 x=\sqrt{r^2-r_H^2} was a new radial coordinate defined in the paper (with r_H=0 for a Proca Star)
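For convenience, the files can be read with standard numerical tools. A minimal sketch in Python (assuming whitespace-separated columns and exactly the row ordering listed above; this is not an official loader):

import numpy as np

# One file holds 35 theta blocks of 261 radial points, 10 columns each.
data = np.loadtxt("PS-n=0.dat")        # shape (35 * 261, 10)
grid = data.reshape(35, 261, 10)       # axis 0: theta index, axis 1: X index

X     = grid[0, :, 0]                  # compactified radial coordinate, 261 values
theta = grid[:, 0, 1]                  # 35 values in [0, pi/2]
F1    = grid[:, :, 2]                  # metric function F_1(theta, X); columns 2..9
                                       # are F1, F2, F0, W, H1, H2, H3, V in order

For example, grid[k, :, 2] gives F_1 along the radial direction at fixed theta_k, and the solution on pi/2 < theta <= pi follows from the stated reflection symmetry about the equatorial plane.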
The 12% Myth: Estimating Long-Term Equity Returns

What is your expectation of annualised long-term return from investing in equities? 12% is the most common assumption plugged into financial planning models. Data says India's "long-term" returns range from 7% to 20%, depending on when you invested. The way we set expectations needs to factor in this variability. This post explores two ways to set expectations for long-term equity returns.

The 12% number has some basis. That's how the index (NSE500) has done over the long term. The chart shows the outcome of a monthly 10k SIP into the NSE 500 since its inception in Jan 1995. A 10k monthly SIP into the NSE 500 starting in Jan 1995, continuing uninterrupted over 28 years, would be worth 4.6 Cr in September 2023. A CAGR of ~12% and a higher XIRR of nearly 15%. That's over nearly three decades.

We examined what returns have been like over 15-year periods. Think of it as a period shorter than an investing lifetime but long enough to be considered long-term. We considered systematic investments and not one-time lump sums, because that's how most of us invest. Applying that same 12% annual rate to a shorter time frame of 15 years means your investment of ₹ 18L (10k / month * 12 months * 15 years) should be worth about ₹ 50L by the time you complete your last instalment.

15-year investment periods in the past

The chart below shows the ending values of monthly SIP portfolios with start dates in January each year from 1995 to 2008 (the chart ends at 2008 because 15 years from 2008 brings us to 2023). These are values 15 years after having started investing. If long-term equity returns were written in stone, all those columns would be more or less the same height. They are not. The investor who started in 1996 would have had ₹ 93L (~19.5%) by the time they stopped in 2010. The investor who began in 2005 would have had 46L (11.7%) when they stopped in 2020. All else being the same, the 2005 investor had half the corpus of the 1996 investor. Imagine if the 2005 investor set his expectations based on what the 1996 investor was worth.

Another way to look at this information is an equity curve chart of 14 investors, each starting their SIPs one year apart. The final number labelled on the chart is their portfolio value after 15 years, the same as in the earlier column chart. This chart also shows the path those portfolios took to reach their ending values.

Imagine being the investor who started in 1995. Seeing your portfolio exceed 1 Cr (₹ 10 Million) in January 2008, 14 years into your investing journey, only to see it plummet 60% to 40L (₹ 4 Million) late in 2008 before finally coming back to 89L (₹ 8.9 Million) by the end of 2009. You wouldn't know then that, except for the investor starting in 1996, none of the rest would even come close to breaching the 1 Cr (10 Million) threshold.

The chart shows the XIRR (Extended Internal Rate of Return) from starting an equal-amount 15-year SIP each month from Jan 1995 to Aug 2008. Think of XIRR as the answer to the question, "If I had to sum up the overall yearly growth rate of my investment, considering all the ups and downs and irregular timings, what would it be?" November 1995 was a great time to start an SIP. That investor would've made 20% on investments made over the next 15 years. April 2005 was the worst, with just a 7% annualised rate of return!
(note: you'd realise that 7% return only if you sold precisely 15 years later, at the end of March 2020, i.e., with the market at the pandemic lows). You can't help but notice that long-term returns have trended down from the 90s to now. The median SIP XIRR that started before and after 2002 shows a 380 basis point difference, which can't be explained away by inflation or interest rates since they were not markedly different (I would love to be corrected on this). One possible reason why XIRRs have trended down is that the markets changed in the 2008-09 GFC. Pre-GFC markets had frequent bouts of high volatility and periods of going nowhere, like 1994-98. There were frequent sharp corrections, thus giving windows of opportunity for lucky SIPs to gather cheap units. That has not been the case in the decade since 2009, when the world's stock markets went up with relatively low volatility. The pandemic-driven correction in March 2020 was the first time volatility spiked significantly in over a decade. Irrespective of the reasons, there are two important takeaways for us:

• The investment environment does not stay static. It changes over time.
• We don't control the environment we get to invest in.

This means point estimates of future returns based on historical returns will likely lead to disappointment. So, how should we form a guesstimate of future portfolio value? One way is to look at historical ranges like from the chart above. Between 1995 and 2008, starting a monthly SIP and thus investing 18L led to something between 46L (2.6x invested value) and 93L (5.2x). Assume you'll end up somewhere between 2.6 and 5x your regularly invested capital, given that's what happened in the past. This means an expected annualised return range of 12% to 20%. Might that be too optimistic?

Another way is to do many simulations to arrive at various potential outcomes over 15 years.

Simulating an Investing Multiverse

The process in a nutshell:

1. Let's say we've decided to invest a certain amount of money every month starting October 2023 for 15 years. We want to run 10,000 "what-if" scenarios to see how that investment might grow by October 2038.
2. First, we look at historical data of the Nifty500, based on which we calculate the average daily return and a variability measure (standard deviation).
3. Now, for each day of each month, we roll a special "financial die" that considers the calculated average and variability. This die gives us a random daily return that's somewhat similar to what the market has done in the past, so there will be a large number of minor up-and-down days and a small number of BIG up-and-down days.
4. Stacking those daily returns one after another for 15 years gives us the potential end values. Some simulations might have more big down days and not enough big up days, while some will end up the opposite, purely by chance.
5. We assume 10,000 independent portfolios with identical starting values, each following daily returns drawn this way.

Think of each "what-if" as a separate storyline for the investment, a possible reality in an infinite multiverse. The most probable (hypothetical) realities form the central band of outcomes, with less likely outcomes at the extremes. Note that this method is by no means the "best" or the "right" way to model future returns. Other methods do a better job of applying fat tails, incorporating mean reversion, and applying trends. But they all suffer from the core drawback of any modelling exercise; they assume the future will look like the past.
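Before looking at the results, here is a minimal sketch of what such a simulation can look like, simplified to monthly steps; the mean and volatility numbers are placeholders, not the values behind the article's charts:

import numpy as np

# 10,000 parallel "storylines" of a 15-year monthly SIP
rng = np.random.default_rng(0)
n_sims, n_months, sip = 10_000, 15 * 12, 10_000
mu, sigma = 0.12 / 12, 0.18 / np.sqrt(12)   # assumed 12% annual return, 18% annual vol

returns = rng.normal(mu, sigma, size=(n_sims, n_months))
values = np.zeros(n_sims)
for t in range(n_months):
    values = (values + sip) * (1 + returns[:, t])   # invest, then grow for a month

print(np.percentile(values, [0, 20, 50, 80, 100]))  # spread of ending corpus values

By pure chance, some storylines accumulate more big down months than up months and finish near the bottom of the band, which is exactly the dispersion the chart below illustrates.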
Keep that caveat in mind when considering what comes next. The chart shows the outcome of 10,000 simulations: what a 15-year 10k/month SIP starting October 2023 could be worth by October 2038. The mass of thin grey lines snaking their way to 2038 are the portfolio values of the 10,000 simulations, with five specific portfolios highlighted: the best, the 80th and 20th percentiles, the median, and the worst. The dotted line is the total investment amount of 18L over 15 years. The median portfolio is worth ₹ 51L (12.7%) in 15 years, almost what the original 12% assumption got us earlier. But also note the range between the best case, ₹ 8.9 Cr (45%), and worst case, ₹ 7.9L (-12%), potential outcomes. Those are outliers and probably not where we should focus. The most relevant zone is between the 20th and 80th percentiles. This means a wide range of long-term return assumptions, annualised between 6.8% and 19.1%. Someone looking to avoid all that uncertainty could find a fixed-income instrument yielding 7% and do the same SIP for 15 years; they’d have ~31L in 2038. The chart shows that equities might do worse but have a decent chance of doing much better. Put another way, investing in the NSE 500 does better than safe debt in 8,000 out of 10,000 parallel universes. Where should you set your expectations? “Happiness is reality minus expectations.” – Tom Magliozzi, co-host of NPR’s “Car Talk”. Since a lot of heartburn comes from setting expectations too high, the recipe for investors is to save and invest expecting a conservative return. That means looking at the 20th-50th percentile as your expected return over the long term. This means assuming, and being okay with the idea that, over the long term, equity investments will return between 7% and 11%. That way, you do decently over the long run and have a strong chance of being pleasantly surprised. I wrote in Ten Money Messages for my younger self that for most people, it’s not brilliant investment decisions but growth in their earnings that will make them wealthy. Expecting conservative investment returns means everything above goes directly to the happiness bottom line. That seems to be a good life practice overall.
{"url":"https://premium.capitalmind.in/2023/10/estimating-longterm-equity-returns/","timestamp":"2024-11-08T08:37:10Z","content_type":"text/html","content_length":"83235","record_id":"<urn:uuid:4034c167-fc63-4405-b906-14ec598b053c>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00240.warc.gz"}
Category:Unary operations Here is a list of articles in the Unary operations category of the Computing portal that unifies foundations of mathematics and computations using computers. Wikimedia Commons has media related to Unary operations. This category is for unary operations and functions. This category has the following 3 subcategories, out of 3 total. Pages in category "Unary operations" The following 39 pages are in this category, out of 39 total.
{"url":"https://handwiki.org/wiki/Category:Unary_operations","timestamp":"2024-11-09T12:40:43Z","content_type":"text/html","content_length":"32002","record_id":"<urn:uuid:7ac8dfd8-548e-4e23-953d-dbe542dd0d8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00707.warc.gz"}
What is his cost basis? Understand the Problem The question is asking for the cost basis of Professor Ducksworth's investment in Ethereum after he invested $1,000 each month for three months, accumulating 0.85 ETH. The cost basis is essentially the average price he paid per ETH. The goal is to calculate this average based on his total investment over the period. Answer: The cost basis is approximately $3,529.41 per ETH. Steps to Solve 1. Determine Total Investment Professor Ducksworth invests $1,000 every month for three months. So, the total investment is: $$ \text{Total Investment} = 3 \times 1000 = 3000 $$ 2. Calculate Cost Basis The cost basis is defined as the total investment divided by the total amount of Ethereum acquired. In this case, he acquired 0.85 ETH. The formula for cost basis is: $$ \text{Cost Basis} = \frac{\text{Total Investment}}{\text{Total ETH Accumulated}} $$ 3. Substitute Values Now, substituting the total investment and the total Ethereum into the equation: $$ \text{Cost Basis} = \frac{3000}{0.85} $$ 4. Calculate the Cost Basis Performing the division: $$ \text{Cost Basis} \approx 3529.41 $$ Rounded to two decimal places, the cost basis is $3,529.41 per ETH (about $3,529 to the nearest dollar). More Information The cost basis represents the average price per unit of Ethereum that Professor Ducksworth paid over the three-month investment period. This is a crucial metric for tracking investment performance and potential tax liability. Common mistakes: • Misunderstanding Cost Basis: Confusing total investment with the cost basis; remember, cost basis involves dividing total investment by the amount of asset acquired. • Rounding Errors: Not rounding correctly can lead to discrepancies in the final answer.
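A one-line check of the arithmetic (a sketch; the figures come from the problem statement above):

total_invested = 3 * 1000            # three monthly purchases of $1,000
eth_acquired = 0.85
print(round(total_invested / eth_acquired, 2))   # 3529.41, the cost basis per ETH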
{"url":"https://quizgecko.com/q/what-is-his-cost-basis-wruej","timestamp":"2024-11-04T18:49:15Z","content_type":"text/html","content_length":"168780","record_id":"<urn:uuid:674d33ba-1016-4660-8b04-b1999152cfd4>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00787.warc.gz"}
Johann Carl Friedrich Gauss: German mathematician honoured by Google on 241st birthday GOOGLE DOODLE: Johann Carl Friedrich Gauss was known as the Prince of Mathematicians. German child prodigy Johann Carl Friedrich Gauss is referred to as one of the best mathematicians in history. The genius was known as “The Prince of Mathematicians” for his numerous contributions to number theory, algebra, geophysics, mechanics and statistics. Google Doodle is honouring him and his contributions to maths on what would have been his 241st birthday. The Enlightenment mathematician is shown in a Doodle depicted by Bene Rohlmann, surrounded by his many discoveries. BIRTHDAY: Today would have been Gauss’s 241st birthday. Who was Johann Carl Friedrich Gauss? Johann Carl Friedrich Gauss was born into a poor family on April 30, 1777 in Brunswick, north Germany. Despite having an illiterate mother, Johann showed great potential for maths and numbers. His mother, Dorothea Benze, was still intelligent despite receiving no education. His father, Gebhard Dietrich Gauss, tried desperately to make ends meet and had numerous jobs such as sales assistant, butcher, bricklayer, gardener and treasurer. When Gauss was just three years old he baffled his parents by correcting an error in his father’s payroll calculations. By the time he was five the prodigy was looking after his father’s accounts. Before reaching the age of eight, he could add up every number from one to 100 almost instantly. One of the many astounding things he did was work out his birth date using only the little information he had – that it was a Wednesday eight days before an Easter holiday. PRODIGY: Gauss showed great potential from a young age. The Duke of Brunswick quickly spotted Gauss’s talents and sent him to the Collegium Carolinum when he was 15, followed by the prestigious University of Göttingen. Another of his greatest discoveries was the heptadecagon, a 17-sided polygon, which could be constructed with a compass and a ruler. While studying the underlying theory he revealed an important connection between algebra and geometrical shapes. At the age of 21, in 1798, he completed Disquisitiones Arithmeticae (published in 1801), which proved to be immensely important to number theory. Three years later he calculated the orbit of the asteroid Ceres, the largest object in the asteroid belt between Mars and Jupiter. COLLABORATION: Gauss worked with Wilhelm Weber. Gauss contributed to a wide range of other mathematical and physical sciences, including astronomy, optics, electricity, magnetism, statistics and surveying. The astonishing mathematician married Johanna Osthoff in 1805, and had three children. Although happy during this time of his life, tragedy would soon strike. Gauss’s father died in 1808, his wife sadly passed away in 1809, followed immediately by his second son. However, merely a year later Gauss remarried and went on to have three more children. Despite this, biographers say he never quite recovered from the loss of his first wife and suffered from depression. His second wife, Minna Waldeck, died in 1831.
This was the same year he started collaborating with physics professor Wilhelm Weber, which added to new knowledge in magnetism. The two aided in the discovery of Kirchhoff’s circuit laws in electricity. Gauss continued adding to the field, formulating Gauss’s law, relating the distribution of electric charge to the resulting electric field. The amazing duo produced the first electromechanical telegraph in 1833. GENIUS: Gauss is known as one of the greatest mathematicians of all time. In 1840 he published another immensely influential work, Dioptrische Untersuchungen. Gauss did not write many books, as his life motto was “few, but ripe”. The astounding mathematician died of a heart attack on February 23, 1855, and his brain was preserved and studied by Rudolf Wagner. His brain was found to have a larger cerebral area, and weighed above average at 1,492 g. He had requested that a heptadecagon be inscribed on his tombstone when he died; however, the stonemason responsible refused, because it would be too difficult.
{"url":"https://amsterdamtimes.info/johann-carl-friedrich-gauss-german-mathematician-honoured-by-google-on-241st-birthday/","timestamp":"2024-11-13T19:06:42Z","content_type":"text/html","content_length":"79625","record_id":"<urn:uuid:7f64a602-3533-4cd0-8bd5-8d08983bf7d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00483.warc.gz"}
Nonlinear Analysis and its Applications in Geometry July 16 - July 22, 2023 organized by Shibing Chen, Guohuan Qiu, Yi Zhang All times are local to Beijing │Sunday, July 16 at AMSS N202 │ │Registration │ │Monday, July 17 at AMSS N202 │ │Time │Speaker │Title │ │14:30 - 15:00│Opening ceremony │ │15:00 - 15:45│Yuxin Ge │Compactness of asymptotically hyperbolic Einstein manifolds in dimension 4 and applications │ │16:00 - 16:45│Xavier Cabre │Stable solutions to semilinear elliptic equations are smooth up to dimension 9 │ │17:00 - 19:00│Dinner Break │ │At Zoom │ │19:00 - 19:45│Karoly Boroczky (Online) │Lp-Minkowski problem - Old and New results │ │20:00 - 20:45│Zhijie Chen (Online) │Asymptotic behaviors of low energy nodal solutions for Lane-Emden problems │ │Tuesday, July 18 at AMSS N202 │ │15:00 - 15:45│Yong Wei │Curvature measures and volume preserving curvature flows │ │16:00 - 16:45│Qirui Li │On the Lp-Minkowski problem with super-critical exponents │ │17:00 - 19:00│Dinner Break │ │At Zoom │ │19:00 - 19:45│Alessio Figalli (Online)│Quantitative stability in geometric and functional inequalities │ │20:00 - 20:45│Nicola Fusco (Online) │Local and global minimizers for a capillarity type problem │ │Wednesday, July 19 at AMSS N202 │ │15:00 - 15:45│Kelei Wang │Nondegeneracy for stable solutions to one phase free boundary problem │ │16:00 - 20:00│Dinner Break │ │At Zoom │ │19:00 - 19:45│Jose Galvez (Online) │Linearity of homogeneous solutions to elliptic equations in dimension three │ │20:00 - 20:45│Yannick Sire (Online) │Geometric variational problems: regularity vs singularity formation │ │Thursday, July 20 at AMSS N202 │ │15:00 - 15:45│Genggeng Huang │Monge-Ampere equation with Guillemin boundary condition │ │16:00 - 16:45│Xu-Jia Wang │Free boundary problems in the Monge-Ampere equation │ │17:00 - 19:00│Dinner Break │ │At Zoom │ │19:00 - 19:45│Emanuel Milman (Online) │Multi-Bubble Isoperimetric Problems - Old and New │ │20:00 - 20:45│Xavier Ros-Oton (Online)│The singular set in the Stefan problem │ │Friday, July 21 at AMSS N202 │ │15:00 - 15:45│Jie Zhou │Regularity for varifolds with critical Allard conditions │ │16:00 - 16:45│Zhizhang Wang │Hessian equations on exterior domains in hyperbolic space │ │17:00 - 19:00│Dinner Break │ │At Zoom │ │19:00 - 19:45│Guofang Wang (Online) │Optimal geometric inequalities for capillary hypersurfaces │ │20:00 - 20:45│Jingang Xiong (Online) │Harmonic maps with finite hyperbolic distances to the Extreme Kerr │ │Saturday, July 22 │ │Leaving │ Titles and abstracts: ● Karoly J. Boroczky (Renyi Institute of Mathematics, Budapest, Hungary) Title: Lp-Minkowski problem - Old and New results Abstract: Lutwak’s Lp Minkowski problem, as a Monge-Ampere equation on the n-dimensional sphere for real p, has been in the center of attention the last couple of decades. The talk surveys the state of the art (like the recent resolution of the case p < -n-1, or stability versions strengthening Brendle, Choi and Daskalopoulos’ celebrated result about the uniqueness of the unit ball as a solution to the suitable equation for p > -n-1), and points out some major open problems. ● Xavier Cabre (ICREA and Universitat Politecnica de Catalunya) Title: Stable solutions to semilinear elliptic equations are smooth up to dimension 9 Abstract: The regularity of stable solutions to semilinear elliptic PDEs has been studied since the 1970s. It was initiated by a work of Crandall and Rabinowitz, motivated by the Gelfand problem in combustion theory.
The theory experienced a revival in the mid-nineties after new progress made by Brezis and collaborators. I will present these developments, as well as a recent work, in collaboration with Figalli, Ros-Oton, and Serra, which finally establishes the regularity of stable solutions up to the optimal dimension 9. I will also describe a more recent paper of mine which provides full quantitative proofs of the regularity results. ● Zhijie Chen (Tsinghua University) Title: Asymptotic behaviors of low energy nodal solutions for Lane-Emden problems Abstract: In this talk, I will introduce our recent work about asymptotic behaviors of low energy nodal solutions for Lane-Emden equations when the exponent goes to infinity. Asymptotics of positive solutions have been studied well, but asymptotics of nodal solutions are more difficult to study and not much is known. Here we can obtain some results for low energy nodal solutions. This is based on joint work with my students Zetao Cheng and Hanqing Zhao. ● Alessio Figalli (ETH Zurich) Title: Quantitative stability in geometric and functional inequalities Abstract: Geometric and functional inequalities play a crucial role in several problems arising in analysis and geometry. Proving the validity of such inequalities, and understanding the structure of minimizers, is a classical and important question. In this talk, I will overview this beautiful topic and discuss some recent results. ● Nicola Fusco (Universita di Napoli "Federico II") Title: Local and global minimizers for a capillarity type problem Abstract: I will present a model for vapor-liquid-solid growth of nanowires where liquid drops are described as local or global volume-constrained minimizers of the capillarity energy outside a semi-infinite convex obstacle modeling the nanowire. I will first discuss global existence of minimizers and then, in the case of rotationally symmetric nanowires, I will explain how the presence of a sharp edge affects the shape of local minimizers and the validity of Young’s law. Finally, I will present some recent regularity results for local minimizers and the connections of this problem with an isoperimetric inequality outside convex sets. ● Jose A. Galvez (Universidad de Granada) Title: Linearity of homogeneous solutions to elliptic equations in dimension three Abstract: An old conjecture by Alexandrov, Koutrofiotis and Nirenberg states that every 1-homogeneous solution to a linear elliptic equation in Euclidean 3-space must be linear. A striking counterexample to this claim was found by Martinez-Maure in 2001. In it, the Hessian of the solution vanishes exactly at 4 disjoint geodesic semicircles of the unit sphere, and along them the equation is not uniformly elliptic. In this talk we prove the converse of this result: for any (non-linear) homogeneous solution u of a linear elliptic equation in Euclidean 3-space, there must exist four disjoint geodesic semicircles in the unit sphere along which the Hessian of u vanishes, and the uniform ellipticity of the equation is lost. The result is sharp, by Martinez-Maure’s example. Joint work with Pablo Mira. ● Yuxin Ge (University of Toulouse 3) Title: Compactness of asymptotically hyperbolic Einstein manifolds in dimension 4 and applications Abstract: Given a closed Riemannian manifold of dimension 3, (M^3, [h]), when will we fill in an asymptotically hyperbolic Einstein manifold of dimension 4, (X^4, g_+), such that r^2 g_+|_M = h on the boundary M = ∂X for some defining function r on X^4?
This problem is motivated by the AdS/CFT correspondence in quantum gravity proposed by Maldacena in 1998 and also comes from the study of the structure of asymptotically hyperbolic Einstein manifolds. In this talk, I discuss the compactness issue of asymptotically hyperbolic Einstein manifolds in dimension 4, that is, how compactness on the conformal infinity leads to compactness of the compactification of such manifolds under suitable conditions on the topology and on some conformal invariants. As an application, I discuss the uniqueness problem and a non-existence result. It is based on the works with Alice Chang. ● Genggeng Huang (Fudan University) Title: Monge-Ampere equation with Guillemin boundary condition Abstract: We will talk about the following boundary value problem for the Monge-Ampere equation: det D^2 u = h(x) / ∏_{i=1}^{N} l_i(x) in P ⊂ R^n, (1) and u(x) − ∑_{i=1}^{N} l_i(x) log l_i(x) ∈ C^∞(\bar{P}), (2) where 0 < h(x) ∈ C^∞(\bar{P}), P = ∩_{i=1}^{N} {l_i(x) > 0} is a simple convex polytope in R^n, and the l_i(x) are affine functions, i = 1, ..., N. Under suitable conditions, we will show (1) and (2) are solvable. This is a joint work with Weiming Shen. ● Qirui Li (Zhejiang University) Title: On the Lp Minkowski problem with super-critical exponents. Abstract: The Lp-Minkowski problem deals with the existence of a closed convex hypersurface with prescribed p-area measure. The problem has been solved in the sub-critical case p > -n-1, but remains widely open in the super-critical case p < -n-1. In this talk, we introduce new ideas to solve the problem for all super-critical exponents. A crucial ingredient in the proof is a topological method based on the calculation of the homology of a topological space of ellipsoids. The talk is based on recent joint work with Qiang Guang and Xu-Jia Wang. ● Emanuel Milman (Technion - Israel Institute of Technology) Title: Multi-Bubble Isoperimetric Problems - Old and New Abstract: The classical isoperimetric inequality in Euclidean space R^n states that among all sets of prescribed volume, the Euclidean ball minimizes surface area. One may similarly consider isoperimetric problems for more general metric-measure spaces, such as on the n-sphere S^n and on n-dimensional Gaussian space G^n (i.e. R^n endowed with the standard Gaussian measure). Furthermore, one may consider the “multi-bubble" isoperimetric problem, in which one prescribes the volume of p ≥ 2 bubbles (possibly disconnected) and minimizes their total surface area – as any mutual interface will only be counted once, the bubbles are now incentivized to clump together. The classical case, referred to as the single-bubble isoperimetric problem, corresponds to p = 1; the case p = 2 is called the double-bubble problem, and so on. In 2000, Hutchings, Morgan, Ritoré and Ros resolved the double-bubble conjecture in Euclidean space R^3 (and this was subsequently resolved in R^n as well) – the boundary of a minimizing double-bubble is given by three spherical caps meeting at 120-degree angles. A more general conjecture of J. Sullivan from the 1990s asserts that when p ≤ n + 1, the optimal multi-bubble in R^n (as well as in S^n) is obtained by taking the Voronoi cells of p + 1 equidistant points in S^n and applying appropriate stereographic projections to R^n (and backwards).
In 2018, together with Joe Neeman, we resolved the analogous multi-bubble conjecture for p ≤ n bubbles in Gaussian space G^n – the unique partition which minimizes the total Gaussian surface area is given by the Voronoi cells of (appropriately translated) p + 1 equidistant points. In the present talk, we describe our recent progress with Neeman on the multi-bubble problem on R^n and S^n. In particular, we show that minimizing bubbles in R^n and S^n are always spherical when p ≤ n, and we resolve the latter conjectures when in addition p ≤ 5 (e.g. the triple-bubble conjectures when n ≥ 3 and the quadruple-bubble conjectures when n ≥ 4). ● Xavier Ros-Oton (Universitat de Barcelona) Title: The singular set in the Stefan problem Abstract: The Stefan problem, dating back to the XIXth century, is probably the most classical and important free boundary problem. The regularity of free boundaries in the Stefan problem was developed in the groundbreaking paper (Caffarelli, Acta Math. 1977). The main result therein establishes that the free boundary is C^∞ in space and time, outside a certain set of singular points. The fine understanding of singularities is of central importance in a number of areas related to nonlinear PDEs and Geometric Analysis. In particular, a major question in such a context is to establish estimates for the size of the singular set. The goal of this talk is to present some recent results in this direction for the Stefan problem. This is a joint work with A. Figalli and J. Serra. ● Yannick Sire (Johns Hopkins University) Title: Geometric variational problems: regularity vs singularity formation Abstract: I will describe in a very informal way some techniques to deal with existence (and, more qualitatively, regularity vs singularity formation) in different geometric problems and their heat flows, motivated by (variations of) the harmonic map problem, the construction of Yang-Mills connections, or nematic liquid crystals. I will emphasize in particular recent results on the construction of very fine asymptotics of blow-up solutions via a new gluing method designed for parabolic flows. I’ll describe several open problems and many possible generalizations, since the techniques are rather flexible. ● Guofang Wang (University of Freiburg) Title: Optimal geometric inequalities for capillary hypersurfaces Abstract: In the talk I will first review our previous work on hypersurfaces with free boundary supported on the unit sphere. Then I will introduce suitable geometric quantities, quermassintegrals, for capillary hypersurfaces supported on a hyperplane and consider the corresponding Alexandrov-Fenchel inequalities by introducing a suitable curvature flow. I will also talk about a corresponding Heintze-Karcher-Ros inequality and a Minkowski problem. The talk is based on joint work with Chao Xia and other collaborators. ● Kelei Wang (Wuhan University) Title: Nondegeneracy for stable solutions to one phase free boundary problem Abstract: Since the seminal work of Alt-Caffarelli in 1981, the one-phase free boundary problem has been studied by many people. To study the regularity and singularity of free boundaries, blow-up analysis is a standard method. It turns out that for this free boundary problem, the nondegeneracy condition is crucial for the application of this method. Although the nondegeneracy condition has been known for energy minimizers for a long time, it's not true for general solutions. In this talk, I will discuss a proof of the nondegeneracy for stable solutions.
This is based on a joint work with N. ● Xu-Jia Wang (Australian National University) Title: Free boundary problems in the Monge-Ampere equation Abstract: In this talk we consider the regularity of the free boundary in the Monge-Ampere obstacle problem, and the regularity of the free boundary in the Gauss curvature flow of a convex hypersurface with a flat side. By the Legendre transform, these problems are equivalent to the regularity of solutions to Monge-Ampere type equations with a singular point in polar coordinates. By analysing the geometric profile carefully near the singular point, we prove the C^{2,α} regularity for the free boundary in all dimensions. ● Zhizhang Wang (Fudan University) Title: Hessian equations on exterior domains in hyperbolic space Abstract: Suppose Ω is some domain in the hyperbolic space H^n. In this talk, we will consider the homogeneous k-Hessian equations on H^n \ Ω, with constant -1 on the boundary of Ω and asymptotic to zero at infinity. We will give the existence result for this equation. This is a joint work with Ling Xiao. ● Yong Wei (University of Science and Technology of China) Title: Curvature measures and volume preserving curvature flows Abstract: Volume preserving mean curvature flow was introduced by Huisken in 1987, and it was proved that the flow deforms a convex initial hypersurface smoothly to a round sphere. This was generalized later by McCoy in 2005 and 2017 to volume preserving flows driven by a large class of 1-homogeneous symmetric curvature functions. In this talk, we discuss the flows with higher homogeneity and describe the convergence result for volume preserving curvature flows in Euclidean space by arbitrary positive powers of the k-th mean curvature for all k = 1, · · · , n. As key ingredients, the monotonicity of a generalized isoperimetric ratio will be used to control the geometry of the evolving hypersurfaces, and curvature measure theory will be used to prove the Hausdorff convergence to a sphere. We also discuss some generalizations, including the flows in the anisotropic setting and the flows in the hyperbolic setting. The talk is based on joint work with Ben Andrews (ANU), Yitao Lei (ANU), Changwei Xiong (Sichuan Univ.), Bo Yang (CAS) and Tailong Zhou (USTC). ● Jingang Xiong (Beijing Normal University) Title: Harmonic maps with finite hyperbolic distances to the Extreme Kerr Abstract: Motivated by stationary vacuum solutions of the Einstein field equations, we study singular harmonic maps from domains of 3-dimensional Euclidean space to the hyperbolic plane having bounded hyperbolic distance to Kerr harmonic maps. In the degenerate case, we prove that every such harmonic map admits a unique tangent harmonic map at the extreme black hole horizon. The possible tangent maps are classified and rates of convergence to the tangent map are established. Similarly, expansions in the asymptotically flat end are presented. These results, together with those of Li-Tian 1992 and Weinstein 1989, provide a complete regularity theory for such singular harmonic maps. This is joint with Q. Han, M. Khuri and G. Weinstein.
● Jie Zhou (Capital Normal University) Title: Regularity for varifolds with critical Allard conditions Abstract: The classical Allard regularity theorem says that, for a rectifiable n-varifold in the unit ball of Euclidean space passing through the origin with density not less than one, if its mass in the unit ball is close to the volume of a flat n-dimensional unit disk and the L^p norm of the generalized mean curvature is small enough for some supercritical index p > n, then the support of the varifold is a C^{1,α} graph near the origin, with α = 1 - n/p. In this talk, we will present some regularity results in the critical case. In dimension two, we show the support of the varifold is (locally) bi-Lipschitz homeomorphic to the unit disk. In dimension n > 2, we discuss the W^{1,p} regularity for p < ∞. The presentation is based on joint works with Dr. Yuchen Bi.
{"url":"https://english.amss.cas.cn/ua/conferences/202307/t20230704_333151.html","timestamp":"2024-11-08T15:45:07Z","content_type":"text/html","content_length":"108049","record_id":"<urn:uuid:aa4ec491-b5fd-4f25-abf9-ba06b2e64f44>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00401.warc.gz"}
Electric-field fluctuations in random dielectric composites When a composite is subjected to a constant applied electric, thermal, or stress field, the associated local fields exhibit strong spatial fluctuations. In this paper, we evaluate the distribution of the local electric field (i.e., all moments of the field) for continuum (off-lattice) models of random dielectric composites. The local electric field in the composite is calculated by solving the governing partial differential equations using efficient and accurate integral equation techniques. We consider three different two-dimensional dispersions in which the inclusions are either (i) circular disks, (ii) squares, or (iii) needles. Our results show that in general the probability density function associated with the electric field for disks and squares exhibits a double-peak character. Therefore, the variance or second moment of the field is inadequate in characterizing the field fluctuations in the composite. Moreover, our results suggest that the variances for each phase are generally not equal to each other. In the case of a dilute concentration of needles, the probability density function is a singly peaked one, but the higher-order moments are appreciably larger for needles than for either disks or squares. All Science Journal Classification (ASJC) codes • Electronic, Optical and Magnetic Materials • Condensed Matter Physics
{"url":"https://collaborate.princeton.edu/en/publications/electric-field-fluctuations-in-random-dielectric-composites","timestamp":"2024-11-03T03:26:11Z","content_type":"text/html","content_length":"51474","record_id":"<urn:uuid:4d82b8a9-71c3-48ea-8bab-1429b19e979f>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00183.warc.gz"}
Extragalactic distance scale The extragalactic distance scale is a series of techniques used today by astronomers to determine the distance of cosmological bodies (beyond our own galaxy) not easily obtained with traditional methods. Some procedures utilize properties of these objects, such as stars, globular clusters, nebulae, and galaxies as a whole. Other methods are based more on the statistics and probabilities of things such as entire galaxy clusters. It is important to note that the following methods are for, as the name implies, extragalactic objects, so the more traditional and well-known method of trigonometric parallax will not be covered, as it is only accurate to around a kiloparsec or so. Wilson-Bappu Effect Discovered in 1956 by Olin Wilson and M.K. Vainu Bappu, the Wilson-Bappu effect utilizes the technique known as spectroscopic parallax. Certain stars have features in their emission/absorption spectra which make it relatively easy to calculate their absolute magnitudes. Certain spectral lines are directly related to an object's magnitude, such as the K absorption line of calcium. From there one can use the distance modulus, $M - m = -2.5 \log_{10}(F_1/F_2)$, to calculate the star's distance. Though in theory this method has the ability to provide reliable distance calculations to stars roughly 7 megaparsecs (Mpc) away, it is generally only used for stars hundreds of kiloparsecs (kpc) away. It is also important to note that this method is only valid for stars over 15 magnitudes. Cepheid Distance Scale Beyond the reach of the Wilson-Bappu effect, the next method relies on the period-luminosity relation of Cepheid variable stars, first discovered by Henrietta Leavitt. The following Cepheid relations can be used to calculate the distance to Galactic and extragalactic Cepheids: $5\log_{10} d = V + 3.43 \log_{10} P - 2.58(V-I) + 7.50$ and $5\log_{10} d = V + 3.30 \log_{10} P - 1.48(V-J) + 7.63$ (Majaess D. J., Turner D. G., Lane D. J. (2008), "Assessing potential cluster Cepheids from a new distance and reddening parameterization and 2MASS photometry", MNRAS, http://arxiv.org/abs/0808.2937). The use of Cepheid variable stars is not without its problems, however. The largest source of error with Cepheids as standard candles is the possibility that the period-luminosity relation is affected by metallicity. For Galactic use only, the following relation is also valid in addition to those highlighted above: $5\log_{10} d = V + 4.42 \log_{10} P - 3.43(B-V) + 7.15$. Cepheid variable stars were the key instrument in Edwin Hubble's 1923 conclusion that M31 (Andromeda) was an external galaxy, as opposed to a smaller nebula within the Milky Way. He was able to calculate the distance of M31 as 285 kpc, today's value being 770 kpc. As detected thus far, NGC 3370, a spiral galaxy in the constellation Leo, contains the farthest Cepheids yet found, at a distance of 29 Mpc. Cepheid variable stars are in no way perfect distance markers: at nearby galaxies they have an error of about 7%, and up to a 15% error for the most distant. Supernovae as distance indicators There are several different methods by which supernovae can be used to measure extragalactic distances; here we cover the most used. Measuring a SN's photosphere We can assume that a SN expands spherically symmetrically. If the SN is close enough that we can measure the angular extent, θ(t), of its photosphere, we can use the equation $\omega = \Delta\theta / \Delta t$, where ω is the angular velocity and θ is the angular extent.
In order to get an accurate measurement, it is necessary to make two observations separated by time Δt. Subsequently, we can use $d = V_{ej}/\omega$, where d is the distance to the SN and $V_{ej}$ is the radial velocity of the SN's ejecta (it can be assumed that $V_{ej}$ equals $V_{\theta}$ if spherically symmetric). This method works only if the SN is close enough for the photosphere to be measured accurately. Similarly, the expanding shell of gas is in fact not perfectly spherical nor a perfect blackbody. Also, interstellar extinction can hinder accurate measurements of the photosphere. This problem is further exacerbated by core-collapse supernovae. All of these factors contribute to a distance error of up to 25%. Type Ia light curves Type Ia SN are some of the best ways to determine extragalactic distances. Ia's occur when a binary white dwarf star begins to accrete matter from its companion red dwarf star. As the white dwarf gains matter, it eventually reaches its Chandrasekhar limit of $1.4\,M_{\odot}$. Once reached, the star becomes unstable and undergoes a runaway nuclear fusion reaction. Because all Type Ia SN explode at about the same mass, their absolute magnitudes are all the same. This makes them great standard candles. All Type Ia SN have a standard blue and visual magnitude of $M_B \approx M_V \approx -19.3 \pm 0.03$. Therefore, when observing a Type Ia SN, if it is possible to determine what its peak magnitude was, then its distance can be calculated. It is not intrinsically necessary to capture the SN directly at its peak magnitude; using the multicolor light curve shapes method (MLCS), the shape of the light curve (taken at any reasonable time after the initial explosion) is compared to a family of parameterized curves that will determine the absolute magnitude at maximum brightness. This method also takes into account interstellar extinction/dimming from dust and gas. Similarly, the stretch method fits the particular SN's magnitude light curve to a template light curve. This template, as opposed to being several light curves at different wavelengths (MLCS), is just a single light curve that has been stretched (or compressed) in time. By using this "stretch factor", the peak magnitude can be determined. Using Type Ia SN is one of the most accurate methods, particularly since SN explosions can be visible at great distances (their luminosities rival that of the galaxy in which they are situated), much farther than Cepheid variables (500 times farther). Much time has been devoted to the refining of this method. The current uncertainty approaches a mere 5%, corresponding to an uncertainty of just 0.1 magnitudes. Novae in distance determinations Novae can be used in much the same way as supernovae to derive extragalactic distances. There is a direct relation between a nova's maximum magnitude and the time for its visible light to decline by two magnitudes. This relation is shown to be $M_V^{max} = -9.96 - 2.31 \log_{10} \dot{x}$, where $\dot{x}$ is the time derivative of the nova's magnitude, describing the average rate of decline over the first 2 magnitudes. After novae fade, they are about as bright as the most luminous Cepheid variable stars; therefore both these techniques have about the same maximum distance: ~20 Mpc.
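Both standard-candle routes just described (Type Ia supernovae and novae) end the same way: once a peak absolute magnitude M is known, the observed peak apparent magnitude m gives the distance through the standard distance modulus, d = 10^((m - M + 5)/5) parsecs. A minimal Python sketch (the example numbers are illustrative, not real measurements, and extinction is ignored):

def candle_distance_mpc(m_peak, M_peak):
    # distance modulus: d = 10**((m - M + 5) / 5) parsecs
    d_pc = 10 ** ((m_peak - M_peak + 5) / 5)
    return d_pc / 1e6

# hypothetical Type Ia SN observed at peak m = 14.0, with M_B ≈ -19.3 from above
print(candle_distance_mpc(14.0, -19.3))   # ≈ 45.7 Mpc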
The error in the nova method produces an uncertainty in magnitude of about ±0.4. Globular cluster luminosity function Based on the method of comparing the luminosities of globular clusters (located in galactic halos) from distant galaxies to those of the Virgo cluster, the globular cluster luminosity function carries an uncertainty of distance of about 20% (or 0.4 magnitudes). The US astronomer William Alvin Baum first attempted to use globular clusters to measure distant elliptical galaxies. He compared the brightest globular clusters in the Virgo A galaxy with those in Andromeda, assuming the luminosities of the clusters were the same in both. Knowing the distance to Andromeda, he assumed a direct correlation and estimated Virgo A's distance. Baum used just a single globular cluster, but individual formations are often poor standard candles. The Canadian astronomer Racine assumed that the use of the globular cluster luminosity function (GCLF) would lead to a better approximation. The number of globular clusters as a function of magnitude is given by $\Phi(m) = A e^{-(m - m_0)^2 / 2\sigma^2}$, where $m_0$ is the turnover magnitude, $M_0$ the turnover magnitude of the Virgo cluster, and σ the dispersion, ~1.4 mag. It is important to remember that it is assumed that globular clusters all have roughly the same luminosities within the universe. There is no universal globular cluster luminosity function that applies to all galaxies. Planetary nebula luminosity function Like the GCLF method, a similar numerical analysis can be used for planetary nebulae (note the use of more than one!) within far-off galaxies. The planetary nebula luminosity function (PNLF) was first proposed in the late 1970s by Holland Cole and David Jenner. They suggested that all planetary nebulae might have similar maximum intrinsic brightness, now calculated to be M = -4.53. This would therefore make them potential standard candles for determining extragalactic distances. The astronomer George Howard Jacoby and his colleagues later proposed that the PNLF function equals $N(M) \propto e^{0.307M}\left(1 - e^{3(M^{*} - M)}\right)$, where N(M) is the number of planetary nebulae having absolute magnitude M, and M* is the absolute magnitude of the brightest nebula. Surface brightness fluctuation method The following method deals with the overall inherent properties of galaxies. These methods, though with varying error percentages, have the ability to make distance estimates beyond 100 Mpc, though they are usually applied more locally. The surface brightness fluctuation (SBF) method takes advantage of the use of CCD cameras on telescopes. Because of spatial fluctuations in a galaxy's surface brightness, some pixels on these cameras will pick up more stars than others. However, as distance increases the picture will become increasingly smoother. Analysis of this describes the magnitude of the pixel-to-pixel variation, which is directly related to the galaxy's distance. D-σ Relation The D-σ relation, used in elliptical galaxies, relates the angular diameter (D) of the galaxy to its velocity dispersion. It is important to describe exactly what D represents in order to have a more fitting understanding of this method. It is, more precisely, the galaxy's angular diameter out to the surface brightness level of 20.75 B-mag arcsec^{-2}. This surface brightness is independent of the galaxy's actual distance from us. Instead, D is inversely proportional to the galaxy's distance, represented as d.
So instead of employing standard candles, this relation provides a standard ruler. The relation between D and σ is $\log_{10}(D) = 1.333 \log_{10}(\sigma) + C$, where C is a constant which depends on the distance to the galaxy clusters. This method has the possibility of becoming one of the strongest tools for calculating galactic distances, perhaps exceeding the range of even the Tully-Fisher method. As of today, however, elliptical galaxies aren't bright enough to provide a calibration for this method through the use of techniques such as Cepheids. So instead, calibration is done using more crude methods. All the methods mentioned above are used today by astronomers for measuring objects beyond our own galaxy. Like all methods, different techniques and calibrations are used by other astronomers. The following table shows at-a-glance information for most of the methods mentioned above. It lists each error (in magnitudes), distance to the Virgo Cluster as calculated by each technique, and overall range of how far out each method can be used effectively. See also * Cosmic distance ladder * Standard candle References 1) "An Introduction to Modern Astrophysics", Carroll and Ostlie, 2007. 2) "Measuring the Universe: The Cosmological Distance Ladder", Stephen Webb, 2001. 3) "The Cosmos", Pasachoff and Filippenko, 2007. 4) "The Globular Cluster Luminosity Function as a Distance Indicator: Dynamical Effects", Ostriker and Gnedin, The Astrophysical Journal, May 5, 1997. External links * NASA Cosmic Distance Scale: http://heasarc.gsfc.nasa.gov/docs/cosmic/ * PNLF information database: http://www.noao.edu/jacoby/pnlf/pnlf.html * The Astrophysical Journal: http://www.journals.uchicago.edu/toc/apj/current
{"url":"https://en-academic.com/dic.nsf/enwiki/9800817","timestamp":"2024-11-12T12:19:15Z","content_type":"text/html","content_length":"50767","record_id":"<urn:uuid:52e2ee65-74d5-4b38-a531-5e89f1eebc91>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00076.warc.gz"}
Confidence and Prediction Bounds About Confidence and Prediction Bounds Curve Fitting Toolbox™ software lets you calculate confidence bounds for the fitted coefficients, and prediction bounds for new observations or for the fitted function. Additionally, for prediction bounds, you can calculate simultaneous bounds, which take into account all predictor values, or you can calculate nonsimultaneous bounds, which take into account only individual predictor values. The coefficient confidence bounds are presented numerically, while the prediction bounds are displayed graphically and are also available numerically. The available confidence and prediction bounds are summarized below. Types of Confidence and Prediction Bounds: • Fitted coefficients – confidence bounds for the fitted coefficients • New observation – prediction bounds for a new observation (response value) • New function – prediction bounds for a new function value. Prediction bounds are also often described as confidence bounds because you are calculating a confidence interval for a predicted response. Confidence and prediction bounds define the lower and upper values of the associated interval, and define the width of the interval. The width of the interval indicates how uncertain you are about the fitted coefficients, the predicted observation, or the predicted fit. For example, a very wide interval for the fitted coefficients can indicate that you should use more data when fitting before you can say anything very definite about the coefficients. The bounds are defined with a level of certainty that you specify. The level of certainty is often 95%, but it can be any value such as 90%, 99%, 99.9%, and so on. For example, you might want to take a 5% chance of being incorrect about predicting a new observation. Therefore, you would calculate a 95% prediction interval. This interval indicates that you have a 95% chance that the new observation is actually contained within the lower and upper prediction bounds. Confidence Bounds on Coefficients The confidence bounds for fitted coefficients are given by C = b ± t√S, where b are the coefficients produced by the fit, t depends on the confidence level and is computed using the inverse of Student's t cumulative distribution function, and S is a vector of the diagonal elements from the estimated covariance matrix of the coefficient estimates, (X^T X)^{-1} s^2. In a linear fit, X is the design matrix, while for a nonlinear fit X is the Jacobian of the fitted values with respect to the coefficients. X^T is the transpose of X, and s^2 is the mean squared error. You can view the confidence bounds in the Curve Fitter app. The app displays the bounds in the Coefficients and 95% Confidence Bounds table in the Results pane. The fitted value for the coefficient p1 is -0.6675, the lower bound is -0.7622, and the upper bound is -0.5728. You can calculate confidence intervals at the command line with the confint function. Prediction Bounds on Fits As mentioned previously, you can calculate prediction bounds for the fitted curve. The prediction is based on an existing fit to the data. Additionally, the bounds can be simultaneous and measure the confidence for all predictor values, or they can be nonsimultaneous and measure the confidence only for a single predetermined predictor value. If you are predicting a new observation, nonsimultaneous bounds measure the confidence that the new observation lies within the interval given a single predictor value.
Simultaneous bounds measure the confidence that a new observation lies within the interval regardless of the predictor value. The bound formulas, by type, are:
Simultaneous, observation: $y \pm f\sqrt{s^2 + xSx^T}$
Simultaneous, functional: $y \pm f\sqrt{xSx^T}$
Nonsimultaneous, observation: $y \pm t\sqrt{s^2 + xSx^T}$
Nonsimultaneous, functional: $y \pm t\sqrt{xSx^T}$
• s^2 is the mean squared error
• t depends on the confidence level, and is computed using the inverse of Student's t cumulative distribution function
• f depends on the confidence level, and is computed using the inverse of the F cumulative distribution function
• S is the covariance matrix of the coefficient estimates, (X^T X)^{-1} s^2
• x is a row vector of the design matrix or Jacobian evaluated at a specified predictor value.
You can graphically display prediction bounds using the Curve Fitter app. In the Curve Fitter app, you can display nonsimultaneous prediction bounds for new observations. On the Curve Fitter tab, in the Visualization section, select a level of certainty from the Prediction Bounds list. You can change this level to any value by selecting Custom from the list. You can display numerical prediction bounds of any type at the command line with the predint function. To understand the quantities associated with each type of prediction interval, recall that the data, fit, and residuals are related through the formula data = fit + residuals, where the fit and residuals terms are estimates of terms in the formula data = model + random error. Suppose you plan to take a new observation at the predictor value x_{n+1}. Call the new observation y_{n+1}(x_{n+1}) and the associated error ε_{n+1}. Then y_{n+1}(x_{n+1}) = f(x_{n+1}) + ε_{n+1}, where f(x_{n+1}) is the true but unknown function you want to estimate at x_{n+1}. The likely values for the new observation or for the estimated function are provided by the nonsimultaneous prediction bounds. If instead you want the likely value of the new observation to be associated with any predictor value, the previous equation becomes y_{n+1}(x) = f(x) + ε_{n+1}, for all x. The likely values for this new observation or for the estimated function are provided by the simultaneous prediction bounds. The types of prediction bounds are summarized below. Types of Prediction Bounds:
Observation, nonsimultaneous: y_{n+1}(x_{n+1})
Observation, simultaneous: y_{n+1}(x), for all x
Functional, nonsimultaneous: f(x_{n+1})
Functional, simultaneous: f(x), for all x
The nonsimultaneous and simultaneous prediction bounds for a new observation and the fitted function are shown below. Each graph contains three curves: the fit, the lower confidence bounds, and the upper confidence bounds. The fit is a single-term exponential to generated data and the bounds reflect a 95% confidence level. Note that the intervals associated with a new observation are wider than the fitted function intervals because of the additional uncertainty in predicting a new response value (the curve plus random errors). Calculate Prediction Intervals from the Command Line Calculate and plot observation and functional prediction intervals for a fit to noisy data. Generate noisy data with an exponential trend.
x = (0:0.2:5)';
y = 2*exp(-0.2*x) + 0.5*randn(size(x));
Fit a curve to the data using a single-term exponential.
fitresult = fit(x,y,'exp1');
Compute 95% observation and functional prediction intervals, both simultaneous and nonsimultaneous. Nonsimultaneous bounds are for individual elements of x; simultaneous bounds are for all elements of x.
p11 = predint(fitresult,x,0.95,'observation','off');
p12 = predint(fitresult,x,0.95,'observation','on');
p21 = predint(fitresult,x,0.95,'functional','off');
p22 = predint(fitresult,x,0.95,'functional','on');
Plot the data, fit, and prediction intervals. Observation bounds are wider than functional bounds because they measure the uncertainty of predicting the fitted curve plus the random variation in the new observation.
plot(fitresult,x,y), hold on, plot(x,p11,'m--'), xlim([0 5]), ylim([-1 5])
title('Nonsimultaneous Observation Bounds','FontSize',9)
legend off
plot(fitresult,x,y), hold on, plot(x,p12,'m--'), xlim([0 5]), ylim([-1 5])
title('Simultaneous Observation Bounds','FontSize',9)
legend off
plot(fitresult,x,y), hold on, plot(x,p21,'m--'), xlim([0 5]), ylim([-1 5])
title('Nonsimultaneous Functional Bounds','FontSize',9)
legend off
plot(fitresult,x,y), hold on, plot(x,p22,'m--'), xlim([0 5]), ylim([-1 5])
title('Simultaneous Functional Bounds','FontSize',9)
legend({'Data','Fitted curve', 'Prediction intervals'},...
Calculate Prediction Bounds Using Curve Fitter App Load the census data set (load census). The variables cdate and pop contain data for the date and population when the census was taken. Open the Curve Fitter app. In the app, select the data variables for the fit. On the Curve Fitter tab, in the Data section, click Select Data. In the Select Fitting Data dialog box, select cdate as the X data value and pop as the Y data value. The app plots the data points as you select the variables. The plot shows the census data and the linear fit for the data. Plot the 95% prediction bounds for the fit. In the Visualization section of the Curve Fitter tab, select 95% for Prediction Bounds. The plot now shows the 95% prediction intervals in addition to the census data and linear fit. To plot the 60% prediction bounds for the fit, you must specify a custom confidence level. In the Visualization section of the Curve Fitter tab, select Custom for Prediction Bounds. In the Set Prediction Bounds dialog box, type 60 in the Confidence level (%) box, and click OK. The plot now shows the 60% prediction intervals in addition to the census data and linear fit. Together, the two plots show that the 60% prediction intervals lie closer to the linear fit than the 95% prediction intervals.
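For readers working outside MATLAB, the nonsimultaneous bounds from the table above can be reproduced with generic numerics. A minimal sketch in Python for a straight-line fit (the data here are synthetic placeholders; for a nonlinear fit, X would be the Jacobian instead of the design matrix):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 5, 26)
y = 2.0 - 0.4 * x + 0.3 * rng.standard_normal(x.size)   # noisy linear data

X = np.column_stack([np.ones_like(x), x])   # design matrix for y = b0 + b1*x
b, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = x.size - X.shape[1]
s2 = np.sum((y - X @ b) ** 2) / dof         # mean squared error
S = np.linalg.inv(X.T @ X) * s2             # covariance of coefficient estimates

t = stats.t.ppf(0.975, dof)                 # 95% two-sided t value
xSx = np.sum((X @ S) * X, axis=1)           # x S x^T for each predictor row x
obs_half = t * np.sqrt(s2 + xSx)            # nonsimultaneous observation half-width
fun_half = t * np.sqrt(xSx)                 # nonsimultaneous functional half-width
yhat = X @ b
print(np.column_stack([yhat - obs_half, yhat + obs_half])[:3])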
{"url":"https://au.mathworks.com/help/curvefit/confidence-and-prediction-bounds.html","timestamp":"2024-11-08T18:40:33Z","content_type":"text/html","content_length":"92909","record_id":"<urn:uuid:6d3fbf7e-ee76-4dc5-959a-a42b3650e516>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00680.warc.gz"}
Python - Divide Two Complex Numbers - Data Science Parichay In this tutorial, we will look at how to divide two complex numbers in Python with the help of some examples. Mathematically, we can simplify the computation of the division of complex numbers: the idea is to multiply both the numerator and the denominator by the complex conjugate of the denominator. This way the denominator becomes real, and grouping the real and imaginary components becomes easier. How to divide complex numbers in Python? Performing complex number division is very simple and direct in Python. You can use the division operator / to divide two complex numbers. The following is the syntax –
# divide complex numbers z1 and z2
z1 / z2
It gives you the complex number resulting from dividing the complex number z1 by the complex number z2. Let's now look at some examples of using the above syntax to divide some complex numbers together. Example 1 – Divide two complex numbers Let's divide two complex numbers (both having non-zero real and imaginary parts) using the / operator.
# two complex numbers
z1 = 2+3j
z2 = 3+4j
# divide z1 by z2
print(z1 / z2)
We get the resulting complex number, (0.72+0.04j). Note that the division operation in mathematics is not commutative, that is, z1/z2 != z2/z1.
# divide z2 by z1
print(z2 / z1)
We get a different result, approximately (1.3846-0.0769j), from what we got above. Example 2 – Divide a complex number by a real number Let's now divide a complex number by a real number (a number without any imaginary component).
# a complex number and a real number
z1 = 2+8j
z2 = 2
# divide z1 by z2
print(z1 / z2)
We get (1+4j). Dividing a complex number by a real number is straightforward to understand. The real number divides both the real and the imaginary components of the complex number to give our resulting complex number. Example 3 – Divide a complex number by an imaginary number Let's now divide a complex number by an imaginary number.
# a complex number and an imaginary number
z1 = 2+8j
z2 = 4j
# divide z1 by z2
print(z1 / z2)
We get (2-0.5j). In this tutorial, we looked at how we can use the division operator / in Python to divide two complex numbers. Keep in mind that the division operation is not commutative, meaning z1 / z2 != z2 / z1. You may also be interested in –
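To connect the / operator back to the conjugate method described at the start, here is a manual implementation (a sketch; Python's built-in operator is what you'd use in practice):

def divide_complex(z1, z2):
    # multiply numerator and denominator by the conjugate of z2;
    # the denominator z2 * conj(z2) = |z2|**2 is a real number
    denom = (z2 * z2.conjugate()).real
    num = z1 * z2.conjugate()
    return complex(num.real / denom, num.imag / denom)

print(divide_complex(2+3j, 3+4j))   # (0.72+0.04j), same as (2+3j) / (3+4j)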
{"url":"https://datascienceparichay.com/article/python-divide-complex-numbers/","timestamp":"2024-11-07T18:41:24Z","content_type":"text/html","content_length":"261284","record_id":"<urn:uuid:f4c69773-2f2d-4909-8086-fac3cbbde9f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00287.warc.gz"}
Year 11 Maths Methods Exam Questions Archives - MathsMethods.com.au Hey guys! It's Alex, I'm the creator of MathsMethods.com.au, with degrees in Mathematics/Astrophysics. I've educated students for my entire career and spent several years rewriting the Maths Methods textbook to create my popular video tutorials, which focus on a solid understanding of the fundamentals, as this is the key to making Maths Methods easier for all students.
{"url":"https://mathsmethods.com.au/courses/mathsmethodsexamquestions-year-11/","timestamp":"2024-11-09T12:29:19Z","content_type":"text/html","content_length":"812870","record_id":"<urn:uuid:b99fbc82-fb1a-40e9-9eb1-b5845d35bfff>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00063.warc.gz"}
Math Jokes: Start Math Class With a Chuckle! Most of the jokes below can be easily tied to a new math concept to help students reinforce rules and meanings. And... most importantly they are funny! How will we ever use negative numbers in the real world? You haven't seen my bank account. Hey, what's your sign? What kind of roots does a "geom-e-tree" have? Square roots Why are the parentheses wearing blue ribbons? Because they always come first Why was the math teacher upset with Cupid? He kept changing "like terms" to "love terms." Why was the math teacher upset with one of her students? He kept asking, "What's the point?" Parent: Why do you have that sheet of paper in a bowl of water? Student: It's my homework. I am trying to dissolve an equation. Teacher: Why don't you have your homework today? Student: I divided by zero and the paper vanished into thin air. How do equations get in shape? They do multi-step aerobics. Why did the variable add its opposite? To get to the other side. If you give 15 cents to one friend and 10 cents to another friend, what time is it? A quarter to two. What did the circle see when sailing on the ocean? Pi rates. Why did the variable break up with the constant? The constant was incapable of change. Son: Dad, what does it mean when someone tells me to give 110%? Dad: It means they didn't take Algebra. Banker: Do you have any interest in taking out a loan? Customer: If there's interest, I'm not interested. Why did the shopper think the store was selling everything wholesale? Because the store had two "half off" signs. Why did the Moore family name their son Lester? So he could be called "Moore" or "Less". Why did the parents think their little variable was sick? The nurse said he had to be isolated. What did the math teacher do to prepare for class? She made a "less-than" plan. What did the doctor say to the multi-step inequality? I can solve your problem with a few operations. How does a math teacher get a compound fracture? She breaks her (h)AND What does an absolute-value expression work on when it goes to the gym? Its "abs"! What did Miss Manners say to the inequality symbol? It's not polite to point. What do a Math teacher and an English teacher have in common? They both can make a "pair-a-graph". Why did the y-variable leave the city? He was more at home on the range. Why did the x-variable move home? She was more comfortable in her own domain. Psychology Teacher: Can anyone use the word "dysfunction" in a sentence? Math Student: I can! "Dysfunction" is really hard to graph. What should you title a graph showing the relative diameters and weights of a batch of pancakes? The Batter Plot!
{"url":"https://mathforthemiddle.com/mathjokes.php","timestamp":"2024-11-06T23:27:45Z","content_type":"text/html","content_length":"40404","record_id":"<urn:uuid:acd760e8-9b7e-4d70-b4e9-39af197cf9fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00059.warc.gz"}
DUALITY OF TIME - 3.2.7 The Twin Paradox

In the first paper on relativity in 1905, Einstein concluded that a moving clock ticks slower. If two clocks were synchronized, and then one of them was moved away and brought back at relativistic speed, it would be found to be lagging behind the stationary clock. In 1911, he elaborated further on this result by saying:

"If we placed a living organism in a box ... one could arrange that the organism, after any arbitrary lengthy flight, could be returned to its original spot in a scarcely altered condition, while corresponding organisms which had remained in their original positions had already long since given way to new generations. For the moving organism, the lengthy time of the journey was a mere instant, provided the motion took place with approximately the speed of light." Resnick (1968)

Other scientists quickly noticed that if a man made such a relativistic journey while his twin stayed at home, then when the traveler returned he would find his twin brother much aged compared to himself. For example, if the traveler made a trip at a large Lorentz factor, he could return having aged only a small fraction of the time that had passed at home. The paradox is that, in relativity, either twin could regard the other as the traveler, in which case each should find the other younger.

This contention is based on the wrong assumption that the situations of the twins are symmetrical and interchangeable. Other versions get around this by supposing that the twins keep sending signals to each other at a constant rate, but this also means that we need to include the relativistic Doppler shift, which affects the signal rates. The asymmetry that occurs because only the traveler underwent acceleration can be removed by introducing the "three-brother" approach, where the traveling twin transfers his clock reading to a third one, traveling in the opposite direction. Another way of avoiding acceleration effects is the use of the relativistic Doppler effect.

Einstein called the result of such relativistic travel "peculiar", but he did not consider it to be self-contradictory, and it does not constitute any challenge to the self-consistency of relativistic physics. This result appears puzzling only because of incorrect and naive application of time dilation and the principle of relativity. Time dilation has since been verified experimentally by precise measurements of atomic clocks flown in aircraft and satellites.
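A concrete illustration (the numbers here are added for this write-up, not from the original text): the Lorentz factor is

γ = 1 / √(1 − v²/c²)

At v = 0.8c this gives γ = 1/√(1 − 0.64) = 1/0.6 ≈ 1.67. A journey that lasts 30 years in the Earth frame therefore lasts only 30/γ = 18 years of the traveler's proper time, so he returns 12 years younger than his stay-at-home twin.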
{"url":"https://www.smonad.com/time/book.php?id=96","timestamp":"2024-11-10T20:47:17Z","content_type":"text/html","content_length":"32256","record_id":"<urn:uuid:97cb0516-c2bf-435c-be53-0aacd70dd626>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00232.warc.gz"}
Sam decides he wants to know whether 45% of his customers own only one cat.

a. Set up the null and alternative hypothesis (DO NOT TEST).
b. If Sam wants to minimize the probability of making a type 2 error, should he set alpha equal to 0.05 or 0.01? Explain.
c. In terms of the problem, what does it mean if he makes a type 1 error?
d. What would be the consequences of making such an error?

a. H0: p = 0.45, H1: p ≠ 0.45

b. Set alpha equal to 0.05. A larger significance level makes the test reject more readily, which lowers the probability of a type 2 error (failing to reject a false null hypothesis).

c. A type 1 error is the rejection of a true null hypothesis. Here it means Sam concludes that it is not the case that 45% of his customers own only one cat when, in actuality, 45% of his customers do own only one cat.

d. The consequence is that Sam would wrongly conclude that the proportion of his customers owning only one cat differs from 45%.
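If Sam did go on to run the test, the standard large-sample approach would be a one-sample proportion z-test. A minimal Python sketch for illustration (the sample size n and count x below are made-up numbers, not part of the original problem):

import math

# hypothetical data: of n = 200 sampled customers, x = 78 own only one cat
n, x = 200, 78
p0 = 0.45
p_hat = x / n

# two-sided z statistic for H0: p = 0.45
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
print(z)  # about -1.71; |z| < 1.96, so not significant at alpha = 0.05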
{"url":"https://justaaa.com/statistics-and-probability/96988-sam-decides-he-wants-to-know-whether-45-of-his","timestamp":"2024-11-04T07:21:38Z","content_type":"text/html","content_length":"43494","record_id":"<urn:uuid:e915e520-51cc-4e05-8db0-27c08ac27475>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00022.warc.gz"}
Solving Quadratic Equations By Graphing Worksheet Answers - Function Worksheets

Solving Quadratics Graphically Worksheet – A well-designed functions worksheet with answers will offer pupils solutions to a number of important questions about functions. It … Read more

Solving Quadratic Functions By Graphing Worksheet – Graphing a function is the process of drawing its data. For instance, an exponential function can have a … Read more
{"url":"https://www.functionworksheets.com/tag/solving-quadratic-equations-by-graphing-worksheet-answers/","timestamp":"2024-11-09T11:15:17Z","content_type":"text/html","content_length":"60803","record_id":"<urn:uuid:10a22571-dd3a-4709-8696-3acae53baed1>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00025.warc.gz"}
Nolasagna - Math Attacked: Precision

The topic of 'decolonizing' math and science has been on the rise. The problem is that this movement isn't about math or science; they're just the talking points for the real target: objective truth. While spending time writing an article on this attack on objective truth, I've been forced to cut out my discussions on math – which I do find interesting. I'm sharing these topics here because math needs its own positive defense. The people actively fighting to 'decolonize' math have certain vectors of attack to disintegrate math. The goal is the attack on objective truth, but it's important to understand the vectors of attack.

Precision concerns measurement and the degree of units of magnitude. Simply put, something measured to the nearest nanometer has more precision than something measured to the nearest millimeter.

I want to start by pointing out that Timothy Gowers is not some internet SJW loon. He's a mathematician, professor, and Fields Medal winner. Even though nothing is expressed here, based on Timothy's past tweets trying to appease the 2+2=5 crowd, there is an implication. He may be suggesting integer + integer = rational number, but when I see 4.00, I see someone bringing precision into the mix.

Update: Gowers did clarify this poll with a series of incoherent and unrelated thought tweets (22 of them) that hint at precision, integer vs. real numbers, and various math games. As expected, instead of any clarity, there was more disintegrated confusion. He did clarify that he's not a Platonist in math (good), but views math from the formalist theory – which means math is merely a numbers game detached from reality and something to play around with.

What is the 'Precision' Attack?

The precision argument, summed up, goes as follows: 2 is only really something in the abstract; once you bring it down to the real world (2 of something), there will always be a range of precision that results in 2 not being 2. For example, if you have a 2x4, which is a 2″x4″ piece of wood – what are the dimensions? Well, it's 2″ by 4″. Increase the precision: it's 2.0″ by 4.0″. Increase it again, and you may end up with 2.0362″ x 4.0197″. From this, they try to bring out the idea that numbers, in a sense, are fluid. When one speaks of 2, they're not talking about 2... like a real hard 2, they're talking about something like 2. It's not that they're asking you to declare 2+2=5; that would honestly be too hard for people. What they're asking of you is to declare that 2+2≠4 (2+2 doesn't equal 4), and this is the true goal of these people.

Abstract Numbers Allow for the Infinite

Numbers, as a system, were built in such a way as to allow one to go beyond the perceivable bounds of real-world needs at a specific timeframe. There was a time when counting didn't see the need to go past a few hundred thousand. The idea of needing more than a million was viewed as silly because a normal city only had 50,000 people, and population was the highest conceivable thing to count. Even though counting didn't conceive a need to go past a certain threshold of numbers, numbers were still constructed to allow one to expand them, if needed. As science progressed, we found the need for more numbers. Counting things like DNA base pairs, we are into the quadrillions.

The same is true of precision. Numbers can go as far as one needs in precision. There was a time when the most complicated thing people made was buildings, and a precision of millimeters was as far as needed.
Microprocessors, by contrast, are made to a much finer precision, on the order of nanometers. The range of numbers and precision is based on the context of what one is doing. If one is building a deck, a 2.0″ x 4.0″ board is what is used. It is of no consequence whether it is really 2.01″ x 4.02″ or 2.000000000000″ x 4.000000000000″. The same is true of counting humans. I am 1 human. I'm not 1.0 human. I'm not 0.91 human if I'm missing a hand, nor am I 0.9999999 human if I just cut my fingernails short. Precision, in the real world, isn't an infinite range of possibilities. No matter how hard one tries, there will never be 0.999999999999999999999999999 of a human.

The Takeaway and The Motive

While writing this article, I found the real motive becoming clear. Even though the argument that a 2x4 with more precision would be 2.0015″ is conceivably a real thing, why isn't the same thing plausible with a human (as in a 1.00000001 human)? The takeaway is that this isn't about math, but about the concepts that math is applied to. This includes humans, 2x4s, and units of measurement. It's about people's conceptual faculties. The reason a human isn't 1.000 is that such precision doesn't matter to the concept of a human. There's simply human and not-human. You're not a greater quantity of human if you're overweight, nor are you a lesser quantity of human if you're below 170 lbs. The same is true of a 2x4 board. Even though one could take the precision to some infinite range, precision beyond a certain range is not part of the 2x4 concept. This is why 2.0″ precision with regard to a 2x4 is right and 2.0″ precision with regard to a microprocessor is wrong. When one expands the precision outside the range of the concept, one is deconstructing the concept with the inevitable goal of destroying it. This is what it's all about.
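As a footnote to the precision discussion above (an illustration added here, not from the original post), the same measurement reads differently depending on the concept in use:

measured = 2.0362  # inches, as read off a caliper

print(f"{measured:.1f}")   # '2.0'    -- lumber-yard precision
print(f"{measured:.4f}")   # '2.0362' -- machining precision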
{"url":"https://www.nolasagna.com/math-attacked/math-attacked-precision","timestamp":"2024-11-11T06:28:14Z","content_type":"text/html","content_length":"76969","record_id":"<urn:uuid:b33a45ee-48fa-4648-8055-fe1d9d795901>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00005.warc.gz"}
Minimization-based sampling from the posterior distribution for inverse problems with Gaussian prior distributions

Dean S. Oliver, Uni Centre for Integrated Petroleum Research, Bergen, Norway
2.09. | 0.14 | 10:15-11:45

Inverse problems for subsurface flow are typically characterized by large numbers of parameters (e.g. coefficients of PDEs describing flow and transport) and fairly large numbers of observations that are indirectly and nonlinearly related to the parameters. One fairly effective method for characterizing the uncertainty in predictions of future behavior is to generate samples of model parameters from the posterior probability distribution via minimization of a stochastic cost function. This method is known to sample correctly for Bayesian data assimilation problems with Gaussian prior distributions, linear observation operators and additive Gaussian observation errors. Sampling is only approximate when the observation operator is nonlinear, but experience has shown that it is often quite good, even when the posterior probability distribution is multimodal. In practice, the only correction that we make to sampling is to apply model diagnostics to eliminate samples that appear to be stuck in a local minimum of the cost function. I will show how the approximate methodology is applied to large-scale problems using ensemble Kalman filter-like methods, and will describe attempts to improve sampling by computation of importance weights.
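For the linear-Gaussian case mentioned in the abstract, minimization-based ("randomized maximum likelihood") sampling is easy to sketch. The Python illustration below is added here for concreteness; the problem dimensions and noise level are arbitrary assumptions, not from the talk:

import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 20, 5
G = rng.normal(size=(n_obs, n_par))             # linear observation operator
m_prior, C_m = np.zeros(n_par), np.eye(n_par)   # Gaussian prior
C_d = 0.01 * np.eye(n_obs)                      # observation-error covariance
d = G @ rng.normal(size=n_par) + rng.multivariate_normal(np.zeros(n_obs), C_d)

def rml_sample():
    # perturb both the data and the prior mean, then minimize the quadratic
    # cost; for a linear G this yields an exact draw from the posterior
    d_pert = rng.multivariate_normal(d, C_d)
    m_pert = rng.multivariate_normal(m_prior, C_m)
    A = G.T @ np.linalg.inv(C_d) @ G + np.linalg.inv(C_m)
    b = G.T @ np.linalg.inv(C_d) @ d_pert + np.linalg.inv(C_m) @ m_pert
    return np.linalg.solve(A, b)

samples = np.array([rml_sample() for _ in range(1000)])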
{"url":"https://www.sfb1294.de/events/event/minimization-based-sampling-from-the-posterior-distribution-for-inverse-problems-with-gaussian-prior-distributions","timestamp":"2024-11-09T17:14:09Z","content_type":"text/html","content_length":"18612","record_id":"<urn:uuid:70022859-0c99-44cf-9d9e-cfe425bb1373>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00525.warc.gz"}
Stress and Strain

The term stress (σ) is used to express the loading in terms of force applied to a certain cross-sectional area of an object. From the perspective of loading, stress is the applied force or system of forces that tends to deform a body. From the perspective of what is happening within a material, stress is the internal distribution of forces within a body that balance and react to the loads applied to it. The stress distribution may or may not be uniform, depending on the nature of the loading condition. For example, a bar loaded in pure tension will essentially have a uniform tensile stress distribution. However, a bar loaded in bending will have a stress distribution that changes with distance perpendicular to the normal axis.

Simplifying assumptions are often used to represent stress as a vector quantity for many engineering calculations and for material property determination. The word "vector" typically refers to a quantity that has a "magnitude" and a "direction". For example, the stress in an axially loaded bar is simply equal to the applied force divided by the bar's cross-sectional area.

Some common units of stress are:
- psi = lbs/in² (pounds per square inch)
- ksi or kpsi = kilopounds/in² (one thousand, or 10³, pounds per square inch)
- Pa = N/m² (pascals, or newtons per square meter)
- kPa = kilopascals (one thousand, or 10³, newtons per square meter)
- GPa = gigapascals (one billion, or 10⁹, newtons per square meter)

*Any metric prefix can be added in front of psi or Pa to indicate the multiplication factor.

Strain is the response of a system to an applied stress. When a material is loaded with a force, it produces a stress, which then causes the material to deform. Engineering strain is defined as the amount of deformation in the direction of the applied force divided by the initial length of the material. This results in a unitless number, although it is often left in unsimplified form, such as inches per inch or meters per meter. For example, the strain in a bar that is being stretched in tension is the amount of elongation or change in length divided by its original length. As in the case of stress, the strain distribution may or may not be uniform in a complex structural element, depending on the nature of the loading condition.

If the stress is small, the material may only strain a small amount, and the material will return to its original size after the stress is released. This is called elastic deformation, because like elastic it returns to its unstressed state. Elastic deformation only occurs in a material when stresses are lower than a critical stress called the yield strength. If a material is loaded beyond its elastic limit, the material will remain in a deformed condition after the load is removed. This is called plastic deformation.

Engineering and True Stress and Strain

The discussion above focused on engineering stress and strain, which use the fixed, undeformed cross-sectional area in the calculations. True stress and strain measures account for changes in cross-sectional area by using the instantaneous values for the area. The engineering stress-strain curve does not give a true indication of the deformation characteristics of a metal because it is based entirely on the original dimensions of the specimen, and these dimensions change continuously during the testing used to generate the data.
Engineering stress and strain data is commonly used because it is easier to generate, and the tensile properties are adequate for engineering calculations. When considering the stress-strain curves in the next section, however, it should be understood that metals and other materials continue to strain-harden until they fracture, so the stress required to produce further deformation also increases.

Stress Concentration

When an axial load is applied to a piece of material with a uniform cross-section, the normal stress will be uniformly distributed over the cross-section. However, if a hole is drilled in the material, the stress distribution will no longer be uniform. Since the material that has been removed from the hole is no longer available to carry any load, the load must be redistributed over the remaining material. It is not redistributed evenly over the entire remaining cross-sectional area, but instead will be redistributed in an uneven pattern that is highest at the edges of the hole, as shown in the image. This phenomenon is known as stress concentration.
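As a concrete illustration of the engineering-versus-true distinction above, here is a minimal Python sketch of the standard conversion formulas σ_true = σ_eng(1 + ε_eng) and ε_true = ln(1 + ε_eng), which hold while deformation is still uniform (before necking); the sample numbers are arbitrary:

import math

def true_from_engineering(stress_eng, strain_eng):
    # sigma_true = sigma_eng * (1 + eps_eng); eps_true = ln(1 + eps_eng)
    return stress_eng * (1.0 + strain_eng), math.log(1.0 + strain_eng)

# e.g. 400 MPa engineering stress at 10% engineering strain
print(true_from_engineering(400.0, 0.10))  # (440.0, 0.0953...)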
{"url":"https://www.nde-ed.org/Physics/Materials/Mechanical/StressStrain.xhtml","timestamp":"2024-11-12T00:20:28Z","content_type":"application/xhtml+xml","content_length":"34748","record_id":"<urn:uuid:2ab0ffa0-8413-4ca1-b873-85be1c39d5ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00874.warc.gz"}
Strength Type

In the Define Material Properties dialog, a wide variety of Strength Type models are available for modeling the shear strength of the various materials of your Slide2 model. Each Strength Type requires various Strength Parameters, as described below.

Mohr-Coulomb

The most common way to model soil shear strength is with the Mohr-Coulomb equation:

s = c′ + (σn − u) tan φ′

where:
s = shear strength
c′ = effective cohesion
σn = normal stress
u = pore pressure
φ′ = effective friction angle

The Mohr-Coulomb equation can be used for either total or effective stress conditions. For a total stress analysis, cohesion and friction angle are defined for total stress conditions. Pore water pressure is not considered, and the Mohr-Coulomb equation is simply:

s = c + σn tan φ

Undrained

For the Undrained soil model, the friction angle phi is automatically set to zero. The shear strength is defined only by the cohesion of the material. Three sub-options for defining the undrained cohesion are available, by selecting from the Cohesion Type drop-down list:

Constant
Cohesion is constant throughout the material.

F(Depth from Top of Layer)
Cohesion is a function of depth, where depth is measured from the top of the material layer that is local to the slice, to the center of a slice base.
• Cohesion (Top) is the Cohesion at the top of the material layer.
• Cohesion Change is the rate of change of Cohesion with depth.
• If you wish to specify a maximum soil strength, select the Cutoff checkbox and enter a maximum value for Cohesion. If the rate of Cohesion Change is negative, then this value represents the minimum soil strength.

F(Depth from Horizontal Datum)
Cohesion is a function of depth, where depth is measured from a user-specified Datum (y-coordinate) to the center of a slice base.
• Cohesion (Datum) is the Cohesion at the Datum elevation.
• Cohesion Change is the rate of change of Cohesion with distance (ydatum − y) from the Datum. If the Cohesion Change is positive, then cohesion increases below the datum and decreases above the datum, according to the change in elevation ydatum − y.
• If you wish to specify a maximum soil strength, select the Cutoff checkbox and enter a maximum value for Cohesion. If the rate of Cohesion Change is negative, then this value represents the minimum soil strength.
• Datum is the datum elevation (y-coordinate).

F(Distance to Slope)
Cohesion is a function of depth, where depth is measured from the closest point on the slope (actual distance) to the center of a slice base, as shown below.
• Cohesion (Top) is the Cohesion at the top (the closest point on the slope to the center of a slice base).
• Cohesion Change is the rate of change of Cohesion with depth.
• If you wish to specify a maximum soil strength, select the Cutoff checkbox and enter a maximum value for Cohesion. If the rate of Cohesion Change is negative, then this value represents the minimum soil strength.

Note that the Water Parameters are automatically disabled in the Define Materials dialog when Strength Type = Undrained, since pore water pressure is not required.

No Strength

The No Strength model is intended primarily to model ponded water. When you set the Strength Type = No Strength:
• Strength Parameters are disabled, since shear strength is zero (slip surfaces do not pass through a No Strength material). A No Strength material contributes only its weight (and hydrostatic force) to the model.
• Water Parameters are not applicable and are disabled.

Figure: Ponded water modeled as a No Strength material

In most cases, use of the No Strength material is NOT recommended.
Ponded water can be modelled much more easily by simply drawing a Water Table above the External Boundary. See the Add Water Table topic for details. The No Strength material may be useful in some cases if you wish to customize the Unit Weight of Ponded Water (i.e. use a different unit weight from the Pore Fluid Unit Weight specified in Project Settings). Except for this situation, it is recommended that ponded water is modelled using a Water Table, rather than the No Strength material.

Infinite Strength

An Infinite Strength material in Slide2 represents a slip surface "exclusion zone", through which slip surfaces are not allowed to pass. Use an Infinite Strength material when you wish to define a region through which a failure surface cannot pass (e.g. a concrete retaining wall, or a heavily reinforced soil region). For an Infinite Strength material:
• Strength Parameters are disabled.
• Water Parameters are disabled.

"Allow Sliding Along Boundary" – use this option when you have an infinite strength bedrock material at the bottom of your model and want to allow the slip surface to slide along its boundary. This option should otherwise be off.

Anisotropic Strength

The Anisotropic Strength model allows you to define anisotropic strength properties for a soil or rock mass, by defining cohesion and friction angle along two perpendicular axes. The axes do NOT have to be oriented horizontally and vertically but can be specified at an arbitrary angle, as shown in the figure below. Required Strength Parameters are:
• Cohesion 1 and Phi 1 (cohesion and friction angle in the 1-direction)
• Cohesion 2 and Phi 2 (cohesion and friction angle in the 2-direction, perpendicular to the 1-direction)
• Angle (in degrees, measured counter-clockwise from the positive x-axis to the 1-direction)

Figure: Definition of axes and angle for the Anisotropic Strength model

The cohesion and friction angle for any arbitrary plane are then interpolated from these two sets of parameters according to the orientation of the plane. The Anisotropic Strength model in Slide2 could also be referred to as "Transversely Isotropic" strength.

Shear / Normal Function

The Shear / Normal Function model allows you to define an arbitrary shear / normal function, to define a non-linear Mohr-Coulomb strength envelope for a material. When you set the Strength Type to Shear / Normal Function, the Shear / Normal Function drop-list will appear in the Strength Parameters area.
• If there are no Shear / Normal Functions currently defined, this list will be disabled. Select the New button to define a Shear / Normal Function.
• If you have previously defined one (or more) Shear / Normal Functions, then this list will contain the names of the currently defined function(s), and you can simply select an existing function from this list. Or, if you need to define a new Shear / Normal Function, select the New button.
• To edit or delete existing Shear / Normal Functions, select the function name from the list, and select the Edit or Delete button.

See the Shear / Normal Strength Function topic for details about defining Shear / Normal Strength Functions.

C / Phi Function

The C / Phi Function model allows you to define an arbitrary c/phi function, to define a non-linear Mohr-Coulomb strength envelope for a material. When you set the Strength Type to C/Phi Function, the C/Phi Function drop-list will appear in the Strength Parameters area.
• If there are no C/Phi Functions currently defined, this list will be disabled. Select the New button to define a C/Phi Function.
• If you have previously defined one (or more) C/Phi Functions, then this list will contain the names of the currently defined function(s), and you can simply select an existing function from this list. Or, if you need to define a new C/Phi Function, select the New button.
• To edit or delete existing C/Phi Functions, select the function name from the list, and select the Edit or Delete button.

See the C/Phi Strength Function topic for details about defining C/Phi Strength Functions.

Anisotropic Function

The Anisotropic Function model is another method for defining anisotropic strength properties for soil or rock. With this model, you can define discrete angular ranges of slice base inclination, each with its own cohesion and friction angle. When you set the Strength Type to Anisotropic Function, the Anisotropic Function drop-list will appear in the Strength Parameters area.
• If there are no Anisotropic Functions currently defined, this list will be disabled. Select the New button to define an Anisotropic Function.
• If you have previously defined one (or more) Anisotropic Functions, then this list will contain the names of the currently defined function(s), and you can simply select an existing function from this list. Or, if you need to define a new Anisotropic Function, select the New button.
• To edit or delete existing Anisotropic Functions, select the function name from the list, and select the Edit or Delete button.

See the Anisotropic Strength Function topic for details about defining Anisotropic Strength Functions.

Hoek-Brown

The Hoek-Brown strength criterion in Slide2 refers to the ORIGINAL Hoek-Brown failure criterion [Hoek & Bray (1981)], described by the following equation:

σ1 = σ3 + σci (m σ3 / σci + s)^0.5

This is a special case of the Generalized Hoek-Brown criterion, with the constant a = 0.5. See below for the definition of the parameters in this equation. The original Hoek-Brown criterion has been found to work well for most rocks of good to reasonable quality, in which the rock mass strength is controlled by tightly interlocking angular rock pieces. For lesser quality rock masses, the Generalized Hoek-Brown criterion can be used.

Generalized Hoek-Brown

The Generalized Hoek-Brown strength criterion is described by the following equation:

σ1 = σ3 + σci (mb σ3 / σci + s)^a

where σ1 and σ3 are the major and minor principal stresses at failure, σci is the uniaxial compressive strength of the intact rock, and mb, s and a are material constants. See the Generalized Hoek-Brown topic for more information.

Vertical Stress Ratio

With the Vertical Stress Ratio model, the shear strength at the base of each slice is determined by multiplying the effective vertical (overburden) stress by a constant K for the material.
• The effective vertical stress is computed from the total weight of each slice, and the pore pressure acting at the center of the base of each slice.
• The "vertical stress ratio" K is simply a constant, equal to the ratio of the shear strength to the vertical stress (e.g. if K = 0.3, then the shear strength will be 30% of the effective vertical stress).

Barton-Bandis

The Barton-Bandis strength model can be used to model the shear strength of a joint. It establishes the shear strength of a failure plane as:

τ = σn tan( φr + JRC log10( JCS / σn ) )

where φr is the residual friction angle [Barton and Choubey, 1977], JRC is the joint roughness coefficient, and JCS is the joint wall compressive strength. For further information on the shear strength of discontinuities, including a discussion of the Barton-Bandis failure criterion parameters, see Chapter 4 of Practical Rock Engineering by Dr. Evert Hoek, available on the Rocscience website.
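As a quick numerical illustration of the Barton-Bandis envelope above (a Python sketch added here, not Slide2 code; the parameter values are made up):

import math

def barton_bandis_tau(sigma_n, phi_r_deg, jrc, jcs):
    # shear strength of a joint: tau = sigma_n * tan(phi_r + JRC*log10(JCS/sigma_n))
    angle_deg = phi_r_deg + jrc * math.log10(jcs / sigma_n)
    return sigma_n * math.tan(math.radians(angle_deg))

# sigma_n = 0.5 MPa, phi_r = 25 deg, JRC = 10, JCS = 50 MPa
print(barton_bandis_tau(0.5, 25.0, 10.0, 50.0))  # 0.5 MPa (the angle works out to 45 deg)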
Power Curve

The Power Curve model expresses shear strength as a power-curve function of the normal stress, with the following parameters:
• a, b and c are parameters typically obtained from a least-squares regression fit of data obtained from small-scale shear tests. The d parameter represents the tensile strength. If included, it must be entered as a positive value.
• If you are using the Power Curve to model the shear strength of a joint, then the Waviness Angle parameter can also be included.

Waviness Angle

Waviness is a parameter that can be included in calculations of the shear strength of a joint or failure plane. It accounts for the waviness (undulations) of the joint surface, observed over distances on the order of 1 m to 10 m [Miller (1988)]. The waviness angle is equal to the AVERAGE dip of a failure plane, minus the MINIMUM dip of the failure plane. A non-zero waviness angle will always increase the effective shear strength of the failure plane. If you are NOT modelling the strength of a joint, then you can simply set the waviness angle = 0, and this term in the Power Curve equation will NOT contribute to the shear strength.

Hyperbolic

A Hyperbolic shear strength envelope is defined by an equation of the form

τ = σn tan φ / (1 + σn tan φ / c)

It is important to note the definitions of Cohesion and Friction Angle for the Hyperbolic shear strength model: the Cohesion c is actually the limiting, maximum shear strength at high normal stress, while the Friction Angle φ governs the initial slope of the envelope at low normal stress.

The Hyperbolic shear strength model has been found to characterize the shear strength of soil/geosynthetic interfaces, and other types of interfaces [Esterhuizen, Filz & Duncan (2001)]. For example, it could be used to model the shear strength of:
• a concrete/soil interface
• a geotextile/soil interface

You may wish to use the Hyperbolic shear strength model to model the failure mode of "direct sliding" along a geosynthetic/soil interface. In this case, you will have to define a narrow layer of soil along the geotextile, and assign a material type which uses the Hyperbolic shear strength model.

Discrete Function

The Discrete Function option allows you to specify the shear strength at discrete x,y locations throughout a material. The shear strength at any point within the material can then be interpolated. Shear strength may be specified for either the undrained case (cohesion only) or the drained case (cohesion and friction angle). When you set the Strength Type to Discrete Function, the Discrete Function drop-list will appear in the Strength Parameters area.
• If there are no Discrete Functions currently defined, this list will be disabled. Select the New button to define a Discrete Function.
• If you have previously defined one (or more) Discrete Functions, then this list will contain the names of the currently defined function(s), and you can simply select an existing function from this list. Or, if you need to define a new Discrete Function, select the New button.
• To edit or delete existing Discrete Functions, select the function name from the list, and select the Edit or Delete button.

See the Discrete Strength Function topic for details about defining Discrete Strength Functions.

Drained-Undrained

The Drained-Undrained option allows you to define a soil strength envelope which considers both drained and undrained Mohr-Coulomb strength parameters. The shear strength is defined in terms of effective stress parameters c′ and φ′, up to a maximum value of shear strength defined by the undrained cohesion Cu.
If you only need to define constant strength parameters which do NOT vary with depth, then you can enter the parameters directly in the Define Materials dialog. In this case, the shear strength envelope is defined by constant values of:
• Cu (undrained cohesion)
• the c′/Cu ratio (drained cohesion c′ is defined as a fraction of Cu)
• drained φ′

Cohesion Varies with Depth

If the cohesion is variable with depth, then you must select the Cohesion varies with depth checkbox. This will enable the Define button. Select the Define button, and you will see another dialog (the Drained-Undrained Strength Properties dialog), in which you can define the drained and/or undrained cohesion as a function of depth. See the Drained-Undrained Strength topic for more information.

Anisotropic Linear

The Anisotropic Linear strength model (Mercer, 2012; Mercer, 2013) is similar to the Anisotropic Strength model described above. It allows you to define a material with the following anisotropic strength characteristics:
• Bedding plane cohesion and friction angle
• Rock mass cohesion and friction angle
• Angle of the bedding plane from horizontal
• Parameters A and B, which define a linear transition from bedding plane strength to rock mass strength with respect to shear plane orientation.

See the Anisotropic Linear Strength Function topic for details.

Generalized Anisotropic

The Generalized Anisotropic Strength option allows you to create a composite strength model, in which you can assign any strength model in Slide2 to any range of slice base orientations. For example, you could create a material with Hoek-Brown properties over one range of orientations, and Mohr-Coulomb properties over another range of orientations (e.g. to simulate a weak bedding orientation in a rock mass). See the Generalized Anisotropic topic for details.

Snowden Modified Anisotropic Linear

The Snowden Modified Anisotropic Linear strength model (Mercer, 2013) is based on the Anisotropic Linear strength model, with the following additions:
• it allows you to define non-linear, stress-dependent strength envelopes for the rock mass and bedding material, and
• it allows non-symmetric anisotropy.

See the Snowden Modified Anisotropic Linear topic for details.

SHANSEP

The SHANSEP model (Stress History and Normalized Soil Engineering Properties) is widely used for modelling the undrained shear strength of soils (Ladd and Foott, 1974). See the SHANSEP strength topic for details.
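To make the two most common criteria above concrete, here is a small Python sketch of the Mohr-Coulomb and Generalized Hoek-Brown equations as reconstructed above (an illustration, not Slide2's API; the units and sample values are arbitrary):

import math

def mohr_coulomb(sigma_n, u, c_eff, phi_eff_deg):
    # s = c' + (sigma_n - u) * tan(phi')
    return c_eff + (sigma_n - u) * math.tan(math.radians(phi_eff_deg))

def generalized_hoek_brown(sigma3, sigma_ci, mb, s, a):
    # sigma1 = sigma3 + sigma_ci * (mb*sigma3/sigma_ci + s)^a
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

print(mohr_coulomb(sigma_n=100.0, u=20.0, c_eff=5.0, phi_eff_deg=30.0))     # ~51.2
print(generalized_hoek_brown(sigma3=2.0, sigma_ci=50.0, mb=2.0, s=0.004, a=0.5))  # ~16.5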
{"url":"https://www.rocscience.com/help/slide2/documentation/slide-model/material-properties/define-material-properties/strength-parameters/strength-type","timestamp":"2024-11-09T09:48:22Z","content_type":"application/xhtml+xml","content_length":"447260","record_id":"<urn:uuid:9088dd6a-3adc-45fe-9da6-1cd97a3edd54>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00805.warc.gz"}
Posit AI Blog: Community spotlight: Fun with torchopt

From the beginning, it has been exciting to watch the growing number of packages developing in the torch ecosystem. What's amazing is the variety of things people do with torch: extend its functionality; integrate, and put to domain-specific use, its low-level automatic differentiation infrastructure; port neural network architectures … and last but not least, answer scientific questions.

This blog post will introduce, in brief and rather subjective form, one of these packages: torchopt. Before we start, one thing we should probably say a lot more often: If you'd like to publish a post on this blog, on the package you're developing or the way you use R-language deep learning frameworks, let us know – you're more than welcome!

torchopt is a package developed by Gilberto Camara and colleagues at National Institute for Space Research, Brazil. By the look of it, the package's raison d'être is rather self-evident. torch itself does not – nor should it – implement all of the newly published, potentially-useful-for-your-purposes optimization algorithms out there. The algorithms assembled here, then, are probably exactly those the authors were most eager to experiment with in their own work. As of this writing, they comprise, among others, various members of the popular ADA* and ADAM* families. And we may safely assume the list will grow over time.

I'm going to introduce the package by highlighting something that, technically, is "merely" a utility function, but to the user can be extremely helpful: the ability to, for an arbitrary optimizer and an arbitrary test function, plot the steps taken in optimization. While it is true that I have no intent of comparing (let alone analyzing) different strategies, there is one that, to me, stands out in the list: ADAHESSIAN (Yao et al. 2020), a second-order algorithm designed to scale to large neural networks. I'm especially curious to see how it behaves as compared to L-BFGS, the second-order "classic" available from base torch that we had a dedicated blog post about last year.

How it works

The utility function in question is called test_optim(). The only required argument concerns the optimizer to try (optim). But you'll likely want to tweak three others as well:
• test_fn: To use a test function different from the default (beale). You can choose among the many provided in torchopt, or you can pass in your own. In the latter case, you also need to provide information about the search domain and starting points. (We'll see that in a moment.)
• steps: To set the number of optimization steps.
• opt_hparams: To modify optimizer hyperparameters; most notably, the learning rate.

Here, I'm going to use the flower() function that already figured prominently in the aforementioned post on L-BFGS. It approaches its minimum as it gets closer and closer to (0,0) (but is undefined at the origin itself). Here it is:

flower <- function(x, y) {
  a <- 1
  b <- 1
  c <- 4
  a * torch_sqrt(torch_square(x) + torch_square(y)) +
    b * torch_sin(c * torch_atan2(y, x))
}

To see how it looks, just scroll down a bit.
The plot may be tweaked in a myriad of ways, but I'll stay with the default layout, with colors of shorter wavelength mapped to lower function values.

Let's begin our explorations.

Why do they always say learning rate matters?

True, it's a rhetorical question. But still, sometimes visualizations make for the most memorable evidence. Here, we use a popular first-order optimizer, AdamW (Loshchilov and Hutter 2017). We call it with its default learning rate, 0.01, and let the search run for two hundred steps. As in that earlier post, we start from far away – the point (20,20), way outside the rectangular region of interest.

test_optim(
  # call with default learning rate (0.01)
  optim = optim_adamw,
  # pass in self-defined test function, plus a closure indicating starting points and search domain
  test_fn = list(flower, function() (c(x0 = 20, y0 = 20, xmax = 3, xmin = -3, ymax = 3, ymin = -3))),
  steps = 200
)

Whoops, what happened? Is there an error in the plotting code? – Not at all; it's just that after the maximum number of steps allowed, we haven't yet entered the region of interest.

Next, we scale up the learning rate by a factor of ten.

test_optim(
  optim = optim_adamw,
  # scale default rate by a factor of 10
  opt_hparams = list(lr = 0.1),
  test_fn = list(flower, function() (c(x0 = 20, y0 = 20, xmax = 3, xmin = -3, ymax = 3, ymin = -3))),
  steps = 200
)

What a change! With a ten-fold learning rate, the result is optimal. Does this mean the default setting is bad? Of course not; the algorithm has been tuned to work well with neural networks, not some function that has been purposefully designed to present a specific challenge.

Naturally, we also want to see what happens for an even higher learning rate.

test_optim(
  optim = optim_adamw,
  # scale default rate by a factor of 70
  opt_hparams = list(lr = 0.7),
  test_fn = list(flower, function() (c(x0 = 20, y0 = 20, xmax = 3, xmin = -3, ymax = 3, ymin = -3))),
  steps = 200
)

We see the behavior we've always been warned about: Optimization hops around wildly, before seemingly heading off forever. (Seemingly, because in this case, this is not what happens. Instead, the search will jump far away, and back again, repeatedly.)

Now, this might make one curious. What actually happens if we choose the "good" learning rate, but don't stop optimizing at two hundred steps? Here, we try three hundred as an example:

test_optim(
  optim = optim_adamw,
  # scale default rate by a factor of 10
  opt_hparams = list(lr = 0.1),
  test_fn = list(flower, function() (c(x0 = 20, y0 = 20, xmax = 3, xmin = -3, ymax = 3, ymin = -3))),
  # this time, continue search until we reach step 300
  steps = 300
)

Interestingly, we see the same kind of to-and-fro happening here as with the higher learning rate – it's just delayed in time.

Another playful question that comes to mind is: Can we watch how the optimization process "explores" the four petals? With some quick experimentation, I arrived at this:

Who says you need chaos to produce a beautiful plot?

A second-order optimizer for neural networks: ADAHESSIAN

On to the one algorithm I'd like to check out specifically. Following a little bit of learning-rate experimentation, I was able to arrive at an excellent result after just thirty-five steps.
test_optim(
  optim = optim_adahessian,
  opt_hparams = list(lr = 0.3),
  test_fn = list(flower, function() (c(x0 = 20, y0 = 20, xmax = 3, xmin = -3, ymax = 3, ymin = -3))),
  steps = 35
)

Given our recent experiences with AdamW though – meaning, its "just not settling in" very close to the minimum – we may want to run an equivalent test with ADAHESSIAN, as well. What happens if we go on optimizing quite a bit longer – for two hundred steps, say?

test_optim(
  optim = optim_adahessian,
  opt_hparams = list(lr = 0.3),
  test_fn = list(flower, function() (c(x0 = 20, y0 = 20, xmax = 3, xmin = -3, ymax = 3, ymin = -3))),
  steps = 200
)

Like AdamW, ADAHESSIAN goes on to "explore" the petals, but it does not stray as far away from the minimum. Is this surprising? I wouldn't say it is. The argument is the same as with AdamW, above: Its algorithm has been tuned to perform well on large neural networks, not to solve a classic, hand-crafted minimization task.

Now that we've heard that argument twice already, it's time to verify the implicit assumption: that a classic second-order algorithm handles this better. In other words, it's time to revisit L-BFGS.

Best of the classics: Revisiting L-BFGS

To use test_optim() with L-BFGS, we need to take a little detour. If you've read the post on L-BFGS, you may remember that with this optimizer, it is necessary to wrap both the call to the test function and the evaluation of the gradient in a closure. (The reason is that both have to be callable several times per iteration.)

Now, seeing how L-BFGS is a very special case, and few people are likely to use test_optim() with it in the future, it wouldn't seem worthwhile to make that function handle different cases. For this one-off test, I simply copied and modified the code as required. The result, test_optim_lbfgs(), is found in the appendix.

In deciding what number of steps to try, we take into account that L-BFGS has a different concept of iterations than other optimizers; meaning, it may refine its search several times per step. Indeed, from the previous post I happen to know that three iterations are sufficient:

test_optim_lbfgs(
  optim = optim_lbfgs,
  opt_hparams = list(line_search_fn = "strong_wolfe"),
  test_fn = list(flower, function() (c(x0 = 20, y0 = 20, xmax = 3, xmin = -3, ymax = 3, ymin = -3))),
  steps = 3
)

At this point, of course, I want to stick with my rule of testing what happens with "too many steps." (Though this time, I have strong reasons to believe that nothing will happen.)

test_optim_lbfgs(
  optim = optim_lbfgs,
  opt_hparams = list(line_search_fn = "strong_wolfe"),
  test_fn = list(flower, function() (c(x0 = 20, y0 = 20, xmax = 3, xmin = -3, ymax = 3, ymin = -3))),
  steps = 10
)

Hypothesis confirmed. And here ends my playful and subjective introduction to torchopt. I really hope you liked it; but in any case, I think you should have gotten the impression that here is a useful, extensible and likely-to-grow package, to be watched out for in the future. As always, thanks for reading!
Appendix

test_optim_lbfgs <- function(optim,
                             ...,
                             opt_hparams = NULL,
                             test_fn = "beale",
                             steps = 200,
                             pt_start_color = "#5050FF7F",
                             pt_end_color = "#FF5050FF",
                             ln_color = "#FF0000FF",
                             ln_weight = 2,
                             bg_xy_breaks = 100,
                             bg_z_breaks = 32,
                             bg_palette = "viridis",
                             ct_levels = 10,
                             ct_labels = FALSE,
                             ct_color = "#FFFFFF7F",
                             plot_each_step = FALSE) {

  if (is.character(test_fn)) {
    # get starting points
    domain_fn <- get(paste0("domain_", test_fn),
                     envir = asNamespace("torchopt"),
                     inherits = FALSE)
    # get gradient function
    test_fn <- get(test_fn,
                   envir = asNamespace("torchopt"),
                   inherits = FALSE)
  } else if (is.list(test_fn)) {
    domain_fn <- test_fn[[2]]
    test_fn <- test_fn[[1]]
  }

  # starting point
  dom <- domain_fn()
  x0 <- dom[["x0"]]
  y0 <- dom[["y0"]]

  # create tensors
  x <- torch::torch_tensor(x0, requires_grad = TRUE)
  y <- torch::torch_tensor(y0, requires_grad = TRUE)

  # instantiate optimizer
  optim <- do.call(optim, c(list(params = list(x, y)), opt_hparams))

  # with L-BFGS, it is necessary to wrap both function call and gradient
  # evaluation in a closure, for them to be callable several times per iteration
  calc_loss <- function() {
    optim$zero_grad()
    z <- test_fn(x, y)
    z$backward()
    z
  }

  # run optimizer
  x_steps <- numeric(steps)
  y_steps <- numeric(steps)
  for (i in seq_len(steps)) {
    x_steps[i] <- as.numeric(x)
    y_steps[i] <- as.numeric(y)
    optim$step(calc_loss)
  }

  # prepare plot
  # get xy limits
  xmax <- dom[["xmax"]]
  xmin <- dom[["xmin"]]
  ymax <- dom[["ymax"]]
  ymin <- dom[["ymin"]]

  # prepare data for gradient plot
  x <- seq(xmin, xmax, length.out = bg_xy_breaks)
  y <- seq(ymin, ymax, length.out = bg_xy_breaks)
  z <- outer(X = x, Y = y, FUN = function(x, y) as.numeric(test_fn(x, y)))

  plot_from_step <- steps
  if (plot_each_step) {
    plot_from_step <- 1
  }

  for (step in seq(plot_from_step, steps, 1)) {
    # plot background
    image(
      x = x,
      y = y,
      z = z,
      col = hcl.colors(
        n = bg_z_breaks,
        palette = bg_palette
      )
    )
    # plot contour
    if (ct_levels > 0) {
      contour(
        x = x,
        y = y,
        z = z,
        nlevels = ct_levels,
        drawlabels = ct_labels,
        col = ct_color,
        add = TRUE
      )
    }
    # plot starting point
    points(
      x_steps[1],
      y_steps[1],
      pch = 21,
      bg = pt_start_color
    )
    # plot path line
    lines(
      x_steps[seq_len(step)],
      y_steps[seq_len(step)],
      lwd = ln_weight,
      col = ln_color
    )
    # plot end point
    points(
      x_steps[step],
      y_steps[step],
      pch = 21,
      bg = pt_end_color
    )
  }
}

Loshchilov, Ilya, and Frank Hutter. 2017. "Fixing Weight Decay Regularization in Adam." CoRR.

Yao, Zhewei, Amir Gholami, Sheng Shen, Kurt Keutzer, and Michael W. Mahoney. 2020. "ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning." CoRR.
{"url":"http://thefutureofworkinstitute.xyz/2023/03/31/posit-ai-blog-community-spotlight-fun-with-torchopt/","timestamp":"2024-11-12T11:54:19Z","content_type":"text/html","content_length":"120417","record_id":"<urn:uuid:fcdd3e8f-e369-4acb-87b3-52e0de622ba8>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00251.warc.gz"}
Calculate the difference between two dates

Use the DATEDIF function when you want to calculate the difference between two dates. First put a start date in a cell, and an end date in another. Then type a formula like one of the following.

Warning: If the Start_date is greater than the End_date, the result will be #NUM!.

Difference in days

=DATEDIF(D9,E9,"d")

In this example, the start date is in cell D9, and the end date is in E9. The formula is in F9. The "d" returns the number of full days between the two dates.

Difference in weeks

=DATEDIF(D13,E13,"d")/7

In this example, the start date is in cell D13, and the end date is in E13. The "d" returns the number of days. But notice the /7 at the end. That divides the number of days by 7, since there are 7 days in a week. Note that this result also needs to be formatted as a number. Press CTRL + 1. Then click Number > Decimal places: 2.

Difference in months

=DATEDIF(D5,E5,"m")

In this example, the start date is in cell D5, and the end date is in E5. In the formula, the "m" returns the number of full months between the two days.

Difference in years

=DATEDIF(D2,E2,"y")

In this example, the start date is in cell D2, and the end date is in E2. The "y" returns the number of full years between the two days.

Calculate age in accumulated years, months, and days

You can also calculate age or someone's time of service. The result can be something like "2 years, 4 months, 5 days."

1. Use DATEDIF to find the total years.

=DATEDIF(D17,E17,"y")

In this example, the start date is in cell D17, and the end date is in E17. In the formula, the "y" returns the number of full years between the two days.

2. Use DATEDIF again with "ym" to find months.

=DATEDIF(D17,E17,"ym")

In another cell, use the DATEDIF formula with the "ym" parameter. The "ym" returns the number of remaining months past the last full year.

3. Use a different formula to find days.

=E17-DATE(YEAR(E17),MONTH(E17),1)

Now we need to find the number of remaining days. We'll do this by writing a different kind of formula, shown above. This formula subtracts the first day of the ending month (5/1/2016) from the original end date in cell E17 (5/6/2016). Here's how it does this: First the DATE function creates the date, 5/1/2016. It creates it using the year in cell E17, and the month in cell E17. Then the 1 represents the first day of that month. The result for the DATE function is 5/1/2016. Then, we subtract that from the original end date in cell E17, which is 5/6/2016. 5/6/2016 minus 5/1/2016 is 5 days.

Warning: We don't recommend using the DATEDIF "md" argument because it may calculate inaccurate results.

4. Optional: Combine three formulas in one.

=DATEDIF(D17,E17,"y")&" years, "&DATEDIF(D17,E17,"ym")&" months, "&E17-DATE(YEAR(E17),MONTH(E17),1)&" days"

You can put all three calculations in one cell like this example. Use ampersands, quotes, and text. It's a longer formula to type, but at least it's all in one.

Tip: Press ALT+ENTER to put line breaks in your formula. This makes it easier to read. Also, press CTRL+SHIFT+U if you can't see the whole formula.

Download our examples

You can download an example workbook with all of the examples in this article. You can follow along, or create your own formulas. Download date calculation examples

Other date and time calculations

As you saw above, the DATEDIF function calculates the difference between a start date and an end date. However, instead of typing specific dates, you can also use the TODAY() function inside the formula. When you use the TODAY() function, Excel uses your computer's current date for the date. Keep in mind this will change when the file is opened again on a future day. Please note that at the time of this writing, the day was October 6, 2016.
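For example, to show someone's age in whole years as of whenever the workbook is opened, you could combine DATEDIF with TODAY() (using the same start date in cell D2 as above):

=DATEDIF(D2,TODAY(),"y")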
Calculate workdays, with or without holidays

Use the NETWORKDAYS.INTL function when you want to calculate the number of workdays between two dates. You can also have it exclude weekends and holidays too.

Before you begin: Decide if you want to exclude holiday dates. If you do, type a list of holiday dates in a separate area or sheet. Put each holiday date in its own cell. Then select those cells, select Formulas > Define Name. Name the range MyHolidays, and click OK. Then create the formula using the steps below.

1. Type a start date and an end date. In this example, the start date is in cell D53 and the end date is in cell E53.

2. In another cell, type a formula like this:

=NETWORKDAYS.INTL(D53,E53,1)

The 1 in the formula establishes Saturdays and Sundays as weekend days, and excludes them from the total.

3. If necessary, change the 1. If Saturday and Sunday are not your weekend days, then change the 1 to another number from the IntelliSense list. For example, 2 establishes Sundays and Mondays as weekend days.

4. Type the holiday range name:

=NETWORKDAYS.INTL(D53,E53,1,MyHolidays)

If you created a holiday range name in the "Before you begin" section above, then type it at the end like this. If you don't have holidays, you can leave the comma and MyHolidays out.

Calculate elapsed time

You can calculate elapsed time by subtracting one time from another. First put a start time in a cell, and an end time in another. Make sure to type a full time, including the hour, minutes, and a space before the AM or PM. Here's how:

1. Type a start time and end time. In this example, the start time is in cell D80 and the end time is in E80. Make sure to type the hour, minute, and a space before the AM or PM.

2. Set the h:mm AM/PM format. Select both times and press CTRL + 1. Then select Custom > h:mm AM/PM, if it isn't already set.

3. Subtract the two times. In another cell, subtract the start time cell from the end time cell:

=E80-D80

4. Set the h:mm format. Press CTRL + 1 and select Custom > h:mm so that the result excludes AM and PM.

Calculate elapsed time between two dates and times

To calculate the time between two dates and times, you can simply subtract one from the other. However, you must apply formatting to each cell to ensure that Excel returns the result you want.

1. Type two full dates and times. In one cell, type a full start date/time. And in another cell, type a full end date/time. Each cell should have a month, day, year, hour, minute, and a space before the AM or PM.

2. Set the 3/14/12 1:30 PM format. Select both cells, and then press CTRL + 1. Select Date > 3/14/12 1:30 PM. This isn't the date you'll set; it's just a sample of how the format will look. Note that in versions prior to Excel 2016, this format might have a different sample date, like 3/14/01 1:30 PM.

3. Subtract the two. In another cell, subtract the start date/time from the end date/time. The result will probably look like a number and decimal. You'll fix that in the next step.

4. Set the [h]:mm format. Press CTRL + 1 and select Custom. In the Type box, type [h]:mm.
{"url":"https://support.microsoft.com/en-us/office/calculate-the-difference-between-two-dates-8235e7c9-b430-44ca-9425-46100a162f38?wt.mc_id=fsn_excel_formulas_and_functions&ui=en-us&rs=en-us&ad=us","timestamp":"2024-11-04T17:45:08Z","content_type":"text/html","content_length":"161433","record_id":"<urn:uuid:b9424268-777c-47c0-b819-842515f94a04>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00556.warc.gz"}
402 hectometers per square second to decimeters per square second

402 Hectometers per square second = 402,000 Decimeters per square second

This conversion of 402 hectometers per square second to decimeters per square second has been calculated by multiplying 402 hectometers per square second by 1,000, and the result is 402,000 decimeters per square second.
{"url":"https://unitconverter.io/hectometers-per-square-second/decimeters-per-square-second/402","timestamp":"2024-11-08T02:47:41Z","content_type":"text/html","content_length":"27162","record_id":"<urn:uuid:36e5f8b7-519e-4b4f-81c7-089da1b0f102>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00255.warc.gz"}
Test 8: Mechanical Insight

In some of the drawings the following signs are used:
◯ is a movable pivot around which a wheel or a lever can turn.
⦵ is a fixed pivot around which a wheel or a lever can turn.

Example a

If lever S is pulled in the direction of the arrow, then …
A. no movement will be possible.
B. W will move nearer to N.
C. W will move away from N.
D. M will move to the left.
E. angle P will become bigger.

If lever S is pulled in the direction of the arrow, rod W will descend and thus move closer to N. B is therefore the answer.

Example b

Which figure can be built with the two plates above? One of these plates has four holes and the other has two holes. Only Figure A can be built with these two plates. In the other figures the plates have too many or too few holes.

NB: The black dot indicates where the plates are screwed together. Remember to count this dot as a hole as well.

Example c

Pulleys R and S are connected by means of a belt. In which direction will S turn if R turns in the direction of the arrow?
A. S will move to and fro
B. Cannot say
C. S will stand still
D. In the opposite direction to R
E. In the same direction as R

S will turn in the opposite direction to R because the connecting belt is crossed. D is therefore the answer.

Example d

In which case would it be the easiest to lift the 50 kg object by exerting pressure at P?
A. 1
B. 2
C. 3
D. 4
E. 5

Case 1 has the longest lever, so it will lift the object the most easily. A is the answer.

Example e

If each of these wheels makes exactly one rotation, which one will have covered the greatest distance?
A. 1
B. 2
C. 3
D. 4
E. All will cover the same distance

The wheel with the largest circumference will cover the greatest distance in one rotation. The answer is therefore C.
{"url":"https://lnandaleroux.com/index.php/test-8e-instructions/","timestamp":"2024-11-14T18:19:14Z","content_type":"text/html","content_length":"55732","record_id":"<urn:uuid:3eec80a9-767b-4acc-aa54-2522e957a948>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00075.warc.gz"}
Bilangan Kromatik Graf Commuting dan Non Commuting Grup Dihedral (The Chromatic Number of the Commuting and Non-commuting Graphs of the Dihedral Group)

Rahayuningtyas, Handrini, Abdussakir, Abdussakir and Nashichuddin, Ach. ORCID: https://orcid.org/0009-0006-8273-8878 (2015) Bilangan Kromatik Graf Commuting dan Non Commuting Grup Dihedral. Cauchy: Jurnal Matematika Murni dan Aplikasi, 4 (1). pp. 16-21. ISSN 2086-0382

Abstract: The commuting graph of a group G is the graph with vertex set X, in which two distinct vertices are connected directly if and only if they commute in G. Let G be a non-abelian group and Z(G) the center of G. The non-commuting graph is the graph whose vertex set is G\Z(G), in which two vertices x and y are adjacent if and only if xy ≠ yx. A vertex colouring of G assigns k colours to the vertices such that no two adjacent vertices are given the same colour. An edge colouring of G assigns colours such that two edges sharing a common vertex are coloured with different colours. The smallest number k for which a graph can be coloured by assigning k colours to the vertices and edges is called the chromatic number. This article presents general formulas for the chromatic numbers of the commuting and non-commuting graphs of the dihedral group.
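To see these definitions in action, here is a small Python sketch (added for illustration; it is not from the paper) that builds the non-commuting graph of the dihedral group D4 with SymPy and greedy-colours it with NetworkX. Note that greedy colouring only gives an upper bound on the chromatic number:

from sympy.combinatorics.named_groups import DihedralGroup
import networkx as nx

G = DihedralGroup(4)  # dihedral group of order 8
elements = list(G.elements)
center = [z for z in elements if all(z * g == g * z for g in elements)]
vertices = [g for g in elements if g not in center]

graph = nx.Graph()
graph.add_nodes_from(range(len(vertices)))
for i in range(len(vertices)):
    for j in range(i + 1, len(vertices)):
        if vertices[i] * vertices[j] != vertices[j] * vertices[i]:
            graph.add_edge(i, j)  # x and y do not commute

colouring = nx.coloring.greedy_color(graph)
print(max(colouring.values()) + 1)  # upper bound on the chromatic number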
{"url":"http://repository.uin-malang.ac.id/1717/","timestamp":"2024-11-13T02:58:50Z","content_type":"application/xhtml+xml","content_length":"26763","record_id":"<urn:uuid:e85bc0b3-9029-44e8-87fa-0f4a0259eaf8>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00765.warc.gz"}
class unreal.ChaosSolverConfiguration¶

Bases: unreal.StructBase

Chaos Solver Configuration

C++ Source:
□ Module: Chaos
□ File: ChaosSolverConfiguration.h

Editor Properties: (see get_editor_property/set_editor_property)
□ breaking_filter_settings (SolverBreakingFilterSettings): [Read-Write] Breaking Filter Settings
□ cluster_connection_factor (float): [Read-Write] Cluster Connection Factor
□ cluster_union_connection_type (ClusterUnionMethod): [Read-Write] Cluster Union Connection Type
□ collision_cull_distance (float): [Read-Write] During collision detection, if two shapes are at least this far apart we do not calculate their nearest features during the collision detection
□ collision_filter_settings (SolverCollisionFilterSettings): [Read-Write] Collision Filter Settings
□ collision_margin_fraction (float): [Read-Write] A collision margin as a fraction of size used by some boxes and convex shapes to improve collision detection results. The core geometry of shapes that support a margin are reduced in size by the margin, and the margin is added back on during collision detection. The net result is a shape of the same size but with rounded corners.
□ collision_margin_max (float): [Read-Write] An upper limit on the collision margin that will be subtracted from boxes and convex shapes. See CollisionMarginFraction
□ collision_pair_iterations (int32): [Read-Write] During solver iterations we solve each constraint in turn. For each constraint we run the solve step CollisionPairIterations times in a row.
□ collision_push_out_pair_iterations (int32): [Read-Write] During pushout iterations we push out each constraint in turn. For each constraint we run the pushout step CollisionPushOutPairIterations times in a row.
□ generate_break_data (bool): [Read-Write] Generate Break Data
□ generate_collision_data (bool): [Read-Write] Generate Collision Data
□ generate_contact_graph (bool): [Read-Write] Generate Contact Graph
□ generate_trailing_data (bool): [Read-Write] Generate Trailing Data
□ iterations (int32): [Read-Write] The number of iterations to run during the constraint solver step
□ joint_pair_iterations (int32): [Read-Write] The number of iterations to run on each constraint during the constraint solver step
□ joint_push_out_pair_iterations (int32): [Read-Write] The number of iterations to run during the constraint fixup step for each joint. This applies a post-solve correction that can address errors left behind during the main solver iterations.
□ push_out_iterations (int32): [Read-Write] The number of iterations to run during the constraint fixup step. This applies a post-solve correction that can address errors left behind during the main solver iterations.
□ trailing_filter_settings (SolverTrailingFilterSettings): [Read-Write] Trailing Filter Settings
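As a quick illustration of how these properties are accessed from Python, here is a minimal sketch (assumed usage, not part of the reference page itself; the property names come from the list above and the numeric values are arbitrary):

import unreal

# Build the struct and tweak a couple of solver settings via the
# get_editor_property / set_editor_property accessors noted above.
config = unreal.ChaosSolverConfiguration()
config.set_editor_property("iterations", 8)
config.set_editor_property("collision_margin_fraction", 0.05)
print(config.get_editor_property("iterations"))  # -> 8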
{"url":"https://dev.epicgames.com/documentation/en-us/unreal-engine/python-api/class/ChaosSolverConfiguration?application_version=4.27","timestamp":"2024-11-07T23:03:42Z","content_type":"text/html","content_length":"11409","record_id":"<urn:uuid:b4922178-87b8-473a-b007-87baeea7a2f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00656.warc.gz"}
Deriving Antilogs In Excel - ExcelAdept

Key Takeaway:
• Antilogs in Excel are used to convert logarithmic values back to their original values. Understanding antilogs is important for data analysis and scientific research.
• To derive antilogs in Excel, input the logarithmic value, use the exponential function, and apply the formula. This process is simple and can be applied to single values or multiple values.
• Examples of antilogarithmic calculations in Excel include finding the antilog of log10(100) and deriving antilogs for multiple values. By practicing these calculations, users can become proficient in using antilogs for data analysis.

Is your business facing challenges when it comes to finding antilogs in Excel? You're not alone! This article will explain how to use Excel to quickly and easily compute antilogs in no time.

Understanding Antilogs in Excel

Antilogs in Excel are the inverse of logarithms, used to convert logarithmic values back to their original form. To understand antilogs in Excel, one must first grasp the concept of logarithms and their inverses. The base-10 antilog is written as =10^x, while the EXP function gives the natural (base-e) antilog; both can be applied to large numbers, including exponential growth and decay rates. By knowing how to derive antilogs in Excel, one can effectively manipulate data to derive meaningful insights.

When working with logarithmic values in Excel, one must understand the inverse operation of logarithms, known as antilogs. Antilogs convert logarithmic values back to their original form. Understanding antilogs in Excel is crucial for working with exponential growth rates, as well as scientific and mathematical data. Deriving antilogs in Excel is a fundamental skill for data analysts, scientists, and statisticians.

It is important to note that an antilog is calculated by raising the base of the logarithm to the power of the log value. For natural logarithms, Excel provides the EXP function, which calculates the antilog automatically. These built-in functions are useful for large numbers, as manual calculations can be time-consuming.

Pro Tip: Rather than memorizing formulas, consider using Excel's built-in functions for faster and more accurate calculations.

Steps to Derive Antilogs in Excel

In Excel, computing antilogs requires specific steps to accomplish this task accurately. Here's a concise guide to deriving antilogs in Excel:
1. Input the base-10 logarithm of the value into a designated cell.
2. In another cell, raise 10 to that power with =10^A1 or =POWER(10, A1) to obtain the antilog.
3. For natural logarithms, use the exponential function =EXP(A1) instead.
4. Verify the results by comparing them with a calculator.

In addition to the above steps, it's important to remember that antilogs should always be checked against manual calculations to avoid error. By following the given steps, anyone can derive antilogs in Excel accurately and without confusion.

Don't risk making errors by manually computing antilogs in your documents. Follow these steps in Excel, and you can derive antilogs confidently and efficiently. Upgrade your Excel skills today!

Examples of Antilogarithmic Calculation in Excel

Antilogarithmic calculation is an important function in Excel that helps to convert logarithmic values into their corresponding antilogs. This is useful for various financial and scientific applications. Below are some examples of antilogarithmic calculations in Excel.
Log Value | Antilog Value | Formula
1.23      | 16.982        | =10^A2
2.56      | 363.078       | =10^A3
3.78      | 6025.596      | =10^A4

It is important to note that antilog values are always positive, and the base of the logarithm should match the base of the antilog function. For example, if the base of the logarithm is 10, then the antilog function should be 10 raised to the power of the logarithmic value.

Another useful fact is that antilogs make it easy to move between logarithmic and ordinary scales. For instance, the antilog of 0.5 is 3.1623, the square root of 10: halving a base-10 logarithm corresponds to taking the square root of the original value.

In a similar vein, a financial analyst once used antilog values to project future earnings for a company. By inputting various logarithmic values into the formula, the analyst was able to come up with a projection that was both accurate and reliable.

Five Facts About Deriving Antilogs in Excel:
• ✅ Antilogs are the inverse of logs and are used for converting logarithmic values back to their original form. (Source: Excel Campus)
• ✅ The antilog of a logarithm to base 10 can be easily derived in Excel by using the POWER function. (Source: Excel Easy)
• ✅ In Excel, the base-10 antilog function is expressed as "=10^x", where "x" is the logarithmic value. (Source: Spreadsheets Made Easy)
• ✅ Antilogs are commonly used in financial analysis, statistics, and scientific research where logarithmic values are frequently encountered. (Source: Wall Street Mojo)
• ✅ Deriving natural antilogs can also be done using the EXP function in Excel, which calculates the exponential (base-e) value of a number. (Source: Excel Jet)

FAQs about Deriving Antilogs In Excel

What is Deriving Antilogs in Excel?
Deriving antilogs in Excel is the process of calculating the inverse of logarithms in a Microsoft Excel spreadsheet.

What is the formula for calculating antilogs in Excel?
The formula for calculating base-10 antilogs in Excel is =10^x, where x is the logarithm of the number you want to find the antilog of.

Can I calculate antilogs for negative numbers in Excel?
Yes. The same formula =10^x works for negative values of x; the result is simply a number between 0 and 1. For example, =10^-2 gives 0.01.

Is it possible to calculate antilogs for multiple numbers at once in Excel?
Yes, it is possible to calculate antilogs for multiple numbers at once in Excel. You can use the "Array Formula" feature to achieve this.

How do I create an array formula for calculating antilogs in Excel?
To create an array formula for calculating antilogs in Excel, select a range of cells where you want to display the results. Then type the formula =10^(logarithmic range) and press Ctrl+Shift+Enter to enter it as an array formula.

Can I use antilogs in Excel to solve exponential equations?
Yes, you can use antilogs in Excel to solve exponential equations. By taking the antilog of both sides of the equation, you can find the value of the unknown variable.
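As a cross-check of the Excel formulas above, here is a short Python sketch (illustrative only; the inputs mirror the table):

import math

# Base-10 antilog, the Python analogue of Excel's =10^x / =POWER(10, x).
for log_value in (1.23, 2.56, 3.78):
    print(log_value, 10 ** log_value)   # 16.982..., 363.078..., 6025.596...

# Natural (base-e) antilog, the analogue of Excel's =EXP(x).
print(math.exp(1.0))                    # 2.71828...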
{"url":"https://exceladept.com/deriving-antilogs-in-excel/","timestamp":"2024-11-03T21:59:58Z","content_type":"text/html","content_length":"62226","record_id":"<urn:uuid:341d0fa5-4d4f-491f-880c-082fad8f82b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00364.warc.gz"}
Leetcode 94: Binary Tree Inorder Traversal

The problem description is as follows: Given a binary tree, return the inorder traversal of its nodes' values. For example: Given the binary tree {1,#,2,3}, return [1,3,2].

In case you are not familiar with inorder tree traversal, the following graph would help: the in-order traversal of the pictured tree is A, B, C, D, E, F, G, H, I.

Recursive Approach

The recursive approach is straightforward. The inorder visiting order is:
1. Left child
2. Parent node
3. Right child

Thus the code is:

/**
 * Definition for a binary tree node.
 * public class TreeNode {
 *     int val;
 *     TreeNode left;
 *     TreeNode right;
 *     TreeNode(int x) { val = x; }
 * }
 */
public class Solution {
    public List<Integer> inorderTraversal(TreeNode root) {
        List<Integer> result = new ArrayList<Integer>();
        if (root == null) {
            return result;
        }
        builder(root, result);
        return result;
    }

    private void builder(TreeNode root, List<Integer> result) {
        if (root == null) {
            return;
        }
        builder(root.left, result);   // visit left subtree
        result.add(root.val);         // visit the node itself
        builder(root.right, result);  // visit right subtree
    }
}

The time complexity of the recursive approach is O(n), since we traverse the whole tree once. The space complexity is O(h), where h is the height of the tree: O(log n) for a balanced tree, O(n) in the worst case, due to the recursion stack.

Iterative Approach

When it comes to an iterative tree traversal, always use a stack or a queue. We can use a stack to imitate the recursive process above:

public class Solution {
    public List<Integer> inorderTraversal(TreeNode root) {
        List<Integer> res = new ArrayList<Integer>();
        if (root == null) return res;
        Stack<TreeNode> stack = new Stack<TreeNode>();
        // define a pointer to track nodes
        TreeNode p = root;
        while (!stack.empty() || p != null) {
            if (p != null) {
                stack.push(p);    // defer the node, go left first
                p = p.left;
            } else {
                TreeNode t = stack.pop();
                res.add(t.val);   // visit the node
                p = t.right;      // then explore its right subtree
            }
        }
        return res;
    }
}

The time complexity of the iterative approach is likewise O(n), and its space complexity is O(h) for the explicit stack.

Morris Traversal Algorithm

Actually there is a better way to solve the problem. Instead of using O(h) space, we can solve it with O(1) extra space. To do so, the key question is how to get back to a parent node from a child node without a stack. Morris traversal borrows the idea of a threaded binary tree: we do not assign additional space in every node for pointers to the predecessor and successor, nor do we save nodes on a stack; instead, null right-child pointers are temporarily reused as threads.

1. If the left child of the current node is empty, output the current node and set the current node to its right child.
2. If the left child of the current node is not empty, find the in-order predecessor of the current node in its left subtree.
   - If the predecessor's right child is empty, set the predecessor's right child to the current node (creating a thread). Set the current node to its left child.
   - If the predecessor's right child is the current node, set the predecessor's right child back to empty (recovering the structure of the tree). Output the current node. Set the current node to its right child.
   - There is no other possibility, by the definition of the predecessor.

Repeat steps 1 and 2 until the current node is empty.
The following graph would help a lot. Here comes the code:

public class Solution {
    public List<Integer> inorderTraversal(TreeNode root) {
        List<Integer> res = new ArrayList<Integer>();
        TreeNode cur = root;
        TreeNode pre = null;
        while (cur != null) {
            if (cur.left == null) {
                res.add(cur.val);        // no left subtree: visit, then go right
                cur = cur.right;
            } else {
                // find cur's in-order predecessor in its left subtree
                pre = cur.left;
                while (pre.right != null && pre.right != cur)
                    pre = pre.right;
                if (pre.right == null) {
                    pre.right = cur;     // create the thread back to cur
                    cur = cur.left;
                } else {
                    pre.right = null;    // second visit: remove the thread
                    res.add(cur.val);    // visit cur
                    cur = cur.right;
                }
            }
        }
        return res;
    }
}

Since we traverse each node at most twice, the time complexity is still O(n), and we use only O(1) extra space.
{"url":"https://www.martinxia.me/2015/09/30/leetcode-94-binary-tree-inorder-traversal/","timestamp":"2024-11-12T03:47:46Z","content_type":"text/html","content_length":"51390","record_id":"<urn:uuid:17680f31-08ce-4666-af85-b2408766953b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00052.warc.gz"}
Understanding Mathematical Functions: How To Find 'a' In Absolute Value

Introduction to Absolute Value Functions

An absolute value function is a mathematical function that returns the absolute value of its input. In simpler terms, it gives the distance of a number from zero on a number line. These functions have various applications in mathematics, physics, and engineering. Understanding absolute value functions is essential for solving equations involving inequalities and distance-related problems.

Overview of absolute value functions and their importance in mathematics

Absolute value functions are denoted by |x|, where x is the input value. These functions are important in calculus, algebra, and geometry for their ability to express the magnitude of a number without regard to its sign. In geometry, absolute value functions are used to calculate distances between points on a coordinate plane.

Brief explanation of what 'a' represents in absolute value functions

In an absolute value function, the variable 'a' represents a scaling factor that affects the steepness of the graph. The value of 'a' determines how quickly the function changes direction at its vertex. It modifies the slope of the two branches and alters the width of the V-shaped graph.

Purpose of the blog post: to guide readers on how to find 'a' in absolute value functions effectively

The objective of this blog post is to give readers a clear understanding of how to identify and determine the value of 'a' in absolute value functions. By following the guidelines outlined in this post, readers will be able to calculate 'a' accurately and apply it when solving problems involving absolute value functions.

Key Takeaways
• Definition of absolute value function
• Finding 'a' in an absolute value function
• Examples of solving for 'a'
• Graphing absolute value functions
• Applications of absolute value functions

Understanding The Basics of Absolute Value

A. Definition of absolute value and its geometrical interpretation on the number line

Absolute value is a mathematical concept that represents the distance of a number from zero on the number line. It is denoted by two vertical bars surrounding the number. For example, the absolute value of -5 is written as |-5|, which equals 5. Geometrically, this means that -5 is 5 units away from zero on the number line.

B. Introduction to the standard form of an absolute value function

An absolute value function is a type of piecewise function that is defined by two separate equations depending on the input value. A simple standard form used here is:

f(x) = |x - a|

where a is a constant that marks the point where the graph of the function touches the x-axis. Understanding how to find 'a' in an absolute value function is crucial for graphing and for solving equations involving absolute values.

Understanding Mathematical Functions: How to find 'a' in an absolute value function

When dealing with mathematical functions, it is important to understand how different variables affect the overall function. In the case of an absolute value function of the form |ax + b|, the variable 'a' plays a crucial role in determining the behavior of the function. Let's explore how we can find the value of 'a' in such a function.

1. Understanding the Absolute Value Function |ax + b|

The absolute value function of the form |ax + b| represents a linear function with an absolute value component.
The variable 'a' determines the slope of the underlying linear expression, while the variable 'b' shifts it and sets where the vertex falls. The absolute value component ensures that the output is never negative, regardless of the input value.

2. Finding the Value of 'a'

When trying to find the value of 'a' in an absolute value function |ax + b|, we can follow these steps:
• Step 1: Identify two points on the function. These can be any two distinct points on the graph of the function.
• Step 2: Use the coordinates of the two points to set up a system of equations. The general form |ax + b| can be used to create two equations from the given points.
• Step 3: Solve the system of equations to find the value of 'a'. This can be done by substitution or elimination.

3. Example Calculation

Let's consider an example of finding the value of 'a' in the function |ax + b|. Given the points (1, 3) and (2, 5) on the function, and assuming ax + b ≥ 0 at both points so the absolute value can be dropped, we can set up the following equations:
1. 3 = a(1) + b
2. 5 = a(2) + b
Subtracting the first equation from the second gives a = 2, and substituting back gives b = 1. Solving such a system simultaneously is how we determine the value of 'a'.

By following these steps and understanding the behavior of the absolute value function |ax + b|, you can effectively find the value of 'a' and further analyze the function's characteristics.

Understanding Mathematical Functions: How to find 'a' in an absolute value function

In mathematics, an absolute value function returns the distance of a number from zero on the number line. The absolute value of a number x, denoted |x|, is always positive or zero. A basic absolute value equation has the form:

|x| = c

Explanation of the variables and constants in the equation
• x: The input value whose absolute value we want. It can be any real number.
• |x|: The absolute value of the number x. It always returns a non-negative value.
• c: The output value of the absolute value equation, i.e. the distance of the number x from zero on the number line.

When solving for c in this equation, we are essentially finding the distance of the input number x from zero. This distance is always positive or zero, regardless of the sign of the input number.

For example, if we have the equation |3| = c, we are looking for the value of c that represents the distance of 3 from zero. Since 3 is 3 units away from zero on the number line, the value of c in this case is 3.

Similarly, if we have the equation |-5| = c, we are finding the distance of -5 from zero. Even though -5 is a negative number, its distance from zero is still 5 units. Therefore, the value of c in this case is 5.

By understanding the variables and constants in the absolute value equation, we can easily find the value of c by reading off the distance of the input number from zero on the number line.

The Role of 'a' in Absolute Value Functions

An absolute value function is a mathematical function that contains an absolute value expression. The variable 'a' in an absolute value function plays a crucial role in determining the shape and behavior of the graph. Let's explore how 'a' affects the function:

A. How 'a' affects the steepness and direction of the absolute value graph

When 'a' is greater than 1, the graph of the absolute value function becomes steeper. This means that the function will rise more quickly and have a sharper turn at the vertex.
On the other hand, when 'a' is between 0 and 1, the graph becomes less steep, resulting in a more gradual rise and a smoother turn at the vertex.

The value of 'a' also determines the direction in which the graph opens. If 'a' is positive, the graph will open upwards, forming a V-shape. Conversely, if 'a' is negative, the graph will open downwards, creating an upside-down V-shape.

B. The difference between positive and negative values of 'a'

When 'a' is positive, the absolute value function will have a minimum value at the vertex. This minimum value represents the lowest point on the graph. On the other hand, when 'a' is negative, the function will have a maximum value at the vertex, indicating the highest point on the graph.

Note that in both cases the graph remains symmetric about the vertical line through its vertex: a negative 'a' reflects the graph across the x-axis but does not change the axis of symmetry.

C. Real-world examples illustrating the impact of 'a' on the function's graph

One real-world example that demonstrates the impact of 'a' on an absolute value function is the pricing strategy of a company. If 'a' represents the profit margin, a higher value of 'a' would indicate a steeper increase in profit as sales volume increases. Conversely, a lower value of 'a' would result in a more gradual rise in profit.

Another example could be the temperature variation throughout the day. If 'a' represents the rate of temperature change, a positive 'a' would show a rapid increase in temperature during the day, while a negative 'a' would indicate a quick drop in temperature at night.

Steps to Find 'a' in Absolute Value Functions

When working with absolute value functions, finding the value of 'a' is essential for accurately graphing the function. There are two main methods to determine 'a' in absolute value functions: using two points on the graph and solving a system of equations, and a graphical approach focusing on the vertex and slope.

Method 1: Using two points on the graph and solving a system of equations

One way to find 'a' in an absolute value function is by using two points on the graph and solving a system of equations. This method involves substituting the x and y values of the points into the absolute value function and solving for 'a'.

Example of solving with given points:
• Given points: (2, 5) and (-3, 4)
• Substitute the points into the absolute value function y = a|x|
• For point (2, 5): 5 = a·|2| => a = 5/2
• For point (-3, 4): 4 = a·|-3| = 3a => a = 4/3
• Compare the values of 'a' obtained from the two points; if they disagree, as they do here, the points do not lie on a single function y = a|x|, and the data or the assumed form should be re-checked

Method 2: Graphical approach - understanding the vertex and slope

Another method to determine 'a' in an absolute value function is through a graphical approach. By reading off the vertex and the slope of the absolute value function's graph, you can identify the value of 'a'.

How the graph helps in determining 'a':
• The vertex of the absolute value function is the point where the graph changes direction
• The slope of each branch indicates how steep the function is
• By analyzing the vertex and slope, you can infer the value of 'a' in the function

Comparison of methods and when to use each

Both methods have their advantages and are useful in different scenarios. The first method, using two points and solving a system of equations, is more precise and accurate, providing an exact value for 'a'.
On the other hand, the graphical approach is more visual and intuitive, allowing for a quick estimate of 'a' based on the graph of the function.

It is recommended to use the first method when you need an exact value of 'a' for precise calculations or graphing. The graphical approach can be used for a quick analysis or estimate of 'a' when a precise value is not necessary.

Common Challenges and Solutions

Issue: Misinterpreting the graph's vertex as one of the sample points

Sub-point: Understanding the vertex of an absolute value function

One common mistake when dealing with absolute value functions is to confuse the vertex with the arbitrary sample points used to set up equations. The vertex of an absolute value function is the special point where the graph changes direction, and it should be treated differently from other points when solving for parameters.

Sub-point: Tips for accurate graph reading

To avoid this confusion, it is important to understand the concept of the vertex and how it relates to the graph of an absolute value function. When analyzing the graph, pay close attention to where the graph changes direction, as this indicates the location of the vertex.

Issue: Confusing the absolute value function with quadratic or other function types

Sub-point: Recognizing the characteristics of an absolute value function

Another challenge that arises is confusing the absolute value function with other function types, such as quadratic functions. It is important to recognize the distinct characteristics of an absolute value function, such as the V-shape of its graph (a parabola, by contrast, is smoothly curved) and the absence of negative output values.

Sub-point: Tips for accurate equation setup

When setting up the equation for an absolute value function, remember that the basic absolute value expression is |x|, where x represents the input value. Make sure to correctly identify the absolute value expression in the equation to avoid confusion with other function types.

Solutions to these issues, including tips for accurate graph reading and equation setup
• Practice identifying the vertex of an absolute value function on various graphs to improve your understanding.
• Study the characteristics of different function types to distinguish an absolute value function from other types.
• Double-check your equation setup to ensure that you have correctly identified the absolute value expression.
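As a small illustrative sketch (not from the article; the helper function and sample points are hypothetical), here is how Method 1 looks in Python for the simple form y = a·|x|:

def estimate_a(points):
    """Estimate 'a' in y = a*|x| by averaging the per-point ratios y/|x|."""
    ratios = [y / abs(x) for x, y in points if x != 0]
    return sum(ratios) / len(ratios), ratios

a, ratios = estimate_a([(2, 5), (-3, 4)])
print(ratios)  # [2.5, 1.333...]: the ratios disagree, so these points
print(a)       # do not lie on a single y = a*|x|; ~1.92 is only an average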
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-find-absolute-value-function","timestamp":"2024-11-11T16:13:29Z","content_type":"text/html","content_length":"219647","record_id":"<urn:uuid:7325b8d4-dd40-4b30-b056-badaf4043280>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00793.warc.gz"}
Entanglement spectrum and emergent integrability in quantum many-body systems

Papic, Z. (2016). Entanglement spectrum and emergent integrability in quantum many-body systems. Perimeter Institute. PIRSA:16080090, https://pirsa.org/16080090

Abstract: Quantum many-body systems are challenging to study because of their exponentially large Hilbert spaces, but at the same time they are an area for exciting new physics due to the effects of interactions between particles. For theoretical purposes, it is convenient to know if such systems can be expressed in a simpler way in terms of some nearly-free quasiparticles, or more generally if one can construct a large set of operators that approximately commute with the system's Hamiltonian. In this talk I will discuss two ways of using the entanglement spectrum to tackle these questions. In the first part, I will show that strongly disordered systems in the many-body localized phase have a universal power-law structure in their entanglement spectra. This is a consequence of their local integrability, and distinguishes such states from typical ground states of gapped systems. In the second part, I will introduce a notion of "interaction distance" and show that the entanglement spectrum can be used to quantify "how far" an interacting ground state is from a free (Gaussian) state. I will discuss some examples of quantum spin chains and outline a few future directions.

[1] M. Serbyn, A. Michailidis, D. Abanin, Z. Papic, arXiv:1605.05737.
[2] C. J. Turner, K. Meichanetzidis, Z. Papic, and J. K. Pachos, arXiv:1607.02679.
{"url":"https://pirsa.org/16080090/","timestamp":"2024-11-02T15:42:56Z","content_type":"text/html","content_length":"50673","record_id":"<urn:uuid:35faf11d-63a4-406a-98f8-d466c19a5d9b>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00517.warc.gz"}
Perfect Samples - OpenQuant

Assume that you are taking samples of size $s$ from a population. For each sample, let $p$ represent the probability that every point in that sample is an inlier and let $e$ represent the proportion of points in the sample that are outliers. Derive an equation in terms of $s$, $p$ and $e$ to calculate the number of samples ($N$) needed in order to be 99% confident that there is at least one sample drawn that contains no outliers.
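One standard way to derive the requested bound (this is the classic sample-count formula from robust estimation, sketched here as a possible solution rather than the site's official answer): each point is an inlier with probability $1-e$, so a size-$s$ sample is all-inlier with probability $p = (1-e)^s$, and the chance that $N$ independent samples each contain at least one outlier is $(1-p)^N$. Requiring 99% confidence of at least one clean sample gives

\[
1-(1-p)^N \ge 0.99, \qquad p = (1-e)^s
\;\;\Longrightarrow\;\;
N \ge \frac{\ln(1-0.99)}{\ln\bigl(1-(1-e)^s\bigr)} .
\]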
{"url":"https://openquant.co/questions/perfect-samples","timestamp":"2024-11-05T13:10:07Z","content_type":"text/html","content_length":"32020","record_id":"<urn:uuid:aafe1779-4de0-47f9-9a81-c79bd0868941>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00133.warc.gz"}
Re: st: Two way table of regression coefficients or intercepts with -esttab-

From: Nick Cox <[email protected]>
To: [email protected]
Subject: Re: st: Two way table of regression coefficients or intercepts with -esttab-
Date: Fri, 23 Sep 2011 15:07:06 +0100

Please remember the request to explain where user-written commands you refer to come from.

On Fri, Sep 23, 2011 at 2:58 PM, Richard Herron <[email protected]> wrote:
> Can I make a two-way table of regression coefficients with -esttab-?
> Sometimes I like to make a two-way table with either intercepts or
> coefficients and I would like to automate it.
> For example, here I generate three random variables, then form
> quintiles on -a- and -b-, then find the mean of -c- in each of the 25
> groups (intersections of -a_q- and -b_q-) with -regress-. This is
> fairly easy with -table-, but is less extensible to displaying
> intercepts, significance, etc. (and I am not sure how to generate a
> LaTeX table from -table-). Can I do this with -esttab- or -estout-?
> Thanks!
> * --- begin code ---
> * generate data
> clear
> set obs 2000
> generate a = rnormal()
> generate b = rnormal()
> generate c = rnormal()
> * generate quantiles for a and b
> xtile a_q = a, nquantiles(5)
> xtile b_q = b, nquantiles(5)
> * I would like something like this two-way table of means, but that is
> * extensible to intercepts and/or more coefficients
> table a_q b_q, c(mean c)
> * but the best I can do is this very wide table that is not very readable
> eststo clear
> bysort a_q b_q: eststo: quietly regress c
> esttab, not noconstant nogap compress
> * --- end code ---
{"url":"https://www.stata.com/statalist/archive/2011-09/msg01064.html","timestamp":"2024-11-08T07:55:15Z","content_type":"text/html","content_length":"11281","record_id":"<urn:uuid:242ae462-8b60-43d7-becf-959dd18d3573>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00071.warc.gz"}
Dividing Fractions with Whole Numbers - Explanation, Problems and FAQs

What Does Dividing Fractions by Whole Numbers Mean?

In this section, we will learn about dividing fractions by whole numbers. Before jumping into the main part, let us understand the basic concepts of fractions and whole numbers.

What are Fractions?

A fraction represents a portion or part of a whole. A fraction has two parts: a numerator and a denominator. The top number is called the numerator, and the denominator is the number on the bottom.

Ex: 4/6 is a fraction. 4 is the numerator, written above the line, and 6 is the denominator, written below the line. Here 4/6 can be simplified to 2/3, which represents the same part of the whole.

What are Whole Numbers?

The whole numbers are the positive integers together with zero. A whole number has no decimal or fractional part. In other words, a number that is not a fraction is a whole number.

The mathematical notation for whole numbers is W = {0, 1, 2, 3, 4, 5, 6, 7, ...}

Now that we have understood the basic concepts of fractions and whole numbers, let us look at how to divide fractions by whole numbers and vice versa.

How to Divide Fractions by Whole Numbers

Here let us discuss the steps for dividing a fraction by a whole number.

Step 1: Write the fraction followed by the division sign and the whole number by which we need to divide it.
Ex: If we want to divide the fraction 5/4 by the whole number 3, we can represent this step as follows.
\[\frac{5}{4}\] ÷ 3

Step 2: Convert the whole number into a fraction. To convert a whole number to a fraction, simply place the number over 1: the whole number becomes the numerator and 1 becomes the denominator.
Ex: Continuing the example from Step 1, we convert the whole number 3 into a fraction by writing 1 in the denominator, which does not change its value.
\[\frac{5}{4}\] ÷ \[\frac{3}{1}\]

Step 3: Take the reciprocal of the whole number by which we are dividing the fraction. To find the reciprocal, swap the numerator and denominator.
Ex: \[\frac{3}{1}\] becomes \[\frac{1}{3}\]

Step 4: After we take the reciprocal, the division becomes a multiplication.
Ex: \[\frac{5}{4}\] x \[\frac{1}{3}\]

Step 5: Multiply the numerators and denominators of the fractions to obtain the new fraction.
Ex: \[\frac{5}{4}\] x \[\frac{1}{3}\] = \[\frac{5}{12}\]

Step 6: Simplify the fraction if necessary. To simplify, find the greatest common factor, the largest number that divides both the numerator and the denominator exactly, and divide both by it.
Ex: \[\frac{2}{16}\] can be simplified to \[\frac{1}{8}\].

Now let us solve some problems on dividing fractions by whole numbers using the steps above.

Problems on Dividing Fractions by Whole Numbers

1) Divide the Fraction 3/7 by the Whole Number 3.
Step 1: Write the fraction followed by the division sign: \[\frac{3}{7}\] ÷ 3
Step 2: Convert the whole number into a fraction: 3 can be written as 3/1.
Step 3: Take the reciprocal of the whole number: 3/1 becomes 1/3.
Step 4: The division becomes a multiplication: \[\frac{3}{7}\] x \[\frac{1}{3}\]
Step 5: Multiply the numerators and denominators: \[\frac{3}{7}\] x \[\frac{1}{3}\] = \[\frac{3}{21}\]
Step 6: Simplify the fraction.
\[\frac{3}{21}\] = \[\frac{1}{7}\]
The final fraction obtained after dividing 3/7 by the whole number 3 is 1/7.

2) Divide the Fraction 5/2 by the Whole Number 10.
Step 1: Write the fraction followed by the division sign: \[\frac{5}{2}\] ÷ 10
Step 2: Convert the whole number into a fraction: 10 can be written as 10/1.
Step 3: Take the reciprocal of the whole number: 10/1 becomes 1/10.
Step 4: The division becomes a multiplication: \[\frac{5}{2}\] x \[\frac{1}{10}\]
Step 5: Multiply the numerators and denominators: \[\frac{5}{2}\] x \[\frac{1}{10}\] = \[\frac{5}{20}\]
Step 6: Simplify the fraction.
\[\frac{5}{20}\] = \[\frac{1}{4}\]
The final fraction obtained after dividing 5/2 by the whole number 10 is 1/4.

How to Divide Whole Numbers by Fractions

Here are the steps to divide a whole number by a fraction.

Step 1: Make a fraction out of the whole number by making it the numerator of a fraction with denominator 1.
Ex: The whole number 5 can be written in fraction form as 5/1.

Step 2: Find the reciprocal of the fraction by swapping its numerator and denominator.
Ex: The reciprocal of the fraction 5/7 is 7/5, obtained by reversing the numerator and the denominator.

Step 3: Since we have taken the reciprocal of the fraction, the division becomes a multiplication.
Ex: \[\frac{5}{1}\] x \[\frac{7}{5}\]

Step 4: Multiply the numerators and denominators to find the resulting fraction.
Ex: \[\frac{5}{1}\] x \[\frac{7}{5}\] = \[\frac{35}{5}\]

Step 5: Simplify the fraction if necessary, by dividing both the numerator and the denominator by their greatest common factor.
Ex: For \[\frac{35}{5}\] the greatest common factor is 5, so dividing both numerator and denominator by 5 gives the simplified answer 7/1, i.e. 7.

Let us solve some problems on dividing whole numbers by fractions.

Problems on How to Divide Whole Numbers by Fractions

1) Divide the Whole Number 7 by the Fraction 3/4.
Step 1: Make a fraction out of the whole number: 7 can be written as 7/1 in fraction form.
Step 2: Find the reciprocal of the fraction: the reciprocal of 3/4 is 4/3.
Step 3: The division becomes a multiplication: \[\frac{7}{1}\] x \[\frac{4}{3}\]
Step 4: Multiply the numerators and denominators: \[\frac{7}{1}\] x \[\frac{4}{3}\] = \[\frac{28}{3}\]
No further simplification is possible, so the final answer obtained after dividing the whole number 7 by the fraction 3/4 is 28/3.

2) Divide the Whole Number 12 by the Fraction 8/3.
Step 1: Make a fraction out of the whole number: 12 can be written as 12/1 in fraction form.
Step 2: Find the reciprocal of the fraction: the reciprocal of 8/3 is 3/8.
Step 3: The division becomes a multiplication: \[\frac{12}{1}\] x \[\frac{3}{8}\]
Step 4: Multiply the numerators and denominators: \[\frac{12}{1}\] x \[\frac{3}{8}\] = \[\frac{36}{8}\]
Step 5: Simplify the fraction. Here the greatest common factor of the numerator and denominator is 4, so 36/8 simplifies to 9/2.
So the final answer obtained after dividing the whole number 12 by the fraction 8/3 is 9/2.

Notes:
• When a fraction in lowest terms is divided by a whole number, the result is again a fraction.
• When a whole number is divided by a fraction, the result may be either a fraction or a whole number.

FAQs on Dividing Fractions with Whole Numbers

1. What are Fractions?
Ans: Fractions are pieces of a whole or of a set that are equal in size.
Each part of a whole divided into equal parts is a fraction of the whole.

2. What are Whole Numbers?
Ans: The whole numbers are the real numbers that contain zero and all the positive integers. They exclude negative integers and fractional numbers.

3. Do We Get a Fraction or a Whole Number by Dividing Fractions with Whole Numbers?
Ans: Whatever the fraction and the whole number, when we divide a fraction in lowest terms by a whole number we end up with a fraction.
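The worked examples above can be double-checked with Python's exact-arithmetic fractions module (a quick verification sketch, not part of the article):

from fractions import Fraction

print(Fraction(3, 7) / 3)    # 1/7  (fraction divided by a whole number)
print(Fraction(5, 2) / 10)   # 1/4
print(7 / Fraction(3, 4))    # 28/3 (whole number divided by a fraction)
print(12 / Fraction(8, 3))   # 9/2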
{"url":"https://www.vedantu.com/maths/dividing-fractions-with-whole-numbers","timestamp":"2024-11-03T00:35:03Z","content_type":"text/html","content_length":"283588","record_id":"<urn:uuid:36bc70c6-68ef-46ef-b0b4-4fe11b5c8064>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00016.warc.gz"}
What Does Interest Per Annum Mean? Unlock Financial Savvy! - House and Home Online

What Does Interest Per Annum Mean? Unlock Financial Savvy!

Interest per annum refers to the interest rate over a period of one year, assuming that the interest is compounded annually. It represents the cost of borrowing or the return on investment over a one-year period.

Understanding the concept of interest per annum is crucial for managing finances, investments, and loans. Whether it's calculating interest on a savings account or understanding the interest rate on a loan, the per annum interest rate plays a significant role in determining the overall cost or return over a year. By comprehending how interest per annum works, individuals and businesses can make informed financial decisions and effectively manage their assets and liabilities. Let's delve deeper into the meaning, calculation, and practical applications of interest per annum to gain a comprehensive understanding of its significance in the world of finance.

Decoding Interest Per Annum

Interest per annum refers to the interest rate over a period of one year, assuming that the interest is compounded annually. It is commonly used to describe interest rates and is an essential concept in finance. The interest rate is the amount of interest due per period, expressed as a proportion of the amount lent, deposited, or borrowed. The total interest earned or paid depends on factors such as the principal sum, the interest rate, the compounding frequency, and the length of time.

To calculate simple interest, you can use the formula: Interest = Principal amount (P) × Interest rate (R) × Number of time periods (T). For example, if you have a principal amount of $1,000, an interest rate of 5%, and a one-year time period, the interest would be calculated as: Interest = $1,000 × 0.05 × 1 = $50.

Understanding interest per annum is crucial for making informed financial decisions, whether for savings accounts, loans, or investments. Remember, per annum means once per year, and it is important to consider the compounding frequency when dealing with interest rates.

Calculating Annual Interest

An interest rate is the amount of interest due per period, as a proportion of the amount lent, deposited, or borrowed. The total interest on an amount lent or borrowed depends on the principal sum, the interest rate, the compounding frequency, and the length of time over which it is lent, deposited, or borrowed. The per annum interest rate refers to the interest rate over a period of one year with the assumption that the interest is compounded every year. For instance, if you have a savings account with a per annum interest rate of 5%, it means that the interest is paid or compounded once per year.

When it comes to calculating annual interest, there are two types to consider: simple interest and compound interest. Simple interest is calculated using the formula: Interest = Principal amount (the beginning balance) × Interest rate (usually per year, expressed as a decimal) × Number of time periods (generally one-year time periods). Compound interest, on the other hand, takes into account the compounding frequency, which can be daily, monthly, quarterly, or annually. It is calculated using the formula: A = P(1 + r/n)^(nt), where A is the future value, P is the principal amount, r is the annual interest rate, n is the number of compounding periods per year, and t is the number of years.
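Both formulas translate directly into code; here is a minimal sketch (the function names and sample figures are my own, chosen to match the examples above):

def simple_interest(principal, rate, years):
    """Interest = P * R * T."""
    return principal * rate * years

def compound_amount(principal, rate, n, years):
    """A = P * (1 + r/n)**(n*t), with n compounding periods per year."""
    return principal * (1 + rate / n) ** (n * years)

print(simple_interest(1000, 0.05, 1))      # 50.0, as in the example above
print(compound_amount(1000, 0.05, 12, 1))  # ~1051.16 with monthly compounding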
The Impact Of Compounding

Interest per annum refers to the interest rate over a period of one year with the assumption that the interest is compounded every year. The compounding frequency has a significant long-term effect on savings: the more frequent the compounding, the higher the effective annual rate and the greater the impact of compounding on the balance over time.

Comparing Interest Rates

Interest per annum refers to the annual interest rate, usually expressed as a percentage, that a borrower pays on a loan or a lender receives on an investment. It is calculated based on the principal amount, the interest rate, and the compounding frequency. Simple interest is calculated as P × R × T, where P is the principal amount, R is the interest rate, and T is the time period.

Comparing interest rates can be confusing, especially when considering different loan offers. When looking at loan offers, it's important to understand the difference between the Annual Percentage Rate (APR) and the Annual Percentage Yield (APY). The APR is the interest rate charged per year, whereas the APY takes into account the compounding of interest. Additionally, it's important to look for any fees or charges associated with the loan offer. When comparing loan offers, it's best to look at the total cost of the loan, including any fees or charges, to determine which offer is the most cost-effective. Understanding interest rates and loan offers can help you make informed financial decisions.

Interest In Everyday Finance

Interest per annum refers to the amount of interest charged or earned over a one-year period. It's a key factor in calculating the total interest on a loan or investment, and is typically expressed as a percentage of the principal amount. Understanding this concept is crucial for managing personal finances and making informed decisions about borrowing and saving.

What is Interest Per Annum? Interest is the amount of money charged by a lender to a borrower for the use of borrowed money. Interest per annum refers to the interest rate charged on a loan or earned on a deposit over a period of one year, with the assumption that the interest is compounded annually. This means that the interest earned or charged is added to the principal amount at the end of each year, and the interest for the following year is calculated based on the new total.

Interest in Everyday Finance

Savings Accounts and CDs: When you deposit money in a savings account or a Certificate of Deposit (CD), the bank pays you interest on that amount. The interest rate is usually stated as an annual percentage rate (APR) and the interest earned is added to your account balance at the end of the year.

Credit Cards and Mortgages: When you borrow money through a credit card or a mortgage, you are charged interest on the amount borrowed. The interest rate is usually stated as an annual percentage rate (APR) and is added to your outstanding balance at the end of each billing cycle or year.

Advanced Interest Calculations

Interest per annum refers to the interest rate over a one-year period, assuming that the interest is compounded annually. It is commonly used to describe interest rates on loans or savings accounts.
Calculating interest per annum involves considering the principal amount, the interest rate, and the length of time.

Advanced Interest Calculations Using Financial Calculators

When it comes to calculating interest per annum, financial calculators can be incredibly helpful. These tools allow you to input the principal amount, interest rate, and compounding frequency to determine the total interest over a given period of time. Whether you are dealing with simple or compound interest, financial calculators can help you save time and avoid errors.

Excel for Interest Projections

If you are looking to project interest over multiple periods, Microsoft Excel can be a powerful tool. By using the appropriate formulas and functions, you can easily calculate the total interest over a given period of time for simple or compound interest. This can be incredibly useful for financial planning and decision-making.

Frequently Asked Questions

What Does 4% Interest Per Annum Mean?
4% interest per annum means that for every year, you will earn 4% of the principal amount as interest.

What Does 12% Interest Per Annum Mean?
The term "12% interest per annum" means that the interest rate is 12% per year. This rate is applied to the principal amount of a loan or investment, and the interest is calculated and compounded annually.

How Do You Calculate Interest Per Annum?
To calculate interest per annum, use the formula: Interest = Principal amount × Interest rate × Time period. The interest rate is usually expressed as a decimal per year.

What Does 7% Interest Per Annum Mean?
7% interest per annum means that for every $100 borrowed, $7 in interest will be charged annually.

Understanding interest per annum is crucial for making informed financial decisions. It represents the annual interest rate on a loan or investment, impacting the total amount paid or earned. By grasping this concept, individuals can better manage their finances and make strategic choices for the future.
{"url":"https://houseandhomeonline.com/what-does-interest-per-annum-mean/","timestamp":"2024-11-05T22:39:47Z","content_type":"text/html","content_length":"159888","record_id":"<urn:uuid:89d3a57c-b497-49e1-8ed2-a3a1764f4a3a>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00857.warc.gz"}
Course Offerings

Winter 2025

See complete information about these courses in the course offerings database. For more information about a specific course, including course type, schedule and location, click on its title.

General Physics I
PHYS 111 - Rutkowski, Todd C.
An introduction to classical mechanics. Topics include kinematics, Newton's laws, solids, fluids, and wave motion.

General Physics II
PHYS 112 - Mazilu, Irina
A continuation of PHYS 111. Topics include thermodynamics, electricity, magnetism, and optics.

General Physics II
PHYS 112 - Nguyen, Thai Son (Son)
A continuation of PHYS 111. Topics include thermodynamics, electricity, magnetism, and optics.

General Physics II
PHYS 112 - Sukow, David W.
A continuation of PHYS 111. Topics include thermodynamics, electricity, magnetism, and optics.

General Physics II (FY Only)
PHYS 112A - McClain, Thomas J. (Tom)
A continuation of PHYS 111. Topics include thermodynamics, electricity, magnetism, and optics.

Mathematical Methods for Physics and Engineering
PHYS 225 - Shobeiry, Poorya
Study of a collection of mathematical techniques particularly useful in upper-level courses in physics and engineering: vector differential operators such as gradient, divergence, and curl; functions of complex variables; Fourier analysis; orthogonal functions; matrix algebra and the matrix eigenvalue problem; ordinary and partial differential equations.

Newtonian Mechanics
PHYS 230 - Mazilu, Dan A.
A thorough study of Newton's laws of motion, rigid body motion, and accelerated reference frames.

Modeling and Simulation of Physical Systems
PHYS 265 - Mazilu, Irina
An introduction to the innovative field of modeling and analysis of complex physical systems from such diverse fields as physics, chemistry, ecology, epidemiology, and a wide range of interdisciplinary, emerging fields such as econophysics and sociophysics. Topics vary according to faculty expertise and student interest. The goal is to seek the underlying physics laws that govern such seemingly diverse systems and to provide contemporary mathematical and computational tools for studying and simulating their dynamics. Includes traditional lectures as well as workshops and computational labs, group presentations, and seminars given by invited speakers.

Intermediate Special Topics in Physics: Advanced Physics Lab
PHYS 295B - Rutkowski, Todd C.
This laboratory course will provide students with hands-on experience studying a variety of phenomena encountered in advanced fields of physics, including particle physics, plasma physics, quantum mechanics, quantum computing, and optics. The lecture portion of the course will cover error analysis methods, common software used during manuscript preparation, and will examine recent journal articles for examples of the style and quality required for publication in the field.

Quantum Mechanics
PHYS 340 - McClain, Thomas J. (Tom)
A study of the postulates and formalism of quantum theory emphasizing the Schroedinger approach. The probabilistic theory is applied to one-dimensional bound and scattering states and the three-dimensional central force problem. Investigation of spin and angular momentum, Clebsch-Gordan coefficients, indistinguishable particles, and perturbation theory. Mathematical formalism includes operators, commutators, Hilbert space, and Dirac notation.

Directed Individual Study: Non-Equilibrium Statistical Physics
PHYS 402A - Mazilu, Irina
Advanced work and reading in topics selected by the instructor to fit special needs of advanced students.
Directed Individual Research
PHYS 421 - Mazilu, Dan A. / Mazilu, Irina
Directed research in physics.

Directed Individual Research
PHYS 421 - Sukow, David W.
Directed research in physics.

Honors Thesis
PHYS 493 - Mazilu, Irina
Honors Thesis.

Fall 2024

See complete information about these courses in the course offerings database. For more information about a specific course, including course type, schedule and location, click on its title.

General Physics I
PHYS 111 - Mazilu, Dan A.
An introduction to classical mechanics. Topics include kinematics, Newton's laws, solids, fluids, and wave motion.

General Physics I
PHYS 111 - Mazilu, Irina
An introduction to classical mechanics. Topics include kinematics, Newton's laws, solids, fluids, and wave motion.

General Physics I
PHYS 111 - Nguyen, Thai Son (Son)
An introduction to classical mechanics. Topics include kinematics, Newton's laws, solids, fluids, and wave motion.

General Physics I (FY Only)
PHYS 111A - Rutkowski, Todd C.
An introduction to classical mechanics. Topics include kinematics, Newton's laws, solids, fluids, and wave motion.

General Physics II
PHYS 112 - Nguyen, Thai Son (Son)
A continuation of PHYS 111. Topics include thermodynamics, electricity, magnetism, and optics.

Stellar Evolution and Cosmology
PHYS 151 - Sukow, David W.
An introduction to the physics and astronomy of stellar systems and the universe. Topics include the formation and lifecycle of stars, stellar systems, galaxies, and the universe as a whole according to "Big Bang" cosmology. Observational aspects of astronomy are also emphasized, including optics and telescopes, star maps, and knowledge of constellations. Geometry, trigonometry, algebra, and logarithms are used in the course.

Electrical Circuits
PHYS 207 - Aiken, Paul
Same as ENGN 207. A detailed study of electrical circuits and the methods used in their analysis. Basic circuit components, as well as devices such as operational amplifiers, are investigated. The laboratory acquaints the student both with fundamental electronic diagnostic equipment and with the design and behavior of useful circuits.

Phys 207 Lab
PHYS 207L - Aiken, Paul
A detailed study of electrical circuits and the methods used in their analysis. Basic circuit components, as well as devices such as operational amplifiers, are investigated. The laboratory acquaints the student both with fundamental electronic diagnostic equipment and with the design and behavior of useful circuits.

Modern Physics
PHYS 210 - Sukow, David W.
An introduction to the special theory of relativity and the physics of the atom. Topics in relativity include the Lorentz transformations, relativistic velocity addition, and relativistic momentum and energy. Topics in atomic physics include the wave description of matter, introductory quantum mechanics, the hydrogen atom, and the historical experiments that led to the modern atomic model.

Nuclear Physics
PHYS 315 - Mazilu, Dan A.
Topics include radioactivity, nuclear reactions, high-energy physics, and elementary particles.

Directed Individual Study: Quantum Computing
PHYS 403A - Mazilu, Irina

Directed Individual Research
PHYS 421 - Mazilu, Dan A. / Mazilu, Irina
Directed research in physics.

Directed Individual Research
PHYS 421 - Sukow, David W.
Directed research in physics.

Directed Individual Research
PHYS 422 - Mazilu, Irina
Directed research in physics.

Honors Thesis
PHYS 493 - Mazilu, Irina
Honors Thesis.

Spring 2024

See complete information about these courses in the course offerings database.
For more information about a specific course, including course type, schedule and location, click on its title. Supervised Study Abroad: Big Science in Twenty-First Century Europe PHYS 125 - McClain, Thomas J. (Tom) Though the United States has often been at the forefront of big science since the middle of the twentieth century, there are indications that this may be changing. In this course, we will learn about particle physics and gravitational wave astronomy as we travel to two of the premier ``Big Science" sites in Europe: the large hadron collider at CERN in Geneva and the VIRGO gravitational wave detector in Tuscany. While in Europe, we will also examine the question of how twenty-first century science is able to thrive in centuries-old European societies.
{"url":"https://my.wlu.edu/physics-and-engineering-department/about-the-department/physics-courses","timestamp":"2024-11-06T10:36:25Z","content_type":"text/html","content_length":"36872","record_id":"<urn:uuid:83b28c81-5735-470a-acda-8de2ed3248ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00027.warc.gz"}
Thanks Roland! That worked!

Thanks Marco; however, interpolation does not give me an analytic function, as Piecewise does. But I meant: how do I calculate the number of infected over a specified period of time, say t=0 to t=10, with your model?

Hi, I am trying to set up a code to plot a streamplot from the results of a SIR model. The problem to solve is r′ = F(r), where r = (x, y) and F(x, y) = (−b*x*y/n, b*x*y/n − k*y), and where b = 3.3925, k = 2.95, n = 157759. How can I set up...

Hi, I have the given command

    sol = Flatten[DSolve[y'[x] == y[x]^3/x^3 + y[x]/x + 1, y[x], x]]

which yields the solution

    Solve[ArcTan[(-1 + (2 y[x])/x)/Sqrt[3]]/Sqrt[3] + 1/3 Log[1 + y[x]/x] - 1/6 Log[1 - y[x]/x + y[x]^2/x^2]...

As written in the discussion, this problem is there *whatever the coefficients are*. In your case they give 0=0. If you change them, they don't give this equality, and it still doesn't solve. Try this:

    c = 3*10^8
    T = 2
    h = 1.0545718*...

Hello, I would like to solve the following PDE for Psi(x,y,z,t) ![enter image description here][1] [1]: https://community.wolfram.com//c/portal/getImageAttachment?filename=Screenshotfrom2021-08-2615-21-55.png&userId=967554 where alpha, ...

Thanks for that; it now shows the function, but the plot is still blank. PS: The interval of the plot is a little bit large. It is the absolute center which is important. How do I "zoom" in on the center?

Hello, I have a polar function given by:

    u[r_, phi_, n_] := Piecewise[{{BesselJ[1.5 r, n]*Exp[I n phi], 0 1}}]
    u0[r_, phi_, k_] := Sum[I^(-n) BesselJ[n, r] Exp[I n phi], {k, -5, 5}];
    u1[r_, phi_, n_] := ...
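For the SIR questions in this thread, here is a minimal sketch in Python (rather than Mathematica) of integrating the posted system r′ = F(r) with the poster's parameters and reading off the number of infected over t = 0 to t = 10. The initial condition is illustrative, not from the thread:

    import numpy as np
    from scipy.integrate import solve_ivp

    # SIR-style system from the post: r' = F(r), r = (x, y),
    # F(x, y) = (-b*x*y/n, b*x*y/n - k*y)
    b, k, n = 3.3925, 2.95, 157759.0

    def F(t, r):
        x, y = r
        return [-b * x * y / n, b * x * y / n - k * y]

    # Integrate from t = 0 to t = 10, starting with one infected individual
    # (an assumed initial condition).
    sol = solve_ivp(F, (0.0, 10.0), [n - 1.0, 1.0], dense_output=True)
    t = np.linspace(0.0, 10.0, 101)
    x, y = sol.sol(t)
    print("currently infected at t=10:", y[-1])
    print("cumulative ever infected by t=10:", n - x[-1])

The cumulative count n − x(10) works because x (the susceptibles) only ever decreases, so everyone who has left x must have been infected at some point.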
{"url":"https://community.wolfram.com/web/sergiomanzetti/home?p_p_id=user_WAR_userportlet&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=column-1&p_p_col_count=1&_user_WAR_userportlet_tabs1=Discussions&tabs2=Participating","timestamp":"2024-11-04T04:15:15Z","content_type":"text/html","content_length":"59429","record_id":"<urn:uuid:19a198e9-6aa2-40f3-97a6-5765a880bd65>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00795.warc.gz"}
Volts to eV Calculator

What Is Volts to eV?

Volts to eV Calculator: In scientific and engineering contexts, the conversion between volts (V) and electron volts (eV) is crucial. A volt measures the electric potential difference between two points in an electrical circuit. In contrast, an electron volt is a unit of energy used in particle physics. It represents the amount of kinetic energy gained or lost by an electron moving through an electric potential difference of one volt. Understanding the relationship between these two units helps in analyzing energy levels, particle interactions, and other physical phenomena. This conversion is especially relevant in fields such as quantum mechanics, solid-state physics, and material science.

What Is a Volts to eV Calculator Website?

A Volts to eV calculator website is an online tool that enables users to convert values from volts (V) to electron volts (eV). This type of calculator is used to translate electrical potential differences into energy measurements that are often used in scientific research and applications. It simplifies the process of converting between these units, making it easier for researchers, students, and engineers to work with energy values in their respective fields.

How to Use a Volts to eV Calculator Website

Using a Volts to eV calculator is a straightforward process. Follow these steps:
1. Access the Calculator: Visit a website that provides a Volts to eV calculator.
2. Input the Voltage Value: Enter the value in volts that you want to convert into electron volts.
3. Perform the Conversion: Click the 'Calculate' button to convert the voltage value into electron volts.
4. Review the Result: The calculator will display the equivalent energy in electron volts.

What Is the Formula of Volts to eV Calculator?

The conversion is based on the relationship between energy and electric potential:

Energy (J) = Voltage (V) × Charge (C)

For a single electron, the charge is the elementary charge e ≈ 1.602 × 10^-19 C, so an electron moved through a potential difference of V volts gains

E = V × 1.602 × 10^-19 joules,

which, by the definition of the electron volt, is exactly V electron volts. In other words, for one elementary charge the numerical value of the energy in eV equals the numerical value of the potential difference in volts; multiplying by 1.602 × 10^-19 converts that energy to joules.

Advantages and Disadvantages of Volts to eV Calculators

• Ease of Use: Provides a quick and efficient way to convert values between volts and electron volts.
• Accuracy: Delivers precise conversion results when the correct input is provided.
• Accessibility: Available online and often free to use, making it accessible to anyone needing these conversions.

• Limited Scope: The calculator only performs conversions and does not offer deeper insights or additional analysis.
• Dependence on Accurate Input: The accuracy of the result depends on the correct entry of voltage values.
• Specific Application: Primarily useful in scientific contexts; may not be relevant for general use outside specialized fields.

Additional Information

Understanding the conversion between volts and electron volts is vital in fields such as particle physics and material science. While volts measure the potential difference in an electrical circuit, electron volts provide a more relevant unit for energy at the atomic and subatomic levels. Utilizing a Volts to eV calculator facilitates easier and more accurate conversions, aiding in scientific research and analysis.

Frequently Asked Questions

What is the significance of converting volts to eV?
Converting volts to electron volts (eV) is significant in scientific fields where energy values are more conveniently expressed in eV rather than volts. This conversion helps in understanding energy levels, particle physics, and other applications where precise energy measurements are critical.

How accurate are Volts to eV calculators?

Volts to eV calculators are highly accurate as long as the input values are correct. The precision of the conversion depends on the accurate representation of the elementary charge used in the calculation.

Can I use a Volts to eV calculator for practical applications?

While a Volts to eV calculator is designed for theoretical conversions, the results can be useful in practical applications within scientific research, especially when dealing with atomic or subatomic energy levels. However, it is not typically used for general electrical engineering tasks.

Are there free Volts to eV calculators available online?

Yes, many Volts to eV calculators are available for free on educational and scientific websites. These calculators are usually accessible and provide accurate results for the conversion.

What should I do if I need to convert large values of voltage?

When converting large values of voltage, ensure that the calculator you use can handle the input range accurately. Most online calculators can manage a wide range of values, but it's important to verify the precision of the tool you choose.
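As a concrete sketch of the formula section above (the function and variable names are illustrative, not from any particular calculator site):

    E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact in the 2019 SI)

    def volts_to_energy(volts: float, charge_in_e: float = 1.0):
        """Energy gained by a particle of charge `charge_in_e` (in units of e)
        moved through a potential difference of `volts` volts.
        Returns (energy_in_eV, energy_in_joules)."""
        ev = charge_in_e * volts      # one elementary charge through 1 V gains 1 eV
        joules = ev * E_CHARGE        # convert eV to joules
        return ev, joules

    print(volts_to_energy(5.0))       # (5.0, 8.01088317e-19)

The `charge_in_e` parameter covers particles other than electrons, e.g. an alpha particle (charge 2e) gains 2V eV across V volts.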
{"url":"https://calculatordna.com/volts-to-ev-calculator/","timestamp":"2024-11-06T17:46:12Z","content_type":"text/html","content_length":"90947","record_id":"<urn:uuid:66cedc80-22ba-4382-a6ee-69da034f9b26>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00877.warc.gz"}
Dialogue Logic

DiaLogic: A System for Dialogical Logic

Dialogical logic [5,6] (also: dialogue logic) was originally proposed by Prof. Paul Lorenzen, who taught philosophy in Erlangen. Essential contributions are due to Kuno Lorenz, who later became a professor of philosophy in Saarbruecken. Native speakers of English are referred to Walter Felscher's articles on dialogical logic [2,3]. The latest and most comprehensive textbook (in German) has been written by Rüdiger Inhetveen [7].

The central idea of dialogical logic is to provide a pragmatic foundation of logic on a game-theoretic basis, introducing semantics through a mutual agreement on moves; it is just as well suited for teaching logic. Dialogical logic is thus committed to proof theory; model theory plays not a foundational but an illustrative role. A successful proof in dialogical logic consists in the construction of a model; a failed one yields a counter-example.

In dialogical logic, the validity of a formula F is examined in two-person, perfect-information games. The two players move alternately, both having complete information about the current state of the game. The player who claims to be able to justify F is called the proponent; his adversary is the opponent. The initial state, the dialogue setting, is determined by a (possibly empty) set of hypotheses (introduced into the game by the opponent) and a thesis F, the formula to be shown valid, brought in by the proponent.

The consecutive moves of the dialogue game are attacks upon formulae asserted earlier or defences against previous attacks. Some moves include the introduction of formulae that might be subject to subsequent attacks. The legal moves that a player can perform are defined by so-called particle rules and frame rules. For each logical connective φ, a particle rule specifies how attacks upon moves asserting formulae with φ as the main connective, and defences against such attacks, have to be performed. For example, a conjunction A ∧ B is attacked by demanding one conjunct of the attacker's choice, whereas a disjunction A ∨ B is attacked by demanding a disjunct, which the defender is free to choose. The frame rules organize the exchange of arguments, i.e., they impose restrictions on when attacks and defences may take place in the dialogue. Depending on the choice of frame rules, dialogical logic covers constructive ("intuitionistic") as well as classical logics.

A dialogue game is won by a player if the other player cannot perform any action that conforms with the dialogue rules. The proponent is said to have a winning strategy for a formula F if he is able to win any game where formula F is the thesis (with a given set of hypotheses) by appropriate choices of his statements. A formula F is valid if the proponent has a winning strategy for F. Winning strategies need not be unique. Felscher [2,3] presents a proof that the notion of winning strategies in dialogue games (with respect to a well-defined set of dialogue rules) coincides with the notion of provability in Gentzen's calculus LJ for intuitionistic logic.

Recently, the idea of "logic games" as well as further approaches to dialogical logic have received a lot of attention, e.g. in publications by Barth, van Benthem, Hintikka, Hamblin, Krabbe, Mackenzie, Walton, and others.

Theorem Provers Implementing Dialogical Logic

We know of at least two implementations in the 1970s. One was made in Erlangen by Gerrit Haas [4] at the Computer Science Institute (IMMD) on a PDP 7. Another was implemented in Fortran on a Control Data 6600 (and later ported to a CD 3300 in Erlangen) in Austin, Texas, where Lorenzen spent some time as a visiting professor (around 1970).
Both systems implemented only the propositional part of dialogical logic. Today, we know of no other implementation of dialogical logic except DiaLogic as presented here. Later on, COLOSSEUM, a theorem prover for constructive predicate logic (without an interactive mode to play dialogue games) was implemented in Prolog by Claus Zinn. For details, please contact zinnwerk.com.

DiaLogic — A System for Dialogical Logic

As far as we know, DiaLogic is the first system which implements dialogical logic for propositional, full predicate, and modal logic. We had the following goals in mind:

1. To better understand the dialogue calculus.
2. To have a system for teaching purposes.
3. To promote dialogical logic, which is rather unknown in the automated deduction community.

Ad 1: In the history of dialogical logic there has always been a debate about the 'correct' specification of particle and frame rules. Different versions of frame and particle rules lead to different logics, and numerous versions of them (mostly kept rather informal) can be found in the literature on dialogical logic, cf. [5,6]. In order to experiment with the dialogue rules, DiaLogic offers the possibility to redefine both particle and frame rules by means of a rule language. There are predefined rules for effective (constructive), classical, and modal logic.

Ad 2: Due to its friendly user interface, DiaLogic can easily be used for teaching purposes. Dialogical logic has been part of the curriculum for students of philosophy in Erlangen up to now.

Ad 3: As documented in [0], the first full implementation of dialogical logic has been made available to the automated deduction community. Dialogical logic is an interesting proof calculus which is worth further investigation.

A description of the system is provided in [0,1]. The latter description is in German and covers mostly implementation details; however, it contains a chapter describing the use of DiaLogic. No further user or reference manuals are available.

A snapshot of a system run with its graphical user interface is shown here. The interface is divided into four parts:
• the pull-down menu,
• the dialogue tableau, split into an opponent and a proponent component,
• the dialogue trace pane,
• the command interpreter pane.

Here is a snapshot of the strategy grapher (for the Peirce formula proven in the illustration above):

System features are:
• fully automated proof search;
• an interactive mode (the user plays the proponent's part);
• a dialogue trace pane displaying all speech acts performed;
• a dialogue tableau showing the current status of the dialogue;
• a strategy grapher showing the winning strategy, if a proof has been found;
• a rule language to (re-)define particle and frame rules;
• runs with a GUI on Unix and Windows platforms;
• runs without a GUI (command-line interface) on any other platform, even DOS.

The system has been implemented in Scheme and Tk (the Scheme system STk offers both) by Jürgen Ehrensberger within his master's project under the supervision of Dr. Claus Zinn and Prof. Dr. Guenther Goerz at the Dept. of Computer Science / 8 (Artificial Intelligence) of the University of Erlangen-Nuremberg.

System requirements

To run the DiaLogic system, you need a Scheme interpreter. We have tested DiaLogic with several Scheme implementations obeying the R5RS standard, in particular with SCM (version 2a6) and STk (version 4.0.1). The Scheme library SLIB is required.
We strongly recommend using the Berkeley modification of STk 4.0.1, called UCB Scheme, version 4.0.1-ucb1.16, which is available for Linux, Windows, and Mac OS X. It contains everything you need (including the portable Scheme library SLIB) and is very easy to install.

• To run DiaLogic with a GUI you need either a Unix/Linux (X11) or Windows platform and STk.
• To run DiaLogic without the GUI (i.e., in command-line mode), a DOS or any other platform providing a text-based shell, and SCM, suffice.

How to obtain the system

References

[0] Ehrensberger, Jürgen, and Zinn, Claus. DiaLog — A System for Dialogue Logic. CADE-14, Conference on Automated Deduction, Townsville, North Queensland, Australia. Lecture Notes in Artificial Intelligence, Vol. 1249, Springer, 1997, 446–460.
[1] Ehrensberger, Jürgen. Ein System für Dialogische Logik. Diplomarbeit am Institut für Mathematische Maschinen und Datenverarbeitung, Lehrstuhl Künstliche Intelligenz (Informatik 8), Universität Erlangen-Nürnberg, 1996. [PDF]
[2] Felscher, Walter. Dialogues, Strategies and Intuitionistic Provability. Annals of Pure and Applied Logic 28 (1985), 217–254.
[3] Felscher, Walter. Dialogues as a Foundation for Intuitionistic Logic. In: Handbook of Philosophical Logic, Vol. III, ed. by Dov M. Gabbay and Franz Guenthner. Dordrecht: D. Reidel, 1986, 341–372.
[4] Haas, Gerrit. Programme zur dialogischen Logik. Arbeitsberichte des IMMD, Bd. 3, Nr. 4. Universität Erlangen-Nürnberg, Oktober 1970.
[5] Lorenzen, Paul. Ein dialogisches Konstruktivitätskriterium. In: Infinitistic Methods. Proceedings of the Symposium on Foundations of Mathematics (Warszawa 1959). Oxford: Pergamon, 1961, 193–200. (Reprinted in P. Lorenzen / K. Lorenz: Dialogische Logik, 9–16.)
[6] Lorenzen, Paul, and Lorenz, Kuno. Dialogische Logik. Darmstadt: Wissenschaftliche Buchgesellschaft, 1978.
[7] Inhetveen, Rüdiger. Logik — Eine dialog-orientierte Einführung. Leipzig: Teubner, 2002.
{"url":"https://cs.fau.de/ag-digital-humanities/dienste/dialogue-logic/","timestamp":"2024-11-14T01:04:33Z","content_type":"text/html","content_length":"42203","record_id":"<urn:uuid:1bd2c0f9-73f1-4634-bf6e-03aeff2cc9db>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00685.warc.gz"}
Log-Normal Distribution: Definition, Uses, and How To Calculate

What is Log-Normal Distribution?

Log-normal distribution is a probability distribution that is commonly used in statistics and finance to model variables that are positively skewed and have a wide range of values. It is a continuous probability distribution of a random variable whose logarithm is normally distributed.

Definition and Explanation

The log-normal distribution is defined by two parameters: the mean (μ) and the standard deviation (σ) of the logarithm of the variable. The mean of the log-normal distribution, denoted as μ, represents the location parameter, while the standard deviation, denoted as σ, represents the scale parameter.

The log-normal distribution is characterized by the fact that the logarithm of the variable follows a normal distribution. This means that if we take the natural logarithm of each value of the variable, the resulting values will be normally distributed.

The log-normal distribution is often used to model variables that are the product of many independent, positive random variables. Examples of variables that can be modeled using the log-normal distribution include stock prices, asset returns, income, and population sizes.

Uses and How to Calculate Log-Normal Distribution

The log-normal distribution has various uses in statistics and finance. It is commonly used in financial modeling to estimate the prices of financial assets, such as stocks and options. It is also used in risk management to model the distribution of potential losses.

To calculate the log-normal distribution, you need to know the mean (μ) and the standard deviation (σ) of the logarithm of the variable. Once you have these parameters, you can use the following formula for the probability density function (PDF) of the log-normal distribution:

f(x) = (1 / (x σ √(2π))) · exp(−(ln(x) − μ)² / (2σ²)), for x > 0

• f(x) is the probability density function
• x is the value of the variable
• μ is the mean of the logarithm of the variable
• σ is the standard deviation of the logarithm of the variable
• π is a mathematical constant approximately equal to 3.14159
• ln(x) is the natural logarithm of x
• exp(y) is the exponential function e^y

Practical Applications and Mathematical Formula

The log-normal distribution has practical applications in various fields. In finance, it is used to model the distribution of asset prices and returns. In biology, it is used to model the distribution of population sizes. In economics, it is used to model the distribution of income and wealth.

The mathematical formula for the log-normal distribution is derived from the properties of the normal distribution and the exponential function. It provides a convenient way to model variables that are positively skewed and have a wide range of values.

Definition and Explanation

The log-normal distribution is a probability distribution of a random variable whose logarithm is normally distributed. In other words, if we take the natural logarithm of a log-normally distributed variable, the resulting values will follow a normal distribution. The log-normal distribution is characterized by two parameters: the mean (μ) and the standard deviation (σ) of the logarithm of the variable.
These parameters determine the shape and location of the distribution.

The log-normal distribution is commonly used in various fields, including finance, economics, and biology, to model variables that are inherently positive and have a skewed distribution. Examples of variables that can be modeled using the log-normal distribution include stock prices, income levels, and the size of biological organisms.

To calculate the log-normal distribution, you need to know the values of the mean (μ) and the standard deviation (σ) of the logarithm of the variable. Once you have these values, you can use mathematical formulas or statistical software to calculate the probability density function (PDF) and cumulative distribution function (CDF) of the log-normal distribution.

The PDF of the log-normal distribution is given by the formula:

f(x) = (1 / (x σ √(2π))) · exp(−(ln(x) − μ)² / (2σ²))

where x is the value of the variable, μ is the mean of the logarithm of the variable, and σ is the standard deviation of the logarithm of the variable.

The CDF of the log-normal distribution is given by the formula:

F(x) = ∫[0,x] f(t) dt

where f(t) is the PDF of the log-normal distribution and the integral is taken from 0 to x.

Uses and How to Calculate Log-Normal Distribution

The log-normal distribution is widely used in various fields, including finance, economics, and engineering. It is particularly useful for modeling variables that are expected to have a skewed distribution, such as stock prices, income levels, and population sizes.

Calculation of Log-Normal Distribution

To generate a log-normally distributed value, you need to know the mean (μ) and standard deviation (σ) of the corresponding normal distribution. The formula is:

X = e^(μ + σZ)

• X is the random variable following a log-normal distribution
• e is the mathematical constant approximately equal to 2.71828
• μ is the mean of the corresponding normal distribution
• σ is the standard deviation of the corresponding normal distribution
• Z is a random variable following a standard normal distribution (mean = 0, standard deviation = 1)

Once you have the values of μ and σ, you can use the formula to calculate log-normally distributed values. This can be done using statistical software or programming languages that have built-in functions for the log-normal distribution.

Note that the log-normal distribution is defined only for positive values of X; since e raised to any real power is positive, a log-normally distributed variable can never be negative.

Practical Applications and Mathematical Formula

The log-normal distribution has various practical applications in fields such as finance, economics, engineering, and biology. It is commonly used to model variables that are always positive and have a skewed distribution. Here are some examples of its applications:

1. Financial Analysis

In finance, the log-normal distribution is often used to model the returns of assets such as stocks and commodities. It is particularly useful in option pricing models, where the distribution of underlying asset prices is assumed to follow a log-normal distribution. Traders and investors can use the log-normal distribution to estimate the probabilities of different price movements and make informed decisions.

2. Risk Management

The log-normal distribution is also used in risk management to model the distribution of potential losses.
By fitting historical data to a log-normal distribution, risk managers can estimate the likelihood of extreme events and calculate value-at-risk (VaR), which is a measure of potential losses within a specified confidence level. This helps organizations assess and mitigate their exposure to financial risks.

3. Biological Sciences

In the biological sciences, the log-normal distribution is used to model inherently positive quantities such as population sizes and the sizes of organisms.

Mathematically, the log-normal distribution is characterized by its probability density function (PDF), which can be expressed as:

f(x) = (1 / (x σ √(2π))) · exp(−(ln(x) − μ)² / (2σ²))

• f(x) is the probability density function
• x is the random variable
• μ is the mean of the natural logarithm of x
• σ is the standard deviation of the natural logarithm of x
• e is the base of the natural logarithm
• π is the mathematical constant pi
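A minimal Python sketch of both the sampling formula X = e^(μ + σZ) and the PDF above; the parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.5, 0.25  # mean and std dev of ln(X) -- assumed example values

    # Sample via the definition X = exp(mu + sigma * Z), with Z ~ N(0, 1)
    z = rng.standard_normal(100_000)
    x = np.exp(mu + sigma * z)

    def lognormal_pdf(x, mu, sigma):
        """The PDF given above; defined for x > 0."""
        return (np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2))
                / (x * sigma * np.sqrt(2 * np.pi)))

    # The theoretical mean of a log-normal is exp(mu + sigma^2 / 2);
    # compare it with the sample mean as a sanity check.
    print(x.mean(), np.exp(mu + sigma ** 2 / 2))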
{"url":"https://saxafund.org/log-normal-distribution-definition-uses-and-how-to/","timestamp":"2024-11-06T13:55:44Z","content_type":"text/html","content_length":"105282","record_id":"<urn:uuid:f5ec9286-1dcb-43af-bab6-1bc2b014b18b>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00077.warc.gz"}
23624 The iterative conception has to appropriate Replacement to justify the ordinals
Full Idea: The iterative conception justifies Power Set, but cannot justify a satisfactory theory of von Neumann ordinals, so ZFC appropriates Replacement from NBG set theory.
From: Keith Hossack (Knowledge and the Philosophy of Number [2020], 09.9)
A reaction: The modern approach to axioms, where we want to prove something, so we just add an axiom that does the job.

23625 Limitation of Size justifies Replacement, but then has to appropriate Power Set
Full Idea: The limitation of size conception of sets justifies the axiom of Replacement, but cannot justify Power Set, so NBG set theory appropriates the Power Set axiom from ZFC.
From: Keith Hossack (Knowledge and the Philosophy of Number [2020], 09.9)
A reaction: Which suggests that the Power Set axiom is not as indispensable as it at first appears to be.

23628 The connective 'and' can have an order-sensitive meaning, as 'and then'
Full Idea: The sentence connective 'and' also has an order-sensitive meaning, when it means something like 'and then'.
From: Keith Hossack (Knowledge and the Philosophy of Number [2020], 10.4)
A reaction: This supports the idea that orders are a feature of reality, just as much as possible concatenation. Relational predicates, he says, refer to series rather than to individuals. Nice point.

23627 'Before' and 'after' are not two relations, but one relation with two orders
Full Idea: The reason the two predicates 'before' and 'after' are needed is not to express different relations, but to indicate order. Since there can be difference of order without difference of relation, the nature of relations is not the source of order.
From: Keith Hossack (Knowledge and the Philosophy of Number [2020], 10.3)
A reaction: This point is to refute Russell's 1903 claim that order arises from the nature of relations. Hossack claims that it is ordered series which are basic. I'm inclined to agree with him.

10669 Plural reference is just an abbreviation when properties are distributive, but not otherwise
Full Idea: If all properties are distributive, plural reference is just a handy abbreviation to avoid repetition (as in 'A and B are hungry', to avoid 'A is hungry and B is hungry'), but not all properties are distributive (as in 'some people surround a table').
From: Keith Hossack (Plurals and Complexes [2000], 2)
A reaction: The characteristic examples to support plural quantification involve collective activity and relations, which might be weeded out of our basic ontology, thus leaving singular quantification as sufficient.

23626 Transfinite ordinals are needed in proof theory, and for recursive functions and computability
Full Idea: The transfinite ordinal numbers are important in the theory of proofs, and essential in the theory of recursive functions and computability. Mathematics would be incomplete without them.
From: Keith Hossack (Knowledge and the Philosophy of Number [2020], 10.1)
A reaction: Hossack offers this as proof that the numbers are not human conceptual creations, but must exist beyond the range of our intellects. Hm.

23621 Numbers are properties, not sets (because numbers are magnitudes)
Full Idea: I propose that numbers are properties, not sets. Magnitudes are a kind of property, and numbers are magnitudes. …Natural numbers are properties of pluralities, positive reals of continua, and ordinals of series.
From: Keith Hossack (Knowledge and the Philosophy of Number [2020], Intro)
A reaction: Interesting!
Since time can have a magnitude (three weeks) just as liquids can (three litres), it is not clear that there is a single natural property we can label 'magnitude'. Anything we can manage to measure has a magnitude.

23622 We can only mentally construct potential infinities, but maths needs actual infinities
Full Idea: Numbers cannot be mental objects constructed by our own minds: there exists at most a potential infinity of mental constructions, whereas the axioms of mathematics require an actual infinity of numbers.
From: Keith Hossack (Knowledge and the Philosophy of Number [2020], Intro 2)
A reaction: Doubt this, but don't know enough to refute it. Actual infinities were a fairly late addition to maths, I think. I would think treating fictional complete infinities as real would be sufficient for the job. Like journeys which include imagined roads.

10668 We are committed to a 'group' of children, if they are sitting in a circle
Full Idea: By Quine's test of ontological commitment, if some children are sitting in a circle, no individual child can sit in a circle, so a singular paraphrase will have us committed to a 'group' of children.
From: Keith Hossack (Plurals and Complexes [2000], 2)
A reaction: A nice illustration of why Quine is committed to the existence of sets. Hossack offers plural quantification as a way of avoiding commitment to sets. But is 'sitting in a circle' a real property (in the Shoemaker sense)? I can sit in a circle without realising it.

10664 Complex particulars are either masses, or composites, or sets
Full Idea: Complex particulars are of at least three types: masses (which sum, and of which we do not ask 'how many?' but 'how much?'); composite individuals (of which we ask 'how many?', and for which summing usually fails); and sets (divisible in only one way, unlike composites).
From: Keith Hossack (Plurals and Complexes [2000], 1)
A reaction: A composite pile of grains of sand gradually becomes a mass, and drops of water become 'water everywhere'. A set of people divides into individual humans, but what if we redescribe the elements as the union of males and females?

10665 Leibniz's Law argues against atomism - water is wet, unlike water molecules
Full Idea: We can employ Leibniz's Law against mereological atomism. Water is wet, but no water molecule is wet. The set of infinite numbers is infinite, but no finite number is infinite. ...But with plural reference the atomist can resist this argument.
From: Keith Hossack (Plurals and Complexes [2000], 1)
A reaction: The idea of plural reference is to state plural facts without referring to complex things, which is interesting. The general idea is that we have atomism, and then all the relations, unities, identities, etc. are in the facts, not in the things. I like it.

10663 A thought can refer to many things, but only predicate a universal and affirm a state of affairs
Full Idea: A thought can refer to a particular or a universal or a state of affairs, but it can predicate only a universal and it can affirm only a state of affairs.
From: Keith Hossack (Plurals and Complexes [2000], 1)
A reaction: Hossack is summarising Armstrong's view, which he is accepting. To me, 'thought' must allow for animals, unlike language. I think Hossack's picture is much too clear-cut. Do animals grasp universals? Doubtful. Can they predicate? Yes.
{"url":"http://www.philosophyideas.com/search/response_philosopherTh.asp?era_no=M&era=New%20millenium%20(2001-%20)&visit=list&order=chron&PN=3605&expand=yes","timestamp":"2024-11-05T06:54:09Z","content_type":"application/xhtml+xml","content_length":"45853","record_id":"<urn:uuid:cd4e36ed-29f6-4567-87c2-36cb323c480e>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00855.warc.gz"}
How do you determine the indefinite integrals?

1 Answer

There is no one method for finding indefinite integrals. There are many methods in use, but no single method works for all integrals. A problem that can be solved by integration by parts (reversing the product rule) may or may not be solvable by u-substitution (reversing the chain rule). Other problems require other techniques. Some functions have indefinite integrals that cannot be expressed finitely without the integral symbol. Introductory calculus courses typically include several techniques of integration in their syllabus.
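To make the contrast concrete, here is a short sketch using the SymPy library (one tool among many; it was not part of the original answer). Each line illustrates one of the situations described above:

    import sympy as sp

    x = sp.symbols('x')

    # u-substitution pattern (reversing the chain rule): here u = x**2
    print(sp.integrate(2 * x * sp.cos(x**2), x))   # sin(x**2)

    # integration-by-parts pattern (reversing the product rule)
    print(sp.integrate(x * sp.exp(x), x))          # (x - 1)*exp(x)

    # an integrand with no elementary antiderivative: SymPy must fall
    # back on the special function erf, i.e. the integral sign in disguise
    print(sp.integrate(sp.exp(-x**2), x))          # sqrt(pi)*erf(x)/2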
{"url":"https://socratic.org/questions/how-do-you-determine-the-indefinite-integrals","timestamp":"2024-11-10T18:11:43Z","content_type":"text/html","content_length":"33334","record_id":"<urn:uuid:b0bc4915-c256-4833-a596-d6a15126a548>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00129.warc.gz"}
Piston Engines - Compression Ratios

Cylinder volume and compression ratios in piston engines.

The static compression ratio in a piston engine is the ratio between the volume of the combustion chamber when the piston is at the bottom of its stroke and the volume of the combustion chamber when the piston is at the top of its stroke. It can be expressed as

CR = (V[d] + V[c]) / V[c] = (π d^2 s / 4 + V[c]) / V[c]    (1)

CR = compression ratio
V[d] = piston displacement volume - the total volume swept by the piston (cm^3, in^3)
V[c] = clearance volume - the volume left in the cylinder when the piston is at top dead center (cm^3, in^3)
d = piston diameter (cm, in)
s = piston stroke (cm, in)
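A small Python sketch applying equation (1); the engine dimensions below are illustrative, not taken from any particular engine:

    import math

    def compression_ratio(bore_cm: float, stroke_cm: float, clearance_cm3: float) -> float:
        """Static compression ratio CR = (Vd + Vc) / Vc, per equation (1)."""
        vd = math.pi * bore_cm ** 2 * stroke_cm / 4.0  # swept (displacement) volume
        return (vd + clearance_cm3) / clearance_cm3

    # Example: 10 cm bore, 9 cm stroke, 75 cm^3 clearance volume
    print(round(compression_ratio(10.0, 9.0, 75.0), 2))  # about 10.42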
{"url":"https://www.engineeringtoolbox.com/compression-ratio-piston-engine-d_2189.html","timestamp":"2024-11-05T23:42:02Z","content_type":"text/html","content_length":"30477","record_id":"<urn:uuid:0c9365df-4232-4348-922b-b516f04a974d>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00773.warc.gz"}
Ray Diagrams for Convex Mirrors

Curved Mirrors: How to draw a ray diagram (p. 533-534). For spherical mirrors, there are three different reference rays. The intersection of any two rays locates the image.

Slide 15
Rules for drawing reference rays (p. 534)

Slide 16
How to draw a ray diagram (diagram showing Ray 1 and Ray 2). The intersection of any 2 rays gives the image location.

Slide 17
Objects inside the focal point

Slide 18
Sample Problem (p. 536 #2): A concave shaving mirror has a focal length of 33 cm. Calculate the image position of a cologne bottle placed in front of the mirror at a distance of 93 cm. Draw a ray diagram to confirm your answer.

Slide 19
Draw the diagram. The image is inverted and about half the height of the object.

Slide 20
Convex Mirrors: Convex mirrors take objects in a large field of view and produce a small image. Side-view mirrors on cars are convex mirrors. That's why they say "objects are closer than they appear."

Slide 21
Convex Spherical Mirrors (p. 537): A convex spherical mirror (diverging mirror) is silvered so that light is reflected from the sphere's outer, convex surface. The image distance is always negative! The image is always a virtual image! The focal length is negative!

Slide 22
Ray diagrams for convex mirrors: The focal point and center of curvature are behind the mirror's surface. A virtual, upright image is formed behind the mirror. The magnification is always less than 1.

Slide 23
Drawing the reference rays: Ray 1 is drawn parallel to the principal axis beginning at the top of the object. It reflects from the mirror along a line that intersects the focal point.

Slide 24
Ray 2 starts from the top of the object and goes as though it is going to intersect the focal point, but it reflects parallel to the principal axis.

Slide 25
Ray 3 starts at the top of the object and goes as though it is going to intersect the center of curvature, so it reflects back along its own path.
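A small numerical check of the slides above, using the mirror equation 1/f = 1/do + 1/di (which the ray diagrams encode geometrically but the slides do not state). The concave case is the sample problem from Slide 18; the convex focal length is an illustrative value:

    def image_distance(f_cm: float, do_cm: float) -> float:
        """Mirror equation 1/f = 1/do + 1/di, solved for di.
        Sign convention: f > 0 concave, f < 0 convex; di < 0 means a virtual image."""
        return 1.0 / (1.0 / f_cm - 1.0 / do_cm)

    # Shaving-mirror sample problem: f = 33 cm (concave), object at 93 cm
    di = image_distance(33.0, 93.0)
    print(di, -di / 93.0)   # di ~ 51.2 cm, magnification ~ -0.55 (inverted, half height)

    # A convex mirror with f = -33 cm and the same object distance
    di = image_distance(-33.0, 93.0)
    print(di, -di / 93.0)   # di ~ -24.4 cm (virtual), magnification ~ 0.26 (upright, reduced)

The first result matches Slide 19 (inverted, about half the object's height), and the second matches Slides 21-22 (negative image distance, upright image, magnification less than 1).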
{"url":"https://www.sliderbase.com/spitem-1483-2.html","timestamp":"2024-11-12T07:05:05Z","content_type":"text/html","content_length":"15659","record_id":"<urn:uuid:0111ff95-b5de-4d69-9722-4ab55d4ee84a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00428.warc.gz"}
Distributions and Variability
Type of Unit: Project

Prior Knowledge
Students should be able to:
Represent and interpret data using a line plot.
Understand other visual representations of data.

Lesson Flow
Students begin the unit by discussing what constitutes a statistical question. In order to answer statistical questions, data must be gathered in a consistent and accurate manner and then analyzed using appropriate tools. Students learn different tools for analyzing data, including:
Measures of center: mean (average), median, mode
Measures of spread: mean absolute deviation, lower and upper extremes, lower and upper quartile, interquartile range
Visual representations: line plot, box plot, histogram
These tools are compared and contrasted to better understand the benefits and limitations of each. Analyzing different data sets using these tools will develop an understanding of which ones are the most appropriate for interpreting the given data. To demonstrate their understanding of the concepts, students will work on a project for the duration of the unit. The project will involve identifying an appropriate statistical question, collecting data, analyzing data, and presenting the results. It will serve as the final assessment.

Students calculate the mean absolute deviation (MAD) for three data sets and use it to decide which data set is best represented by the mean. The concept of mean absolute deviation (MAD) is introduced. Students understand that the sum of the deviations of the data from the mean is zero. Students calculate the MAD and understand its significance. Students find the mean and MAD of a sample set of data.
Key Concepts
The mean absolute deviation (MAD) is a measure of how much the values in a data set deviate from the mean. It is calculated by finding the distance of each value from the mean and then finding the mean of these distances.
Goals and Learning Objectives
Gain a deeper understanding of mean.
Understand that the mean absolute deviation (MAD) is a measure of how well the mean represents the data.
Compare data sets using measures of center (mode, median, mean) and spread (range and MAD).
Show that the sum of deviations from the mean is zero.
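A minimal Python sketch of the MAD computation described in this lesson; the two data sets are illustrative:

    def mad(data):
        """Mean absolute deviation: the mean distance of the values from their mean."""
        m = sum(data) / len(data)
        return sum(abs(v - m) for v in data) / len(data)

    # Two data sets with the same mean (5) but different spreads
    tight = [4, 5, 5, 6]
    wide = [1, 2, 8, 9]
    print(mad(tight), mad(wide))  # 0.5 vs 3.5 -- the mean represents `tight` better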
Students make a box plot for their typical-sixth-grader data from Lesson 7 and write a summary of what the plot shows. Using the line plot from Lesson 4, students construct a box plot. Students learn how to calculate the five-number summary and interquartile range (IQR). Students apply this knowledge to the data used in Lesson 7 and describe the data in terms of the box plot. Class discussion focuses on comparing the two graphs and what they show for the sets of data.
Key Concepts
A box-and-whisker plot, or box plot, shows the spread of a set of data. It shows five key measures, called the five-number summary.
Lower extreme: The smallest value in the data set
Lower quartile: The middle of the lower half of the data, and the value that 25% of the data fall below
Median: The middle of the data set
Upper quartile: The middle of the upper half of the data, and the value that 25% of the data are above
Upper extreme: The greatest value in the data set
The length of the box represents the interquartile range (IQR), which is the difference between the lower and upper quartiles.
A box plot divides the data into four equal parts. One quarter of the data is represented by the left whisker, two quarters by each half of the box, and one quarter by the right whisker. If one of these parts is long, the data in that quarter are spread out. If one of these parts is short, the data in that quarter are clustered together.
Goals and Learning Objectives
Learn how to construct box plots, another tool to describe data.
Learn about the five-number summary, interquartile range, and how they are related to box plots.
Compare a line plot and box plot for the same set of data.
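A short Python sketch of the five-number summary and IQR from this lesson; the data set is illustrative. Note that NumPy's default quartile method interpolates, so it can give slightly different quartiles than the classroom method of taking the median of each half:

    import numpy as np

    data = [3, 5, 5, 6, 7, 8, 9, 11, 12, 12, 14]

    low, q1, med, q3, high = np.percentile(data, [0, 25, 50, 75, 100])
    iqr = q3 - q1  # the length of the box in a box plot
    print(low, q1, med, q3, high, iqr)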
Students critique and improve their work on the Self Check from Lesson 13.
Key Concepts
Measures of spread (five-number summary) show characteristics of the data. It is possible to generate an appropriate data set with this information.
Goals and Learning Objectives
Apply knowledge of statistics to solve problems.
Identify the five-number summary, and understand measures of center and use their properties to solve problems.
Track and review choice of strategy when problem solving.

Groups begin presentations for their unit project. Students provide constructive feedback on others' presentations.
Key Concepts
The unit project serves as the final assessment. Students should demonstrate their understanding of unit concepts:
Measures of center (mean, median, mode) and spread (MAD, range, interquartile range)
The five-number summary and its relationship to box plots
Relationship between data sets and line plots, box plots, and histograms
Advantages and disadvantages of portraying data in line plots, box plots, and histograms
Goals and Learning Objectives
Present projects and demonstrate an understanding of the unit concepts.
Provide feedback for others' presentations.
Review the concepts from the unit.

Remaining groups present their unit projects. Students discuss teacher and peer feedback.
Key Concepts
The unit project serves as the final assessment. Students should demonstrate their understanding of unit concepts:
Measures of center (mean, median, mode) and spread (MAD, range, interquartile range)
The five-number summary and its relationship to box plots
Relationship between data sets and line plots, box plots, and histograms
Advantages and disadvantages of portraying data in line plots, box plots, and histograms
Goals and Learning Objectives
Present projects and demonstrate an understanding of the unit concepts.
Provide feedback for others' presentations.
Review the concepts from the unit.
Review presentation feedback and reflect.

Students collect data to answer questions about a typical sixth grade student. Students collect data about themselves, working in pairs to measure height, arm span, etc. Students discuss characteristics they would like to know about sixth grade students, adding these topics to a preset list. Data are collected and organized such that there is a class data set for each topic for future use. Students are asked to think about how this data could be represented and organized.
Key Concepts
For data to be useful, it must be collected in a consistent and accurate way. For example, for height data, students must agree on whether students should be measured with shoes on or off, and whether heights should be measured to the nearest inch, half inch, or centimeter.
Goals and Learning Objectives
Gather data about sixth grade students.
Consider how data are collected.

In this lesson, students draw a line plot of a set of data and then find the mean of the data. This lesson also informally introduces the concepts of the median, or middle value, and the mode, or most common value. These terms will be formally defined in Lesson 6. Using a sample set of data, students review construction of a line plot. The mean as fair share is introduced, as well as the algorithm for mean. Using the sample set of data, students determine the mean and informally describe the set of data, looking at measures of center and the shape of the data. Students also determine the middle 50% of the data.
Key Concepts
The mean is a measure of center and is one of the ways to determine what is typical for a set of data.
The mean is often called the average. It is found by adding all values together and then dividing by the number of values.
A line plot is a visual representation of the data. It can be used to find the mean by adjusting the data points to one value, such that the sum of the data does not change.
Goals and Learning Objectives
Review construction of a line plot.
Introduce the concept of the mean as a measure of center.
Use the fair-share method and standard algorithm to find the mean.
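A small sketch of the two views of the mean described in this lesson; the data set is illustrative:

    data = [2, 3, 3, 4, 6, 6, 8, 8]

    # Standard algorithm: add all the values, then divide by how many there are
    mean = sum(data) / len(data)
    print(mean)  # 5.0

    # "Fair share" view: moving every data point to the mean leaves the total unchanged
    print(sum(data), mean * len(data))  # 40 and 40.0

    # A fact used in the MAD lesson above: deviations from the mean sum to zero
    print(sum(v - mean for v in data))  # 0.0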
Students make a histogram of their typical-student data and then write a summary of what the histogram shows. Students are introduced to histograms, using the line plot to build them. They investigate how the bin width affects the shape of a histogram. Students understand that a histogram shows the shape of the data, but that measures of center or spread cannot be found from the graph.
Key Concepts
A histogram groups data values into intervals and shows the frequency (the number of data values) for each interval as the height of a bar.
Histograms are similar to line plots in that they show the shape and distribution of a data set. However, unlike a line plot, which shows frequencies of individual data values, histograms show frequencies of intervals of values.
We cannot read individual data values from a histogram, and we can't identify any measures of center or spread.
Histograms sometimes have an interval with the most data values, referred to as the mode interval.
Histograms are most useful for large data sets, where plotting each individual data point is impractical.
The shape of a histogram depends on the chosen width of the interval, called the bin width. Bin widths that are too large or too small can hide important features of the data.
Goals and Learning Objectives
Learn about histograms as another tool to describe data.
Show that histograms are used to show the shape of the data for a wider range of data.
Compare a line plot and histogram for the same set of data.

Students use the Box Plot interactive, which allows them to create line plots and see the corresponding box plots. They use this tool to create data sets with box plots that satisfy given criteria. Students investigate how the box plot changes as the data points in the line plot are moved. Students can manipulate data points to change aspects of the box plot and to see how the line plot changes. Students create box plots that fit certain criteria.
Key Concepts
This lesson focuses on the connection between a data set and its box plot. It reinforces the idea that a box plot shows the spread of a data set, but not the individual data points.
Students will observe the following similarities and differences between line plots and box plots:
Line plots allow us to see and count individual values, while box plots do not.
Line plots allow us to find the mean and the mode of a set of data, while box plots do not.
Box plots are useful for very large data sets, while line plots are not.
Box plots give us a better picture of how the values in a data set are distributed than line plots do, and they allow us to see measures of spread easily.
Goals and Learning Objectives
Experiment with different line plots to see the effect on the corresponding box plots.
Create data sets with box plots that satisfy different criteria.
Compare and contrast line plots and box plots.

Lesson Overview
Students complete a card sort that requires them to match sets of statistics with the corresponding line plots. Students match cards with simple line plots to the corresponding card with measures of center. Some cards include mode, mean, median, and range, and some have one or two measures missing. Students discuss how they determined which cards would match.
Key Concepts
To complete the card sort in this lesson efficiently, students must be able to relate statistical measures with line plots. If they start with the measures that are easy to see, they can narrow down the possible matches.
The mode is the easiest measure to see immediately. It is simply the number with the tallest column of dots.
The range can be found easily by subtracting the least value in the plot from the greatest.
The median can be found fairly quickly by counting to the middle dot or by pairing dots on the ends until reaching the middle.
The mean must be calculated by adding data values and dividing.
Goals and Learning Objectives
Apply knowledge of measures of center and range to solve problems.
Discuss and review strategy choices when problem solving.
Students will apply what they have learned in previous lessons to analyze and draw conclusions about a set of data. They will also justify their thinking based on what they know about the measures (e.g., "I know the mean is a good number to use to describe what is typical because the range is narrow and so the MAD is low."). Students analyze one of the data sets about the characteristics of sixth grade students that was collected by the class in Lesson 2. Students construct line plots and calculate measures of center and spread in order to further their understanding of the characteristics of a typical sixth grade student.
Key Concepts
No new mathematical ideas are introduced in this lesson. Instead, students apply the skills they have acquired in previous lessons to analyze a data set for one attribute of a sixth grade student. Students make a line plot of the data and find the mean, median, range, MAD, and outliers. They use these results to determine a typical value for their data.
Goals and Learning Objectives
Describe an attribute of a typical sixth grade student using line plots and measures of center (mean and median) and spread (range and MAD).
Justify thinking about which measures are good descriptors of the data set.

Students form groups and identify a question to investigate for the unit project. Each group submits a proposal outlining the statistical question, the data collection method, and a prediction of results.
Key Concepts
Students will apply what they have learned from the first two lessons to begin the unit project.
Goals and Learning Objectives
Choose a statistical question to answer over the course of the unit.
Determine the necessary data collection method.
Predict the results.
Write a proposal that outlines the project.

Gallery
Create a Data Set: Students will create data sets with a specified mean, median, range, and number of data values.
Bouncing Ball Experiment: How high does the class think a typical ball bounces (compared to its drop height) on its first bounce? Students will conduct an experiment to find out.
Adding New Data to a Data Set: Given a data set, students will explore how the mean changes as they add data values.
Bowling Scores: Students will create bowling score data sets that meet certain criteria with regard to measures of center.
Mean Number of Fillings: Ten people sit in a dentist's waiting room. The mean number of fillings they have in their teeth is 4, yet none of them actually have 4 fillings. Students will explain how this situation is possible.
Forestland: Students will examine and interpret box plots that show the percentage of forestland in 20 European countries.
What's My Data?: Students will create a data set that fits a given histogram and then adjust the data set to fit additional criteria.
What's My Data 2?: Students will create a data set that fits a given box plot and then adjust the data set to fit additional criteria.
Compare Graphs: Students will make a box plot and a histogram that are based on a given line plot and then compare the three graphs to decide which one best represents the data.
Random Numbers: What would a data set of randomly generated numbers look like when represented on a histogram? Students will find out!
No Telephone?: The U.S. Census Bureau provides state-by-state data about the number of households that do not have telephones. Students will examine two box plots that show census data from 1960 and 1990 and compare and analyze the data.
Who Is Taller?: Who is taller, the boys in the class or the girls in the class? Students will find out by separating the class height data gathered earlier into data for boys and data for girls.
Students explore how adjusting the bin width or adding, deleting, or moving data values affects a histogram. Students use the Histogram interactive to explore how the bin width can affect how the data are displayed and interpreted. Students also explore how adjusting the line plot affects the histogram.
Key Concepts
As students learned in the last lesson, a histogram shows data in intervals. It shows how much data is in each bin, but it does not show individual data. In this lesson, students will see that the same histogram can be made with different sets of data. Students will also see that the bin width can greatly affect how the histogram looks.
Goals and Learning Objectives
Explore what the shape of the histogram tells us about the data set and how the bin width affects the shape of the histogram.
Clarify similarities and differences between histograms and line plots.
Compare a line plot and histogram for the same set of data.

Students write statistical questions that can be used to find information about a typical sixth grade student. Then, the class works together to informally plan how to find the typical arm span of a student in their class.
Key Concepts
Statistical thinking, in large part, must deal with variability; statistical problem solving and decision making depend on understanding, explaining, and quantifying the variability in the data.
"How tall is a sixth grader?" is a statistical question because all sixth graders are not the same height; there is variability.
Goals and Learning Objectives
Understand what a statistical question is.
Realize there is variability in data and understand why.
Describe informally the range, median, and mode of a set of data.

Students analyze the data they have collected to answer their question for the unit project. They will also complete a short Self Check. Students are given class time to work on their projects. Students should use the time to analyze their data, finding the different measures and/or graphing their data. If necessary, students may choose to use the time to collect data. Students also complete a short pre-assessment (Self Check problem).
Key Concepts
Students will look at all of the tools that they have to analyze data. These include:
Graphic representations: line plots, box plots, and histograms
Measures of center and spread: mean, median, mode, range, and the five-number summary
Students will use these tools to work on their project and to complete an assessment exercise.
Goals and Learning Objectives
Complete the project, or progress far enough to complete it outside of class.
Review measures of center and spread and the three types of graphs explored in the unit.
Check knowledge of box plots and measures of center and spread.
Students explore how well each measure describes the data and discover that the mean is affected more by extreme values than the mode or median. The mathematical definitions for measures of center and spread are formalized.

Key Concepts
Students use the Line Plot with Stats interactive to develop a greater understanding of the measures of center. Here are a few of the things students may discover:
The mean and the median do not have to be data points.
The mean is affected by extreme values, while the median is not.
Adding values above the mean increases the mean. Adding values below the mean decreases the mean.
You can add values above and below the mean without changing the mean, as long as those points are "balanced."
Adding values above the median may or may not increase the median. Adding values below the median may or may not decrease the median.
Adding equal numbers of points above and below the median does not change the median.
The measures of center can be related in any number of ways. For example, the mean can be greater than the median, the median can be greater than the mean, and the mode can be greater than or less than either of these measures.
Note: In other courses, students will learn that a set of data may have more than one mode. That will not be the case in this lesson.

Goals and Learning Objectives
Explore how changing the data in a line plot affects the measures of center (mean, median).
Understand that the mean is affected by outliers more than the median is.
Create line plots that fit criteria for given measures of center.

Chris Adcock | Conditional Remix & Share Permitted | CC BY-NC

Equations and Inequalities
Type of Unit: Concept

Prior Knowledge
Students should be able to:
Add, subtract, multiply, and divide with whole numbers, fractions, and decimals.
Use the symbols <, >, and =.
Evaluate expressions for specific values of their variables.
Identify when two expressions are equivalent.
Simplify expressions using the distributive property and by combining like terms.
Use ratio and rate reasoning to solve real-world problems.
Order rational numbers.
Represent rational numbers on a number line.

Lesson Flow
In the exploratory lesson, students use a balance scale to find a counterfeit coin that weighs less than the genuine coins. Then, continuing with a balance scale, students write mathematical equations and inequalities, identify numbers that are, or are not, solutions to an equation or an inequality, and learn how to use the addition and multiplication properties of equality to solve equations. Students then learn how to use equations to solve word problems, including word problems that can be solved by writing a proportion. Finally, students connect inequalities and their graphs to real-world situations.
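As a small illustration of the addition and multiplication properties of equality described in the lesson flow, here is a Python sketch (the equation 3x + 2 = 11 is an invented example) that mirrors the balance-scale reasoning:

# Solve 3x + 2 = 11 by keeping both sides of the "balance" equal.
rhs = 11
rhs = rhs - 2    # addition property of equality: subtract 2 from both sides -> 3x = 9
x = rhs / 3      # multiplication property of equality: divide both sides by 3 -> x = 3

assert 3 * x + 2 == 11   # the solution keeps the scale balanced
print(x)                 # 3.0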
{"url":"https://openspace.infohio.org/browse?f.provider=pearson","timestamp":"2024-11-13T22:39:17Z","content_type":"text/html","content_length":"166537","record_id":"<urn:uuid:2c8a71ab-f414-41d9-8ef6-5a99ec08cc47>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00570.warc.gz"}
Consider a plane wave with peak pressure amplitude $p^+$ traveling in a medium with wave impedance $R_1$ and arriving at a junction with a second medium having wave impedance $R_2$, as depicted in Fig. C.15. Assume that
• pressure must be continuous everywhere, and
• velocity in must equal velocity out (the junction has no state).
Since power is pressure times velocity, these constraints imply that signal power is conserved at the junction. Expressed mathematically, the physical constraints at the junction can be written as follows:
$$ p^+ + p^- \;=\; p_t, \qquad v^+ + v^- \;=\; v_t, $$
where $p^-$, $v^-$ denote the reflected pressure and velocity waves and $p_t$, $v_t$ the transmitted waves. As derived in §C.7.3, we also have the Ohm's law relations:
$$ p^+ \;=\; R_1 v^+, \qquad p^- \;=\; -R_1 v^-, \qquad p_t \;=\; R_2 v_t. $$
These equations determine what happens at the junction. To obey the physical constraints at the impedance discontinuity, the incident plane wave must split into a reflected plane wave and a transmitted plane wave such that signal power is conserved. The physical pressure on the left of the junction is $p = p^+ + p^-$. Define the junction pressure $p_j = p_t$ and the junction velocity $v_j = v_t$. Then we can write
$$ v^- \;=\; -\frac{R_2 - R_1}{R_2 + R_1}\, v^+, \qquad v_t \;=\; \frac{2 R_1}{R_2 + R_1}\, v^+ . $$
Note that $v_t = v^+ + v^-$. We have solved for the transmitted and reflected velocity waves given the incident wave and the two impedances. Using the Ohm's law relations, the pressure waves follow easily. Define the reflection coefficient of the scattering junction as
$$ k \;=\; \frac{R_2 - R_1}{R_2 + R_1}. $$
Then we get the following relations in terms of pressure waves:
$$ p^- \;=\; k\, p^+, \qquad p_t \;=\; (1 + k)\, p^+ . $$
Signal flow graphs for pressure and velocity are given in Fig. C.16. It is a simple exercise to verify that signal power is conserved by checking that
$$ p^+ v^+ + p^- v^- \;=\; p_t v_t, \qquad \text{i.e.,} \qquad 1 - k^2 \;=\; (1 + k)(1 - k). $$
So far we have only considered a plane wave incident on the left of the junction. Consider now a plane wave incident from the right. For that wave, the impedance steps from $R_2$ to $R_1$, so the reflection coefficient it sees is $-k$ (Fig. C.17). Note that the transmission coefficient is one plus the reflection coefficient in either direction. This signal flow graph is often called the "Kelly-Lochbaum" scattering junction [297].
Figure C.17: Signal flow graph for plane waves incident on either the left or right of an impedance discontinuity. Also shown are delay lines corresponding to sampled traveling plane-wave components propagating on either side of the scattering junction.
There are some simple special cases:
• $k = 1$ ($R_2 = \infty$; e.g., rigid wall reflection)
• $k = -1$ ($R_2 = 0$; e.g., open-ended tube)
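To make the scattering relations concrete, here is a small Python sketch (the impedance values in the example call are arbitrary, and the function name is our own) that computes the reflected and transmitted pressure waves and numerically verifies power conservation:

def scattering(R1, R2, p_inc=1.0):
    """Kelly-Lochbaum scattering of a pressure wave at an impedance step R1 -> R2."""
    k = (R2 - R1) / (R2 + R1)       # reflection coefficient
    p_ref = k * p_inc               # reflected pressure wave
    p_tr = (1 + k) * p_inc          # transmitted pressure wave
    v_inc, v_ref, v_tr = p_inc / R1, -p_ref / R1, p_tr / R2
    # power = pressure * velocity; incident power splits into reflected + transmitted
    assert abs(p_inc * v_inc + p_ref * v_ref - p_tr * v_tr) < 1e-12
    return p_ref, p_tr

print(scattering(R1=1.0, R2=3.0))   # k = 0.5: half the pressure reflected, 1.5x transmitted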
{"url":"https://www.dsprelated.com/freebooks/pasp/Plane_Wave_Scattering.html","timestamp":"2024-11-03T22:28:26Z","content_type":"text/html","content_length":"43215","record_id":"<urn:uuid:a391cf40-5358-443d-8936-43651a2ff9a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00743.warc.gz"}
Line Chart Connector Colors | CanvasJS Charts

Good day! Tell me a way to create a graph like the one in the attached image. The connecting lines between the points need to have certain colors. Suppose points 1 and 2 are connected; then their connecting line is blue. Say points 2 and 1 are connected; then their color is red. Is it possible to make the line color blue for connecting the even dots 0-2-4-6-8-0 and the dots 0-1-3-5-7-9-0, and red for the lines 0-9-8-7-5-3-1-0 and 0-8-6-4-2-0? The graph is dynamic. Example

Can you please provide some sample data so that we can understand your scenario better and help you out?

Based on the screenshot shared, you can easily achieve the requirement by setting the markerBorderThickness, markerColor, markerBorderColor, and lineColor as shown in the code snippet below –

type: "line",
markerBorderThickness: 2,
markerColor: "white",
dataPoints: [
  {x: 4,  y: 10, markerBorderColor: "blue", lineColor: "blue"},
  {x: 9,  y: 23, markerBorderColor: "blue", lineColor: "red"},
  {x: 16, y: 16, markerBorderColor: "red",  lineColor: "blue"},
  {x: 12, y: 33, markerBorderColor: "blue", lineColor: "blue"},
  {x: 17, y: 44, markerBorderColor: "blue", lineColor: "red"}
]

Also, please take a look at this JSFiddle for an example of the same.
Indranil Deo
Team CanvasJS

To render a dynamic chart, you will have to create a function that checks the dataPoint value before assigning it to the chart options/data and accordingly customizes the markerBorderColor and lineColor. The function can be called at regular intervals to add a new dataPoint using setInterval(), as shown in the code snippet below –

var updateChart = function () {
  yVal = yVal + Math.round(5 + Math.random() * (-5 - 5));
  if (dps.length && yVal < dps[dps.length - 1].y) {
    dps.push({x: xVal, y: yVal, markerBorderColor: "red"});
    dps[dps.length - 2].lineColor = "red";
  } else if (dps.length) {
    dps.push({x: xVal, y: yVal, markerBorderColor: "blue"});
    dps[dps.length - 2].lineColor = "blue";
  } else {
    // first dataPoint: nothing to connect to yet
    dps.push({x: xVal, y: yVal, markerBorderColor: "blue"});
  }
};
setInterval(function () { updateChart(); }, updateInterval);

Please take a look at this documentation page for a step-by-step tutorial to create a dynamic chart that updates dataPoints at regular intervals. Also, take a look at this JSFiddle for a working example.

In case this doesn't fulfill your requirement, kindly brief us further about the logic based on which the line color is defined.
Indranil Singh Deo
Team CanvasJS

You can set markerBorderColor & lineColor based on your condition (comparing the current y-value with the previous y-value and checking whether it's even or odd) to achieve this. Please find the code snippet below –

var updateChart = function () {
  yVal = yVal + Math.round(5 + Math.random() * (-5 - 5));
  if (dps.length && yVal < dps[dps.length - 1].y) {
    // Setting markerBorderColor and lineColor for each specific dataPoint
    color = (yVal % 2 === 0 ? "red" : "blue");
    dps.push({x: xVal, y: yVal, markerBorderColor: color});
    dps[dps.length - 2].lineColor = color;
  } else if (dps.length) {
    // Setting markerBorderColor and lineColor for each specific dataPoint
    color = (yVal % 2 === 0 ? "blue" : "red");
    dps.push({x: xVal, y: yVal, markerBorderColor: color});
    dps[dps.length - 2].lineColor = color;
  } else {
    dps.push({x: xVal, y: yVal, markerBorderColor: "blue"});
  }
};

Please take a look at this JSFiddle for the complete code.
Indranil Singh Deo
Team CanvasJS

Considering this thread as a duplicate of Tabular presentation of graph data and hence closing the same.
Adithya Menon
Team CanvasJS
{"url":"https://canvasjs.com/forums/topic/line-chart-connector-colors/","timestamp":"2024-11-01T19:47:59Z","content_type":"text/html","content_length":"201295","record_id":"<urn:uuid:712dca9c-8869-4ea0-9732-3faf66d97bd0>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00053.warc.gz"}
Micrometers to Feet Conversion (µm to ft) - Inch Calculator

Micrometers to Feet Converter
Enter the length in micrometers below to convert it to feet. Do you want to convert feet to micrometers?

How to Convert Micrometers to Feet
To convert a measurement in micrometers to a measurement in feet, divide the length by the conversion ratio of 304,800 micrometers per foot. Since one foot is equal to 304,800 micrometers, you can use this simple formula to convert:

feet = micrometers ÷ 304,800

The length in feet is equal to the length in micrometers divided by 304,800. For example, here's how to convert 500,000 micrometers to feet using the formula above:

feet = 500,000 µm ÷ 304,800 = 1.64042'

Our inch fraction calculator can add micrometers and feet together, and it also automatically converts the results to US customary, imperial, and SI metric values.

Micrometers and feet are both units used to measure length. Keep reading to learn more about each unit of measure.

What Is a Micrometer?
One micrometer is equal to one-millionth (1/1,000,000) of a meter, which is defined as the distance light travels in a vacuum in a time interval of 1/299,792,458 of a second. The micrometer, or micrometre, is a multiple of the meter, which is the SI base unit for length. In the metric system, "micro" is the prefix for millionths, or 10^-6. A micrometer is sometimes also referred to as a micron.

Micrometers can be abbreviated as µm; for example, 1 micrometer can be written as 1 µm.

To get an idea of the actual physical length of a micrometer, consider that one human hair is 40-50 µm thick, which demonstrates how small this unit of measure is.

What Is a Foot?
The foot is a unit of length measurement equal to 12 inches or 1/3 of a yard. Because the international yard is legally defined to be equal to exactly 0.9144 meters, one foot is equal to 0.3048 meters.

The foot is a US customary and imperial unit of length. Feet can be abbreviated as ft; for example, 1 foot can be written as 1 ft. Feet can also be denoted using the ′ symbol, otherwise known as a prime, though a single quote (') is often used instead of the prime symbol for convenience. Using the prime symbol, 1 ft can be written as 1′.

Measurements in feet are most commonly taken using either a standard 12" ruler or a tape measure, though there are many other measuring devices available. Feet are sometimes referred to as linear feet, which are simply a measurement of length in feet. You might be interested in our feet and inches calculator, which can add feet with other units of measurement such as inches, centimeters, or meters.

We recommend using a ruler or tape measure for measuring length, which can be found at a local retailer or home center. Rulers are available in imperial, metric, or a combination of both values, so make sure you get the correct type for your needs. Need a ruler? Try our free downloadable and printable rulers, which include both imperial and metric measurements.

Micrometer to Foot Conversion Table
Table showing various micrometer measurements converted to feet.

Micrometers      Feet
1 µm             0.0000032808'
2 µm             0.0000065617'
3 µm             0.0000098425'
4 µm             0.000013123'
5 µm             0.000016404'
6 µm             0.000019685'
7 µm             0.000022966'
8 µm             0.000026247'
9 µm             0.000029528'
10 µm            0.000032808'
100 µm           0.000328'
1,000 µm         0.003281'
10,000 µm        0.032808'
100,000 µm       0.328084'
1,000,000 µm     3.2808'
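The conversion formula above is easy to automate; here is a minimal Python sketch (the function names are our own) implementing both directions of the conversion:

MICROMETERS_PER_FOOT = 304_800  # exact, since 1 ft = 0.3048 m

def um_to_feet(um):
    return um / MICROMETERS_PER_FOOT

def feet_to_um(ft):
    return ft * MICROMETERS_PER_FOOT

print(um_to_feet(500_000))  # 1.6404199475..., matching the worked example above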
{"url":"https://www.inchcalculator.com/convert/micrometer-to-foot/","timestamp":"2024-11-14T18:19:36Z","content_type":"text/html","content_length":"67735","record_id":"<urn:uuid:cd58f522-4678-48a3-986e-486747c3cb45>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00046.warc.gz"}
Mysterious white numbers on lenses - meaning?

Some of your M lenses have small white double-digit numbers next to the infinity marking on the lens barrel - what do these mean?

These numbers tell you the exact focal length of your lens, and they appear only on the M 50, 90, and 135mm lenses. What you do is drop the last digit of the nominal focal length and then append the two white numbers to give you the exact value. This is best explained by the following examples:

My 50mm Summicron has the number "22" engraved on it - this indicates the exact focal length as being 52.2mm. Another 50mm lens has 98 written on it, making the exact focal length 49.8mm. (The two digits give the last two figures of the focal length in tenths of a millimetre, read as the value closest to the nominal focal length; that is why "98" on a 50mm lens means 49.8mm rather than 59.8mm.) A third lens (a 90mm) has 00; this means that the lens has a focal length of exactly 90.0mm. Pretty easy.

This, along with the older, more cryptic, #2-#8 precision coding scheme, was discussed in more detail on the greenspun.com Leica forum at <Greenspun.com: #005cTt>

So why bother? What does it matter? Dennis Painter provides the following explanation:

[…] The precision of the Leica M system rangefinder is such that two lenses which have an actual focal length variation of even a tenth of a millimetre require a different helical mount to translate the focus extension correctly to the rangefinder assembly within the camera body. Leica lenses have always had the focus helicoid matched to the lens head. In the past the 50mm helicoids had code numbers stamped under the infinity lock, each number representing the exact focal length for which the helicoid was cut. [This is the cryptic #2-#8 scheme noted above.] […]

There are two reasons for variation from the nominal focal length. In some cases the actual focal length used for the lens computation may be different from the nominal. The best example of this is the original Summicron, which had a computed focal length of 51.9mm (note: I no longer have a reference, but this is from memory). The second reason for focal length variation is manufacturing tolerances. Grinding lens elements to the exact curvatures and thicknesses specified in the lens formula is not economically feasible, so slight variations within manufacturing tolerances can result in the lens having a focal length slightly different from the formula.
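A short Python sketch (our own illustration, not part of the original FAQ) makes the decoding rule explicit, including the wrap-around case of the 49.8mm lens:

def exact_focal_length(nominal_mm, engraved):
    """Decode the two white digits into an exact focal length in mm.

    Tries each candidate ending in the engraved digits (in 0.1 mm units)
    and returns the one closest to the nominal focal length.
    """
    digits = int(engraved)                      # e.g. "22" -> 22
    base = (nominal_mm * 10 // 100) * 100       # hundreds part, in 0.1 mm units
    candidates = [base - 100 + digits, base + digits, base + 100 + digits]
    best = min(candidates, key=lambda c: abs(c - nominal_mm * 10))
    return best / 10

print(exact_focal_length(50, "22"))  # 52.2
print(exact_focal_length(50, "98"))  # 49.8
print(exact_focal_length(90, "00"))  # 90.0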
{"url":"http://leica.nemeng.com/039b.php","timestamp":"2024-11-02T20:23:30Z","content_type":"application/xhtml+xml","content_length":"8763","record_id":"<urn:uuid:8628d454-7e22-415a-bcd0-6786ce67bbb6>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00369.warc.gz"}
Course Descriptions

Explore these course descriptions to learn more about the content, concepts, techniques, and technology covered in each core and elective course within the FSU Interdisciplinary Data Science Master's Degree Program, or IDS. A detailed list of core coursework and elective course requirements is available under the Student tab, Coursework link.

Core Curriculum

CAP 5771 - Data Mining (3). Prerequisite: ISC 3222 or ISC 3313 or ISC 4304C or COP 3330 or COP 4530 or instructor permission. This course enables students to study data mining concepts and techniques, including characterization and comparison, association rules mining, classification and prediction, cluster analysis, and mining complex types of data. Students also examine applications and trends in data mining.

PHI 5699 - Data Ethics (3). This course examines ethical questions related to the analysis, management, and application of data. Through case studies and class discussions, students will develop their ability to recognize and analyze ethical issues that arise in their work as data scientists. Topics may include privacy, accountability, consent, transparency, and fairness.

MAP 5196 - Mathematics for Data Science (3). This course covers basic mathematical methods and tools used in data analysis. The topics will include singular value decomposition of matrices, low-rank models, Lagrange multipliers, convex minimization, gradient descent methods, as well as applications.

CAP 5768 - Introduction to Data Science (3). Prerequisites: Graduate standing in science or engineering, or permission of the instructor. Some familiarity with basic concepts in linear algebra and probability theory. Some basic knowledge of algorithm design and some experience with Python or Java programming. This course will serve as an introduction and overview of the fundamentals of data science. Specific topics will include an overview of data management fundamentals, information retrieval, introductory machine learning concepts and frameworks, basic data visualization architectures, an overview of architectures of large-scale data management and analytical systems, and distributed computational paradigms.

STA 5207 - Applied Regression Methods (3). Prerequisite: One of STA 2122, STA 4322, or STA 5126. This course discusses topics such as the general linear hypothesis, analysis of covariance, multiple correlation and regression, and response surface methods.

STA 5910 - Professional Development (1). This course familiarizes students with the importance of "soft skills" when working as a statistician, including written and oral communication, working effectively in teams, translating client goals into statistical analyses, and otherwise excelling in a collaborative role. Classes will be led by experts who have experience as collaborative statisticians in a variety of fields.

STA 5635 - Applied Machine Learning (3). Prerequisite: STA 3032 or instructor permission. This course is a hands-on introduction to statistical methods for supervised, unsupervised, and semi-supervised learning. It explores fundamental techniques including but not limited to Support Vector Machines, Decision Trees, Linear Discriminant Analysis, Random Forests, Neural Networks, and different flavors of Boosting.

Elective Courses

Course descriptions for elective courses are listed by IDS program major area of study.

ISC 5XXX - Computational Probabilistic Modeling (3).
In this course, students are introduced to probabilistic programming and modeling for modern data science and machine learning applications. Algorithms for predictive inference are covered from a theoretical and practical viewpoint, with an emphasis on implementation in Python. Topics include an introduction to probability and learning theory, graph-based methods, machine learning with neural networks, dimensionality reduction, and algorithms for big data.

ISC 5XXX - Data Science for Health (3). This course will focus on the applied data science pipeline of data acquisition, data processing and integration, data modeling and analysis, and validation and delivery, commonly used in the health industry. Topics include data normalization, scientific visualization, multivariate regression, and Artificial Neural Networks (dense, convolutional, recurrent, and adversarial). The examples and projects of this course contain 1D to 4D health data of electrocardiogram sequences, X-ray, magnetic resonance imaging (MRI), and functional MRI images.

ISC 5228 - Monte Carlo Methods (3). Prerequisites: ISC 5305; MAC 2311, 2312. This course introduces probabilistic modeling and Monte Carlo methods (MCMs) suitable for graduate students in science, technology, and engineering. Students learn discrete event simulation, MCMs and their probabilistic foundations, and the application of MCMs to various fields. In particular, Markov chain MCMs are introduced, as are the application of MCMs to problems in linear algebra and the solution of partial differential equations.

ISC 5305 - Scientific Programming (3). Prerequisites: Working knowledge of one programming language (C++, Fortran, Java), or instructor permission. This course focuses on object-oriented coding in C++, Java, and Fortran 90 with applications to scientific programming. Discussion of class hierarchies, pointers, function and operator overloading, and portability. Examples include computational grids and multidimensional arrays.

ISC 5307 - Scientific Visualization (3). Prerequisites: CGS 4406, ISC 5305, or instructor permission. The course covers the theory and practice of scientific visualization. Students learn how to use state-of-the-art visualization toolkits, create their own visualization tools, represent both 2-D and 3-D data sets, and evaluate the effectiveness of their visualizations.

ISC 5315 - Applied Computational Science I (4). Prerequisites: ISC 5305; MAP 2302; or instructor permission. This course provides students with high-performance computational tools necessary to investigate problems arising in science and engineering, with an emphasis on combining them to accomplish more complex tasks. A combination of coursework and lab work provides the proper blend of theory and practice, with problems culled from the applied sciences. Topics include numerical solutions to ODEs and PDEs, data handling, interpolation and approximation, and visualization.

ISC 5318 - High-Performance Computing (3). Prerequisites: ISC 5305 or equivalent, or instructor permission. This course introduces high-performance computing, a term which refers to the use of parallel supercomputers and computer clusters, as well as software and hardware, in order to speed up computations. Students learn to write faster code that is highly optimized for modern multi-core processors and clusters, using modern software-development tools and performance analyzers, specialized algorithms, parallelization strategies, and advanced parallel programming constructs.

ISC 5935 - Data Assimilation (3).
Data assimilation (DA) methods combine numerical models and observations to arrive at the best possible representation of a physical system. This course aims to build a robust theoretical foundation in the subject and explore some of the computational challenges in large scientific and engineering applications. Students will gain hands-on experience by implementing their own algorithms and will complete a final project on a preferred research topic.

CAP 5605 - Artificial Intelligence (3). Prerequisite: COP 4530. This course is an introduction to artificial intelligence: representing knowledge, controlling attention, exploiting constraints, basic LISP programming, basic graph-searching methods, game playing and dealing with adversaries, understanding vision, theorem proving by computer, and computer programs utilizing artificial intelligence techniques.

CAP 5619 - Deep and Reinforcement Learning Fundamentals (3). Prerequisite: Senior or graduate standing in science or engineering, or instructor permission. Requires some familiarity with basic concepts in linear algebra and probability theory, some basic knowledge of algorithm design, and programming experience with Python. This course covers fundamental principles and techniques in deep and reinforcement learning, as well as convolutional neural networks, recurrent and recursive neural networks, backpropagation algorithms, regularization and optimization techniques for training such networks, dynamic programming, Monte Carlo, temporal difference, and function approximation reinforcement learning algorithms, and applications of deep and reinforcement learning. The course also covers active research topics in deep and reinforcement learning areas.

CAP 5769 - Advanced Topics in Data Science (3). Prerequisite: COP 4530 (Computer Science undergraduate students); or CAP 5768 and graduate standing in science or engineering majors; or instructor permission. Familiarity with basic linear algebra, probability, and algorithms, and some Python or Java skills. This course emphasizes practical techniques for working with large-scale, heterogeneous data. Specific topics covered include fundamentals of data management, data models, data cleaning, fusion, information retrieval, statistical modeling, machine learning, deep learning, data pipelines, visualization, "big data" management systems, and distributed computational frameworks, paradigms, and tools. The goal is to provide advanced theoretical foundations and hands-on experience, train students to become capable data scientists, develop their analytical skills, and provide them with experience with real-world systems.

CAP 5778 - Advanced Data Mining (3). Prerequisite: Students need to have a working knowledge of probability theory, linear algebra, and common data mining algorithms. They should have taken a course covering the fundamentals of data structures, algorithms, and generic programming (or equivalent). This course discusses techniques for processing and mining large amounts of data. It covers the basic data mining techniques for data classification and clustering; it also includes advanced concepts such as semi-supervised learning, locality-sensitive hashing, recommender systems, PageRank, and large-scale machine learning algorithms.

CDA 5125 - Parallel and Distributed Systems (3). Prerequisite: COP 4610. This course introduces various systems aspects of parallel and distributed computing.
Topics include parallel computer architectures, interconnects, parallel programming paradigms, compilation techniques, runtime libraries, performance evaluation, performance monitoring and tuning, as well as tools for parallel and distributed computing.

CDA 5155 - Computer Architecture (3). Prerequisite: CDA 3101. This course focuses on computer system components; microprocessor and minicomputer architecture; stack computers; parallel computers; overlap and pipeline processing; networks and protocols; performance evaluation; and architecture studies of selected systems.

CEN 5035 - Software Engineering (3). Prerequisites: CEN 4021, COP 4020, and COP 4530. This course surveys software engineering and provides a detailed study of topics from requirements analysis and specification, programming methodology, software testing and validation, performance and design evaluation, software project management, and programming tools and standards.

CIS 5370 - Computer Security (3). Prerequisite: COP 4610. In this course, topics include computer security threats and attacks, covert channels, trusted operating systems, access control, entity authentication, security policies, models of security, database security, administering security, physical security and TEMPEST, and brief introductions to network security and legal and ethical aspects of security. A research paper or project is required.

CIS 5379 - Computer Security Fundamentals for Data Science (3). Prerequisite: CGS 3465. This is an introductory computer security course targeted toward graduate students in data science. This course covers a broad range of topics within computer security, such as cryptographic algorithms, security protocols, network authentication, and software security.

CNT 5505 - Data and Computer Communications (3). Prerequisites: CDA 3100 and COP 4610. This course offers an overview of networks; data communication principles; the data link layer; routing in packet-switched networks; flow and congestion control; multiple access communication protocols; local area network protocols and standards; network interconnection; transport protocols; integrated services digital networks (narrowband and broadband); and switching techniques and fast packet switching.

CNT 5605 - Computer and Network Administration (3). Prerequisite: COP 4610. This course covers UNIX user commands and shell programming. Also covered are problem solving and diagnostic methods, system startup and shutdown, device files and installing devices, disk drives and file systems, NFS, NIS, DNS, and sendmail. Students also learn how to manage a WWW site, manage UNIX software applications, system security, and performance tuning. Legal and professional issues, ethics, and policies are covered.

COP 5570 - Concurrent, Parallel, and Distributed Programming (3). Prerequisite: COP 4610. This course covers UNIX and C standards, file I/O, file access and attributes, directories, the standard I/O library, systems administration files, the process environment, process control, process relationships, signals, terminal I/O, daemon processes, interprocess communication, and pseudo terminals.

COP 5611 - Advanced Operating Systems (3). Prerequisites: CDA 3100, COP 4610, and introductory probability or statistics. This course focuses on design principles of batch, multiprogramming, and time-sharing systems; distributed systems; and problems of concurrency.

COP 5725 - Database Systems (3). Prerequisites: COP 4610 and COP 4710.
This course examines the use of a generalized database management system; characteristics of database systems; hierarchical, network, and relational models; and file organizations.

COT 5405 - Advanced Algorithms (3). Prerequisite: COP 4530. This course covers algorithms, formal proofs of correctness, and time complexity analysis for network flow problems, approximation of NP-hard combinatorial optimization problems, parallel algorithms, cache-aware algorithms, randomized algorithms, computational geometry, string algorithms, and other topics requiring advanced techniques for proof of correctness or time/space complexity analysis.

MAD 5XXX - Principles and Foundations of Machine Learning (3). This course will provide an in-depth treatment of the mathematical principles and methods underlying several machine learning algorithms, ranging from dimension reduction to deep neural networks.

MAD 5XXX - Numerical Linear Algebra (3). Prerequisites: MAC 2313; MAS 3105. This course provides the theoretical and computational concepts, techniques, and tools to design, analyze, and evaluate algorithms for fundamental problems in numerical linear algebra.

MTG 5356 - Topological Data Analysis (3). Prerequisites: MAA 4224; MAS 3105. Topological data analysis (TDA) applies ideas from algebraic topology to data science. The course serves as an introduction to TDA, with a focus on persistent homology, presenting a balance of high-level theory and real-world applications to data analysis.

MAD 5306 - Graphs and Networks (3). Prerequisite: MAS 3105. This course covers examples of networks in science and technology, mathematical principles of network theory, types of network centrality, random networks, and the large-scale structure of networks.

MAD 5403 - Foundations of Computational Mathematics (3). Prerequisites: MAS 3105; competence in a programming language suitable for numeric computation. Analysis and implementation of numerical algorithms: matrix analysis, conditioning, errors, direct and iterative solution of linear systems, rootfinding, systems of nonlinear equations, and numerical optimization.

MAD 5404 - Foundations of Computational Mathematics II (3). Prerequisite: MAD 5403. Interpolation, quadrature, approximation theory, and numerical methods for ordinary differential equations and partial differential equations.

MAD 5420 - Numerical Optimization (3). Prerequisites: MAC 2313; MAS 3105; C, C++, or Fortran. This course covers topics from unconstrained and constrained minimization as well as global minimization.

MAP 5345 - Partial Differential Equations (3). This course covers the separation of variables; Fourier series; Sturm-Liouville problems; multidimensional initial boundary value problems; nonhomogeneous problems; and Bessel functions and Legendre polynomials.

STA 5066 - Data Management and Analysis with SAS (3). Prerequisite: Previous background in statistics at least through linear regression, or instructor permission. This course introduces SAS software in a lab-based format. SAS is the world's most widely used statistical package for managing and analyzing data. The objective of this course is for students to develop the skills necessary to address data management and analysis issues using SAS. This course includes a complete introduction to data management for scientific and industrial data and an overview of SAS statistical procedures.

STA 5067 - Advanced Data Management and Analysis with SAS (3). Prerequisite: STA 5066. This course presents additional methods for managing and analyzing data with the SAS system.
It covers as many of the following topics as time permits: advanced data step topics, manipulation of data with Proc SQL, the SAS Macro Facility, simulation with the data step, and analyses with Proc IML.

STA 5106 - Computational Methods in Statistics (3). Prerequisites: At least one previous course in statistics above STA 1013 and some previous programming experience; or instructor permission.

STA 5107 - Computational Methods in Statistics II (3). Prerequisite: STA 5106. The course is a continuation of STA 5106 in computational techniques for linear and nonlinear statistics. The course also covers statistical image understanding, elements of pattern theory, simulated annealing, the Metropolis-Hastings algorithm, and Gibbs sampling.

STA 5166 - Statistics in Applications I (3). Prerequisite: MAC 2313. This course introduces topics such as comparison of two treatments; random sampling; randomization and blocking with two comparisons; statistical inference for means, variances, proportions, and frequencies; and analysis of variance.

STA 5167 - Statistics in Applications II (3). Prerequisite: STA 5166. This course focuses on topics such as special designs in analysis of variance, linear and nonlinear regression, least squares and weighted least squares, case analysis, model building, and non-least-squares estimation.

STA 5238 - Applied Logistic Regression (3). Prerequisite: STA 3032 or an equivalent upper-division course that covers basic statistics at least through linear regression. This course is an applied introduction to logistic regression, one of the most commonly used analytic tools in statistical studies. Topics include fitting the model, interpretation of the model, model building, assessing model fit, model validation, and model uncertainty.

STA 5326 - Distribution Theory and Inference (3). Prerequisites: MAC 2313; at least one previous course in statistics or probability. This course is an introduction to probability, random variables, distributions, limit laws, conditional distributions, and expectations.

STA 5327 - Statistical Inference (3). Prerequisites: STA 5326; STA 5166. This course introduces students to the basics of statistical inference and its applications. The overarching goal is to introduce statistical techniques to estimate, and to provide uncertainty measures for, key quantities of a population (e.g., mean, median, location shift, variance).

STA 5507 - Applied Nonparametric Statistics (3). Prerequisite: A course in statistics above STA 1013, or instructor permission. This course focuses on applications of nonparametric tests, estimates, confidence intervals, multiple comparison procedures, multivariate nonparametric methods, and nonparametric methods for censored data.

STA 5707 - Applied Multivariate Analysis (3). Prerequisite: One of STA 5167, STA 5207, or STA 5327. This course discusses inference about mean vectors and covariance matrices, canonical correlation, principal components, discriminant analysis, cluster analysis, and computer techniques.

STA 5856 - Time Series and Forecasting Methods (3). Prerequisite: One of STA 5167, STA 5207, or STA 5327. This course explores autoregressive, moving average, and mixed models; autocovariance and autocorrelation functions; model identification; forecasting techniques; seasonal model identification, estimation, and forecasting; and intervention and transfer function model identification, estimation, and forecasting.

STA 5939 - Introduction to Statistical Consulting (3).
Prerequisite: STA 5167, or STA 5327, or instructor permission. This course consists of the formulation of statistical problems from client information, the analysis of complex data sets by computer, and practical consulting experience.
{"url":"https://datascience.fsu.edu/students/course-descriptions","timestamp":"2024-11-11T09:37:53Z","content_type":"text/html","content_length":"60595","record_id":"<urn:uuid:66d7762f-5c41-4474-8e3b-1c5f327810bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00301.warc.gz"}
TensorFlow Distributions: A Gentle Introduction | TensorFlow Probability

In this notebook, we'll explore TensorFlow Distributions (TFD for short). The goal of this notebook is to get you gently up the learning curve, including understanding TFD's handling of tensor shapes. This notebook tries to present examples first, rather than abstract concepts. We'll present canonical easy ways to do things first, and save the most general abstract view until the end. If you're the type who prefers a more abstract and reference-style tutorial, check out Understanding TensorFlow Distributions Shapes. If you have any questions about the material here, don't hesitate to contact (or join) the TensorFlow Probability mailing list. We're happy to help.

Before we start, we need to import the appropriate libraries. Our overall library is tensorflow_probability. By convention, we generally refer to the distributions library as tfd.

Tensorflow Eager is an imperative execution environment for TensorFlow. In TensorFlow eager, every TF operation is immediately evaluated and produces a result. This is in contrast to TensorFlow's standard "graph" mode, in which TF operations add nodes to a graph which is later executed. This entire notebook is written using TF Eager, although none of the concepts presented here rely on that, and TFP can be used in graph mode.

import collections
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
try:
  tf.compat.v1.enable_eager_execution()
except ValueError:
  pass
import matplotlib.pyplot as plt

Basic Univariate Distributions

Let's dive right in and create a normal distribution:

n = tfd.Normal(loc=0., scale=1.)
n
<tfp.distributions.Normal 'Normal' batch_shape=[] event_shape=[] dtype=float32>

We can draw a sample from it:

n.sample()
<tf.Tensor: shape=(), dtype=float32, numpy=0.25322816>

We can draw multiple samples:

n.sample(3)
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-1.4658079, -0.5653636, 0.9314412], dtype=float32)>

We can evaluate a log prob:

n.log_prob(0.)
<tf.Tensor: shape=(), dtype=float32, numpy=-0.9189385>

We can evaluate multiple log probabilities:

n.log_prob([0., 2., 4.])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([-0.9189385, -2.9189386, -8.918939 ], dtype=float32)>

We have a wide range of distributions. Let's try a Bernoulli:

b = tfd.Bernoulli(probs=0.7)
b
<tfp.distributions.Bernoulli 'Bernoulli' batch_shape=[] event_shape=[] dtype=int32>

b.sample()
<tf.Tensor: shape=(), dtype=int32, numpy=1>

b.sample(8)
<tf.Tensor: shape=(8,), dtype=int32, numpy=array([1, 0, 0, 0, 1, 0, 1, 0], dtype=int32)>

b.log_prob(1)
<tf.Tensor: shape=(), dtype=float32, numpy=-0.35667497>

b.log_prob([1, 0, 1, 0])
<tf.Tensor: shape=(4,), dtype=float32, numpy=array([-0.35667497, -1.2039728 , -0.35667497, -1.2039728 ], dtype=float32)>

Multivariate Distributions

We'll create a multivariate normal with a diagonal covariance:

nd = tfd.MultivariateNormalDiag(loc=[0., 10.], scale_diag=[1., 4.])
nd
<tfp.distributions.MultivariateNormalDiag 'MultivariateNormalDiag' batch_shape=[] event_shape=[2] dtype=float32>

Comparing this to the univariate normal we created earlier, what's different?

tfd.Normal(loc=0., scale=1.)
<tfp.distributions.Normal 'Normal' batch_shape=[] event_shape=[] dtype=float32>

We see that the univariate normal has an event_shape of (), indicating it's a scalar distribution. The multivariate normal has an event_shape of 2, indicating the basic event space of this distribution is two-dimensional.
Sampling works just as before:

nd.sample()
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([-1.2489667, 15.025171 ], dtype=float32)>

nd.sample(5)
<tf.Tensor: shape=(5, 2), dtype=float32, numpy=
array([[-1.5439653 , 8.9968405 ],
       [-0.38730723, 12.448896 ],
       [-0.8697963 , 9.330035 ],
       [-1.2541095 , 10.268944 ],
       [ 2.3475595 , 13.184147 ]], dtype=float32)>

nd.log_prob([0., 10])
<tf.Tensor: shape=(), dtype=float32, numpy=-3.2241714>

Multivariate normals do not in general have diagonal covariance. TFD offers multiple ways to create multivariate normals, including a full-covariance specification (parameterized by a Cholesky factor of the covariance matrix), which we use here.

covariance_matrix = [[1., .7], [.7, 1.]]
nd = tfd.MultivariateNormalTriL(
    loc = [0., 5],
    scale_tril = tf.linalg.cholesky(covariance_matrix))
data = nd.sample(200)
plt.scatter(data[:, 0], data[:, 1], color='blue', alpha=0.4)
plt.axis([-5, 5, 0, 10])
plt.title("Data set")

Multiple Distributions

Our first Bernoulli distribution represented a flip of a single coin. We can also create a batch of independent Bernoulli distributions, each with its own parameters, in a single Distribution object:

b3 = tfd.Bernoulli(probs=[.3, .5, .7])
b3
<tfp.distributions.Bernoulli 'Bernoulli' batch_shape=[3] event_shape=[] dtype=int32>

It's important to be clear on what this means. The above call defines three independent Bernoulli distributions, which happen to be contained in the same Python Distribution object. The three distributions cannot be manipulated individually. Note how the batch_shape is (3,), indicating a batch of three distributions, and the event_shape is (), indicating the individual distributions have a univariate event space.

If we call sample, we get a sample from all three:

b3.sample()
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([0, 1, 1], dtype=int32)>

b3.sample(6)
<tf.Tensor: shape=(6, 3), dtype=int32, numpy=
array([[1, 0, 1],
       [0, 1, 1],
       [0, 0, 1],
       [0, 0, 1],
       [0, 0, 1],
       [0, 1, 0]], dtype=int32)>

If we call prob (this has the same shape semantics as log_prob; we use prob with these small Bernoulli examples for clarity, although log_prob is usually preferred in applications), we can pass it a vector and evaluate the probability of each coin yielding that value:

b3.prob([1, 1, 0])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([0.29999998, 0.5 , 0.29999998], dtype=float32)>

Why does the API include batch shape? Semantically, one could perform the same computations by creating a list of distributions and iterating over them with a for loop (at least in Eager mode; in TF graph mode you'd need a tf.while loop). However, having a (potentially large) set of identically parameterized distributions is extremely common, and the use of vectorized computations whenever possible is a key ingredient in being able to perform fast computations using hardware accelerators.

Using Independent To Aggregate Batches to Events

In the previous section, we created b3, a single Distribution object that represented three coin flips. If we called b3.prob on a vector \(v\), the \(i\)th entry was the probability that the \(i\)th coin takes value \(v[i]\). Suppose we'd instead like to specify a "joint" distribution over independent random variables from the same underlying family. This is a different object mathematically, in that for this new distribution, prob on a vector \(v\) will return a single value representing the probability that the entire set of coins matches the vector \(v\). How do we accomplish this?
We use a "higher-order" distribution called Independent, which takes a distribution and yields a new distribution with the batch shape moved to the event shape: b3_joint = tfd.Independent(b3, reinterpreted_batch_ndims=1) <tfp.distributions.Independent 'IndependentBernoulli' batch_shape=[] event_shape=[3] dtype=int32> Compare the shape to that of the original b3: <tfp.distributions.Bernoulli 'Bernoulli' batch_shape=[3] event_shape=[] dtype=int32> As promised, we see that that Independent has moved the batch shape into the event shape: b3_joint is a single distribution (batch_shape = ()) over a three-dimensional event space (event_shape = Let's check the semantics: b3_joint.prob([1, 1, 0]) <tf.Tensor&colon; shape=(), dtype=float32, numpy=0.044999998> An alternate way to get the same result would be to compute probabilities using b3 and do the reduction manually by multiplying (or, in the more usual case where log probabilities are used, summing): tf.reduce_prod(b3.prob([1, 1, 0])) <tf.Tensor&colon; shape=(), dtype=float32, numpy=0.044999994> Indpendent allows the user to more explicitly represent the desired concept. We view this as extremely useful, although it's not strictly necessary. Fun facts: • b3.sample and b3_joint.sample have different conceptual implementations, but indistinguishable outputs: the difference between a batch of independent distributions and a single distribution created from the batch using Independent shows up when computing probabilites, not when sampling. • MultivariateNormalDiag could be trivially implemented using the scalar Normal and Independent distributions (it isn't actually implemented this way, but it could be). Batches of Multivariate Distirbutions Let's create a batch of three full-covariance two-dimensional multivariate normals: covariance_matrix = [[[1., .1], [.1, 1.]], [[1., .3], [.3, 1.]], [[1., .5], [.5, 1.]]] nd_batch = tfd.MultivariateNormalTriL( loc = [[0., 0.], [1., 1.], [2., 2.]], scale_tril = tf.linalg.cholesky(covariance_matrix)) <tfp.distributions.MultivariateNormalTriL 'MultivariateNormalTriL' batch_shape=[3] event_shape=[2] dtype=float32> We see batch_shape = (3,), so there are three independent multivariate normals, and event_shape = (2,), so each multivariate normal is two-dimensional. In this example, the individual distributions do not have independent elements. Sampling works: <tf.Tensor&colon; shape=(4, 3, 2), dtype=float32, numpy= array([[[ 0.7367498 , 2.730996 ], [-0.74080074, -0.36466932], [ 0.6516018 , 0.9391426 ]], [[ 1.038303 , 0.12231752], [-0.94788766, -1.204232 ], [ 4.059758 , 3.035752 ]], [[ 0.56903946, -0.06875849], [-0.35127294, 0.5311631 ], [ 3.4635801 , 4.565582 ]], [[-0.15989424, -0.25715637], [ 0.87479895, 0.97391707], [ 0.5211419 , 2.32108 ]]], dtype=float32)> Since batch_shape = (3,) and event_shape = (2,), we pass a tensor of shape (3, 2) to log_prob: nd_batch.log_prob([[0., 0.], [1., 1.], [2., 2.]]) <tf.Tensor&colon; shape=(3,), dtype=float32, numpy=array([-1.8328519, -1.7907217, -1.694036 ], dtype=float32)> Broadcasting, aka Why Is This So Confusing? Abstracting out what we've done so far, every distribution has an batch shape B and an event shape E. Let BE be the concatenation of the event shapes: • For the univariate scalar distributions n and b, BE = ().. • For the two-dimensional multivariate normals nd. BE = (2). • For both b3 and b3_joint, BE = (3). • For the batch of multivariate normals ndb, BE = (3, 2). 
The "evaluation rules" we've been using so far are: • Sample with no argument returns a tensor with shape BE; sampling with a scalar n returns an "n by BE" tensor. • prob and log_prob take a tensor of shape BE and return a result of shape B. The actual "evaluation rule" for prob and log_prob is more complicated, in a way that offers potential power and speed but also complexity and challenges. The actual rule is (essentially) that the argument to log_prob must be broadcastable against BE; any "extra" dimensions are preserved in the output. Let's explore the implications. For the univariate normal n, BE = (), so log_prob expects a scalar. If we pass log_prob a tensor with non-empty shape, those show up as batch dimensions in the output: n = tfd.Normal(loc=0., scale=1.) <tfp.distributions.Normal 'Normal' batch_shape=[] event_shape=[] dtype=float32> <tf.Tensor&colon; shape=(), dtype=float32, numpy=-0.9189385> <tf.Tensor&colon; shape=(1,), dtype=float32, numpy=array([-0.9189385], dtype=float32)> n.log_prob([[0., 1.], [-1., 2.]]) <tf.Tensor&colon; shape=(2, 2), dtype=float32, numpy= array([[-0.9189385, -1.4189385], [-1.4189385, -2.9189386]], dtype=float32)> Let's turn to the two-dimensional multivariate normal nd (parameters changed for illustrative purposes): nd = tfd.MultivariateNormalDiag(loc=[0., 1.], scale_diag=[1., 1.]) <tfp.distributions.MultivariateNormalDiag 'MultivariateNormalDiag' batch_shape=[] event_shape=[2] dtype=float32> log_prob "expects" an argument with shape (2,), but it will accept any argument that broadcasts against this shape: nd.log_prob([0., 0.]) <tf.Tensor&colon; shape=(), dtype=float32, numpy=-2.337877> But we can pass in "more" examples, and evaluate all their log_prob's at once: nd.log_prob([[0., 0.], [1., 1.], [2., 2.]]) <tf.Tensor&colon; shape=(3,), dtype=float32, numpy=array([-2.337877 , -2.337877 , -4.3378773], dtype=float32)> Perhaps less appealingly, we can broadcast over the event dimensions: <tf.Tensor&colon; shape=(), dtype=float32, numpy=-2.337877> nd.log_prob([[0.], [1.], [2.]]) <tf.Tensor&colon; shape=(3,), dtype=float32, numpy=array([-2.337877 , -2.337877 , -4.3378773], dtype=float32)> Broadcasting this way is a consequence of our "enable broadcasting whenever possible" design; this usage is somewhat controversial and could potentially be removed in a future version of TFP. Now let's look at the three coins example again: b3 = tfd.Bernoulli(probs=[.3, .5, .7]) Here, using broadcasting to represent the probability that each coin comes up heads is quite intuitive: <tf.Tensor&colon; shape=(3,), dtype=float32, numpy=array([0.29999998, 0.5 , 0.7 ], dtype=float32)> (Compare this to b3.prob([1., 1., 1.]), which we would have used back where b3 was introduced.) Now suppose we want to know, for each coin, the probability the coin comes up heads and the probability it comes up tails. We could imagine trying: b3.log_prob([0, 1]) Unfortunately, this produces an error with a long and not-very-readable stack trace. b3 has BE = (3), so we must pass b3.prob something broadcastable against (3,). [0, 1] has shape (2), so it doesn't broadcast and creates an error. Instead, we have to say: b3.prob([[0], [1]]) <tf.Tensor&colon; shape=(2, 3), dtype=float32, numpy= array([[0.7, 0.5, 0.3], [0.3, 0.5, 0.7]], dtype=float32)> Why? [[0], [1]] has shape (2, 1), so it broadcasts against shape (3) to make a broadcast shape of (2, 3). 
Broadcasting is quite powerful: there are cases where it allows order-of-magnitude reduction in the amount of memory used, and it often makes user code shorter. However, it can be challenging to program with. If you call log_prob and get an error, a failure to broadcast is nearly always the problem.

Going Farther

In this tutorial, we've (hopefully) provided a simple introduction. A few pointers for going further:
• event_shape, batch_shape, and sample_shape can be arbitrary rank (in this tutorial they are always either scalar or rank 1). This increases the power but again can lead to programming challenges, especially when broadcasting is involved. For an additional deep dive into shape manipulation, see Understanding TensorFlow Distributions Shapes.
• TFP includes a powerful abstraction known as Bijectors, which in conjunction with TransformedDistribution yields a flexible, compositional way to easily create new distributions that are invertible transformations of existing distributions. We'll try to write a tutorial on this soon, but in the meantime, check out the documentation.
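As a taste of what Bijectors enable, here is a minimal sketch using the tfp.bijectors module (the variable names are ours; treat this as an indicative example rather than part of the original tutorial) that builds a log-normal distribution by pushing a standard normal through an Exp bijector:

import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors

# A log-normal distribution as an invertible transformation of a normal:
# samples are exp(z) with z ~ Normal(0, 1), and log_prob accounts for the
# change of variables automatically.
log_normal = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=tfb.Exp())

s = log_normal.sample(3)      # three positive samples
lp = log_normal.log_prob(s)   # densities under the transformed distribution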
{"url":"https://www.tensorflow.org/probability/examples/TensorFlow_Distributions_Tutorial","timestamp":"2024-11-06T08:28:37Z","content_type":"text/html","content_length":"144351","record_id":"<urn:uuid:6ae3fef1-7c5c-49ba-8d22-54bb12210b4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00865.warc.gz"}
Cross Product Calculator - Find the Vector Cross Product [Free]

What is a Cross Product Calculator?
A cross product calculator is a digital tool that uses the cross-multiplication method to solve two fractions in a few seconds. Our calculator evaluates the unknown variable from the two given fractions by cross-multiplying their numerators and denominators.

What is Meant by Cross Multiplication?
Cross multiplication is an algebraic process involving two fractions in which three values are known and one value is unknown. To find the unknown value, the cross-multiplication method is used: multiply the numerator of one fraction by the denominator of the other fraction, and vice versa, to obtain the unknown value. Additionally, understanding how to calculate a cross product is essential for solving equations involving vectors in mathematics and physics.

Formula Used for Cross-Multiplication
The cross-multiplication formula involves three known values a, b, c and one unknown variable x, written as a proportion of two fractions:

$$ \frac{a}{b} \;=\; \frac{x}{c} $$

Cross-multiplying gives

$$ a \times c \;=\; b \times x $$

A cross multiply calculator simplifies solving such equations by quickly computing the unknown value based on the given fractions.

Step-by-Step Calculation Process of the Cross Product Solver
The cross multiplication calculator has the cross-multiplication formula built into its server to solve fraction problems for an unknown variable, and it returns the value of the unknown variable x. Our calculator uses a simple and quick method to solve the fraction values and calculate the result. First, you enter the input values, and the calculator identifies the known and unknown values in the fractions. Then it applies the cross-multiplication rule to the proportion a/b = x/c, in which the numerator of the first fraction is multiplied by the denominator of the second fraction and the denominator of the first fraction is multiplied by the numerator of the second fraction. After multiplying, it simplifies a × c = b × x with the help of algebraic rules and obtains the solution for the unknown variable as x = (a × c) / b.

You can get a complete understanding of this cross-multiplication concept, and of how our calculator solves a fraction for the unknown variable, with the help of the example given below.

Solved Example for Cross-Multiplication
Let's see an example of a cross-multiplication question with a solution in which all the steps used behind the calculator are shown.

Simplify the following:

$$ \frac{4}{5} \;=\; \frac{8}{a} $$

Cross multiply:

$$ 4 \times a \;=\; 8 \times 5 $$

$$ 4a \;=\; 40 $$

Isolate variable a:

$$ a \;=\; \frac{40}{4} \;=\; 10 $$

$$ a \;=\; 10 $$

How to Solve for the x Value in the Cross Product Calculator
The cross multiply calculator has a user-friendly design that enables you to calculate the unknown variable value from two fractions. Before using it, follow these simple steps so that you do not face any inconvenience during the calculation:

1. Enter the value of A in its respective input box.
2. Enter the value of B in the second input box.
3. Enter the value of C in the third input box.
4. Review your input values before hitting the calculate button to start the evaluation process.
5. Click the "Calculate" button to get the result of your given fraction problem.
6.
If you are trying the calculator for the first time, use the load example so that you can be assured that it provides an accurate solution.
7. Click the "Recalculate" button to get a refreshed page for solving more cross-multiplication problems for unknown variables.

Final Result of the Cross Multiply Calculator
The cross product solver gives you the solution to a given fraction problem as soon as you add the input to it. It provides solutions with a detailed procedure for finding the unknown value of x from the given fraction instantly. The output may contain:

The result option, which gives you the solution to the cross-multiplication problem: the numeric value of x.
The steps option, which shows the whole evaluation process of the cross-multiplication problem step by step when you click on it.

Benefits of Using the Calculator Cross Product
The cross multiplication calculator provides multiple benefits whenever you use it to calculate fraction problems and find the value of the unknown variable x immediately. These benefits are:

• The calculator is a free-of-cost tool, so you can use it anytime to find the value of x from a given fraction problem in real time.
• It is a handy tool that allows you to get the solution to various types of fraction problems using the cross-multiplication method.
• You can try out the calculator to practice new examples and get a strong hold on the cross-multiplication concept.
• It saves you the time and effort of doing fraction calculations by hand and provides the value of the x variable.
• It is a reliable tool that provides accurate solutions whenever you use it to evaluate a variable's value, without any human error.
• It provides the solution with a complete step-by-step process, giving you clarity on the cross-multiplication method.
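The entire procedure the calculator performs fits in a few lines of code; here is a minimal Python sketch (the function name is our own) of the cross-multiplication rule:

def cross_multiply_solve(a, b, c):
    """Solve the proportion a/b = x/c for x by cross multiplication: a*c = b*x."""
    if b == 0 or c == 0:
        raise ValueError("denominators must be nonzero")
    return a * c / b

print(cross_multiply_solve(4, 5, 10))  # 8.0, since 4/5 = 8/10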
{"url":"https://pinecalculator.com/cross-product-calculator","timestamp":"2024-11-06T11:27:04Z","content_type":"text/html","content_length":"46289","record_id":"<urn:uuid:e316ae0b-1383-48f6-a200-af86a63037ee>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00207.warc.gz"}
{"url":"https://references.ific.uv.es/refbase/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%20FROM%20refs%20WHERE%20author%20RLIKE%20%22Archidiacono%2C%20M%5C%5C.%22%20ORDER%20BY%20author%2C%20year%20DESC%2C%20publication&submit=Cite&citeStyle=APA&citeOrder=&orderBy=author%2C%20year%20DESC%2C%20publication&headerMsg=&showQuery=0&showLinks=1&formType=sqlSearch&showRows=10&rowOffset=0&client=&viewType=","timestamp":"2024-11-02T09:18:56Z","content_type":"text/html","content_length":"77033","record_id":"<urn:uuid:6c759aa7-c06b-4fe3-a3a4-570888975ac5>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00343.warc.gz"}
Phase Shift - (Electrical Circuits and Systems I) - Vocab, Definition, Explanations | Fiveable

Phase Shift from class: Electrical Circuits and Systems I

Phase shift refers to the change in the phase angle of a waveform, which indicates how far a wave is shifted from a reference point in time. This shift can influence how voltages and currents interact in electrical systems, affecting parameters like apparent, real, and reactive power, reflected impedance in matching circuits, and the generation of three-phase voltages.

5 Must Know Facts For Your Next Test

1. In AC circuits, phase shift is essential for understanding the relationship between voltage and current, where they may not reach their maximum or minimum values simultaneously.
2. Phase shifts are measured in degrees (°) or radians, where a complete cycle corresponds to 360° or 2π radians.
3. In three-phase systems, a standard phase shift is 120° between each phase, which ensures balanced power delivery.
4. Reactive power is affected by phase shift because it reflects energy storage in inductive and capacitive components, impacting overall circuit efficiency.
5. Phase shifts play a critical role in impedance matching; proper alignment can minimize reflections and maximize power transfer in transmission lines.

Review Questions

• How does phase shift affect the relationship between voltage and current in an AC circuit?

Phase shift impacts how voltage and current waveforms relate to each other. In AC circuits, when there is a phase difference, the peaks and troughs of the voltage and current do not occur at the same time. This leads to differences between real power (the actual work done) and reactive power (the energy that oscillates back and forth), which can decrease overall efficiency in power systems.

• Discuss the importance of phase shift in three-phase voltage generation systems.

In three-phase systems, phase shift is crucial because it allows for balanced load distribution across all phases. Each phase is typically shifted by 120°, which results in a continuous supply of power and smoother operation of motors. This arrangement minimizes fluctuations and provides a more stable electrical output compared to single-phase systems.

• Evaluate how phase shifts can influence impedance matching in transmission lines.

Phase shifts significantly influence impedance matching, as they determine how effectively power is transferred from one component to another. When impedance is matched correctly with respect to phase angle, signal reflections that can occur at discontinuities are reduced. A mismatch due to an improper phase shift can lead to power loss and inefficiencies in signal transmission, making phase alignment vital for designing effective communication systems.
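As a concrete illustration of fact 4, the phase shift between voltage and current splits the apparent power into real and reactive parts. A minimal Python sketch (the RMS values and the 30° shift are assumed for illustration, not taken from the text):

```python
import math

V_rms, I_rms = 230.0, 5.0    # assumed RMS voltage (V) and current (A)
phi = math.radians(30.0)     # assumed phase shift between V and I

S = V_rms * I_rms            # apparent power (VA)
P = S * math.cos(phi)        # real power (W): the work actually done
Q = S * math.sin(phi)        # reactive power (var): energy stored in L/C

print(f"S = {S:.1f} VA, P = {P:.1f} W, Q = {Q:.1f} var")
```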
{"url":"https://library.fiveable.me/key-terms/electrical-circuits-systems-i/phase-shift","timestamp":"2024-11-03T13:01:31Z","content_type":"text/html","content_length":"157807","record_id":"<urn:uuid:e6df8da9-8e41-4f05-864f-4a56d4a00014>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00305.warc.gz"}
Maharashtra Board Class 12 Physics Important Questions Chapter 12 Electromagnetic Induction

Balbharti Maharashtra State Board 12th Physics Important Questions Chapter 12 Electromagnetic Induction: Important Questions and Answers.

Question 1.
Describe Faraday’s magnet and coil experiment. What conclusion can be drawn from the experiment?
Faraday’s magnet and coil experiment:
1. The terminals of a copper coil of several turns are connected to a sensitive galvanometer.
2. A bar magnet is moved swiftly towards the coil with its N-pole facing the coil. As long as the magnet is in motion, the galvanometer shows a deflection [from figure (a)].
3. If the magnet is now moved swiftly away from the coil, again the galvanometer shows a deflection, but now in the opposite direction.
4. The galvanometer shows a deflection when the experiment is repeated with the S-pole of the magnet facing the coil [from figure (b)]. However, the effect of bringing the S-pole towards the coil is the same as that of taking the N-pole away from the coil and vice versa.
5. The same results are obtained when the magnet is held still and the coil is moved towards or away from the magnet.
Conclusion : A current is induced in an electric circuit whenever the magnetic flux linked with the circuit keeps on changing as a result of relative motion of a magnet and the circuit.

Question 2.
Describe Faraday’s coil-coil experiment. What conclusion can be drawn from the experiment?
Faraday’s coil-coil experiment:
(1) A copper coil P of several turns is connected in series to a rheostat, a tap key and a battery. The terminals of another copper coil Q of several turns are connected to a sensitive galvanometer. The coils are placed close to each other such that when a current is passed through coil P by closing the key K, the magnetic flux through P is linked with coil Q.
(2) On closing the key K, the rise of current in coil P changes the flux linked with the coil Q nearby, as shown by a momentary deflection (throw) of the galvanometer G, from below figure. A similar deflection in the same direction is seen if the key is kept closed and either coil is moved swiftly towards the other.
(3) On releasing the tap key, the current in the coil P does not reduce to zero instantaneously. With the decreasing flux through its turns, and a consequent decrease in the flux linked with coil Q, there is an opposite throw of the galvanometer. A similar deflection in the same direction is seen if the key is kept closed and either coil is moved swiftly away from the other.
Conclusion : A current is induced in an electric circuit whenever the magnetic flux linked with the circuit keeps on changing, either as a result of changing current in a nearby circuit or due to relative motion between them.

Question 3.
Will an induced current always be produced in a coil whenever there is a change of magnetic flux linked with it?
Yes, provided the coil is in a closed circuit.

Question 4.
What is the basis of Lenz’s law of electromagnetic induction?
The law of conservation of energy is the basis of Lenz’s law of electromagnetic induction.

Question 5.
Express the Faraday-Lenz law of electromagnetic induction in an equation form.
Suppose dΦ[m] is the change in the magnetic flux through a coil or circuit in time dt.
Then, by Faraday’s second law of electromagnetic induction, the magnitude of the emf induced is
e ∝ \(\frac{d \Phi_{\mathrm{m}}}{d t}\) or e = k\(\frac{d \Phi_{\mathrm{m}}}{d t}\)
where dΦ[m]/dt is the rate of change of magnetic flux linked with the coil and k is a constant of proportionality. The SI units of e (the volt) and dΦ[m]/dt (the weber per second) are so selected that the constant of proportionality, k, becomes unity.
Combining Faraday’s law and Lenz’s law of electromagnetic induction, the induced emf
e = – \(\frac{d \Phi_{\mathrm{m}}}{d t}\)
where the minus sign is included to indicate the polarity of the induced emf as given by Lenz’s law. This polarity simply determines the direction of the induced current in a closed loop.
If a coil has N tightly wound loops, the induced emf will be N times greater than for a single loop, so that
e = – N \(\frac{d \Phi_{\mathrm{m}}}{d t}\)
where \(\frac{d \Phi_{\mathrm{m}}}{d t}\) is the rate of change of magnetic flux through one loop.

Question 6.
State the causes of induced current and explain them on the basis of Lenz’s law.
According to Lenz’s law, the direction of the induced emf or current is such as to oppose the change that produces it. The change that induces a current may be (i) the motion of a conductor in a magnetic field or (ii) the change of the magnetic flux through a stationary circuit.
In the first case, the direction of the induced emf in the moving conductor is such that the direction of the side-thrust exerted on the conductor by the magnetic field is opposite in direction to its motion. The motion of the conductor is, therefore, opposed.
In the second case, the induced current sets up a magnetic field of its own which, within the area bounded by the circuit, is (a) opposite to the original magnetic field if this field is increasing, but (b) in the same direction as the original field if the field is decreasing. Thus, it is the change in magnetic flux through the circuit (not the flux itself) which is opposed by the induced current.

Question 7.
In one version of Faraday’s coil-coil experiment, the two coils are wound on the same iron ring as shown, where closing and opening the switch induces a current in the other coil. How do the multiple-loop coils and iron ring enhance the observation of induced emf?
The magnetic flux through a coil is directly proportional to the number of turns the coil has. Hence, with multiloop coils in Faraday’s coil-coil experiment, the induced emf is directly proportional to N. Also, the permeability of iron being many orders of magnitude greater than that of air, the magnetic field lines of the primary coil P are confined to the iron ring and almost all the flux is linked with the secondary coil S. Thus, increased flux and better flux linkage enhance the magnitude of the induced emf.

Question 8.
A circular conducting loop in a uniform magnetic field is stretched to an elongated ellipse as shown below. The magnetic field points into the page. Will an emf be induced in the loop? If so, state why and give the direction of the induced current.
Looking in the direction of the magnetic field, there will be an induced current in the clockwise sense. For the same perimeter, the area of a circle is greater than that of an ellipse. Hence, stretching the loop reduces the inward flux through its plane. To oppose this decreasing flux, a current is induced in the clockwise sense so that the field due to the induced current is into the plane of the diagram.
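As a quick sanity check of the relation e = −N dΦ[m]/dt stated above, here is a minimal numerical sketch in Python (the turn count, flux values and time interval are illustrative, not taken from the text):

```python
# Faraday-Lenz law for a flux that changes linearly in time.
N = 100                       # turns
phi_i, phi_f = 0.0, 2.0e-3    # flux through one turn, in weber
dt = 0.05                     # seconds

e = -N * (phi_f - phi_i) / dt
print(f"induced emf = {e:.2f} V")   # -4.00 V; the sign encodes Lenz's-law polarity
```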
Question 9.
A bar magnet is dropped vertically through a thick copper ring as shown. What is the direction of the force exerted by the coil on the magnet? Explain.
The magnetic flux through the loop increases when the magnet approaches the loop, and decreases after the magnet has passed through. The induced current in the loop opposes the cause producing the change in flux which, in this case, is the falling magnet. Therefore, the motion of the magnet is opposed, first with a repulsion and then with an attraction. The force, in both cases, is upward, in the +z-direction.
The magnetic dipole moment of the falling magnet is directed up. Therefore, looking down the z-axis, the induced current is clockwise when the magnet is approaching the loop, so that the magnetic moment of the loop points down; subsequently, as the magnet recedes, the induced current is anticlockwise.

Question 10.
Briefly explain the jumping ring experiment.
Elihu Thomson’s jumping ring experiment is an outstanding demonstration of Faraday’s laws and Lenz’s law of electromagnetic induction. The apparatus consists of a cylindrical laminated iron-cored solenoid. A conducting non-magnetic ring, usually copper or aluminium, is placed over the extended vertical core of the solenoid. When an alternating current is passed through the solenoid, the ring is thrown off high into the air.
Due to ac, the magnetic field of the solenoid changes continuously. This induces an eddy current in the ring. By Lenz’s law, the magnetic field produced by the induced eddy current in the ring opposes the changing magnetic field of the solenoid. Consequently, the two magnetic fields repel each other, making the ring jump.
The iron core increases the magnetic field of the solenoid. Often, the ring is cooled with liquid nitrogen. The colder the ring, the less is its resistance and the greater the eddy current in it. More current means a greater magnetic field and even higher jumps.

Question 11.
Explain what you understand by magnetic flux.
The total number of magnetic lines of force passing normally through a given area in a magnetic field is called the magnetic flux through that area.
Consider a very small area dA in a uniform magnetic field of induction \(\vec{B}\). The area dA can be represented by a vector \(\overrightarrow{d A}\) perpendicular to it.
[Note : The area vector is perpendicular to the surface, so it can point either up and to the right as shown or down and to the left. Although either choice is acceptable, choosing the direction that is closest to the magnetic field is convenient and usually the one we choose.]

Question 12.
How do you find the magnetic flux through a finite area A?
Consider a small area element \(\overrightarrow{d A}\) of a finite area A bounded by contour C, from below figure. Suppose this area is situated in a magnetic field \(\vec{B}\). In general, the magnetic field may not be uniform over the area A. Then, the magnetic flux through the area element is
dΦ[m] = \(\vec{B} \cdot \overrightarrow{d A}\) = B (dA) cos θ
where θ is the angle between \(\vec{B}\) and \(\overrightarrow{d A}\), so that the flux through the area A is
Φ[m] = \(\int d \Phi_{\mathrm{m}}=\int_{A} \vec{B} \cdot \overrightarrow{d A}=\int_{A}\) B(dA)cos θ
The integration is over the entire area A. \(\vec{B}\) can be taken out of the integral if and only if \(\vec{B}\) is the same everywhere over A, in which case,
Φ[m] = \(\int_{A}\) B (dA) cos θ = B cos θ \(\int_{A}\) dA = BA cos θ
where \(\int_{A}\) dA is just the total area A.
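The uniform-field result Φ[m] = BA cos θ is easy to tabulate numerically. A minimal Python sketch (the field strength and area are assumed illustrative values):

```python
import math

B, A = 0.5, 0.2   # tesla, square metre (illustrative values)
for theta_deg in (0, 60, 90):
    phi = B * A * math.cos(math.radians(theta_deg))
    print(f"theta = {theta_deg:>2} deg -> flux = {phi:.3f} Wb")
# Maximum flux at 0 deg (B parallel to the area vector), zero at 90 deg.
```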
Question 13.
State an expression for the magnetic flux through a loop of finite area A inside a uniform magnetic field \(\vec{B}\). Hence discuss Faraday’s second law, given that the magnetic flux varies with time.
Consider a conducting loop of finite area A, situated in a uniform magnetic field \(\vec{B}\). We choose the direction of the area vector \(\vec{A}\) that is closest to the magnetic field. For the area vector in below figure, the fingers of the right hand must be turned in the sense of the arrow on the contour of the loop. Since \(\vec{B}\) is the same everywhere over A, the flux through the area A is
Φ[m] = BA cos θ
where θ is the angle between \(\vec{B}\) and \(\vec{A}\).
Faraday’s discovery was that the rate of change of flux dΦ[m]/dt is related to the work done on taking a unit positive charge around the contour in the reverse direction. This work done is just the induced emf. Accordingly, we express Faraday’s second law of electromagnetic induction as
|e| = \(\frac{d \Phi_{\mathrm{m}}}{d t}=\frac{d}{d t}\) (BA cos θ)
If B, A and θ are all constants in time, no emf is induced in the loop. An emf will be induced if at least one of these parameters changes with time. B and A may change in magnitude; the loop may turn, thereby changing θ.

Question 14.
When is the magnetic flux through an area element (i) maximum (ii) zero? Explain.
When an area element dA is placed in a magnetic field \(\vec{B}\), the magnetic flux through the element is
dΦ[m] = B(dA) cos θ …………. (1)
where θ is the angle between \(\vec{B}\) and the area vector \(\overrightarrow{d A}\).
(i) The maximum value of cos θ is 1, when θ = 0. Thus, from Eq. (1), the magnetic flux is maximum, dΦ[m] = B(dA), when the magnetic induction is in the direction of the area vector.
(ii) cos θ = 0 when θ = 90°. Then, the magnetic flux is zero, dΦ[m] = 0, when the magnetic induction is perpendicular to the area vector.

Question 15.
State the SI units and dimensions of (i) magnetic induction (ii) magnetic flux.
(i) Magnetic induction, B : SI unit : the tesla (T) : 1 T = 1 Wb/m^2
Dimensions : [B] = [MT^-2I^-1].
(ii) Magnetic flux, Φ[m] : SI unit : the weber (Wb)
Dimensions : [Φ[m]] = [B][A] = [MT^-2I^-1][L^2] = [ML^2T^-2I^-1]

Question 16.
State the relation between the SI units volt and weber.
1 volt = 1 weber per second (1 V = 1 Wb/s).

Question 17.
Explain how Lenz’s law is incorporated into Faraday’s second law of electromagnetic induction by introducing a minus sign.
Consider a conducting loop of area A in a uniform external magnetic field \(\vec{B}\) with its plane perpendicular to the field, i.e., its area vector \(\vec{A}\) is parallel to \(\vec{B}\), from below figure. We choose the x-axis along \(\vec{B}\), so that \(\vec{B}=B \hat{i}\) and \(\vec{A}=A \hat{i}\). Suppose the magnitude of the magnetic induction increases with time. Then, \(\vec{A}\) remaining constant, the induced emf by the Faraday-Lenz second law of electromagnetic induction is
e = \(-\frac{d \Phi_{\mathrm{m}}}{d t}=-\frac{d}{d t}(B A)=-A \frac{d B}{d t}\) ………….. (1)
Since we have assumed that B is increasing with time, dB/dt is a positive quantity. Also, A = |\(\vec{A}\)| is positive by definition. Hence, the right hand side of Eq. (1) is a negative quantity.
The right hand rule for the area vector fixes the positive sense of circulation around the loop as the clockwise sense. Then, by Lenz’s law, the induced current in the loop is in the anticlockwise sense. The sense of the induced emf is the same as the sense of the current it drives.
With the clockwise sense fixed as positive, the anticlockwise sense of the induced current is negative. Hence, the sense of e is also negative. That is, the left hand side of Eq. (1) is indeed a negative quantity. Thus, introducing a minus sign in Faraday’s second law incorporates Lenz’s law into Faraday’s law.

18. Solve the following

Question 1.
A coil of effective area 25 m^2 is placed in a field-free region. Subsequently, a uniform magnetic field that rises uniformly from zero to 1.25 T in 0.15 s is applied perpendicular to the plane of the coil. What is the magnitude of the emf induced in the coil?
Data : NA = 25 m^2, B[f] = 1.25 T, B[i] = 0, ∆t = 0.15 s
Initial magnetic flux, Φ[i] = 0 (∵ B[i] = 0)
Final magnetic flux, Φ[f] = NAB[f]
e = –\(\frac{d \Phi}{d t}=-\frac{\left(\Phi_{\mathrm{f}}-\Phi_{\mathrm{i}}\right)}{\Delta t}\)
∴ |e| = \(\frac{N A B_{\mathrm{f}}}{\Delta t}=\frac{25 \times 1.25}{0.15}\) ≈ 208.3 V

Question 2.
A rectangular coil of length 0.5 m and breadth 0.4 m has a resistance of 5 Ω. The coil is placed in a magnetic field of induction 0.05 T and its direction is perpendicular to the plane of the coil. If the magnetic induction is uniformly reduced to zero in 5 milliseconds, find the emf and current induced in the coil.
Data : l = 0.5 m, b = 0.4 m, R = 5 Ω, B[i] = 0.05 T, B[f] = 0, dt = 5 × 10^-3 s
Area of the coil, A = lb = 0.5 × 0.4 = 0.2 m^2
Initial magnetic flux, Φ[i] = AB[i] = 0.2 × 0.05 = 0.01 Wb
Final magnetic flux, Φ[f] = 0 (∵ B[f] = 0)
∴ |e| = \(\frac{\Phi_{\mathrm{i}}-\Phi_{\mathrm{f}}}{d t}=\frac{0.01}{5 \times 10^{-3}}\) = 2 V
and the induced current, I = \(\frac{|e|}{R}=\frac{2}{5}\) = 0.4 A

Question 3.
A square wire loop with sides 0.5 m is placed with its plane perpendicular to a magnetic field. The resistance of the loop is 5 Ω. Find at what rate the magnetic induction should be changed so that a current of 0.1 A is induced in the loop.
Data : l = 0.5 m, R = 5 Ω, I = 0.1 A
A = l^2 = 0.5 × 0.5 = 0.25 m^2
The magnitude of the induced emf, |e| = \(\frac{d \Phi}{d t}=\frac{d}{d t}\) (BA) = A \(\frac{d B}{d t}\)
since the area (A) of the coil is constant. The induced current, I = \(\frac{|e|}{R}=\frac{A}{R} \frac{d B}{d t}\)
∴ The time rate of change of magnetic induction,
\(\frac{d B}{d t}=\frac{I R}{A}=\frac{0.1 \times 5}{0.25}\) = 2 T/s

Question 4.
The magnetic flux through a loop of resistance 0.1 Ω is varying according to the relation Φ = 6t^2 + 7t + 1, where Φ is in milliweber and t is in second. What is the emf induced in the loop at t = 1 s and the magnitude of the current?
Data : R = 0.1 Ω, Φ[m] = 6t^2 + 7t + 1 mWb, t = 1 s
(i) The induced emf, |e| = \(\frac{d \Phi_{\mathrm{m}}}{d t}\) = \(\frac{d}{d t}\)(6t^2 + 7t + 1) = (12t + 7) mV = 12(1) + 7 = 19 mV
(ii) The magnitude of the current = \(\frac{|e|}{R}\) = \(\frac{19 \mathrm{mV}}{0.1 \Omega}\) = 190 mA
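For the flux-as-a-function-of-time problem above (Φ = 6t² + 7t + 1 mWb), the differentiation can be checked symbolically. A short sketch using the sympy library (assuming it is installed):

```python
import sympy as sp

t = sp.symbols("t")
phi = 6*t**2 + 7*t + 1   # flux in milliweber, from the worked example
e = sp.diff(phi, t)      # |e| = dPhi/dt, in millivolt

print(e)                 # 12*t + 7
print(e.subs(t, 1))      # 19 (mV), so I = 19 mV / 0.1 ohm = 190 mA
```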
Question 5.
A wire 88 cm long is bent into a circular loop and kept with its plane perpendicular to a magnetic field of induction 2.5 Wb/m^2. Within 0.5 second, the coil is changed to a square and the magnetic induction is increased by 0.5 Wb/m^2. Calculate the emf induced in the wire.
Data : l = 88 cm, B[i] = 2.5 Wb/m^2, B[f] = 3 Wb/m^2, ∆t = 0.5 s
For the circular loop, l = 2πr
∴ r = \(\frac{l}{2 \pi}=\frac{88}{2 \times(22 / 7)}\) = 14 cm = 0.14 m
Area of the circular loop, A[i] = πr^2 = \(\frac{22}{7}\) (0.14)^2 = 0.0616 m^2
Initial magnetic flux, Φ[i] = A[i]B[i] = 0.0616 × 2.5 = 0.154 Wb
For the square loop, length of each side = \(\frac{88}{4}\) cm = 22 cm = 0.22 m
Area of the square loop, A[f] = (0.22)^2 = 0.0484 m^2
∴ Final magnetic flux, Φ[f] = A[f]B[f] = 0.0484 × 3 = 0.1452 Wb
Induced emf, e = – \(\frac{\Phi_{\mathrm{f}}-\Phi_{\mathrm{i}}}{\Delta t}=\frac{\Phi_{\mathrm{i}}-\Phi_{\mathrm{f}}}{\Delta t}\)
∴ e = \(\frac{0.154-0.1452}{0.5}=\frac{0.0088}{0.5}\) = 1.76 × 10^-2 V

Question 6.
A 1000 turn, 20 cm diameter coil is rotated in the Earth’s magnetic field of strength 5 × 10^-5 T. The plane of the coil was initially perpendicular to the Earth’s field and is rotated to be parallel to the field in 10 ms. Find the average emf induced.
Data : N = 1000, d = 0.2 m, B = 5 × 10^-5 T, ∆t = 10 ms = 10^-2 s
Radius of coil, r = d/2 = 10^-1 m
Induced emf, e = -N \(\frac{\Delta \Phi_{\mathrm{m}}}{\Delta t}=-N \frac{\Phi_{\mathrm{f}}-\Phi_{\mathrm{i}}}{\Delta t}\)
Initial area, A[i] = πr^2 and initial flux per turn, Φ[i] = BA[i] = B(πr^2)
Final flux, Φ[f] = 0, since the plane of the coil is parallel to the field lines.
∴ |e| = \(\frac{N B \pi r^{2}}{\Delta t}=\frac{1000 \times 5 \times 10^{-5} \times 3.142 \times 10^{-2}}{10^{-2}}\) = 0.157 V

Question 7.
A television loop antenna has a diameter of 11 cm. The magnetic field of the TV signal is uniform, normal to the plane of the loop and changing at the rate of 0.16 T/s. What is the magnitude of the emf induced in the antenna?
Data : d = 0.11 m, dB/dt = 0.16 T/s
|e| = \(\frac{d B}{d t} \cdot \frac{\pi d^{2}}{4}\) = 0.16 × \(\frac{3.142 \times(0.11)^{2}}{4}\) = 1.52 × 10^-3 V

Question 8.
The magnetic field through a wire loop, of radius 12 cm and resistance 8.5 Ω, changes with time as shown in the graph below. The magnetic field is uniform and perpendicular to the plane of the loop. Calculate the emf induced in the loop as a function of time. Hence, find the induced emf in the time interval (a) t = 0 to t = 2 s (b) t = 2 s to t = 4 s (c) t = 4 s to t = 6 s.
Solution : Data : r = 0.12 m, R = 8.5 Ω
|e| = A\(\frac{d B}{d t}\) = πr^2\(\frac{d B}{d t}\); this gives the emf induced in the loop as a function of time, where \(\frac{d B}{d t}\) is the slope of the B-t graph in each interval.

Question 19.
What is motional emf?
An emf induced in a conductor or circuit moving in a magnetic field is called motional emf.

Question 20.
Determine the motional emf induced in a straight conductor moving in a uniform magnetic field with constant velocity.
Consider a straight wire AB resting on a pair of conducting rails separated by a distance l, lying wholly in a plane perpendicular to a uniform magnetic field \(\vec{B}\). \(\vec{B}\) points into the page, and the rails are stationary relative to the field and are connected to a stationary resistor R.
Suppose an external agent moves the rod to the right with a constant speed v, perpendicular to its length and to \(\vec{B}\). As the rod moves through a distance dx = vdt in time dt, the area of the loop ABCD increases by dA = ldx = lv dt. Therefore, in time dt, the increase in the magnetic flux through the loop,
dΦ[m] = BdA = Blvdt
By Faraday’s law of electromagnetic induction, the magnitude of the induced emf
e = \(\frac{d \Phi_{\mathrm{m}}}{d t}=\frac{B l v d t}{d t}\) = Blv

Question 21.
Determine the motional emf induced in a straight conductor moving in a uniform magnetic field with constant velocity on the basis of the Lorentz force.
Consider a straight rod or wire PQ of length l, lying wholly in a plane perpendicular to a uniform magnetic field of induction \(\vec{B}\), as shown in below figure; \(\vec{B}\) points into the page.
Suppose an external agent moves the wire to the right with a constant velocity \(\vec{v}\) perpendicular to its length and to \(\vec{B}\). The free electrons in the wire experience a Lorentz force \(\vec{F}\) (= q\(\vec{v} \times \vec{B}\)). According to the right-hand rule for cross products, the Lorentz force on negatively charged electrons is downward.
The Lorentz force \(\vec{F}\) moves the free electrons in the wire from P to Q so that P becomes positive with respect to Q. Thus, there will be a separation of the charges to the two ends of the wire until an electric field builds up to oppose further motion of the charges.
In moving the electrons a distance l along the wire, the work done by the Lorentz force is
W = Fl = (qvB sin θ) l = qvBl
since the angle between \(\vec{v}\) and \(\vec{B}\), θ = 90°. Since the electrical work done per unit charge is emf, the induced emf in the wire is
e = \(\frac{W}{q}\) = vBl
Alternatively, the electric field due to the separation of charges is \(\vec{F} / q=\vec{v} \times \vec{B}\). Since \(\vec{v}\) is perpendicular to \(\vec{B}\), the magnitude of the field = vB.
Electric field = \(\frac{\text { p.d. }(e) \text { between } \mathrm{P} \text { and } \mathrm{Q}}{\text { distance } \mathrm{PQ}(l)}\)
Therefore, the p.d. or emf induced in the wire PQ is e = vBl

Question 22.
Determine the motional emf induced in a straight conductor rotating in a uniform magnetic field with constant angular velocity.
Suppose a rod of length l is rotated anticlockwise, around an axis through one end and perpendicular to its length, in a plane perpendicular to a uniform magnetic field of induction \(\vec{B}\), as shown in below figure; \(\vec{B}\) points into the page. Let the constant angular speed of the rod be ω.
Consider an infinitesimal length element dr at a distance r from the rotation axis. In one rotation, the area traced by the element is dA = 2πrdr. Therefore, the time rate at which the element traces out the area is
\(\frac{d A}{d t}\) = frequency of rotation × dA = f dA
where f = \(\frac{\omega}{2 \pi}\) is the frequency of rotation.
∴ \(\frac{d A}{d t}=\frac{\omega}{2 \pi}\) (2πrdr) = ωr dr
Therefore, the magnitude of the induced emf in the element is
|de| = \(\frac{d \Phi_{\mathrm{m}}}{d t}=B \frac{d A}{d t}\) = Bωr dr
Since the emfs in all the elements of the rod will be in series, the total emf induced across the ends of the rotating rod is
|e| = \(\int d e=\int_{0}^{l} B \omega r d r=B \omega \int_{0}^{l} r d r=B \omega \frac{l^{2}}{2}\)
For anticlockwise rotation in \(\vec{B}\) pointing into the page, the pivot point O is at a higher potential.
[Note : To understand the polarity of the emf across the ends of the rod, imagine that the rod slides along a wire that forms a circular arc MPN of radius l, as shown below. Assume that the resistor R furnishes all of the resistance in the closed loop. As θ increases, so does the inward flux through the loop due to \(\vec{B}\). To counteract this increase, the magnetic field due to the induced current must be directed out of the page in the region enclosed by the loop. Therefore, the current in the loop POMP circulates anticlockwise with the motional emf directed from P to O.]
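Both motional-emf results derived above, e = Blv for a translating rod and e = ½Bωl² for a rotating rod, are one-liners to evaluate. A minimal Python sketch with illustrative inputs (not from the solved problems):

```python
B = 0.5          # tesla
l = 0.25         # metre (rod length)
v = 5.0          # m/s, translating rod
omega = 100.0    # rad/s, rotating rod

e_translate = B * l * v              # e = Blv
e_rotate = 0.5 * B * omega * l**2    # e = (1/2) B omega l^2

print(f"translating rod: {e_translate:.4f} V")   # 0.6250 V
print(f"rotating rod:    {e_rotate:.4f} V")      # 1.5625 V
```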
23. Solve the following

Question 1.
A straight metal wire slides to the right at a constant 5 m/s along a pair of parallel metallic rails 25 cm apart. A 10 Ω resistor connects the rails on the left end. The entire setup lies wholly inside a uniform magnetic field of strength 0.5 T, directed into the page. Find the magnitude and direction of the induced current in the circuit.
Data : v = 5 m/s, l = 0.25 m, R = 10 Ω, B = 0.5 T
The induced current, i = \(\frac{e}{R}=\frac{B l v}{R}=\frac{(0.5)(0.25)(5)}{10}\) = 0.0625 A
Since the magnetic flux into the page through the closed conducting loop increases, the induced current in the loop must be anticlockwise. Alternatively, Fleming’s right hand rule gives the direction of the induced current in the moving wire from bottom to top.

Question 2.
A straight conductor (rod) of length 0.3 m is rotated about one end at a constant 6280 rad/s in a plane normal to a uniform magnetic field of induction 5 × 10^-5 T. Calculate the emf induced between its ends.
Data : l = 0.3 m, ω = 6280 rad/s, B = 5 × 10^-5 T
In one rotation, the rod traces out a circle of radius l, i.e., an area, A = πl^2. Therefore, the time rate at which the rod traces out the area is
\(\frac{d A}{d t}\) = fA = \(\frac{\omega}{2 \pi}\) πl^2 = \(\frac{\omega l^{2}}{2}\)
∴ |e| = B\(\frac{d A}{d t}=\frac{1}{2}\)Bωl^2 = \(\frac{1}{2}\) × 5 × 10^-5 × 6280 × (0.3)^2 = 1.413 × 10^-2 V

Question 3.
A metal rod 1/\(\sqrt{\pi}\) m long rotates about one of its ends in a plane perpendicular to a magnetic field of induction 4 × 10^-3 T. Calculate the number of revolutions made by the rod per second if the emf induced between the ends of the rod is 16 mV.
Solution :
Data : r = l = \(\frac{1}{\sqrt{\pi}}\) m, B = 4 × 10^-3 T, |e| = 16 mV = 16 × 10^-3 V
In one rotation, the rod traces out a circle of radius l, i.e., an area, A = πl^2 = π(1/π) = 1 m^2. Therefore, the time rate at which the rod traces out the area is
\(\frac{d A}{d t}\) = fA
∴ |e| = B\(\frac{d A}{d t}\) = BfA
∴ f = \(\frac{|e|}{B A}=\frac{16 \times 10^{-3}}{4 \times 10^{-3} \times 1}\) = 4 rev/s

Question 4.
A cycle wheel with 10 spokes, each of length 0.5 m, is moved at a speed of 18 km/h in a plane normal to the Earth’s magnetic induction of 3.6 × 10^-5 T. Calculate the emf induced between (i) the axle and the rim of the cycle wheel (ii) the ends of a single spoke and of ten spokes.
Data : r = l = 0.5 m, v = 18 km/h = \(\frac{18000}{3600}\) = 5 m/s, B = 3.6 × 10^-5 T
For each spoke, |e| = \(\frac{1}{2}\)Bωl^2 = \(\frac{1}{2}\)Blv = \(\frac{1}{2}\) × 3.6 × 10^-5 × 0.5 × 5 = 4.5 × 10^-5 V
Since the spokes have common ends (the axle and wheel rim), they are connected in parallel. Hence, the emf induced between the end of a single spoke and the other common end of ten spokes is also 4.5 × 10^-5 V. Since the total emf of this parallel combination of identical emfs e is equal to a single emf e, the emf induced between the axle and wheel rim is equal to 4.5 × 10^-5 V.

Question 24.
Briefly describe with necessary diagrams the experimental setup to investigate the phenomenon of electromagnetic induction for a magnet swinging through a coil.
Apparatus : A permanent magnet is mounted at the centre of the arc of a semicircular aluminium frame of radius 50 cm. The whole frame is pivoted at its centre and can oscillate freely in its plane, from figure (a). Movable weights m[1] and m[2] on the radial arms of the frame can be symmetrically positioned to adjust the period of oscillation from about 1.5 s to 3 s. The magnet can freely pass through a copper coil of about 10000 turns. When the magnet swings through and out of the coil, the magnetic flux through the coil changes, inducing an emf. The amplitude of the swing can be read from the graduations on the arc.
Since the induced emf will be small, it may be measured by connecting the terminals of the coil to a CRO (cathode-ray oscilloscope), or the terminals may be connected to a 100 pF capacitor through a diode, from figure (b), and the voltage across the capacitor measured. The resistor in series with the diode helps to adjust the capacitor charging time (= RC).
[Note : Real-time graphs can be captured using a datalogger connected to a computer. The datalogger uses rotary motion, voltage and magnetic field sensors to measure the angle, the induced voltage and the magnetic flux, respectively.]
Question 25.
In the experiment to investigate the phenomenon of electromagnetic induction for a magnet swinging through a coil, relate the graphical representations (flux-time and voltage-time) with the motion of the magnet.
In the demonstration of a magnet swinging through a coil, a voltage is induced in the coil as the magnet swings through it. For the discussion, we assume the length of the magnet to be smaller (about half) than the length of the coil and that the North pole of the magnet swings into the coil from the left. (The polarity of the induced voltage pulse depends on the polarity of the magnet.)
We take the magnetic flux linked with the coil to be nearly zero when the magnet is high up away from the coil. As the magnet moves through the coil and recedes, the magnetic field through the coil increases to its maximum and then decreases. There is a substantial magnetic field at the coil only when it is very near the magnet. Moreover, the speed of the magnet is maximum when it is at the centre of the coil, since it is then at the mean position of its oscillation. Thus the magnetic field changes quite slowly when the magnet is far away and rapidly as it approaches the coil, from figure (a).
The flux through the coil increases as the north pole approaches the left end of the coil, and reaches a maximum when the magnet is exactly midway in the coil, as shown by the portion bc in figure (a). By Lenz’s law, the induced emf will produce a leftward flux that will seek to oppose the increasing magnetic flux of the magnet through the coil. The interval cd, when the flux is maximum but remains constant and the induced emf is zero, corresponds to the situation where the magnet is wholly inside the coil. Once the magnet swings past the centre of the coil, the flux through the coil starts to decrease, the interval de. To reinforce the decreasing flux of the magnet through the coil, a rightward flux is now induced, thereby flipping the polarity of the induced emf. If we use a coil that is shorter than the magnet, the time interval cd for which the induced emf remains zero would have been shorter.
The times t[1] and t[2] in figure (a) are the points of inflection of the curve and, in figure (b), are obviously the minimum and maximum of the induced emf, respectively. The sequence of two pulses, one negative and one positive, occurs during just half a cycle. On the return swing of the magnet, they are repeated in the same order.

Question 26.
In the experiment to investigate the phenomenon of electromagnetic induction for a magnet swinging through a coil, show that the peak induced emf is directly proportional to the speed of the magnet (or show that the peak induced emf is directly proportional to the angular amplitude and inversely proportional to the time period).
In the experiment, a magnet is swung through a coil along an arc of radius R. The angular position θ of the magnet is measured from the vertical, the mean position of the swing. The angular amplitude is θ[0]. The kinetic energy of the system is \(\frac{1}{2}\) Iω^2 and the potential energy (relative to the lowest position of the magnet) is MgR(1 – cos θ), where M is the mass of the system.
Conservation of energy gives \(\frac{1}{2} I \omega_{\max }^{2}=M g R\left(1-\cos \theta_{0}\right) \approx \frac{1}{2} M g R \theta_{0}^{2}\) for small θ[0], so that ω[max] ∝ θ[0]. Since the oscillation is simple harmonic for small amplitudes, ω[max] = (2π/T)θ[0], where T is the period. The peak induced emf is proportional to the speed of the magnet at the lowest point, v[max] = Rω[max] = 2πRθ[0]/T, i.e., directly proportional to the angular amplitude and inversely proportional to the time period, as required.
The rate of change of flux through the coil is essentially proportional to the velocity of the magnet as it passes through the coil. By choosing different amplitudes of oscillation of the magnet, we can alter this velocity.
Question 27.
What is an ac generator? State the principle of an ac generator.
An electric generator or dynamo converts mechanical energy into electric energy, just the opposite of what an electric motor does.
Principle : An ac generator works on electromagnetic induction : when a coil of wire rotates between two poles of a permanent magnet such that the magnetic flux through the coil changes periodically with time, due to a change in the angle between the area vector and the magnetic field, an alternating emf is induced in the coil, causing a current to pass when the circuit is closed.

Question 28.
Briefly describe the construction of a simple ac generator. Obtain an expression for the emf induced in a coil rotating with a uniform angular velocity in a uniform magnetic field. Show graphically the variation of the emf with time (t).
OR
Describe the construction of a simple ac generator and explain its working.
Construction : A simplified diagram of an ac generator is shown in below figure 12.18. It consists of many loops of wire wound on an armature that can rotate in a magnetic field. When the armature is turned by some mechanical means, an emf is generated in the rotating coil. Consider the coil to have N turns, each of area A, rotated with a constant angular speed ω about an axis in the plane of the coil and perpendicular to a uniform magnetic field \(\vec{B}\), as shown in the figure. The frequency of rotation of the coil is f = ω/2π.
Working : The angle θ between the magnetic field \(\vec{B}\) and the area of the coil \(\vec{A}\) at any instant t is θ = ωt (assuming θ = 0° at t = 0). At this position, the magnetic flux through the coil is
Φ[m] = \(N \vec{B} \cdot \vec{A}\) = NBA cos θ = NBA cos ωt
The induced emf is
e = –\(\frac{d \Phi_{\mathrm{m}}}{d t}\) = NBAω sin ωt
∴ e = e[0] sin ωt, where e[0] = NBAω.
Therefore the induced emf varies as sin ωt and is called a sinusoidally alternating emf. In one rotation of the coil, sin ωt varies between +1 and −1 and hence the induced emf varies between +e[0] and −e[0]. The maximum value e[0] of an alternating emf is called the peak value or amplitude of the emf.
The sinusoidal variation of emf with time t is shown in above figure. The emf changes direction at the end of every half rotation of the coil. The frequency of the alternating emf is equal to the frequency f of rotation of the coil. The period of the alternating emf is T = \(\frac{1}{f}\)
Imagine looking at the coil of the ac generator from the slip rings along the rotation axis in Fig. 12.18. The magnetic flux, rate of change of flux and sign of the induced emf are shown in the table below for the different orientations of the coil as in below figure.
┃Coil orientation│Flux Φ[m] │dΦ[m]/dt │Induced emf┃
┃1 │Positive maximum │Momentarily zero (constant flux) │Zero ┃
┃2 │Positive │Decreasing (negative) │Positive ┃
┃3 │Zero │Decreasing (negative) │Positive ┃
┃4 │Negative │Decreasing (negative) │Positive ┃
┃5 │Negative maximum │Momentarily zero (constant flux) │Zero ┃
┃6 │Negative │Increasing (positive) │Negative ┃
┃7 │Zero │Increasing (positive) │Negative ┃
┃8 │Positive │Increasing (positive) │Negative ┃
┃9 │Return to positive maximum│Momentarily zero (constant flux) │Zero ┃

Question 29.
How does a dc generator differ from an ac generator?
A dc generator is much like an ac generator, except that the slip rings at the output are replaced by a split-ring commutator, just as in a dc motor. The output of a dc generator is a pulsating dc as shown in Fig. 12.22. For a smoother output, a capacitor filter is connected in parallel with the output (see below figure for reference).
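The sinusoidal output e = e[0] sin ωt derived above can be tabulated over one rotation. A minimal Python sketch; the coil parameters are assumed for illustration and do not come from the text:

```python
import math

N, B, A = 250, 0.1, 0.05    # turns, tesla, m^2 (illustrative)
f = 50.0                    # rotation frequency, Hz
omega = 2 * math.pi * f
e0 = N * B * A * omega      # peak emf, about 392.7 V here

# One period at 50 Hz is 20 ms: e runs 0 -> +e0 -> 0 -> -e0.
for t in (0.0, 0.005, 0.010, 0.015):
    e = e0 * math.sin(omega * t)
    print(f"t = {t*1000:4.0f} ms  e = {e:8.1f} V")
```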
Question 30.
Explain back emf in a motor.
A generator converts mechanical energy into electrical energy, whereas a motor converts electrical energy into mechanical energy. Also, motors and generators have the same construction. When the coil of a motor is rotated by the input emf, the changing magnetic flux through the coil induces an emf, consistent with Faraday’s law of induction. A motor thus acts as a generator whenever its coil rotates. According to Lenz’s law, this induced emf opposes any change, so that the input emf that powers the motor is opposed by the motor’s self-generated emf. This self-generated emf is called a back emf because it opposes the change producing it.

Question 31.
A motor draws more current when it starts than when it runs at its full (i.e., operating) speed. Explain.
When a pump or refrigerator (or other large motor) starts up, lights in the same circuit dim briefly. The back emf is effectively the generator output of a motor, and is proportional to the angular velocity ω of the motor. Hence, when the motor is first turned on, the back emf is zero and the coil receives the full input voltage. Thus, the motor draws maximum current when it is first turned on. As the motor speeds up, the back emf grows, always opposing the driving emf, and reduces the voltage across the coil and the amount of current it draws.
This explains why a motor draws more current when it first comes on than when it runs at its normal operating speed. The effect is noticeable when a high power motor, like that of a pump, refrigerator or washing machine, is first turned on. The large initial current causes the voltage at the outlets in the same circuit to drop. Due to the IR drop produced in the feeder lines by the large current drawn by the motor, lights in the same circuit dim briefly.
[Note : A motor is designed to run at a certain speed for a given applied voltage. A mechanical overload on the motor slows it down appreciably. If the rotation speed is reduced, the back emf will not be as high as designed for and the current will increase. At too low a speed, the large current can even burn its coil. On the other hand, if there is no mechanical load on the motor, its angular velocity will increase until the back emf is nearly equal to the driving emf. Then, the motor uses only enough energy to overcome friction.]

Question 32.
What is back torque in a generator?
In an electric generator, the mechanical rotation of the armature induces an emf in its coil. This is the output emf of the generator. Under no-load condition, there is no current although the output emf exists, and it takes little effort to rotate the armature. However, when a load current is drawn, the situation is similar to a current-carrying coil in an external magnetic field. Then, a torque is exerted, and this torque opposes the rotation. This is called back torque or counter torque. Because of the back torque, the external agent has to apply a greater torque to keep the generator running. The greater the load current, the greater is the back torque.

33. Solve the following

Question 1.
An ac generator spinning at a rate of 750 rev/min produces a maximum emf of 45 V. At what angular speed does this generator produce a maximum emf of 102 V ?
Data : e[1] = 45 V, f[1] = 750 rpm, e[2] = 102 V
e = NABω = NAB(2πf) ∴ e ∝ f
∴ f[2] = \(\frac{e_{2}}{e_{1}}\) × f[1] = \(\frac{102}{45}\) × 750 = 1700 rpm
This is the required frequency of the generator coil.
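The start-up current surge described above can be made quantitative with I = (e[applied] − e[back])/R, taking the back emf proportional to speed. A minimal Python sketch with illustrative motor values:

```python
# Motor current from the back-emf relation I = (V_applied - e_back) / R.
# The back emf is taken proportional to rotation speed; values illustrative.
V_applied, R = 220.0, 10.0
e_back_full = 160.0          # back emf at full operating speed

for fraction_of_full_speed in (0.0, 0.5, 1.0):
    e_back = e_back_full * fraction_of_full_speed
    I = (V_applied - e_back) / R
    print(f"speed = {fraction_of_full_speed:4.0%}  I = {I:5.1f} A")
# 22 A at start-up, 14 A at half speed, 6 A at full speed.
```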
Question 2.
An ac generator has a coil of 250 turns rotating at 60 Hz in a magnetic field of \(\frac{0.6}{\pi}\) T. What must be the area of each turn of the coil to produce a maximum emf of 180 V ?
Data : N = 250, f = 60 Hz, B = \(\frac{0.6}{\pi}\) T
e[0] = NABω = NAB (2πf)
∴ A = \(\frac{e_{0}}{N B 2 \pi f}=\frac{180}{(250)(0.6 / \pi)(2 \pi \times 60)}=\frac{18}{25 \times 72}\) = 10^-2 m^2
This must be the area of each turn of the coil.

Question 3.
A dynamo attached to a bicycle has a 200 turn coil, each turn of area 0.10 m^2. The coil rotates half a revolution per second and is placed in a uniform magnetic field of 0.02 T. Find the maximum voltage generated in the coil.
Data : N = 200, A = 0.1 m^2, f = 0.5 Hz, B = 0.02 T
e[0] = NABω = NAB (2πf)
Therefore, the maximum voltage generated,
e[0] = (200)(0.1)(0.02)(2 × 3.142 × 0.5) = 1.26 V

Question 4.
A motor has a coil resistance of 5 Ω. If it draws 8.2 A when running at full speed and connected to a 220 V line, how large is the back emf ?
Data : R = 5 Ω, I = 8.2 A, e[applied] = 220 V
e[applied] – e[back] – IR = 0
∴ e[back] = e[applied] – IR = 220 – (8.2)(5) = 220 – 41 = 179 V

Question 5.
The back emf in a motor is 100 V when operating at 2500 rpm. What would be the back emf at 1800 rpm? Assume the magnetic field remains unchanged.
Data : e[1] = 100 V, f[1] = 2500 rpm, f[2] = 1800 rpm
The back emf is proportional to the angular speed.
∴ \(\frac{e_{2}}{e_{1}}=\frac{f_{2}}{f_{1}}\)
∴ e[2] = \(\frac{f_{2}}{f_{1}}\) × e[1] = \(\frac{1800}{2500}\) × 100 = 72 V
This is the back emf at the lower speed.

Question 6.
The armature windings of a dc motor have a resistance of 10 Ω. The motor is connected to a 220 V line, and when the motor reaches full speed at normal load, the back emf is 160 V. Calculate (a) the current when the motor is just starting up (b) the current at full speed. (c) What will be the current if the load causes it to run at half speed ?
Data : R = 10 Ω, e[applied] = 220 V, e[back] = 160 V, f[2] = f[1]/2
e[applied] – e[back] – IR = 0
(a) At start up, the back emf is zero.
∴ I[start] = \(\frac{e_{\text {applied }}}{R}=\frac{220}{10}\) = 22 A
(b) At full speed, I[normal] = \(\frac{e_{\text {applied }}-e_{\text {back }}}{R}=\frac{220-160}{10}=\frac{60}{10}\) = 6 A
(c) Back emf is proportional to rotational speed. Thus, if the motor is running at half the speed, the back emf is half the original value, i.e., 80 V. Therefore, at half speed,
I[2] = \(\frac{e_{\text {applied }}-e_{2}}{R}=\frac{220-80}{10}=\frac{140}{10}\) = 14 A

Question 34.
Find an expression for the power expended in pulling a conducting loop out of a magnetic field.
When an external agent produces a relative motion between a conducting loop and an external magnetic field, a magnetic force resists the motion, requiring the applied force to do positive work. The work done is transferred to the material of the loop as thermal energy because of the electrical resistance of the material to the current that is induced by the motion.
Proof : Consider a rectangular wire loop ABCD of width l, with its plane perpendicular to a uniform magnetic field of induction \(\vec{B}\). The loop is being pulled out of the magnetic field at a constant speed v, as shown in below figure (a). At any instant, let x be the length of the part of the loop in the magnetic field. As the loop moves to the right through a distance dx = vdt in time dt, the area of the loop inside the field changes by dA = ldx = lvdt. And, the change in the magnetic flux dΦ[m] through the loop is
dΦ[m] = BdA = Blvdt ………….. (1)
Then, the time rate of change of magnetic flux is
\(\frac{d \Phi_{\mathrm{m}}}{d t}=\frac{B l v d t}{d t}\) = Blv …………..
(2)
By Faraday’s second law, the magnitude of the induced emf is
|e| = \(\frac{d \Phi_{\mathrm{m}}}{d t}\) = Blv ………….. (3)
Due to the motion of the loop, the free electrons (charge, e) in the wire inside the field experience a Lorentz force \(e \vec{v} \times \vec{B}\). In the wire PQ this force moves the free electrons from P to Q, making them travel in the anticlockwise sense around the loop. Therefore, the induced conventional current I is in the clockwise sense, as shown. Figure (b) shows the equivalent circuit of the loop, where the induced emf e is a distributed emf and R is the total resistance of the loop.
∴ I = \(\frac{|e|}{R}=\frac{B l v}{R}\) …………… (4)
Now, a straight current-carrying conductor of length L in a magnetic field experiences a force \(\vec{F}=I \vec{L} \times \vec{B}\) whose direction can be found using Fleming’s left hand rule. Accordingly, forces \(\vec{F}_{2}\) and \(\vec{F}_{3}\) on wires AB and CD, respectively, are equal in magnitude (= IxB), opposite in direction and have the same line of action. Hence, they balance each other. There is no force on the wire BC as it lies outside the field. The force \(\vec{F}_{1}\) on the wire AD has magnitude F[1] = IlB and is directed towards the left. To move the loop with constant velocity \(\vec{v}\), an external force \(\vec{F}=-\vec{F}_{1}\) must be applied. Therefore, in magnitude,
F = F[1] = IlB = \(\frac{B^{2} l^{2} v}{R}\)
Because B, l and R are constants, a force of constant magnitude F is required to move the loop at constant speed v. Thus, the power or the rate of doing work by the external agent is
P = \(\vec{F} \cdot \vec{v}\) = Fv = \(\frac{B^{2} l^{2} v^{2}}{R}\) ………….. (5)

Question 35.
Why and where are eddy currents undesirable ? How are they minimized ?
Eddy currents result in generation of heat (energy loss) in the cores of transformers, motors, induction coils, etc. To minimize the eddy currents, instead of a solid metal block, cores are made of thin insulated metal strips or laminae.

Question 36.
If a magnet is dropped through a long thick-walled vertical copper tube, it attains a constant velocity after some time. Explain.
Every thin transverse section of a thick-walled vertical copper tube is an annular disc. The downward motion of the magnet causes increased magnetic flux through such conducting discs. By Lenz’s law, the induced or eddy current around the discs produces a magnetic field of its own to oppose the change in flux due to the magnet’s motion. Initially, as the magnet falls under gravity, its speed increases. But, quickly the vertically upward force on the magnet due to the induced current becomes equal in magnitude to the gravitational force on the magnet, and the net force on the magnet becomes zero. The subsequent motion of the magnet is at this constant terminal speed.

Question 37.
Describe in brief an experiment to demonstrate that eddy currents oppose the cause producing them.
Apparatus : A strong electromagnet; two thick copper discs (4″ dia, \(\frac{1}{4}\)″ thick), each attached to a rod about 30″ long. One of the discs has several vertical slots, about 80 % of the way up. The pendulums can be suspended from a lab stand by a pivot mount and made to oscillate between closely-spaced pole pieces of the electromagnet.
Experiment : When the electromagnet is not turned on, both the pendulums swing freely with some damping due to air resistance.
When the electromagnet is turned on, the slotted pendulum still swings, although a little more damped, but the solid pendulum practically stops dead between the pole pieces of the magnet immediately.
Conclusion : As the pendulums enter or exit the magnetic field, the changing magnetic flux sets up eddy currents in the discs. The sense of the eddy currents is so as to produce a torque that opposes the rotation of the discs about their pivot. This opposing torque produces a braking action, damping the oscillations. In the case of the solid disc, the continuous volume of the disc offers a large unbroken path to the swirling electrons. Thus, the eddy current builds up to a large magnitude. The thicker the disc, the larger is the eddy current and, consequently, the larger the damping. In the case of the slotted disc, the vertical slots do not allow a large eddy current and, consequently, the damping is small.

Question 38.
A solid conducting plate swings like a pendulum about a pivot into a region of uniform magnetic field, as shown in the diagram. As it enters and leaves the field, show and explain the directions of the eddy current induced in the plate and the force on the plate.
The figure shows the eddy currents in the conducting plate as it enters and leaves the magnetic field. In both cases, it experiences a force \(\vec{F}\) opposing its motion.
As the plate enters from the left, the magnetic flux through the plate increases. This sets up an eddy current in the anticlockwise direction, as shown. Since only the right-hand side of the current loop is inside the field, by Fleming’s right hand rule (FRH rule), an unopposed force acts on it to the left. There is no eddy current once the plate is completely inside the uniform field. When the plate leaves the field on the right, the decreasing flux causes an eddy current in the clockwise direction. The damping magnetic force on the current is to the left, further slowing the motion.
The eddy current in the plate results in mechanical energy being dissipated as thermal energy. Each time the plate enters and leaves the field, a part of its mechanical energy is transformed into thermal energy. After a few swings, the mechanical energy becomes zero and the motion comes to a stop with the warmed-up plate hanging vertically.

39. Solve the following

Question 1.
A metal rod of resistance 15 Ω is moved to the right at a constant 60 cm/s along two parallel conducting rails 25 cm apart and shorted at one end. A magnetic field of magnitude 0.35 T points into the page. (a) What are the induced emf and current in the rod? (b) At what rate is thermal energy generated?
Data : R = 15 Ω, v = 0.6 m/s, l = 0.25 m, B = 0.35 T
(a) Induced emf, e = Blv = (0.35)(0.25)(0.6) = 0.0525 V = 52.5 mV
The current in the rod, I = \(\frac{e}{\mathrm{R}}=\frac{52.5}{15}\) = 3.5 mA
(b) Power dissipated, P = eI = 0.0525 × 3.5 × 10^-3 W = 0.184 mW

Question 2.
A conducting rod 10 cm long is being pulled along horizontal, frictionless conducting rails at a constant 5 m/s. The rails are shorted at one end with a metal strip. There is a uniform magnetic field of strength 1.2 T out of the page in the region in which the rod moves. If the resistance of the rod is 0.5 Ω, what is the power of the external agent pulling the rod? Assume that the resistance of the rails is negligibly small.
Data : l = 0.1 m, B = 1.2 T, v = 5 m/s, R = 0.5 Ω
Power, P = \(\frac{(B l v)^{2}}{R}=\frac{(1.2 \times 0.1 \times 5)^{2}}{0.5}\) = 0.72 W
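The result of problem 2 can be reproduced directly from P = (Blv)²/R. A minimal Python sketch using that problem's data:

```python
# Rate of thermal dissipation for a rod pulled along shorted rails.
B, l, v, R = 1.2, 0.1, 5.0, 0.5   # data of problem 2 above
e = B * l * v                      # motional emf
P = e**2 / R                       # power delivered by the pulling agent

print(f"e = {e:.2f} V, P = {P:.2f} W")   # 0.60 V, 0.72 W
```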
Question 40.
Explain the concept of self induction.
Consider an isolated coil or circuit in which there is a current I. The current produces a magnetic flux linked with the coil. The magnetic flux linked with the coil can be changed by varying the current in the coil itself, e.g., by breaking and closing the circuit. This produces a self-induced emf in the coil, called a back emf because it opposes the change producing it. It sets up an induced current in the coil itself in the same direction as the original current, opposing its decrease, when the key K is suddenly opened. When the key K is closed, the induced current is opposite to the conventional current, opposing its increase.
When the current through a coil changes continuously, e.g., by a time-varying applied emf, the magnetic flux linked with the coil also goes on changing. The production of induced emf in a coil, due to the changes of current in the same coil, is called self induction.

Question 41.
Explain and define the self inductance of a coil. Define the coefficient of self induction.
When the current through a coil goes on changing, the magnetic flux linked with the coil also goes on changing. The magnetic flux (NΦ[m]) linked with the coil at any instant is directly proportional to the current (I) through the coil at that instant.
NΦ[m] ∝ I ∴ NΦ[m] = LI
where L is a constant, dependent on the geometry of the coil, called the self inductance or the coefficient of self induction of the coil. The self-induced emf in the coil is
e = –\(\frac{d}{d t}\)(NΦ[m]) = –L\(\frac{d I}{d t}\)
Definition : The self inductance or the coefficient of self induction of a coil is defined as the emf induced in the coil per unit time rate of change of current in the same coil. OR (using L = NΦ[m]/I), the self inductance of a coil is the ratio of the magnetic flux linked with the coil to the current in it.

Question 42.
State and define the SI unit of self inductance. Give its dimensions.
Write the SI unit and dimensions of the coefficient of self induction.
The SI unit of self inductance or coefficient of self induction, or inductance as it is commonly called, is the henry (H). The self inductance of a coil is 1 henry if an emf of 1 volt is induced in the coil when the current through the same coil changes at the rate of 1 ampere per second.
The dimensions of self inductance or coefficient of self induction are [ML^2T^-2I^-2].
1 henry = 1 H = 1 V·s/A = 1 T·m^2/A
[Note : The unit henry is named in honour of Joseph Henry (1797-1878), US physicist.]

Question 43.
What is an inductor?
An inductor is a coil of wire with significant self inductance. If the coil is wound on a nonmagnetic cylinder or former, such as ceramic or plastic, it is called an air-core inductor; its circuit symbol is a stylized coil (a series of loops).

Question 44.
Current passes through a coil shown from left to right. In which direction is the induced emf if the current is (a) increasing with time (b) decreasing in time?
From Lenz’s law, the induced emf must oppose the change in the magnetic flux.
(a) When the current increases to the right, so does the magnetic flux. To oppose the increasing flux to the right, the induced emf is to the left, i.e., the point A is at a positive potential relative to point B.
(b) When the current to the right is decreasing, the induced emf acts to boost up the flux to the right and points to the right, so that the point A is at a negative potential relative to point B.
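The defining relation of self inductance above, e = −L dI/dt, gives the back emf for any current ramp. A minimal Python sketch with illustrative values:

```python
# Self-induced (back) emf of an inductor for a linearly ramping current.
L = 0.2             # henry (illustrative)
i1, i2 = 0.0, 3.0   # ampere
dt = 0.01           # second

e = -L * (i2 - i1) / dt
print(f"back emf = {e:.0f} V")   # -60 V, opposing the rising current
```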
Consider an inductor of self inductance L connected in a circuit. When the circuit is closed, the current in the circuit increases and so does the magnetic flux linked with the coil. At any instant, the magnitude of the induced emf is
e = L \(\frac{d i}{d t}\)
The power consumed in the inductor is P = ei = L \(\frac{d i}{d t}\) ∙ i
[Alternatively, the work done in moving a charge dq against this emf e is
dw = e dq = L \(\frac{d i}{d t}\) ∙ dq = Li ∙ di (∵ \(\frac{d q}{d t}\) = i)
This work done is stored in the magnetic field of the inductor, dw = du.]
The total energy stored in the magnetic field when the current increases from 0 to I in a time interval from 0 to t can be determined by integrating this expression :
U[m] = \(\int_{0}^{t} P d t=\int_{0}^{I} L i d i=L \int_{0}^{I} i d i=\frac{1}{2} L I^{2}\)
which is the required expression for the stored magnetic energy.
[Note : Compare this with the electric energy stored in a capacitor, U[e] = \(\frac{1}{2}\)CV^2.]
Question 46. State the expression for the energy stored in the magnetic field of an inductor. Hence, define its self inductance.
When a steady current I is passed through an inductor of self inductance L, the energy stored in the magnetic field of the inductor is U[m] = \(\frac{1}{2}\)LI^2. Therefore, for unit current, L = 2U[m]. Hence, we may define the self inductance of a coil as numerically equal to twice the energy stored in its magnetic field for unit current through the inductor.
Question 47. What is the role of an inductor in an ac circuit ?
As a circuit element, an inductor slows down changes in the current in the circuit. Thus, it provides an electrical inertia and is said to act as a ballast. In a non-inductive coil (L ≅ 0), electrical energy is converted into heat due to the ohmic resistance of the coil (Joule heating). On the other hand, an inductive coil or an inductor stores part of the energy in the magnetic field of its coils when the current through it is increasing; this energy is released when the current is decreasing. Thus, an inductor limits an alternating current more efficiently than a non-inductive coil or a pure resistor.
Question 48. State the expressions for the effective or equivalent inductance of a combination of a number of inductors connected (a) in series (b) in parallel. Assume that their mutual inductance can be ignored.
We assume that the inductors are so far apart that their mutual inductance is negligible.
(a) For a series combination of a number of inductors, L[1], L[2], L[3], …, the equivalent inductance is
L[series] = L[1] + L[2] + L[3] + ……
(b) For a parallel combination of a number of inductors, L[1], L[2], L[3], …, the equivalent inductance is
\(\frac{1}{L_{\text {parallel }}}=\frac{1}{L_{1}}+\frac{1}{L_{2}}+\frac{1}{L_{3}}+\ldots\)
Question 49. Obtain an expression for the self inductance of a solenoid.
Consider a long air-cored solenoid of length l, diameter d and N turns of wire. We assume that the length of the solenoid is much greater than its diameter so that the magnetic field inside the solenoid may be considered to be uniform, that is, end effects in the solenoid can be ignored. With a steady current I in the solenoid, the magnetic field within the solenoid is
B = µ[0]nI ………….. (1)
where n = N/l is the number of turns per unit length. So the magnetic flux through one turn is
Φ[m] = BA = µ[0]nIA ……….. (2)
Hence, the self inductance of the solenoid,
L = \(\frac{N \Phi_{\mathrm{m}}}{I}\) = (nl)µ[0]nA = µ[0]n^2lA = µ[0]n^2V ………….. (3)
= µ[0]n^2l\(\frac{\pi d^{2}}{4}\) …………. (4)
where V = lA is the interior volume of the solenoid. Equation (3) or (4) gives the required expression.
[Note : It is evident that the self inductance of a long solenoid depends only on its physical properties – such as the number of turns of wire per unit length and the volume – and not on the magnetic field or the current. This is true for inductors in general.]
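The two results just derived, U[m] = ½LI² and L = µ[0]n²lA for a long solenoid, combine in a few lines. A sketch in Julia; the numbers are those of a solved problem later in this chapter, and the function names are ours:

```julia
μ0 = 4π * 1e-7                      # magnetic constant (H/m)

# Self inductance of a long air-cored solenoid: L = μ0·n²·l·A
solenoid_L(n, l, A) = μ0 * n^2 * l * A

# Energy stored in the magnetic field: U = ½·L·I²
stored_energy(L, I) = 0.5 * L * I^2

n, l, A = 1e3, 0.4, 9e-5            # 1000 turns/m, 40 cm long, 0.9 cm² cross section
L = solenoid_L(n, l, A)             # ≈ 4.524e-5 H
U = stored_energy(L, 10.0)          # ≈ 2.26e-3 J at I = 10 A
```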
Question 50. State the expression for the self inductance of a solenoid. Hence show that the SI unit of magnetic permeability is the henry per metre.
The self inductance of an air-cored long solenoid of volume V and number of turns per unit length n is L = µ[0]n^2V. Since [n^2] = [L^-2], n^2V has the dimension of length. The SI unit of L being the henry, the SI unit of magnetic permeability (µ[0]) is the henry per metre (H/m).
µ[0] = 4π × 10^-7 H/m = 4π × 10^-7 T∙m/A
Question 51. Derive an expression for the self inductance of a narrow air-cored toroid of circular cross section.
Consider a narrow air-cored toroid of circular cross section of radius r, central radius R and number of turns N, so that, assuming r << R, the magnetic field in the toroidal cavity may be considered to be uniform, equal to
B = \(\frac{\mu_{0} N I}{2 \pi R}\) = µ[0]nI ………….. (1)
where n = \(\frac{N}{2 \pi R}\) is the number of turns of the wire per unit length. The area of cross section, A = πr^2. The magnetic flux through one turn is
Φ[m] = BA = µ[0]nIA ………… (2)
Hence, the self inductance of the toroid,
L = \(\frac{N \Phi_{\mathrm{m}}}{I}\) = (2πRn)µ[0]nA = µ[0]2πRn^2A = µ[0]n^2V …………… (3)
= \(\frac{\mu_{0} N^{2} r^{2}}{2 R}\) ………….. (4)
where V = 2πRA is the volume of the toroidal cavity. Equation (3) or (4) gives the required expression.
Question 52. Obtain an expression for the energy density of a magnetic field.
Consider a short length l near the middle of a long, tightly wound solenoid, of cross-sectional area A, number of turns per unit length n and carrying a steady current I. For such a solenoid, the magnetic field is approximately uniform everywhere inside and zero outside. So, the magnetic energy U[m] stored by this length l of the solenoid lies entirely within the volume Al. The magnetic field inside the solenoid is
B = µ[0]nI …………… (1)
and if L is the inductance of length l of the solenoid,
L = µ[0]n^2lA …………… (2)
The stored magnetic energy,
U[m] = \(\frac{1}{2}\)LI^2 = \(\frac{1}{2}\)µ[0]n^2lAI^2 …………. (3)
and the energy density of the magnetic field (energy per unit volume) is
u[m] = \(\frac{U_{\mathrm{m}}}{A l}\) = \(\frac{1}{2}\)µ[0]n^2I^2 ………….. (4)
Since nI = B/µ[0] from Eq. (1), ………….. (5)
u[m] = \(\frac{B^{2}}{2 \mu_{0}}\) ………….. (6)
Equation (6) gives the magnetic energy density in vacuum at any point in a magnetic field of induction B, irrespective of how the field is produced.
[Note : Compare Eq. (6) with the electric energy density in vacuum at any point in an electric field of intensity E, u[e] = \(\frac{1}{2}\) ε[0]E^2. Both u[e] and u[m] are proportional to the square of the appropriate field magnitude.]
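Equation (6), u[m] = B²/2µ[0], holds at any point of a magnetic field. A quick numerical sketch (the 0.8 T solenoid of a multiple-choice question later in this chapter):

```julia
μ0 = 4π * 1e-7                      # H/m

energy_density(B) = B^2 / (2μ0)     # u_m = B²/(2μ0), in J/m³

B = 0.8                             # T
u = energy_density(B)               # ≈ 2.546e5 J/m³
V = π * 0.01^2 * 0.40               # interior volume of a 40 cm × 2 cm-diameter solenoid
U = u * V                           # ≈ 32 J stored inside the solenoid
```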
Question 53. Determine the magnetic energy stored per unit length of a coaxial cable, represented by two coaxial cylindrical shells of radii a (inner) and b (outer), and carrying a current I. Hence derive an expression for the self inductance of the coaxial cable of length l.
Figure (a) shows a coaxial cable represented by two hollow, concentric cylindrical conductors along which there is electric current in opposite directions. The magnetic field between the conductors can be found by applying Ampere's law to the dashed path of radius r (a < r < b) in figure (a). Because of the cylindrical symmetry, B is constant along the path, and
\(\oint \vec{B} \cdot \overrightarrow{d l}\) = B(2πr) = µ[0]I ∴ B = \(\frac{\mu_{0} I}{2 \pi r}\) ……………… (1)
A similar application of Ampere's law for r > b and r < a shows that B = 0 in both the regions. Therefore, all the magnetic energy is stored between the two conductors of the cable. The energy density of the magnetic field is
u[m] = \(\frac{B^{2}}{2 \mu_{0}}\) …………….. (2)
Therefore, substituting for B from Eq. (1) into Eq. (2), the magnetic energy stored in a cylindrical shell of radius r, thickness dr and length l is
dU[m] = u[m]dV = u[m](2πr ∙ dr ∙ l) = \(\frac{\mu_{0} I^{2} l}{4 \pi} \frac{d r}{r}\) …………….. (3)
Integrating from r = a to r = b, the total magnetic energy stored in a length l of the cable is
U[m] = \(\frac{\mu_{0} I^{2} l}{4 \pi} \log _{e} \frac{b}{a}\) …………….. (4)
so that the magnetic energy stored per unit length is \(\frac{U_{\mathrm{m}}}{l}=\frac{\mu_{0} I^{2}}{4 \pi} \log _{e} \frac{b}{a}\) …………….. (5)
Also, in terms of the self inductance L of the length l of the cable,
U[m] = \(\frac{1}{2}\)LI^2 …………….. (6)
Equating the right hand sides of Eqs. (4) and (6),
L = \(\frac{\mu_{0} l}{2 \pi} \log _{e} \frac{b}{a}\)
which is the self inductance of the coaxial cable of length l.
54. Solve the following
Question 1. A coil of self inductance 5 H is connected in series with a switch and a battery. After the switch is closed, the steady state value of the current is 5 A. The switch is then suddenly opened, causing the current to drop to zero in 0.2 s. Find the emf developed across the inductor (coil) as the switch is opened.
Data : L = 5 H, I[i] = 5 A, I[f] = 0, ∆t = 0.2 s
The rate of change of current, \(\frac{d I}{d t}=\frac{I_{\mathrm{f}}-I_{\mathrm{i}}}{\Delta t}=\frac{0-5}{0.2}\) = – 25 A/s
∴ The induced emf, e = -L \(\frac{d I}{d t}\) = -5(-25) = 125 V
Question 2. A toroidal coil has an inductance of 47 mH. Find the maximum self-induced emf in the coil when the current in it is reversed from 15 A to -15 A in 0.01 s.
Data : L = 4.7 × 10^-2 H, I[i] = 15 A, I[f] = -15 A, ∆t = 0.01 s
The rate of change of current, \(\frac{d I}{d t}=\frac{I_{\mathrm{f}}-I_{\mathrm{i}}}{\Delta t}=\frac{(-15)-15}{0.01}\) = – 3000 A/s
∴ The maximum self-induced emf, e = – L \(\frac{d I}{d t}\) = –(4.7 × 10^-2)(– 3000) = 141 V
Question 3. An emf of 2 V is induced in a closely-wound coil of 50 turns when the current through it increases uniformly from 0 to 5 A in 0.1 s. (a) What is the self inductance of the coil? (b) What is the flux through each turn of the coil for a steady current of 5 A?
Data : e = 2 V, N = 50, I[i] = 0, I[f] = 5 A, ∆t = 0.1 s
(a) The rate of change of current, \(\frac{d I}{d t}=\frac{5-0}{0.1}\) = 50 A/s
∴ The self inductance of the coil, L = \(\frac{e}{d I / d t}=\frac{2}{50}\) = 0.04 H
(b) The flux linkage NΦ[m] = LI ∴ Φ[m] = \(\frac{L I}{N}=\frac{(0.04)(5)}{50}\) = 4 × 10^-3 Wb
This is the flux through each turn.
Question 4. At the instant the current through a coil is 0.2 A, the energy stored in its magnetic field is 6 mJ. What is the self inductance of the coil ?
Data : I = 0.2 A, U[m] = 6 × 10^-3 J
U[m] = \(\frac{1}{2}\) LI^2 Therefore, the self inductance of the coil is
L = \(\frac{2 U_{\mathrm{m}}}{I^{2}}=\frac{2 \times 6 \times 10^{-3}}{(0.2)^{2}}\) = 0.3 H
Question 5. A coil of self inductance 3 H and resistance 100 Ω carries a steady current of 2 A. (a) What is the energy stored in the magnetic field of the coil? (b) What is the energy per second dissipated in the resistance of the coil ?
Data : L = 3 H, R = 100 Ω, I = 2 A
(a) Magnetic energy stored, U[m] = \(\frac{1}{2}\) LI^2 = \(\frac{1}{2}\) (3) (2)^2 = 6 J
(b) Power dissipated in the resistance of the coil, P = I^2R = (2)^2(100) = 400 W
Question 6. A 10 H inductor carries a current of 25 A. How much ice at 0 °C could be melted by the energy stored in the magnetic field of the inductor ? [Latent heat of fusion of ice, L[f] = 335 J/g]
Data : L = 10 H, I = 25 A, L[f] = 335 J/g
Magnetic energy stored, U[m] = \(\frac{1}{2}\) LI^2 = \(\frac{1}{2}\) (10) (25)^2 = 3125 J
Heat energy required to melt ice at 0 °C of mass m, H = mL[f]
Equating H with U[m], m = \(\frac{U_{\mathrm{m}}}{L_{\mathrm{f}}}=\frac{3125}{335}\) = 9.328 g
Therefore, 9.328 g of ice could be melted by the energy stored.
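The coaxial-cable result of Question 53, U[m] = (µ[0]/4π)·I²·l·ln(b/a), reappears in a solved problem below. A one-line numeric sketch of it in Julia (function name is ours):

```julia
μ0 = 4π * 1e-7

# Magnetic energy stored in a length l of coaxial cable: U = (μ0/4π)·I²·l·ln(b/a)
coax_energy(I, l, ratio) = (μ0 / 4π) * I^2 * l * log(ratio)

U = coax_energy(1.5, 2.0, 5.0)   # ≈ 7.24e-7 J for I = 1.5 A, l = 2 m, b/a = 5
```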
Question 7. A solenoid 40 cm long has a cross-sectional area of 0.9 cm^2 and is tightly wound with wire of diameter 1 mm. Calculate the self inductance of the solenoid.
Data : D = 1 mm, l = 40 cm = 0.4 m, A = 0.9 cm^2 = 9 × 10^-5 m^2, μ[0] = 4π × 10^-7 H/m
The number of turns per unit length, n = \(\frac{1}{1 \mathrm{~mm}}\) = 1 mm^-1 = 10^3 m^-1
Self inductance of the solenoid, L = μ[0]n^2lA = (4π × 10^-7)(10^3)^2(0.4)(9 × 10^-5) = 16 × 9 × 3.142 × 10^-7 = 4.524 × 10^-5 H
Question 8. A solenoid of 1000 turns is wound with wire of diameter 0.1 cm and has a self inductance of 2.4π × 10^-5 H. Find (a) the cross-sectional area of the solenoid (b) the magnetic flux through one turn of the solenoid when a current of 3 A flows through it.
Data : N = 1000, D = 0.1 cm, L = 2.4π × 10^-5 H, I = 3 A, μ[0] = 4π × 10^-7 H/m
The number of turns per unit length, n = \(\frac{1}{1 \mathrm{~mm}}\) = 1 mm^-1 = 10^3 m^-1, and the length of the solenoid, l = ND = 1000 × 0.1 = 100 cm = 1 m
L = μ[0]n^2lA
(a) The area of cross section, A = \(\frac{L}{\mu_{0} n^{2} l}=\frac{2.4 \pi \times 10^{-5}}{\left(4 \pi \times 10^{-7}\right)\left(10^{3}\right)^{2}(1)}=\frac{24 \pi}{4 \pi} \times 10^{-5}\) = 6 × 10^-5 m^2
(b) Magnetic flux through one turn, Φ[m] = BA = (μ[0]nI)A = (4π × 10^-7)(10^3)(3)(6 × 10^-5) = 72π × 10^-9 Wb
Question 9. A toroid of circular cross section of radius 0.05 m has 2000 windings and a self inductance of 0.04 H. What is (a) the current through the windings when the energy in its magnetic field is 2 × 10^-6 J (b) the central radius of the toroid ?
Data : r = 0.05 m, N = 2000, L = 0.04 H, U[m] = 2 × 10^-6 J, μ[0] = 4π × 10^-7 H/m
(a) U[m] = \(\frac{1}{2}\) LI^2 Therefore, the current in the windings,
I = \(\sqrt{\frac{2 U_{\mathrm{m}}}{L}}=\sqrt{\frac{2 \times 2 \times 10^{-6}}{0.04}}\) = 10^-2 A = 10 mA
(b) For a narrow toroid, L = \(\frac{\mu_{0} N^{2} r^{2}}{2 R}\) ∴ The central radius,
R = \(\frac{\mu_{0} N^{2} r^{2}}{2 L}=\frac{\left(4 \pi \times 10^{-7}\right)(2000)^{2}(0.05)^{2}}{2(0.04)}\) = 0.157 m
Question 10. A coaxial cable, whose outer radius is five times its inner radius, is carrying a current of 1.5 A. What is the magnetic field energy stored in a 2 m length of the cable ?
Data : b/a = 5, I = 1.5 A, l = 2 m, \(\frac{\mu_{0}}{4 \pi}\) = 10^-7 H/m
The total magnetic energy in a given length of a current-carrying coaxial cable, U[m] = \(\left(\frac{\mu_{0}}{4 \pi}\right) I^{2} l \log _{e} \frac{b}{a}\)
Therefore, the required magnetic energy is U[m] = (10^-7)(1.5)^2(2) log[e]5 = 4.5 × 10^-7 × 2.303 × 0.6990 = 7.24 × 10^-7 J
Question 55. Explain the concept/phenomenon of mutual induction. Explain and define mutual inductance of a coil with respect to another coil. Define the coefficient of mutual induction.
The production of induced emf in a coil due to the change of current in the same coil is called self induction. In contrast, the production of induced emf in one coil due to the change of current in a neighbouring coil is called mutual induction.
In figure (a), a current I[1] in coil 1 sets up a magnetic flux Φ[21] through one turn of a neighbouring coil 2, magnetically linking the two coils. Then, the flux through the N[2] turns of coil 2, i.e., the flux linkage of coil 2, is N[2]Φ[21].
N[2]Φ[21] ∝ I[1] ∴ N[2]Φ[21] = M[21]I[1] …………. (1)
where the constant of proportionality, M[21], is called the coefficient of mutual induction of coil 2 with respect to coil 1. If the current I[1] in coil 1 changes with time, the varying flux linkage induces an emf e[2] in coil 2.
e[2] = – \(\frac{d}{d t}\) (N[2]Φ[21]) = – M[21] \(\frac{d I_{1}}{d t}\) …………. (2)
Similarly, if we interchange the roles of the two coils and set up a current I[2] in coil 2 [figure (b)], the flux linkage of the N[1] turns of coil 1 is N[1]Φ[12] and
N[1]Φ[12] = M[12]I[2] ………… (3)
where M[12] is the coefficient of mutual induction of coil 1 with respect to coil 2. And, for a varying current I[2](t), the induced emf in coil 1 is
e[1] = – \(\frac{d}{d t}\) (N[1]Φ[12]) = – M[12] \(\frac{d I_{2}}{d t}\) …………. (4)
It can be shown that M[21] = M[12] = M, the mutual inductance of the pair of coils. Thus,
M = \(\frac{N_{2} \Phi_{21}}{I_{1}}=\frac{N_{1} \Phi_{12}}{I_{2}}\) …………. (5)
and |e[2]| = M \(\frac{d I_{1}}{d t}\), |e[1]| = M \(\frac{d I_{2}}{d t}\) …………. (6)
We define mutual inductance using Eq. (5) or Eq. (6).
The mutual inductance or the coefficient of mutual induction of two magnetically linked coils is equal to the flux linkage of one coil per unit current in the neighbouring coil. The mutual inductance or the coefficient of mutual induction of two magnetically linked coils is numerically equal to the emf induced in one coil (secondary) per unit time rate of change of current in the neighbouring coil (primary).
Question 56. State and define the SI unit of mutual inductance. Give its dimensions.
The SI unit of mutual inductance is called the henry (H). The mutual inductance of a coil (secondary) with respect to a magnetically linked neighbouring coil (primary) is one henry if an emf of 1 volt is induced in the secondary coil when the current in the primary coil changes at the rate of 1 ampere per second. The dimensions of mutual inductance are [ML^2T^-2I^-2] (the same as those of self inductance).
Question 57. Two coils A and B have mutual inductance 2 × 10^-2 H. If the current in coil A is 5 sin (10πt) ampere, find the maximum emf induced in coil B.
The emf induced in coil B, |e[B]| = M \(\frac{d I_{\mathrm{A}}}{d t}\) = (2 × 10^-2)[5 cos (10πt)] × 10π ∴ |e[B]|[max] = (2 × 10^-2)(50π) = π volts.
Question 58. A long solenoid, of radius R, has n turns per unit length. An insulated coil C of N turns is wound over it as shown. Show that the mutual inductance for the coil-solenoid combination is given by M = µ[0]πR^2nN.
We assume the solenoid to be ideal and that all the flux from the solenoid passes through the outer coil C. For a steady current I[s] through the solenoid, the uniform magnetic field inside the solenoid is
B = μ[0]nI[s] ……………… (1)
Then, the magnetic flux through each turn of the coil due to the current in the solenoid is
Φ[CS] = BA = (μ[0]nI[s])(πR^2) ………….. (2)
Thus, their mutual inductance is
M = \(\frac{N \Phi_{\mathrm{CS}}}{I_{\mathrm{S}}}\) = μ[0]πR^2nN ………….. (3)
Equation (2) is true as long as the magnetic field of the solenoid is entirely contained within the cross section of the coil C. Hence, M does not depend on the shape, size, or possible lack of close packing of the coil.
Question 59. A solenoid of N[1] turns has length l[1] and radius R[1], and a second smaller solenoid of N[2] turns has length l[2] and radius R[2]. The smaller solenoid is placed coaxially and completely inside the larger solenoid. What is their mutual inductance ?
Assuming the larger solenoid to be ideal, the magnetic field within it may be considered uniform, so the flux through the small solenoid due to the larger solenoid is also uniform. Assuming a current I[1] in the larger solenoid, the magnitude of the magnetic field at points within the small solenoid due to the larger one is
B[1] = μ[0]\(\frac{N_{1}}{l_{1}}\) I[1]
Then, the flux Φ[21] through each turn of the small coil is Φ[21] = B[1]A[2], where A[2] = πR[2]^2 is the area enclosed by each turn. Thus, the flux linkage in the small solenoid with its N[2] turns is N[2]Φ[21] = N[2]B[1]A[2]. Thus, their mutual inductance is
M = \(\frac{N_{2} \Phi_{21}}{I_{1}}=N_{2}\left(\mu_{0} \frac{N_{1}}{l_{1}}\right)\left(\pi R_{2}^{2}\right)=\mu_{0} \pi \frac{N_{1} N_{2}}{l_{1}} R_{2}^{2}\)
which is the required expression.
Question 60. What is meant by coefficient of magnetic coupling?
For two inductively coupled coils, the fraction of the magnetic flux produced by the current in one coil (primary) that is linked with the other coil (secondary) is called the coefficient of magnetic coupling between the two coils.
The coupling coefficient K shows how good the coupling between the two coils is; 0 ≤ K ≤ 1. In the ideal case, when all the flux of the primary passes through the secondary, K = 1. For coils which are not coupled, K = 0. Two coils are tightly coupled if K > 0.5 and loosely coupled if K < 0.5.
[ Note : For iron-core coupled circuits, the value of K may be as high as 0.99; for air-core coupled circuits, K varies between 0.4 and 0.8. ]
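A sketch checking Question 57 numerically: with I_A(t) = 5 sin(10πt) A and M = 2 × 10⁻² H, the induced emf is M·dI_A/dt = M·50π·cos(10πt), whose peak is π volts. The names below are ours:

```julia
M = 2e-2                                # mutual inductance (H)
# e_B(t) = M·dI_A/dt = M·5·10π·cos(10πt); analytically its peak is M·50π
e_peak = M * 5 * 10π                    # = π ≈ 3.1416 V

# numerical cross-check by sampling the emf over one period (T = 0.2 s):
e_B(t) = M * 5 * 10π * cos(10π * t)
maximum(abs.(e_B.(0:1e-4:0.2)))         # ≈ π
```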
Question 61. State the factors on which the magnetic coupling coefficient of two coils depends.
The coefficient of magnetic coupling between two coils depends on
1. the permeability of the core on which the coils are wound
2. the distance between the coils
3. the angle between the coil axes.
Question 62. When is the magnetic coupling coefficient of two coils (i) maximum (ii) minimum?
The coefficient of magnetic coupling between two coils is
1. maximum when the coils are wound on the same ferrite (iron) core such that the flux linkage is maximum,
2. minimum for air-cored coils with the coil axes perpendicular.
Question 63. Show that the mutual inductance for a pair of inductively coupled coils/circuits of self inductances L[1] and L[2] is given by M = K\(\sqrt{L_{1} L_{2}}\), where K is the coupling coefficient.
Consider a pair of inductively coupled coils having N[1] and N[2] turns, shown in the figure. A current I[1](t) sets up a flux N[1]Φ[1](t) in coil 1 and induces a current I[2](t) and flux N[2]Φ[2](t) in coil 2. Then, the self inductances of the coils are
L[1] = \(\frac{N_{1} \Phi_{1}}{I_{1}}\) and L[2] = \(\frac{N_{2} \Phi_{2}}{I_{2}}\)
If K[1] is the fraction of the flux of coil 1 linked with coil 2, and K[2] the fraction of the flux of coil 2 linked with coil 1, the mutual inductance may be written in two ways :
M = \(\frac{N_{2}\left(K_{1} \Phi_{1}\right)}{I_{1}}\) and M = \(\frac{N_{1}\left(K_{2} \Phi_{2}\right)}{I_{2}}\)
Multiplying these two expressions,
M^2 = K[1]K[2] \(\frac{N_{1} \Phi_{1}}{I_{1}} \cdot \frac{N_{2} \Phi_{2}}{I_{2}}\) = K[1]K[2]L[1]L[2]
∴ M = K\(\sqrt{L_{1} L_{2}}\), where K = \(\sqrt{K_{1} K_{2}}\) is the coupling coefficient.
Alternate method : Consider a pair of inductively coupled coils shown in the above figure. We assume that I[1](t) and I[2](t) are zero at t = 0, as also the magnetic energy of the system. The induced emfs are
e[1] = – L[1]\(\frac{d I_{1}}{d t}\) – M\(\frac{d I_{2}}{d t}\) and e[2] = – L[2]\(\frac{d I_{2}}{d t}\) – M\(\frac{d I_{1}}{d t}\) …………. (1)
The net energy input to the system at time t is given by
W(t) = \(\frac{1}{2}\) L[1]I[1]^2 + \(\frac{1}{2}\) L[2]I[2]^2 + MI[1]I[2] …………. (2)
If one current enters a dot-marked terminal while the other leaves a dot-marked terminal, Eq. (2) becomes
W(t) = \(\frac{1}{2}\) L[1]I[1]^2 + \(\frac{1}{2}\) L[2]I[2]^2 – MI[1]I[2] …………. (3)
The net electrical energy input to the system is non-negative, W(t) ≥ 0. We rearrange Eq. (3) as
W(t) = \(\frac{1}{2}\left(\sqrt{L_{1}} I_{1}-\sqrt{L_{2}} I_{2}\right)^{2}+\left(\sqrt{L_{1} L_{2}}-M\right) I_{1} I_{2}\) …………. (4)
The first term in the parenthesis on the right hand side of Eq. (4) is positive for all values of I[1] and I[2]. Thus, for the second term also to be non-negative, M ≤ \(\sqrt{L_{1} L_{2}}\), i.e.,
M = K\(\sqrt{L_{1} L_{2}}\)
where the coupling coefficient K is a non-negative number, 0 ≤ K ≤ 1, and is independent of the reference directions of the currents in the coils.
Question 64. What is a transformer? State the principle of working of a transformer.
A transformer is an electrical device which uses mutual induction to transform electrical power at one alternating voltage into electrical power at another alternating voltage (usually different), without change of frequency of the voltage.
Principle : A transformer works on the principle that a changing current through one coil creates a changing magnetic flux through an adjacent coil which in turn induces an emf and a current in the second coil.
Question 65. What are step-up and step-down transformers?
1. Step-up transformer : It increases the amplitude of the alternating emf, i.e., it changes a low voltage alternating emf into a high voltage alternating emf with a lower current.
2. Step-down transformer : It decreases the amplitude of the alternating emf, i.e., it changes a high voltage alternating emf into a low voltage alternating emf with a higher current.
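The relation just derived, M = K√(L₁L₂) with 0 ≤ K ≤ 1, is simple to apply. A minimal sketch; the coil values and coupling coefficients are illustrative only:

```julia
# Mutual inductance of two coupled coils: M = K·√(L1·L2)
mutual_inductance(K, L1, L2) = K * sqrt(L1 * L2)

L1, L2 = 25e-3, 40e-3                          # 25 mH and 40 mH (illustrative)
M_tight = mutual_inductance(0.99, L1, L2)      # ≈ 31.3 mH (iron-core coupling)
M_loose = mutual_inductance(0.5,  L1, L2)      # ≈ 15.8 mH (loose coupling)
```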
Question 66. Describe the construction and working of a transformer with a neat labelled diagram.
Construction : A transformer consists of two coils, primary and secondary, wound on two arms of a rectangular frame called the core.
(1) Primary coil : It consists of an insulated copper wire wound on one arm of the core. Input voltage is applied at the ends of this coil. In a step-up transformer, thick copper wire is used for the primary coil. In a step-down transformer, thin copper wire is used for the primary coil.
(2) Secondary coil : It consists of an insulated copper wire wound on the other arm of the core. The output voltage is obtained at the ends of this coil. In a step-up transformer, thin copper wire is used for the secondary coil. In a step-down transformer, thick copper wire is used for the secondary coil.
(3) Core : It consists of thin rectangular frames of soft iron stacked together, but insulated from each other. A core prepared by stacking thin sheets rather than using a single thick sheet helps reduce eddy currents.
Working : When the terminals of the primary coil are connected to a source of an alternating emf (input voltage), there is an alternating current through it. The alternating current produces a time varying magnetic field in the core of the transformer. The magnetic flux associated with the secondary coil thus varies periodically with time according to the current in the primary coil. Therefore, an alternating emf (output voltage) is induced in the secondary coil.
Question 67. Derive the relationship \(\frac{V_{\mathrm{P}}}{V_{\mathrm{S}}}=\frac{I_{\mathrm{S}}}{I_{\mathrm{P}}}\) for a transformer.
An alternating emf V[P] from an ac source is applied across the primary coil of a transformer. This sets up an alternating current I[P] in the primary circuit and also produces an alternating magnetic flux through the primary coil such that V[P] = -N[P] \(\frac{d \Phi_{\mathrm{P}}}{d t}\), where N[P] is the number of turns of the primary coil and Φ[P] is the magnetic flux through each turn. Assuming an ideal transformer (i.e., there is no leakage of magnetic flux), the same magnetic flux links both the primary and the secondary coils, i.e., Φ[P] = Φ[S]. As a result, the alternating emf induced in the secondary coil,
V[S] = – N[S] \(\frac{d \Phi_{\mathrm{S}}}{d t}\) = – N[S] \(\frac{d \Phi_{\mathrm{P}}}{d t}\)
where N[S] is the number of turns of the secondary coil. If the secondary circuit is completed by a resistance R, the secondary current is I[S] = V[S]/R, assuming the resistance of the coil to be far less than R. Ignoring power losses, the power delivered to the primary coil equals that taken out of the secondary coil, so V[P]I[P] = V[S]I[S].
∴ \(\frac{V_{\mathrm{P}}}{V_{\mathrm{S}}}=\frac{I_{\mathrm{S}}}{I_{\mathrm{P}}}\)
which is the required expression.
Question 68. Derive the relation \(\frac{V_{\mathrm{S}}}{V_{\mathrm{P}}}=\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\) for a transformer. Hence, explain a step-up and a step-down transformer. Also, show that \(\frac{I_{\mathrm{P}}}{I_{\mathrm{S}}}=\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\). Derive expressions for the emf and current for a transformer in terms of the turns ratio.
An alternating emf V[P] from an ac source is applied across the primary coil of a transformer, shown in the figure. This sets up an alternating current I[P] in the primary circuit and also produces an alternating magnetic flux through the primary coil such that
V[P] = -N[P] \(\frac{d \Phi_{\mathrm{P}}}{d t}\) ………….. (1)
where N[P] is the number of turns of the primary coil and Φ[P] is the magnetic flux through each turn.
Assuming an ideal transformer (i.e., there is no leakage of magnetic flux), the same magnetic flux links both the primary and the secondary coils, i.e., Φ[P] = Φ[S]. As a result, the alternating emf induced in the secondary coil,
V[S] = – N[S] \(\frac{d \Phi_{\mathrm{S}}}{d t}\) = – N[S] \(\frac{d \Phi_{\mathrm{P}}}{d t}\) ……………… (2)
where N[S] is the number of turns of the secondary coil. From Eqs. (1) and (2),
\(\frac{V_{\mathrm{S}}}{V_{\mathrm{P}}}=\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\) or V[S] = V[P] \(\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\) …………… (3)
Case (1) : If N[S] > N[P], V[S] > V[P]. Then the transformer is called a step-up transformer.
Case (2) : If N[S] < N[P], V[S] < V[P]. Then the transformer is called a step-down transformer.
Ignoring power losses, the power delivered to the primary coil equals that taken out of the secondary coil, so that
V[P]I[P] = V[S]I[S] …………. (4)
From Eqs. (3) and (4),
\(\frac{I_{\mathrm{P}}}{I_{\mathrm{S}}}=\frac{V_{\mathrm{S}}}{V_{\mathrm{P}}}=\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\) or I[S] = I[P] \(\frac{N_{\mathrm{P}}}{N_{\mathrm{S}}}\)
Question 69. What is the turns ratio of a transformer? What can you say about its value for a (1) step-up transformer (2) step-down transformer?
The ratio of the number of turns in the secondary coil (N[S]) to that in the primary coil (N[P]) is called the turns ratio of a transformer. The turns ratio \(\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\) > 1 for a step-up transformer. The turns ratio \(\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\) < 1 for a step-down transformer.
Question 70. State any two factors on which the maximum value of the alternating emf induced in the secondary coil of a transformer depends.
The maximum value of the alternating emf induced in the secondary coil of a transformer depends on
1. the ratio of the number of turns of the secondary coil to that of the primary coil
2. the maximum value of the alternating emf applied to the primary coil
3. the core of the transformer.
Question 71. The primary coil of a transformer has 100 turns and the secondary coil has 200 turns. If the peak value of the alternating emf applied to the primary coil is 100 V, what is the peak value of the alternating emf obtained across the secondary coil?
V[S] = \(\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\) V[P] = \(\frac{200}{100}\) × 100 = 200 V
Question 72. Distinguish between a step-up and a step-down transformer. (Any two points)
┃Step-up transformer │Step-down transformer ┃
┃1. The output voltage is more than the input voltage. │1. The output voltage is less than the input voltage. ┃
┃2. The number of turns of the secondary coil is more than that of the primary coil.│2. The number of turns of the secondary coil is less than that of the primary coil.┃
┃3. The output current is less than the input current. │3. The output current is more than the input current. ┃
┃4. The primary coil is made of thicker copper wire than the secondary coil. │4. The secondary coil is made of thicker copper wire than the primary coil. ┃
73. Solve the following
Question 1. When a current changes from 4 A to 12 A in 0.5 s in the primary coil, an induced emf of 50 mV is generated in the secondary coil. What is the mutual inductance between the two coils ? What will be the emf induced in the secondary, if the current in the primary changes from 3 A to 9 A in 0.02 s ?
Data : I[i1] = 4 A, I[f1] = 12 A, ∆t[1] = 0.5 s, e[1] = 50 mV; I[i2] = 3 A, I[f2] = 9 A, ∆t[2] = 0.02 s
The mutual inductance, M = \(\frac{\left|e_{1}\right|}{d I / d t}=\frac{50 \times 10^{-3}}{(12-4) / 0.5}=\frac{50 \times 10^{-3}}{16}\) = 3.125 × 10^-3 H
The emf induced in the secondary, |e[2]| = M \(\frac{d I}{d t}\) = (3.125 × 10^-3)\(\frac{9-3}{0.02}\) = (3.125 × 10^-3)(300) = 0.9375 V
Question 2. A plane coil of 10 turns is tightly wound around a solenoid of diameter 2 cm having 400 turns per centimetre. The relative permeability of the core is 800. Calculate the mutual inductance.
Data : N = 10, R = 1 cm = 10^-2 m, n = 400 cm^-1 = 4 × 10^4 m^-1, k = 800, μ[0] = 4π × 10^-7 H/m
Mutual inductance, M = kμ[0]πR^2nN = (800)(4π × 10^-7)[π × (10^-2)^2](4 × 10^4)(10) = 0.1264 H
Question 3. Two coils of 100 turns and 200 turns have self inductances 25 mH and 40 mH, respectively. Their mutual inductance is 3 mH. If a 6 mA current in the first coil is changing at the rate of 4 A/s, calculate (a) the flux Φ[1] that links the first coil (b) the self-induced emf in the first coil (c) the flux Φ[21] that links the second coil (d) the mutually induced emf in the second coil.
Data : N[1] = 100, N[2] = 200, L[1] = 25 mH, L[2] = 40 mH, M = 3 mH, I[1] = 6 mA, dI[1]/dt = 4 A/s
(a) The flux per unit turn in coil 1, Φ[1] = \( \frac{L_{1} I_{1}}{N_{1}}=\frac{\left(25 \times 10^{-3}\right)\left(6 \times 10^{-3}\right)}{100}\) = 1.5 × 10^-6 Wb = 1.5 μWb
(b) The magnitude of the self-induced emf in coil 1 is e[1] = L[1] \(\frac{d I_{1}}{d t}\) = (25 × 10^-3)(4) = 0.1 V
(c) The flux per unit turn in coil 2, Φ[21] = \(\frac{M I_{1}}{N_{2}}=\frac{\left(3 \times 10^{-3}\right)\left(6 \times 10^{-3}\right)}{200}\) = 90 × 10^-9 Wb = 90 nWb
(d) The mutually induced emf in coil 2 is e[21] = M \(\frac{d I_{1}}{d t}\) = (3 × 10^-3)(4) = 12 × 10^-3 V = 12 mV
Question 4. The coefficient of mutual induction between primary and secondary coils is 2 H. Calculate the induced emf if a current of 4 A is cut off in 2.5 × 10^-4 second.
Data : M = 2 H, dI = – 4 A, dt = 2.5 × 10^-4 s
The induced emf, e = – M \(\frac{d I}{d t}=-\frac{2 \times(-4)}{2.5 \times 10^{-4}}\) = \(\frac{8}{2.5}\) × 10^4 = 3.2 × 10^4 V
Question 5. A current of 10 A in the primary of a transformer is reduced to zero at a uniform rate in 0.1 second. If the mutual inductance is 3 H, what is the emf induced in the secondary, and the change in the magnetic flux per turn in the secondary if it has 50 turns?
Data : I[i] = 10 A, I[f] = 0, ∆t = 0.1 s, M = 3 H, N[S] = 50
The emf induced in the secondary, |e[S]| = M \(\frac{d I}{d t}\) = 3 × \(\frac{10}{0.1}\) = 300 V
The change in the magnetic flux per turn in the secondary, ∆Φ = \(\frac{M \Delta I}{N_{\mathrm{S}}}=\frac{3 \times 10}{50}\) = 0.6 Wb
This gives the change in the magnetic flux per turn in the secondary.
Question 6. The primary and secondary coils of a transformer, assumed to be ideal, have 20 and 300 turns of wire, respectively. If the primary voltage is V[P] = 10 sin ωt (in volt), what is the maximum voltage in the secondary coil?
Data : N[P] = 20, N[S] = 300, V[P] = 10 sin ωt V
V[S] = \(\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\) V[P] = \(\frac{300}{20}\) × 10 sin ωt = 150 sin ωt V
This is of the form V[0] sin ωt, where V[0] is the peak (or maximum) voltage. ∴ The maximum voltage in the secondary coil is 150 V.
Question 7. A transformer converts 200 V ac to 50 V ac. The secondary has 50 turns and the load across it draws 300 mA current. Calculate (i) the number of turns in the primary (ii) the power consumed.
Data : V[P] = 200 V, V[S] = 50 V, N[S] = 50, I[S] = 300 mA = 0.3 A
(i) \(\frac{N_{\mathrm{P}}}{N_{\mathrm{S}}}=\frac{V_{\mathrm{P}}}{V_{\mathrm{S}}}\) ∴ The number of turns in the primary, N[P] = N[S]\(\frac{V_{\mathrm{P}}}{V_{\mathrm{S}}}\) = 50 × \(\frac{200}{50}\) = 200
(ii) Power consumed = V[S]I[S] = 50 × 0.3 = 15 W
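The ideal-transformer relations used in Question 7 above (and in the remaining problems below) are easy to check numerically. A sketch reproducing Question 7; the variable names are ours:

```julia
# Ideal transformer: V_S/V_P = N_S/N_P and V_P·I_P = V_S·I_S
VP, VS, NS, IS = 200.0, 50.0, 50, 0.3   # data of Question 7
NP = NS * VP / VS                       # 200 turns in the primary
P  = VS * IS                            # 15 W consumed
IP = P / VP                             # 0.075 A drawn from the mains (ideal case)
```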
Question 8. A resistance of 3 Ω is connected to the secondary coil of 60 turns of an ideal transformer. Calculate the current (peak value) in the resistor if the primary has 1200 turns and is connected to a 240 V (peak) ac supply. Assume that all the magnetic flux in the primary coil passes through the secondary coil and that there are no other losses.
Data : R = 3 Ω, N[S] = 60, N[P] = 1200, V[P] = 240 V
V[S] = \(\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}\) × V[P] = \(\frac{60}{1200}\) × 240 = 12 V (peak)
∴ The peak value of the current in the resistor in the transformer secondary coil is I[S] = \(\frac{V_{\mathrm{S}}}{R}=\frac{12}{3}\) = 4 A
Question 9. The primary of a transformer has 40 turns and works on 100 V and 100 W. Find the number of turns in the secondary to step up the voltage to 400 V. Also calculate the currents in the secondary and the primary.
Solution : Data : N[P] = 40, V[P] = 100 V, P[P] = 100 W, V[S] = 400 V
(i) \(\frac{N_{\mathrm{S}}}{N_{\mathrm{P}}}=\frac{V_{\mathrm{S}}}{V_{\mathrm{P}}}\) ∴ N[S] = N[P]\(\frac{V_{\mathrm{S}}}{V_{\mathrm{P}}}\) = 40 × \(\frac{400}{100}\) = 160
This gives the number of turns in the secondary coil.
(ii) Assuming P[S] = P[P] = 100 W, V[S]I[S] = 100 W ∴ I[S] = \(\frac{100}{V_{\mathrm{S}}}=\frac{100}{400}\) = 0.25 A
This gives the current in the secondary coil.
(iii) V[P]I[P] = P[P] ∴ I[P] = \(\frac{P_{\mathrm{P}}}{V_{\mathrm{P}}}=\frac{100}{100}\) = 1 A
This gives the current in the primary coil.
Question 10. A transformer converts 400 volt ac to 100 volt ac. The secondary of the transformer has 50 turns and the load across it draws a current of 600 mA. What is the current in the primary, the power consumed and the number of turns in the primary?
Data : V[P] = 400 V, V[S] = 100 V, N[S] = 50, I[S] = 0.6 A
Assuming no power loss, I[P]V[P] = I[S]V[S] ∴ The current in the primary, I[P] = \(\frac{I_{\mathrm{S}} V_{\mathrm{S}}}{V_{\mathrm{P}}}=\frac{(0.6)(100)}{400}\) = 0.15 A
The power consumed = V[S]I[S] = 100 × 0.6 = 60 W
The number of turns in the primary, N[P] = N[S]\(\frac{V_{\mathrm{P}}}{V_{\mathrm{S}}}\) = 50 × \(\frac{400}{100}\) = 200
Question 11. A step-down transformer works on 220 V ac mains. What is the efficiency of the transformer when a bulb of 100 W/20 V is connected to its secondary and the current in the primary is 0.5 A ?
Data : V[P] = 220 V, V[S] = 20 V, P[S] = 100 W, I[P] = 0.5 A
The input power, P[P] = I[P]V[P] = (0.5)(220) = 110 W. The output power, P[S] = 100 W.
∴ The efficiency of the transformer = \(\frac{\text { output power }}{\text { input power }}=\frac{100}{110}\) = 0.9091 or 90.91%
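Transformer efficiency, as computed in Question 11 above, is just output power over input power. A minimal sketch with that question's data:

```julia
efficiency(P_out, P_in) = P_out / P_in

IP, VP, PS = 0.5, 220.0, 100.0    # data of Question 11
PP = IP * VP                      # input power, 110 W
η  = efficiency(PS, PP)           # ≈ 0.9091, i.e. 90.91 %
```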
Multiple Choice Questions
Question 1. A circular loop is placed in a uniform magnetic field. The total number of magnetic field lines passing normally through the plane of the loop is called
(A) the displacement current (B) the eddy current (C) the self inductance (D) the magnetic flux
(D) the magnetic flux
Question 2. According to Lenz's law, the direction of the induced current in a closed conducting loop is such that the induced magnetic field attempts to
(A) maintain the original magnetic flux through the loop (B) maximize the magnetic flux through the loop (C) maintain the magnetic flux through the loop at zero (D) minimize the magnetic flux through the loop.
(A) maintain the original magnetic flux through the loop
Question 3. A metallic conductor AB moves across a magnetic field as shown in the following figure. Which of the following statements is correct?
(A) The free electrons experience a magnetic force and move to the lower part of the conductor. (B) The free electrons experience a magnetic force and move to the upper part of the conductor. (C) The positive and negative charges experience a magnetic force and move, respectively, to the upper and lower parts of the conductor. (D) The moving conductor gives rise to an emf but there is no separation of charges as they are bound in the solid structure.
(A) The free electrons experience a magnetic force and move to the lower part of the conductor.
Question 4. A bar magnet moves vertically down, approaching a circular conducting loop in the x-y plane. The direction of the induced current in the loop (looking down the z-axis) is
(A) anticlockwise (B) clockwise (C) alternating (D) along negative z-axis.
(A) anticlockwise
Question 5. A moving conductor AB of length l makes sliding electrical contacts at its ends with two parallel conducting rails. The rails are joined at the left edge (CD) by a resistance R to form a complete circuit. The rate at which the magnetic flux through the area bounded by the circuit changes is
(A) Bv (B) Bl/v (C) Bvl (D) Bv/l.
(C) Bvl
Question 6. A metre-gauge train is heading north with speed 54 km/h in the Earth's magnetic field 3 × 10^-4 T. The emf induced across the axle joining the wheels is
(A) 0.45 mV (B) 4.5 mV (C) 45 mV (D) 450 mV.
(B) 4.5 mV
Question 7. A conducting rod of length l rotates about one of its ends in a uniform magnetic field \(\vec{B}\) with a constant angular speed ω. If the plane of rotation is perpendicular to \(\vec{B}\), the emf induced between the ends of the rod is
(A) \(\frac{1}{2}\)Bωl^2 (B) πl^2Bω (C) Bωl^2 (D) 2Bωl^2.
(A) \(\frac{1}{2}\)Bωl^2
Question 8. A circular conducting loop of area 100 cm^2 and resistance 3 Ω is placed in a magnetic field with its plane perpendicular to the field. If the field is spatially uniform but varies with time t (in second) as B(t) = 1.5 cos ωt tesla, the peak value of the current is
(A) 3 mA (B) 5ω mA (C) 300ω mA (D) 500 mA.
(B) 5ω mA
Question 9. In a simple rectangular-loop ac generator, the time rate of change of magnetic flux is a maximum when
(A) the induced emf has a minimum value (B) the plane of the coil is parallel to the magnetic field (C) the plane of the coil is perpendicular to the magnetic field (D) the emf varies sinusoidally with time.
(B) the plane of the coil is parallel to the magnetic field
Question 10. A simple generator has a 300 loop square coil of side 20 cm turning in a field of 0.7 T. How fast must it turn to produce a peak output of 210 V ?
(A) 25 rps (B) 4 rps (C) 2.5 rps (D) 0.4 rps
(B) 4 rps
Question 11. A rectangular loop generator of 100 turns, each of area 1000 cm^2, rotates in a uniform field of 0.02π tesla with an angular velocity of 60π rad/s. The maximum value of \(\frac{d \Phi_{\mathrm{m}}}{d t}\) is
(A) 12π V (B) 12π^2 Wb (C) 6π^2 V (D) 12π^2 V.
(D) 12π^2 V
Question 12. A 250 loop circular coil of area 16π^2 cm^2 rotates at 100 rev/s in a uniform magnetic field of 0.5 T. The rms voltage output of the generator is nearly
(A) 200\(\sqrt {2}\) V (B) 20\(\sqrt {2}\) V (C) 400 V (D) 2\(\sqrt {2}\) MV.
(A) 200\(\sqrt {2}\) V
Question 13. Two tightly wound solenoids have the same length and circular cross-sectional area, but the wire of solenoid 1 is half as thick as that of solenoid 2. The ratio of their inductances is
(A) \(\frac{1}{4}\) (B) \(\frac{1}{2}\) (C) 2 (D) 4
(D) 4
Question 14. The wire of a tightly wound solenoid is unwound and used to make another tightly wound solenoid of twice the diameter. The inductance changes by a factor of
(A) 4 (B) 2 (C) \(\frac{1}{2}\) (D) \(\frac{1}{4}\)
(B) 2
Question 15. The back emf of a dc motor is 108 V when it is connected to a 120 V line and reaches full speed against its normal load. What will be its back emf if a change in load causes the motor to run at half speed ?
(A) 66 V (B) 12 V (C) 60 V (D) 54 V
(D) 54 V
Question 16. A single rectangular loop of wire, of dimensions 0.8 m × 0.4 m and resistance 0.2 Ω, is in a region of uniform magnetic field of 0.5 T in a plane perpendicular to the field. It is pulled along its length at a constant velocity of 5 m/s. Once one of its shorter sides is just outside the field, the force required to pull the loop out of the field is
(A) 0.2 N (B) 0.5 N (C) 1 N (D) 2 N.
(C) 1 N
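Several of the multiple-choice questions above (e.g. Questions 10 and 11) rest on the peak emf of a rotating-coil generator, e0 = NABω. A sketch checking Question 10; variable names are ours:

```julia
# Peak emf of a rotating N-turn coil: e0 = N·A·B·ω  ⇒  ω = e0/(N·A·B)
N, A, B, e0 = 300, 0.20^2, 0.7, 210.0   # data of MCQ Question 10
ω = e0 / (N * A * B)                    # 25 rad/s
f = ω / (2π)                            # ≈ 3.98, i.e. about 4 rev/s
```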
Question 17. A pivoted bar with slots falls through a magnetic field. The bar falls the quickest if it is made of [Assume identical plate and slot dimensions. Ignore air resistance.]
(A) copper (B) a ferromagnetic material (C) aluminium (D) plastic
(D) plastic
Question 18. Eddy currents are also called
(A) Maxwell currents (B) Faraday currents (C) displacement currents (D) Foucault currents
(D) Foucault currents
Question 19. At a given instant, the current and self-induced emf (e) in a 12 H inductor are directed as shown. If e = 60 V, which of the following is true?
(A) The current is increasing at 2 A/s. (B) The current is decreasing at 5 A/s. (C) The current is increasing at 5 A/s. (D) The current is decreasing at 6 A/s.
(C) The current is increasing at 5 A/s.
Question 20. A metal ring is placed in a region of uniform magnetic field such that the plane of the ring is perpendicular to the direction of the field. The field strength is increasing at a constant rate. Which of the following graphs best shows the variation with time t of the induced current I in the ring ?
Question 21. At a given instant, the current through a 60 mH inductor is 50 mA and increasing at 100 mA/s. The energy stored at that instant is
(A) 150 µJ (B) 75 µJ (C) 0.6 mJ (D) 0.3 mJ
(B) 75 µJ
Question 22. The magnetic field within an air-cored solenoid is 0.8 T. If the solenoid is 40 cm long and 2 cm in diameter, the energy stored in its magnetic field is
(A) 32 J (B) 3.2 J (C) 6.4 kJ (D) 64 kJ
(A) 32 J
Question 23. The adjacent graph shows the induced emf E against time of a coil rotated in a uniform magnetic field at a certain frequency. If the frequency of rotation is reduced to one half of its initial value, which one of the following graphs correctly shows the new variation of the induced emf with time? [All the graphs are drawn to the same scale.]
Question 24. A transformer has a 320 turns primary coil and a 120 turns secondary coil. Which of the following statements is true ?
(A) It changes current by a factor of \(\frac{8}{3}\) and is a step-up transformer. (B) It is a step-down transformer and changes current by a factor of \(\frac{8}{3}\). (C) It changes current by a factor of \(\frac{8}{3}\) and is a step-up transformer. (D) It is a step-down transformer and changes current by a factor of \(\frac{8}{3}\).
(B) It is a step-down transformer and changes current by a factor of \(\frac{8}{3}\).
Question 25. Input power at 11000 V is fed to a step-down transformer which has 4000 turns in its primary winding. In order to get output power at 220 V, the number of turns in the secondary must be
(A) 20 (B) 80 (C) 400 (D) 800.
(B) 80
{"url":"https://maharashtraboardsolutions.guru/maharashtra-board-class-12-physics-important-questions-chapter-12/","timestamp":"2024-11-10T02:47:10Z","content_type":"text/html","content_length":"222165","record_id":"<urn:uuid:6002be0c-f61e-4830-a471-25bef566ddcc>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00674.warc.gz"}
Transformation • Genstat v21
Select menu: Data | Transformations
This provides a range of standard transformations for data.
1. After you have imported your data, from the menu select Data | Transformations.
Available data
This lists variates that can be used for Data and Save In. Double-click a name to copy it into the current input field; alternatively, you can just type it in from the keyboard.
A number of standard transformations are available, as listed below. The equations use X to represent the data variate, and may involve one or two additional scalar constants, denoted by c and m, values for which must be specified.
Data
Specifies a variate containing data to be transformed.
Transformation: Genstat expression
Linear: m * (X + c)
Power: (X + c)**m
Square root: (X + c)**0.5
Log (base 10): log10(X + c)
Log (base e): log(X + c)
Antilog (base 10): 10**X - c
Antilog (base e): exp(X) - c
Exponential: exp(X + c)
Logit: log(X / (c - X))
Inverse logit: c / (1 + exp(-X))
Double log: log(-log(X/c))
Inverse double log: c * exp(-exp(X))
Complementary log-log: log(-log(1 - (X/c)))
Inverse complementary log-log: c * (1 - exp(-exp(X)))
Accumulation: value in unit i of the result is the sum of the first i units of X
Differencing: value in unit i of the result is set to x[i] - x[i-c]
Save in
Specifies a data structure to contain the transformed values.
Parameters in equation
Lets you specify values for the additional constants, m and c, as appropriate.
Display in spreadsheet
Lets you display the results in a spreadsheet. You can select the sheet from the list of current open spreadsheets or request a new spreadsheet be created. Note: the number of rows of the spreadsheet must match the length of the results formed by the calculation, otherwise a new sheet will be used.
Action Icons
Pin: Controls whether to keep the dialog open when you click Run. When the pin is down, the dialog remains open after running.
Restore: Restore names into edit fields and default settings.
Clear: Clear all fields and list boxes.
Help: Open the Help topic for this dialog.
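These transformations are one-liners in any language. A sketch in Julia of three of them, logit, inverse logit, and differencing, just to make the formulas above concrete (the function names are ours, not Genstat's):

```julia
logit(x, c) = log(x / (c - x))          # log(X / (c - X))
inv_logit(x, c) = c / (1 + exp(-x))     # c / (1 + exp(-X))

# Differencing: unit i of the result is x[i] - x[i-c]; the first c units have
# no predecessor, so we mark them as missing here (Genstat's handling may differ).
difference(x, c) = [i > c ? x[i] - x[i-c] : missing for i in eachindex(x)]

inv_logit(logit(0.3, 1.0), 1.0)         # ≈ 0.3 (round trip)
difference([1.0, 4.0, 9.0, 16.0], 1)    # [missing, 3.0, 5.0, 7.0]
```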
{"url":"https://genstat21.kb.vsni.co.uk/knowledge-base/transformation/","timestamp":"2024-11-11T20:54:00Z","content_type":"text/html","content_length":"42688","record_id":"<urn:uuid:904986ec-85e6-4843-8aa2-53b851869eb5>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00816.warc.gz"}
1 Phase Power vs 3 Phase Power - TheElectricalGuy
1 Phase Power vs 3 Phase Power
Gaurav J
Electrical energy is generated, transmitted and distributed in the form of three phase power. Homes and small premises are connected with single phase power. But most of the time you will find that 3 phase power is preferred over 1 phase power, and 3 phase machines are more efficient than 1 phase ones. But why is it like that? Why is 3 phase power preferred over 1 phase, and what is the basic difference between 3 phase and 1 phase power? If you want to get the answers and other details about 1 phase and 3 phase power in the easiest way, I would recommend you to watch the video.
1 Phase Power
To understand the concept of 3 phase power, we first must understand 1 phase power. So, consider that we have a generator with one winding and two terminals "A" and "A1", and a permanent magnet rotating inside the generator. Some external force, let's say a turbine, is driving the magnet. When this magnet starts rotating, a sinusoidal voltage will get induced across the terminals of the winding. If you connect a resistor across the terminals of the winding, the resistor will start taking current. And if we draw the waveform for the current, it will look like this.
If you observe the waveforms, you'll find that the current is in phase with the voltage. In phase simply means both voltage and current,
• Start at the same time
• Reach their peak at the same time
• And get to zero at the same time.
Now, to get the instantaneous power I'll simply multiply the voltage by the current. And the resultant power waveform will look like this.
The point highlighted in the above waveform is the peak power point, i.e. the power at this point is highest. The funny thing about 1 phase power is that the average power is one half of its peak value. You can do the math and find that if the peak power is 2P then the average power is P only. Also, the power output is not constant.
2 Phase Power
Now, let's say that in the previous single phase generator I added one more winding, with terminals "B" and "B1", as shown. Please note, the winding B – B1 is displaced 90 deg from winding A – A1. The interesting part of this arrangement is that, when the magnet is in the initial position shown above, we can observe two things.
• First, the voltage across winding A – A1 is maximum, or we can say it is at its peak.
• And second, the voltage across winding B – B1 is almost zero.
This is because, when the magnet is in the initial position, the flux only cuts across the conductors in slots A & A1. So, of course, when the magnet rotates 90 deg mechanically,
• The voltage across winding B – B1 will reach its positive peak
• And the voltage across winding A – A1 will become zero, as shown below.
Therefore, we can say that these two voltages are out of phase by 90 deg. This simply means that one voltage will reach its peak value 90 deg before the other. Now, let me connect identical resistors across both the windings. Currents Ir1 and Ir2 will start flowing through the resistors. These currents are in phase with their respective voltages, and hence they are also out of phase with each other by 90 deg. The instantaneous power of both the resistors will be as shown. Here you can notice that when the power output of resistor R1 is zero, the power output of resistor R2 is maximum, and vice versa.
If we add the instantaneous power of both the phases, we'll find that the resultant power is constant and equal to the peak power "Pm" of one phase, as shown in the above figure. In conclusion, the power output of a 2 phase generator is constant and better than that of the 1 phase generator.
3 Phase Power
Now, you might have already got why we use 3 phase power, but still, to make things more clear, we'll have a look at 3 phase power. Again, consider our 2-phase generator, but this time we'll add one more winding C – C1 and we'll place these 3 windings 120 deg apart from each other as shown. When the magnet starts rotating, an identical voltage will get induced across all the 3 windings. As we have placed the windings 120 deg apart from each other, the voltages induced in the 3 phases will also be out of phase by 120 deg. Or simply, each phase voltage will reach its peak value 120 deg after the previous one. Let me make things more clear.
When the magnet is in the position shown, the voltage across winding A-A1 is maximum. Position 1
When the magnet rotates by 120 deg, the voltage across winding B-B1 is maximum. Position 2
Similarly, when the magnet rotates by 240 deg from its initial position, the voltage across winding C-C1 gets maximum. Position 3
Now, if we connect identical resistors across all the 3 windings, currents Ir1, Ir2 and Ir3 will start flowing through the resistors. These currents are in phase with their respective voltages and hence they are also out of phase with each other by 120 deg. Again, doing the same procedure we did for the 2-phase generator, we'll find that the power output we get by adding the instantaneous power of all 3 phases is constant, and it is 1.5 times the peak power Pm of one phase. So, here we have achieved constant and more power by simply adding two extra windings to a single phase generator. And this is the reason why 3 phase power is preferred over 1 phase power.
So, to summarise this topic:
1. The 1 phase power output is not constant and the average power is half of its maximum power.
2. The 2 phase power output is constant at every instant and the average power is equal to the maximum power of one phase.
3. The 3 phase power output is also constant at every instant and the average power is 1.5 times the maximum power of one phase (see the sketch below).
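The claim that the three instantaneous phase powers sum to a constant 1.5 × Pm can be verified in a few lines. A sketch assuming unit amplitudes and a resistive load; all names are ours:

```julia
# Instantaneous power in each phase of a balanced 3-phase resistive load.
# v_k(t) = sin(ωt - k·2π/3), i_k = v_k/R  ⇒  p_k = v_k²/R, with peak Pm = 1/R.
ω, R = 2π * 50, 1.0
p(t, k) = sin(ω * t - k * 2π / 3)^2 / R

total(t) = sum(p(t, k) for k in 0:2)

# total(t) equals 1.5·Pm for every t, up to floating-point error:
extrema(total.(0:1e-4:0.04))    # ≈ (1.5, 1.5)
```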
{"url":"https://www.theelectricalguy.in/tutorials/1-phase-power-vs-3-phase-power/","timestamp":"2024-11-12T20:07:15Z","content_type":"text/html","content_length":"238206","record_id":"<urn:uuid:50d9b303-1f8e-4d27-a2ac-3707633f9aec>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00765.warc.gz"}
StochasticDelayDiffEq having trouble with stiff models [solved]
I am using SciML to try to integrate a stiff model with small noise. It is giving me tons of grief though, and I am not sure how to approach it. I even tried setting tolerances to very small values & increasing maxiters to several orders of magnitude above the default.
As a test, I attempted to integrate it using zero noise, since I know the deterministic system can be integrated stably. However, as soon as I switch from the deterministic system with AutoVern7(Rodas5()) to the system with zero noise amplitude using ImplicitRKMil() or SKenCarp(), it interrupts the integration.
With ImplicitRKMil() I get an error like:
Interrupted. Larger maxiters is needed.
With SKenCarp():
At t=1843.1833699767562, dt was forced below floating point epsilon 1.1758922554404307e-13, and step error estimate = 0.20233022763024602. Aborting. There is either an error in your model specification or the true solution is unstable.
Here is an MWE. (The definitions of the constants CNORM, delta, eps1 and eps2 and the solve calls were lost from the original post; the placeholders and reconstructed lines are marked in comments.)

```julia
using DifferentialEquations
using StochasticDelayDiffEq
using Plots

const w = [-0.0974439293080174, -0.0972907033232282, 0.05008580780639362,
           31.382671443379348, 1.1925811288818222, 45.142984062837634]

# placeholder values; the original definitions were not shown
const CNORM, delta, eps1, eps2 = 1.0, 1.0, 1.0, 1.0

function eqns!(du, u, h, p, t)
    u1 = u[1]
    u2 = u[2]
    du[1] = CNORM * (u2 - delta * cos(2 * pi * t + pi / 2))
    du[2] = CNORM * (1.0 - eps1 * u2 - exp(u1) - eps2 * exp(u1) * u2)
end

function g!(du, u, h, p, t)   # zero noise amplitude
    du[1] = 0.0
    du[2] = 0.0
end

h_ = function (p, t)
    [-5.7734959978E+01, 4.6557062215E-01]
end

# this works fine
stable_problem = DDEProblem(eqns!, h_(0, 0), h_, (0.0, 5000), (6.08, 0);
                            constant_lags = (w[4], w[5], w[6]), dependent_lags = [])
sol = solve(stable_problem, MethodOfSteps(AutoVern7(Rodas5())))  # solve call reconstructed
png(plot(sol), "stable.png")  # these results are sensible

# this fails at time t=1843.1833... with a "dt forced below floating point
# epsilon" warning; note that the noise amplitude is zero, and if I increase
# the noise slightly, it is the same thing
same_problem = SDDEProblem(eqns!, g!, h_(0, 0), h_, (0.0, 5000), (6.08, 0);
                           constant_lags = (w[4], w[5], w[6]), dependent_lags = [])
sol2 = solve(same_problem, SKenCarp())  # solve call reconstructed

# this tells me I need larger maxiters and then fails also
sol2 = solve(same_problem, ImplicitRKMil())  # solve call reconstructed
png(plot(sol2), "unstable.png")

# if I decrease the integration time to 1000 this will produce some sort of
# nonempty figure, but it's way off
sol3 = solve(remake(same_problem; tspan = (0.0, 1000.0)), ImplicitRKMil())  # reconstructed
png(plot(sol3), "unstable2.png")
```

Any ideas?
Solved by using the SROCK1() solver, per Solving stiff SDDE problems · Issue #1 · SciML/StochasticDelayDiffEq.jl · GitHub
Just a quick final update: the implicit solvers appear to be working with StochasticDelayDiffEq. I just had to set dtmin a bit lower to avoid the error. Actually, they work better than SROCK1 for this case.
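For reference, a sketch of the two fixes mentioned in this thread, expressed against the MWE above. The dt and dtmin values are illustrative, not the ones the poster actually used; SROCK-type methods are stabilized explicit integrators and typically need a user-supplied step size:

```julia
# Fix 1: stabilized explicit method for stiff SDDEs (per the linked GitHub issue)
sol_srock = solve(same_problem, SROCK1(), dt = 1e-3)

# Fix 2 (the final update): implicit solver with a smaller dtmin, so the
# adaptive stepper may take tiny steps instead of aborting
sol_impl = solve(same_problem, ImplicitRKMil(), dtmin = 1e-16)
```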
{"url":"https://discourse.julialang.org/t/stochasticdelaydiffeq-having-trouble-with-stiff-models-solved/121725","timestamp":"2024-11-08T19:00:19Z","content_type":"text/html","content_length":"26493","record_id":"<urn:uuid:4c676552-f9ba-4ad1-80dc-1975a2e12c90>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00243.warc.gz"}
Dagstuhl Seminar Proceedings 08261: Structure-Based Compression of Complex Massive Data
Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Seminar Proceedings (ISSN 1862-4405), published 2008-11-20.

Contents:
1. 08261 Abstracts Collection – Structure-Based Compression of Complex Massive Data
2. 08261 Executive Summary – Structure-Based Compression of Complex Massive Data
3. A Rewrite Approach for Pattern Containment – Application to Query Evaluation on Compressed Documents
4. A Space-Saving Approximation Algorithm for Grammar-Based Compression
5. An Efficient Algorithm to Test Square-Freeness of Strings Compressed by Balanced Straight Line Program
6. An In-Memory XQuery/XPath Engine over a Compressed Structured Text Representation
7. Clone Detection via Structural Abstraction
8. Compression vs Queryability – A Case Study
9. Optimizing XML Compression in XQueC
10. Storage and Retrieval of Individual Genomes
11. SXSAQCT and XSAQCT: XML Queryable Compressors
12. The XQueC Project: Compressing and Querying XML

1. 08261 Abstracts Collection – Structure-Based Compression of Complex Massive Data
Böttcher, Stefan; Lohrey, Markus; Maneth, Sebastian; Rytter, Wojciech. DOI: 10.4230/DagSemProc.08261.1.
From June 22, 2008 to June 27, 2008 the Dagstuhl Seminar 08261 "Structure-Based Compression of Complex Massive Data" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
Keywords: data compression; algorithms for compressed strings and trees; XML compression.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.1/DagSemProc.08261.1.pdf

2. 08261 Executive Summary – Structure-Based Compression of Complex Massive Data
Böttcher, Stefan; Lohrey, Markus; Maneth, Sebastian; Rytter, Wojciech. DOI: 10.4230/DagSemProc.08261.2.
From 22 June to 27 June 2008, the Dagstuhl Seminar 08261 "Structure-Based Compression of Complex Massive Data" took place at the Conference and Research Center (IBFI) in Dagstuhl. 22 researchers with interests in the theory and application of compression and computation on compressed structures met to present their current work and to discuss future directions.
Keywords: compression; succinct data structure; pattern matching; text search; XML query.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.2/DagSemProc.08261.2.pdf

3. A Rewrite Approach for Pattern Containment – Application to Query Evaluation on Compressed Documents
Fila-Kordy, Barbara. DOI: 10.4230/DagSemProc.08261.3.
In this paper we introduce an approach for handling the containment problem for the fragment XP(/, //, [ ], *) of XPath. Using rewriting techniques, we define a necessary and sufficient condition for pattern containment. This rewrite view is then adapted to query evaluation on XML documents, and remains valid even if the documents are given in a compressed form, as dags.
Keywords: pattern containment; compressed documents.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.3/DagSemProc.08261.3.pdf

4. A Space-Saving Approximation Algorithm for Grammar-Based Compression
Sakamoto, Hiroshi. DOI: 10.4230/DagSemProc.08261.4.
A space-efficient approximation algorithm for the grammar-based compression problem, which asks, for a given string, to find a smallest context-free grammar deriving the string, is presented. For input length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of logarithms satisfying log log ... log n > 1. This ratio can thus be regarded as almost O(log n), which is currently the best approximation ratio. While g depends on the string, it is known that g = Ω(log n) and g = O(n/log_k n) for strings from a k-letter alphabet [12].
Keywords: grammar-based compression; space-efficient approximation.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.4/DagSemProc.08261.4.pdf

5. An Efficient Algorithm to Test Square-Freeness of Strings Compressed by Balanced Straight Line Program
Matsubara, Wataru; Inenaga, Shunsuke; Shinohara, Ayumi. DOI: 10.4230/DagSemProc.08261.5.
In this paper we study the problem of deciding whether a given compressed string contains a square. A string x is called a square if x = zz and z = u^k implies k = 1 and u = z. A string w is said to be square-free if no substrings of w are squares. Many efficient algorithms to test if a given string is square-free have been developed so far. However, very little is known about testing square-freeness of a given compressed string. In this paper, we give an O(max(n^2, n log^2 N))-time, O(n^2)-space solution to test square-freeness of a given compressed string, where n and N are the sizes of the given compressed string and the corresponding decompressed string, respectively. Our input strings are compressed by a balanced straight line program (BSLP). We remark that BSLPs allow exponential compression, that is, N = O(2^n). Hence no decompress-then-test approach can be better than our method in the worst case.
Keywords: square-freeness; straight line program.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.5/DagSemProc.08261.5.pdf

6. An In-Memory XQuery/XPath Engine over a Compressed Structured Text Representation
Bonifati, Angela; Leighton, Gregory; Mäkinen, Veli; Maneth, Sebastian; Navarro, Gonzalo; Pugliese, Andrea. DOI: 10.4230/DagSemProc.08261.6.
We describe the architecture and main algorithmic design decisions for an XQuery/XPath processing engine over XML collections which will be represented using a self-indexing approach, that is, a compressed representation that will allow for basic searching and navigational operations in compressed form. The goal is a structure that occupies little space and thus permits manipulating large collections in main memory.
Keywords: compressed self-index; compressed XML representation; XPath; XQuery.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.6/DagSemProc.08261.6.pdf

7. Clone Detection via Structural Abstraction
Evans, William S.; Fraser, Christopher W.; Ma, Fei. DOI: 10.4230/DagSemProc.08261.7.
This paper describes the design, implementation, and application of a new algorithm to detect cloned code. It operates on the abstract syntax trees formed by many compilers as an intermediate representation. It extends prior work by identifying clones even when arbitrary subtrees have been changed. On a 440,000-line code corpus, 20-50% of the clones it detected were missed by previous methods. The method also identifies cloning in declarations, so it is somewhat more general than conventional procedural abstraction.
Keywords: clone detection.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.7/DagSemProc.08261.7.pdf

8. Compression vs Queryability – A Case Study
Anantharaman, Siva. DOI: 10.4230/DagSemProc.08261.8.
Some compromise on compression is known to be necessary if the relative positions of the information stored by semi-structured documents are to remain accessible under queries. With this in view, we compare, on an example, the 'query-friendliness' of XML documents when compressed into straight-line tree grammars which are either regular or context-free. The queries considered are in a limited fragment of XPath, corresponding to a type of patterns; each such query naturally defines a non-deterministic, bottom-up 'query automaton' that runs just as well on a tree as on its compressed dag.
Keywords: tree automata; tree grammars; dags; XML documents; queries.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.8/DagSemProc.08261.8.pdf

9. Optimizing XML Compression in XQueC
Arion, Andrei; Bonifati, Angela; Manolescu, Ioana; Pugliese, Andrea. DOI: 10.4230/DagSemProc.08261.9.
We present our approach to the problem of optimizing compression choices in the context of the XQueC compressed XML database system. In XQueC, data items are aggregated into containers, which are further grouped to be compressed together. This way, XQueC is able to exploit data commonalities and to perform query evaluation in the compressed domain, with the aim of improving both compression and querying performance. However, different compression algorithms have different performance and support different sets of operations in the compressed domain. Therefore, choosing how to group containers and which compression algorithm to apply to each group is a challenging issue. We address this problem through an appropriate cost model and a suitable blend of heuristics which, based on a given query workload, are capable of driving appropriate compression choices.
Keywords: XML compression.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.9/DagSemProc.08261.9.pdf

10. Storage and Retrieval of Individual Genomes
Mäkinen, Veli; Navarro, Gonzalo; Sirén, Jouni; Välimäki, Niko. DOI: 10.4230/DagSemProc.08261.10.
A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection is plausible using suffix trees. However, a suffix tree occupies O(N log N) bits, which very soon inhibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of the suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example on the suffix tree of the Human Genome. However, this reduction remains a constant factor when more sequences are added to the collection. We develop a new self-index suited for the repetitive sequence collection setting. Its expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction is no longer constant, but depends on N/n. We believe the structure developed in this work will provide a fundamental basis for the storage and retrieval of individual genomes as they become available due to rapid progress in sequencing technologies.
Keywords: pattern matching; text indexing; compressed data structures; comparative genomics.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.10/DagSemProc.08261.10.pdf

11. SXSAQCT and XSAQCT: XML Queryable Compressors
Müldner, Tomasz; Fry, Christopher; Miziolek, Jan Krzysztof; Durno, Scott. DOI: 10.4230/DagSemProc.08261.11.
Recently, there has been a growing interest in queryable XML compressors, which can be used to query compressed data with minimal decompression, or even without any decompression. At the same time, there are very few such projects which have been made available for testing and comparisons. In this paper, we report our current work on two novel queryable XML compressors: a schema-based compressor, SXSAQCT, and a schema-free compressor, XSAQCT. While the work on both compressors is in its early stage, our experiments (reported here) show that our approach may successfully compete with other known queryable compressors.
Keywords: XML compression; queryable.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.11/DagSemProc.08261.11.pdf

12. The XQueC Project: Compressing and Querying XML
Arion, Andrei; Bonifati, Angela; Manolescu, Ioana; Pugliese, Andrea. DOI: 10.4230/DagSemProc.08261.12.
We outline in this paper the main contributions of the XQueC project. XQueC, namely XQuery processor and Compressor, is the first compression tool to seamlessly allow XQuery queries in the compressed domain. It includes a set of data structures that basically shred the XML document into suitable chunks linked to each other, thus disagreeing with the 'homomorphic' principle so far adopted in previous XML compressors. According to this principle, the compressed document is homomorphic to the original document. Moreover, in order to avoid the time consumption due to compressing and decompressing intermediate query results, XQueC applies 'lazy' decompression by issuing the queries directly in the compressed domain.
Keywords: XML compression; data structures; XQuery querying.
PDF: https://drops.dagstuhl.de/storage/16dagstuhl-seminar-proceedings/dsp-vol08261/DagSemProc.08261.12/DagSemProc.08261.12.pdf
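As an aside to item 5 above: the definition of a square is easy to state directly in code. The following naive check (my illustration, not the paper's algorithm) runs on the decompressed string in polynomial time; the paper's contribution is doing this test without decompressing.

```python
def has_square(w: str) -> bool:
    """Return True if w contains a substring of the form zz."""
    n = len(w)
    for i in range(n):
        # Try every half-length that still fits inside the string.
        for half in range(1, (n - i) // 2 + 1):
            if w[i:i + half] == w[i + half:i + 2 * half]:
                return True
    return False

print(has_square("abcabc"))  # True: "abcabc" = ("abc")^2
print(has_square("abcacb"))  # False: square-free
```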
{"url":"https://drops.dagstuhl.de/entities/volume/DagSemProc-volume-8261/metadata/doaj-xml","timestamp":"2024-11-10T09:31:45Z","content_type":"application/xml","content_length":"23916","record_id":"<urn:uuid:a5203c88-c26f-4f40-8c5c-b83429a53ee7>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00704.warc.gz"}
Ubiquity symposium: Computation and Computational Thinking

The Need for Clear Definitions

In any scientific discipline there are many reasons to use terms that have precise definitions. Understanding the terminology of a discipline is essential to learning a subject, and precise terminology enables us to communicate ideas clearly with other people. In computer science the problem is even more acute: we need to construct software and hardware components that must smoothly interoperate across interfaces with clients and other components in distributed systems. The definitions of these interfaces need to be precisely specified for interoperability and good systems design.

Using the term "computation" without qualification often generates a lot of confusion. Part of the problem is that the nature of systems exhibiting computational behavior is varied, and the term computation means different things to different people depending on the kinds of computational systems they are studying and the kinds of problems they are investigating. Since computation refers to a process that is defined in terms of some underlying model of computation, we would achieve clearer communication if we made clear what the underlying model is. Rather than talking about a vague notion of "computation," my suggestion is to use the term in conjunction with a well-defined model of computation whose semantics is clear and which matches the problem being investigated. Computer science already has a number of useful, clearly defined models of computation whose behaviors and capabilities are well understood. We should use such models as part of any definition of the term computation. However, for new domains of investigation where there are no appropriate models, it may be necessary to invent new formalisms to represent the systems under study.

Computational Thinking

We consider computational thinking to be the thought processes involved in formulating problems so their solutions can be represented as computational steps and algorithms. An important part of this process is finding appropriate models of computation with which to formulate the problem and derive its solutions. A familiar example would be the use of finite automata to solve string pattern matching problems. A less familiar example might be the quantum circuits and order-finding formulation that Peter Shor used to devise an integer-factoring algorithm that runs in polynomial time on a quantum computer.

Associated with the basic models of computation in computer science is a wealth of well-known algorithm-design and problem-solving techniques that can be used to solve common problems arising in computing. However, as the computer systems we wish to build become more complex and as we apply computer science abstractions to new problem domains, we discover that we do not always have the appropriate models to devise solutions. In these cases, computational thinking becomes a research activity that includes inventing appropriate new models of computation. Corrado Priami and his colleagues at the Centre for Computational and Systems Biology in Trento, Italy, have been using process calculi as a model of computation to create programming languages to simulate biological processes. Priami states that "the basic feature of computational thinking is abstraction of reality in such a way that the neglected details in the model make it executable by a machine."
[Priami, 2007]

As we shall see, finding or devising appropriate models of computation to formulate problems is a central and often nontrivial part of computational thinking.

Forces at Play

In the last half century, what we think of as a computational system has expanded dramatically. In the earliest days of computing, a computer was an isolated machine with limited memory to which programs were submitted one at a time to be compiled and run. Today, in the Internet era, we have networks consisting of millions of interconnected computers, and as we move into cloud computing, many foresee a global computing environment with billions of clients having universal on-demand access to computing services and data hosted in gigantic data centers located around the planet. Anything from a PC or a phone or a TV or a sensor can be a client, and a data center may consist of hundreds of thousands of servers. Needless to say, the models for studying such a universally accessible, complex, highly concurrent distributed system are very different from the ones for a single isolated computer.

Another force at play is that, because of heat dissipation considerations, the architecture of computers is changing. An ordinary PC today has many different computing elements, such as multicore chips and graphics processing units, and an exascale supercomputer by the end of this decade is expected to be a giant parallel machine with up to a million nodes, each with possibly a thousand processors. Our understanding of how to write efficient programs for these machines is limited. Good models of parallel computation and parallel algorithm design techniques are a vital open research area for effective parallel computing.

In addition, there is increasing interest in applying computation to studying virtually all areas of human endeavor. One fascinating example is simulating the highly parallel biological processes found in human cells and organs for the purposes of understanding disease and drug design. Good computational models for biological processes are still in their infancy. And it is not clear we will ever be able to find a computational model for the human brain that would account for emergent phenomena such as consciousness or intelligence.

The Theory of Computation

The theory of computation has been and still is one of the core areas of computer science. It explores the fundamental capabilities and limitations of models of computation. A model of computation is a mathematical abstraction of a computing system. The most important model of sequential computation studied in computer science is the Turing machine, first proposed by Alan Turing in 1936.

Let us briefly review the definition of a Turing machine to appreciate the detail necessary to understand even this familiar model of computation. We can think of a Turing machine as a finite-state control attached to a tape head that can read and write symbols on the squares of a semi-infinite tape. Initially, a finite string of length n representing the input is in the leftmost n squares of the tape. An infinite sequence of blanks follows the input string. The tape head is reading the symbol in the leftmost square and the finite control is in a predefined initial state. The Turing machine then makes a sequence of moves.
In a move, it reads the symbol on the tape under the tape head and consults a transition table in the finite-state control, which specifies a symbol to be overprinted on the square under the tape head, a direction the tape head is to move (one square to the left or right), and a state to enter next. If the Turing machine enters an accepting halting state (one with no next move), the string of nonblank symbols remaining on the input tape at that point in time is its output.

Mathematically, a Turing machine consists of seven components: a finite set of states; a finite input alphabet (not containing the blank); a finite tape alphabet (which includes the input alphabet and the blank); a transition function that maps a state and a tape symbol into a state, tape symbol, and direction (left or right); a start state; an accept state from which there are no further moves; and a reject state from which there are no further moves.

We can characterize the configuration of a Turing machine at a given moment in time by three quantities:
1. the state of the finite-state control,
2. the string of nonblank symbols on the tape, and
3. the location of the input head on the tape.

A computation of a Turing machine on an input w is a sequence of configurations the machine can go through, starting from the initial configuration with w on the tape and terminating (if the computation terminates) in a halting configuration. We say a function f from strings to strings is computable if there is some Turing machine M that, given any input string w, always halts in the accepting state with just f(w) on its tape. We say that M computes f. The Turing machine provides a precise definition for the term algorithm: an algorithm for a function f is just a Turing machine that computes f.

There are scores of models of computation that are equivalent to Turing machines, in the sense that these models compute exactly the same set of functions that Turing machines can compute. Among these Turing-complete models of computation are multitape Turing machines, lambda calculus, random access machines, production systems, cellular automata, and all general-purpose programming languages. The reason there are so many different models of computation equivalent to Turing machines is that we rarely want to implement an algorithm as a Turing machine program; we would like to use a computational notation, such as a programming language, that is easy to write and easy to understand. But no matter what notation we choose, the famous Church-Turing thesis hypothesizes that any function that can be computed can be computed by a Turing machine. Note that if there is one algorithm to compute a function f, then there are infinitely many. Much of computer science is devoted to finding efficient algorithms to compute a given function.

For clarity, we should point out that we have defined a computation as a sequence of configurations a Turing machine can go through on a given input. This sequence could be infinite if the machine does not halt, or one of a number of possible sequences in case the machine is nondeterministic.

The reason we went through this explanation is to point out how much detail is involved in precisely defining the term computation for the Turing machine, one of the simplest models of computation. It is not surprising, then, that as we move to more complex models, the amount of effort needed to precisely formulate computation in terms of those models grows substantially.
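To make the seven-component definition concrete, here is a minimal interpreter in Python. This is a sketch of my own rather than anything from the article, and the bit-flipping machine at the end is an invented example:

```python
# A minimal deterministic Turing-machine interpreter mirroring the
# definition above: states, tape alphabet, transition function delta,
# a start state, and accept/reject states with no further moves.

def run_turing_machine(delta, start, accept, reject, tape, blank="_"):
    """delta maps (state, symbol) -> (new_state, write, direction),
    with direction in {"L", "R"}. Returns (final state, output)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    state, head = start, 0
    while state not in (accept, reject):
        symbol = cells.get(head, blank)
        state, write, direction = delta[(state, symbol)]
        cells[head] = write
        head += 1 if direction == "R" else -1
    # Output: the nonblank symbols left on the tape, in order.
    output = "".join(cells[i] for i in sorted(cells) if cells[i] != blank)
    return state, output

# Invented example machine: flip every bit, accept at the first blank.
delta = {
    ("flip", "0"): ("flip", "1", "R"),
    ("flip", "1"): ("flip", "0", "R"),
    ("flip", "_"): ("accept", "_", "R"),
}
print(run_turing_machine(delta, "flip", "accept", "reject", "0110"))
# -> ('accept', '1001')
```

Even this toy illustrates the article's point: a precise notion of "computation" only emerges once every component of the underlying model is pinned down.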
Concurrent Models

Many real-world computational systems compute more than just a single function—the world has moved to interactive computing [Goldin, Smolka, Wegner, 2006]. The term reactive system is used to describe a system that maintains an ongoing interaction with its environment. Examples of reactive systems include operating systems and embedded systems. A distributed system is one that consists of autonomous computing systems that communicate with one another through some kind of network using message passing. Examples of distributed systems include telecommunications systems, the Internet, air-traffic control systems, and parallel computers. Many distributed systems are also reactive systems.

Perhaps the most intriguing examples of reactive distributed computing systems are biological systems such as cells and organisms. We could even consider the human brain to be a biological computing system. Formulation of appropriate models of computation for understanding biological processes is a formidable scientific challenge in the intersection of biology and computer science.

Distributed systems can exhibit behaviors such as deadlock, livelock, race conditions, and the like that cannot be usefully studied using a sequential model of computation. Moreover, solving problems such as determining the throughput, latency, and performance of a distributed system cannot be productively formulated with a single-thread model of computation. For these reasons, computer scientists have developed a number of models of concurrent computation which can be used to study these phenomena and to architect tools and components for building distributed systems.

There are many theoretical models for concurrent computation. One is the message-passing Actor model, consisting of computational entities called actors [Hewitt, Bishop, Steiger, 1973]. An actor can send and receive messages, make local decisions, create more actors, and fix the behavior to be used for the next message it receives. These actions may be executed in parallel and in no fixed order. The Actor model was devised to study the behavioral properties of parallel computing machines consisting of large numbers of independent processors communicating by passing messages through a network. Other well-studied models of concurrent computation include Petri nets and the process calculi, such as the pi-calculus and the mu-calculus. Many variants of computational models for distributed systems are being devised to study and understand the behaviors of biological systems. For example, Dematte, Priami, and Romanel [2008] describe a language called BlenX that is based on a process calculus called Beta-binders for modeling and simulating biological systems. We do not have the space to describe these concurrent models in any detail. However, it is still an open research area to find practically useful concurrent models of computation that combine control and data for many areas of distributed computing.

Benefits of Models of Computation

In addition to aiding education and understanding, there are many practical benefits to having appropriate models of computation for the systems we are trying to build. In cloud computing, for example, there are still a host of poorly understood concerns for systems of this scale. We need to better understand the architectural tradeoffs needed to achieve the desired levels of reliability, performance, scalability and adaptivity in the services these systems are expected to provide.
We do not have appropriate abstractions to describe these properties in such a way that they can be automatically mapped from a model of computation into an implementation (or the other way around). In cloud computing, there are a host of research challenges for system developers and tool builders. As examples, we need programming languages, compilers, verification tools, defect detection tools, and service management tools that can scale to the huge number of clients and servers involved in the networks and data centers of the future. Cloud computing is one important area that can benefit from innovative computational thinking.

Mathematical abstractions called models of computation are at the heart of computation and computational thinking. Computation is a process that is defined in terms of an underlying model of computation, and computational thinking is the thought processes involved in formulating problems so their solutions can be represented as computational steps and algorithms. Useful models of computation for solving problems arising in sequential computation can range from simple finite-state machines to Turing-complete models such as random access machines. Useful models of concurrent computation for solving problems arising in the design and analysis of complex distributed systems are still a subject of current research.

About the Author

Alfred V. Aho is Lawrence Gussman Professor in the Computer Science Department at Columbia University. He served as Chair of the department from 1995 to 1997, and in the spring of 2003.

The author would like to thank Peter Denning and Jeannette Wing for their thoughtful comments on the importance of computational thinking. The author is also grateful to Jim Larus for his insights into the problems confronting cloud computing and to Corrado Priami for many stimulating conversations on computational thinking in biology.

References

Dematte, L., Priami, C., and Romanel, A. The BlenX language, a tutorial. In M. Bernardo, P. Degano, and G. Zavattaro (Eds.), SFM 2008, LNCS 5016, pp. 313-365, Springer, 2008.

Denning, P. J. Beyond computational thinking. Comm. ACM, pp. 28-30, June 2009.

Goldin, D., Smolka, S., and Wegner, P. Interactive Computation: The New Paradigm. Springer, 2006.

Hewitt, C., Bishop, P., and Steiger, R. A universal modular ACTOR formalism for artificial intelligence. In Proc. of the 3rd IJCAI, pp. 235-245, Stanford, USA, 1973.

Priami, C. Computational thinking in biology. Trans. on Comput. Syst. Biol. VIII, LNBI 4780, pp. 63-76, Springer, 2007.

Shor, P. Algorithms for quantum computation: discrete logarithms and factoring. In Proc. 35th Annual Symposium on Foundations of Computer Science, IEEE Press, Los Alamitos, CA, 1994.

Turing, A. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Mathematical Society 42, pp. 230-265, 1936.

Wing, J. Computational thinking. Comm. ACM, pp. 33-35, March 2006.

DOI: 10.1145/1895419.1922682

©2011 ACM $10.00. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
{"url":"https://ubiquity.acm.org/article.cfm?id=1922682","timestamp":"2024-11-13T18:41:44Z","content_type":"text/html","content_length":"40805","record_id":"<urn:uuid:04b779ab-643c-4fb5-9155-c02d315ac616>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00773.warc.gz"}
missing-whitespace-around-arithmetic-operator (E226)

Derived from the pycodestyle linter. Fix is always available. This rule is unstable and in preview; the --preview flag is required for use.

Checks for missing whitespace around arithmetic operators. According to PEP 8, there should be one space before and after an arithmetic operator (+, -, /, and *).
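The rule page's example blocks did not survive extraction here; the following pair is a plausible reconstruction in the spirit of the rule (the exact snippets in the Ruff documentation may differ).

Example:

```python
number = 1
number = number*2 + 1  # flagged by E226: no space around * 
```

Use instead:

```python
number = 1
number = number * 2 + 1  # compliant: one space on each side
```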
{"url":"https://docs.astral.sh/ruff/rules/missing-whitespace-around-arithmetic-operator/","timestamp":"2024-11-07T04:04:32Z","content_type":"text/html","content_length":"23195","record_id":"<urn:uuid:0652fa03-5a67-4da3-8b60-e2675de46f0a>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00297.warc.gz"}
CountMin Sketch

The CountMin sketch, as described by Cormode and Muthukrishnan in http://dimacs.rutgers.edu/~graham/pubs/papers/cm-full.pdf, is used for approximate frequency estimation. For an item \(x\) with frequency \(f_x\), the sketch provides an estimate \(\hat{f_x}\) such that \(f_x \approx \hat{f_x}\). The sketch guarantees that \(f_x \le \hat{f_x}\) and provides a probabilistic upper bound which is dependent on the size parameters. The sketch provides an estimate of the occurrence frequency for any queried item but, in contrast to the Frequent Items sketch, it does not provide a list of heavy hitters.

class count_min_sketch(*args, **kwargs)

Static methods:

deserialize(bytes: bytes) -> count_min_sketch
    Reads a bytes object and returns the corresponding count_min_sketch.

suggest_num_buckets(relative_error: float) -> int
    Suggests the number of buckets needed to achieve an accuracy within the provided relative_error. For example, when relative_error = 0.05, the returned frequency estimates satisfy the relative_error guarantee: they never underestimate the weights, but may overestimate them by 5% of the total weight in the sketch. Returns the number of hash buckets at every level of the sketch required in order to obtain the specified relative error.

suggest_num_hashes(confidence: float) -> int
    Suggests the number of hashes needed to achieve the provided confidence. For example, with 95% confidence, frequency estimates satisfy the relative_error guarantee. Returns the number of hash functions required in order to achieve the specified confidence of the sketch; confidence = 1 - delta, with delta denoting the sketch failure probability.

Non-static methods:

get_estimate(item: int | str) -> float
    Returns an estimate of the frequency of the provided 64-bit integer value or string (overloaded on the item type).

get_lower_bound(item: int | str) -> float
    Returns a lower bound on the estimate for the given 64-bit integer value or string.

get_relative_error() -> float
    Returns the maximum permissible error for any frequency estimate query.

get_serialized_size_bytes() -> int
    Returns the size in bytes of the serialized image of the sketch.

get_upper_bound(item: int | str) -> float
    Returns an upper bound on the estimate for the given 64-bit integer value or string.

is_empty() -> bool
    Returns True if the sketch has seen no items, otherwise False.

merge(other) -> None
    Merges the provided other sketch into this one.

serialize() -> bytes
    Serializes the sketch into a bytes object.

to_string() -> str
    Produces a string summary of the sketch.

update(item: int | str, weight: float = 1.0) -> None
    Updates the sketch with the given 64-bit integer value or string and the given weight.

Properties:

num_buckets
    The configured number of buckets for the sketch.
num_hashes
    The configured number of hashes for the sketch.
seed
    The base hash seed for the sketch.
total_weight
    The total weight currently inserted into the stream.
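A usage sketch of the API documented above. It assumes the class is importable as datasketches.count_min_sketch and that the constructor takes (num_hashes, num_buckets), matching the properties listed; check the signature in your installed version:

```python
from datasketches import count_min_sketch

# Size the sketch using the documented helpers:
# 95% confidence and 5% relative error.
num_hashes = count_min_sketch.suggest_num_hashes(0.95)
num_buckets = count_min_sketch.suggest_num_buckets(0.05)
sketch = count_min_sketch(num_hashes, num_buckets)

# Stream in weighted items (strings or 64-bit ints, per the overloads).
for item, weight in [("apple", 3.0), ("banana", 1.0), ("apple", 2.0)]:
    sketch.update(item, weight)

print(sketch.get_estimate("apple"))      # ~5.0; never below the truth
print(sketch.get_upper_bound("banana"))  # probabilistic upper bound
print(sketch.total_weight)               # 6.0
```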
{"url":"https://apache.github.io/datasketches-python/main/frequency/count_min_sketch.html","timestamp":"2024-11-08T01:19:01Z","content_type":"text/html","content_length":"23448","record_id":"<urn:uuid:f2fad83c-e82a-4862-a737-ca3197323f0d>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00539.warc.gz"}
Work Log
December 08, 2014

Finished implementing and debugging the neuron skeleton-image-to-graph code. It was deceptively difficult; there are lots of corner cases when tracing pixels that lead to unexpected behavior. I had to totally redesign how junctions were calculated three times. I also twice refactored how previously visited pixels were recorded and how chains are split at junctions.

We define the "neighbor set" of a pixel as the set of all nonzero 8-connected neighbors, with the caveat that a diagonal neighbor is excluded when it is "blocked" by a horizontal neighbor. The idea here is that we want all paths through the graph to be unique, and "blocked" diagonal neighbors have two paths: one in which the diagonal neighbor is adjacent to the current pixel, and one in which it is adjacent to the blocking pixel. Omitting it from the neighbor set solves the problem by eliminating the extra path, while guaranteeing a path still exists that passes through the offending neighbor pixel. In practice, this prevents chains from sneaking around previously visited pixels by visiting diagonal neighbors. It is also central to the definition of a "junction" below.

We use a modified floodfill algorithm to explore the entire graph. First, we add one or more seed pixels to the queue. While the queue is not empty, we dequeue a pixel, add it to the current pixel chain, and enqueue its neighbors. We also check whether it is a junction pixel; if so, we add the current pixel chain to the chain set and begin a new empty chain. We define a junction as any pixel with a neighbor set of size three or more. If during any iteration no pixels are added to the queue, the chain has reached its end; we add it to the chain set and begin a new empty chain.

Several additional details are important. Because we allow several seed pixels, our depth-first search may encounter the seed pixels while they still exist deeper in the queue. The side effect is that the seed pixels may be added to two different chains. We resolve this by marking pixels when they are added to a chain, and if a pixel is dequeued that has already been added to a chain, we discard it.

After finding all chains, at most one endpoint is associated with a junction. We require that the junction point appear in every pixel chain that enters the junction, so some chains need their second junction point added. Recall that a chain only terminates at junctions or if it has no unclaimed neighbors. For any non-junction endpoint, any neighbors (not counting the chain's antecedent pixel) must be junctions. To prove there is at most one junction neighbor, assume an endpoint had two junction neighbors. Then the pixel would have three neighbors: the two junctions and its antecedent pixel. This implies the pixel is a junction itself, a contradiction. Thus, adding a junction to a dangling endpoint amounts to finding an unclaimed neighbor. Two adjacent junction pixels must be handled separately.

Posted by Kyle Simek
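A loose Python sketch of the procedure described above (my reconstruction from the prose, not the author's code; it glosses over some of the chain-splitting corner cases the post wrestles with). The neighbor-set rule and the junction test follow the definitions directly:

```python
from collections import deque

# img: a set of (row, col) coordinates of nonzero skeleton pixels.

def neighbor_set(img, p):
    """Nonzero 8-connected neighbors of p, except that a diagonal
    neighbor is dropped when 'blocked' by an adjacent horizontal or
    vertical neighbor, so paths through the graph stay unique."""
    y, x = p
    nbrs = set()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            q = (y + dy, x + dx)
            if (dy, dx) == (0, 0) or q not in img:
                continue
            if dy and dx and ((y + dy, x) in img or (y, x + dx) in img):
                continue  # blocked diagonal: omit from the neighbor set
            nbrs.add(q)
    return nbrs

def is_junction(img, p):
    # A junction is any pixel whose neighbor set has size three or more.
    return len(neighbor_set(img, p)) >= 3

def trace_chains(img, seeds):
    """Modified floodfill: dequeue a pixel, append it to the current
    chain, enqueue its unclaimed neighbors, splitting at junctions."""
    claimed, chains, chain = set(), [], []
    queue = deque(seeds)
    while queue:
        p = queue.popleft()
        if p in claimed:
            continue               # a seed reached twice is discarded
        claimed.add(p)
        chain.append(p)
        if is_junction(img, p):
            chains.append(chain)   # close the chain at the junction...
            chain = [p]            # ...and start a new one from it
        fresh = [q for q in neighbor_set(img, p) if q not in claimed]
        if not fresh:
            chains.append(chain)   # dead end: the chain is finished
            chain = []
        queue.extend(fresh)
    return [c for c in chains if c]
```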
{"url":"http://vision.cs.arizona.edu/ksimek/research/2014/12/08/work-log/","timestamp":"2024-11-07T03:54:02Z","content_type":"application/xhtml+xml","content_length":"9305","record_id":"<urn:uuid:9ea443ea-bd9d-4b0d-9f70-dde1254519eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00175.warc.gz"}
1600 CC to HP Calculator

The 1600 CC to HP Calculator is designed to quickly convert cubic centimeters (CC) into horsepower (HP) for engines with a displacement of 1600 CC. This tool is perfect for car enthusiasts, mechanics, and anyone needing a fast and easy way to estimate horsepower from engine displacement. The input is pre-filled with 1600 CC for convenience, and you can simply click "Calculate" to get the horsepower result.

To use this calculator:
1. The Enter CCs field is set to 1600 by default.
2. Press the Calculate button to see the converted horsepower result in the HP Result field.
3. If you wish to reset the calculator, use the Reset button to clear the result field and start over.

How To Calculate

Example 1

Step            | Calculation
Enter CCs       | 1600 CC
Divide CC by 15 | 1600 / 15 = 106.67 HP
HP Result       | 106.67 HP

In this example, the default value of 1600 CC is divided by 15, resulting in an approximate horsepower value of 106.67 HP. This calculation shows how engine displacement converts to horsepower.

Example 2

Step            | Calculation
Enter CCs       | 1200 CC
Divide CC by 15 | 1200 / 15 = 80.00 HP
HP Result       | 80.00 HP

For the second example, entering 1200 CC gives a horsepower output of 80 HP after dividing by 15. This provides a quick estimation of the engine's power.
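The conversion behind the calculator is a single line of arithmetic; a sketch of the same rule (note that the divisor 15 is the site's rule of thumb, not a physical constant, since actual horsepower depends on the engine design):

```python
def cc_to_hp(cc: float) -> float:
    """Approximate horsepower from displacement: HP ~= CC / 15."""
    return cc / 15

print(round(cc_to_hp(1600), 2))  # 106.67, as in Example 1
print(round(cc_to_hp(1200), 2))  # 80.0, as in Example 2
```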
{"url":"https://lengthcalculators.com/1600-cc-to-hp-calculator/","timestamp":"2024-11-06T18:33:43Z","content_type":"text/html","content_length":"58609","record_id":"<urn:uuid:4e5b9f9b-3e51-425f-aa98-48bfd86b5c50>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00396.warc.gz"}
Handbook of Writing for the Mathematical Sciences

Second Edition
NICHOLAS J. HIGHAM
University of Manchester, Manchester, England
Society for Industrial and Applied Mathematics, Philadelphia

Copyright ©1998 by the Society for Industrial and Applied Mathematics. All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.

Library of Congress Cataloging-in-Publication Data
Higham, Nicholas J., 1961-
Handbook of writing for the mathematical sciences / Nicholas J. Higham. -- 2nd ed.
Includes bibliographical references and indexes.
ISBN 0-89871-420-6
1. Mathematics--Authorship. 2. Technical writing. I. Title.
QA42.H54 1998
808'.06651--dc21
98-7284
siam is a registered trademark.

Contents

Preface to the Second Edition
Preface to the First Edition
1 General Principles
2 Writer's Tools and Recommended Reading
  2.1 Dictionaries and Thesauruses
  2.2 Usage and Style Guides
  2.3 Technical Writing Guides
  2.4 General Reading
  Answers to the Questions at the Start of the Chapter
3 Mathematical Writing
  3.1 What Is a Theorem?
  3.2 Proofs
  3.3 The Role of Examples
  3.4 Definitions
  3.5 Notation
  3.6 Words versus Symbols
  3.7 Displaying Equations
  3.8 Parallelism
  3.9 Dos and Don'ts of Mathematical Writing
    Punctuating Expressions
    Otiose Symbols
    Placement of Symbols
    "The" or "A"
    Notational Synonyms
    Referencing Equations
    Miscellaneous
4 English Usage
  4.1 A or An?
  4.2 Abbreviations
  4.3 Absolute Words
  4.4 Active versus Passive
  4.5 Adjective and Adverb Abuse
  4.6 -al and -age
  4.7 Ambiguous "This" and "It"
  4.8 British versus American Spelling
  4.9 Capitalization
  4.10 Common Misspellings or Confusions
  4.11 Consistency
  4.12 Contractions
  4.13 Dangling Participle
  4.14 Distinctions
  4.15 Elegant Variation
  4.16 Enumeration
  4.17 False If
  4.18 Hyphenation
  4.19 Linking Words
  4.20 Misused and Ambiguous Words
  4.21 Numbers
  4.22 Omit These Words?
  4.23 Paragraphs
  4.24 Punctuation
  4.25 Say It Better, Think It Gooder
  4.26 Saying What You Mean
  4.27 Sentence Opening
  4.28 Simplification
  4.29 Synonym Selection
  4.30 Tense
  4.31 What to Call Yourself
  4.32 Word Order
5 When English Is a Foreign Language
  5.1 Thinking in English
  5.2 Reading and Analysing Other Papers
  5.3 Distinctions
  5.4 Articles
  5.5 Ordinal Numbers
  5.6 Negatives
  5.7 Constructions
  5.8 Connecting Words and Phrases
  5.9 Spelling
  5.10 Keeping It Simple
  5.11 Using a Dictionary
  5.12 Punctuation
  5.13 Computer Aids
  5.14 English Language Qualifications
  5.15 Further Reading
6 Writing a Paper
  6.1 Audience
  6.2 Organization and Structure
  6.3 Title
  6.4 Author List
  6.5 Date
  6.6 Abstract
  6.7 Key Words and Subject Classifications
  6.8 The Introduction
  6.9 Computational Experiments
  6.10 Tables
  6.11 Citations
  6.12 Conclusions
  6.13 Acknowledgements
  6.14 Appendix
  6.15 Reference List
  6.16 Specifics and Deprecated Practices
    Capitalization
    Dangling Theorem
    Footnotes
    Numbering Mathematical Objects
    Plagiarism
    The Invalid Theorem
    "This Paper Proves ..."
7 Revising a Draft
  7.1 How to Revise
  7.2 Examples of Prose
  7.3 Examples Involving Equations
  7.4 Examples from My Writing
  7.5 A Revised Proof
  7.6 A Draft Article for Improvement
8 Publishing a Paper
  8.1 Choosing a Journal
  8.2 Submitting a Manuscript
  8.3 The Refereeing Process
  8.4 How to Referee
  8.5 The Role of the Copy Editor
  8.6 Checking the Proofs
  8.7 Author-Typeset TeX
  8.8 Copyright Issues
  8.9 A SIAM Journal Article
    TeX Papers
    Non-TeX Papers
9 Writing and Defending a Thesis
  9.1 The Purpose of a Thesis
  9.2 Content
  9.3 Presentation
  9.4 The Thesis Defence
  9.5 Further Reading
10 Writing a Talk
  10.1 What Is a Talk?
  10.2 Designing the Talk
  10.3 Writing the Slides
    Legibility of the Slides
    How Many Slides?
    Handwritten or Typeset?
  10.4 Example Slides
  10.5 Further Reading
11 Giving a Talk
  11.1 Preparation
  11.2 Delivery
  11.3 Further Reading
12 Preparing a Poster
  12.1 What Is a Poster?
  12.2 A Poster Tells a Story
  12.3 Designing Your Poster
  12.4 Transportation and the Poster Session
  12.5 A Word to Conference Organizers
13 TeX and LaTeX
  13.1 What are TeX and LaTeX?
  13.2 Tips for Using LaTeX
    Dashes
    Delimiters
    Figures in LaTeX
    File Names and Internet Addresses
    Labels
    Macros
    Miscellaneous Mathematics
    Quotes, Dates, Lists and Paragraphs
    Running LaTeX, BibTeX and MakeIndex
    Source Code
    Spacing in Formulas
    Ties and Spaces
  13.3 BibTeX
  13.4 Indexing and MakeIndex
  13.5 Further Sources of Information
14 Aids and Resources for Writing and Research
  14.1 Internet Resources
    Newsgroups
    Digests
    Netlib
    e-MATH
  14.2 Library Classification Schemes
  14.3 Review, Abstract and Citation Services
  14.4 Text Editors
  14.5 Spelling Checking, Filters and Pipes
  14.6 Style Checkers
A The Greek Alphabet
B Summary of TeX and LaTeX Symbols
C GNU Emacs Commands
D Mathematical and Other Organizations
E Prizes for Expository Writing
Name Index
Subject Index

Preface to the Second Edition

In the five years since the first edition of this book was published I have received numerous email messages and letters from readers commenting on the book and suggesting how it could be improved. I have also built up a large file of ideas based on my own experiences in reading, writing, and editing and in examining and supervising theses. With the aid of all this information I have completely revised the book. The most obvious changes in this second edition are the new chapters.

• Writing and Defending a Thesis. Since many of the readers of the book are graduate students, advice on how to write a thesis and how to handle the thesis defence was a natural addition.
• Giving a Talk. The revised chapter "Writing a Talk" from the first edition gives advice on preparing slides for a talk. The new chapter explains how to deliver a talk in front of an audience.
• Preparing a Poster. The poster is growing in popularity as a medium of communication at conferences and elsewhere, yet many of us have little experience of preparing posters.
• TeX and LaTeX. Since the first edition of this book was published, LaTeX2e has become the official version of LaTeX, thereby solving many of the problems involving, for example, incompatible dialects of LaTeX, font handling, and inclusion of PostScript figures in a LaTeX document. I have moved the discussion of TeX, LaTeX, and their associated tools to a new chapter. Many more tips on the use of TeX ...

> Forman S. Acton (1970), Numerical Methods That Work [3].
> Albert H. Beiler (1966), Recreations in the Theory of Numbers [19].
> David M. Burton (1980), Elementary Number Theory [44].
> Gene H. Golub and Charles F. Van Loan (1996), Matrix Computations [108].
> Paul R. Halmos (1982), A Hilbert Space Problem Book [125].
> Donald E. Knuth (1973-1981), The Art of Computer Programming [157]. (Knuth was awarded the 1986 Leroy P. Steele Prize by the AMS for these three volumes.)
> Beresford N. Parlett (1998), The Symmetric Eigenvalue Problem [217].
> G. W. Stewart (1973), Introduction to Matrix Computations [261].
> Gilbert Strang (1986), Introduction to Applied Mathematics [262].

Also worth studying are papers or books that have won prizes for expository writing in mathematics. Appendix E lists winners of the Chauvenet Prize, the Lester R. Ford Award, the George Polya Award, the Carl B. Allendoerfer Award, the Beckenbach Book Prize and the Merten M. Hasse Prize.

Chapter 2
Writer's Tools and Recommended Reading

I use three dictionaries almost every day.
— JAMES A. MICHENER, Writer's Handbook (1992)

The purpose of an ordinary dictionary is simply to explain the meaning of the words.... The object aimed at in the present undertaking is exactly the converse of this: namely,—The idea being given, to find the word, or words, by which that idea may be most fitly and aptly expressed.
— PETER MARK ROGET, Thesaurus of English Words and Phrases (1852)

The dictionary and thesaurus interruptions are usually not about meaning in the gross sense (what's the correct use of "oppugn"), but about precision, and about finding the right word... What did the examples that von Neumann and I constructed do to the conjugacy conjecture for shifts... did they contradict, contravene, gainsay, dispute, disaffirm, disallow, abnegate, or repudiate it?... Writing can stop for 10 or 15 minutes while I search and weigh.
— PAUL R. HALMOS, I Want to be a Mathematician: An Automathography in Three Parts (1985)

Mathematicean. One that is skilled in Augurie, Geometrie, and Astronomie.
— HENRY COCKERAM¹, English Dictionarie (1623)

¹Quoted in [255].

2.1. Dictionaries and Thesauruses

Apart from pen, paper and keyboard, the most valuable tool for a writer in any subject is a dictionary. Writing or reading in the mathematical sciences you will come across questions such as the following:

1. What is the plural of modulus: moduli or moduluses?
2. Which of parameterize and parametrize is the correct spelling?
3. What is a gigaflop?
4. When was the mathematician Abel born and what was his nationality?
5. What is the meaning of mutatis mutandis?
6. Who was Procrustes (as in the "orthogonal Procrustes problem")?
7. When should you use "special" and when "especial"?
8. What are the differences between mind-bending, mind-blowing and mind-boggling?

All the answers can be found in general-purpose dictionaries (and are given at the end of this chapter). As these questions illustrate, dictionaries are invaluable for choosing a word with just the right shade of meaning, checking on spelling and usage, and even finding encyclopedic information. Furthermore, the information about a word's history provided in a dictionary etymology can make it easier to use the word precisely.

The most authoritative dictionary is the Oxford English Dictionary (OED) [215]. It was originally published in parts between 1884 and 1928, and a four-volume supplement was produced from 1972-1986. A twenty-volume second edition of the dictionary was published in 1989; it defines more than half a million words, using 2.4 million illustrative quotations. The OED traces the history of words from around 1150.
In 1992 a compact disc (CD-ROM) version of the OED was published. It contains the full text of the printed version (at about a third of the price) and the accompanying software includes powerful search facilities. Other large dictionaries are Webster's Third New International Dictionary [294], which was published in the United States in 1961 and has had three supplements, The American Heritage Dictionary of the English Language [7], the Random House Unabridged Dictionary [233], and The New Shorter Oxford English Dictionary, in two volumes [214].

For everyday use the large dictionaries are too unwieldy and too thorough, so a more concise dictionary is needed. The Concise Oxford Dictionary (COD) [213] is now in its ninth edition (1995). It is the favourite of many, and is suitable for American use, as American spellings and usages are included. (The COD was my main dictionary of reference in writing this book.) Other dictionaries suitable for regular use by the writer include, from the United States:

• The American Heritage College Dictionary [6].
• The Random House Webster's College Dictionary [234].
• Merriam-Webster's Collegiate Dictionary [203]. Most main entries state the date of first recorded use of the word. Contains usage and synonym notes and appendices "Biographical Names" and "Geographical Names".
• Webster's New World College Dictionary [293].

From Britain:

• The Chambers Dictionary [54]. Renowned for its rich vocabulary, which includes literary terms, Scottish words and many archaic and obsolete words. Also contains some humorous entries: eclair is defined as "a cake, long in shape but short in duration...".
• The Collins English Dictionary [60]. Contains extensive encyclopedic entries, both biographical and geographical, strong coverage of scientific and technical vocabulary, and usage notes.
• The Longman Dictionary of the English Language [182]. The same comments apply as for the Collins. Has an extensive collection of notes on usage, synonyms and word history.

The American dictionaries listed, but not the British ones, show allowable places to divide words when they must be broken and hyphenated at the end of a line.

To make good use of dictionaries, it helps to be aware of some of their characteristics.

Order of definitions. For words with several meanings, most dictionaries give the most common or current meanings first, but some give meanings in their historical sequence. The historical order is the one used by the Oxford English Dictionary, since its purpose is to trace the development of words from their first use to the present day. The Merriam-Webster's Collegiate also uses the historical order, but for a desk dictionary intended for quick reference this order can be disorienting. For example, under the headword nice, Merriam-Webster's Collegiate lists "showing fastidious or finicky tastes" before "pleasing, agreeable".

Etymologies. Etymologies vary in their location within an entry, in the style in which they are presented (for example, the symbol < may be used for "from"), and in their depth and amount of detail. Some words with interesting etymologies are diploma, OK, shambles, symposium, and sine.

Scientific and technical vocabulary. Since there are vastly more scientific and technical terms than any general dictionary can accommodate, there is much variation in the coverage provided by different dictionaries.

Up-to-date vocabulary. The constantly changing English language is monitored by lexicographers (Johnson's "harmless drudges"), who add new words and meanings to each new edition of their dictionaries. Coverage of modern vocabulary varies between dictionaries, depending on the year of publication and the compilers' tastes and citation files (which usually include material submitted by the general public).

British versus American spelling and usage. Since much mathematical science is written for an international audience it is useful to be able to check differences in British and American spelling and usage. Most British and American dictionaries are good in this respect.

General-purpose dictionaries do not always give correct definitions of mathematical terms. In a comparison of eight major British and American dictionaries I found errors in definitions of terms such as determinant, eigenvector², polynomial, and power series [141]. Annotated lists of dictionaries and usage guides are given by Stainton [253], [254]. Comparisons and analyses of dictionaries are also given by Quirk and Stein [232, Chap. 11] and Burchfield [43]. Specialized dictionaries can also be useful to the mathematical writer. There are many dictionaries of mathematics, one example being the Penguin dictionary [206], which is small and inexpensive yet surprisingly thorough. Schwartzman's The Words of Mathematics [247] explains the etymology of words used in mathematics (see also [248]).

²One dictionary offers this definition of eigenvector: a vector that in one dimension under a given rotational, reflectional, expanding, or shrinking operation becomes a number that is a multiple of itself.

The synonyms provided in a thesaurus can be helpful in your search for an elusive word or a word with the right connotation. Roget's Thesaurus, first published in 1852, is the classic one. The words in Roget's Thesaurus are traditionally arranged according to the ideas they express, instead of alphabetically, though versions are now available in dictionary form. The Bloomsbury Thesaurus [32] is arranged according to a new classification
Bryson's Dictionary of Troublesome Words [41] offers practical, witty advice on usage, while Safire [243] presents fifty "fumblerules" (mistakes that call attention to the rules) accompanied by pithy explanatory essays. The books On Newspaper Style and English our English by Waterhouse [287], [288] make fascinating and informative reading, though they are hard to use for reference since they lack an index; [287] is a standard handbook for journalists, but is of much wider interest. Baker's The Practical Stylist [13] is a widely used course text on writing; it has thorough discussions of usage, style and revision and gives many illustrative examples. Day's Scientific English [69] contains general advice on grammar and usage, with particular reference to English in scientific writing. Perry's The Fine Art of Technical Writing [221] offers selective, practical advice on the psychology, artistry and technique of technical writing, which the author defines as "all writing other than fiction". In Miss Thistlebottom 's Hobgoblins [26] Bernstein provides an antidote for those brainwashed by over-prescriptive usage guides, in the form of letters to his (fictional) English schoolteacher. Two other books by Bernstein, The Careful Writer [25] and Dos, Don'ts and Maybes of English Usage [27], are also useful guides. Gordon's The Transitive Vampire [111] is a grammar guide in the same fanciful vein as [112]. The Chicago Manual of Style [58], first published in 1906, is a long arid comprehensive guide to book production, style and printing. It is the standard reference for authors and editors in many organizations. It includes chapters on typesetting mathematics and preparing bibliographies and indexes. Turabian's A Manual for Writers of Term Papers, Theses, and Dissertations [278], first published in 1937, is based on the guidelines in The Chicago Manual of Style but its aim is more limited, as defined in the title, so it does not discuss bookmaking and copy editing. Words into Type [249] is another thorough guide for authors and editors, covering manuscript and index preparation, copy editing style, grammar, typographical style and the printing process. Other valuable references on editing, copy editing and proofreading are Hart's Rules [131], which describes the house style of Oxford University Press; Butcher's Copy-Editing [45], which is regarded as the standard British work on copy editing; Eisenberg's Guide to Technical Editing [77]; O'Connor's How to Copyedit Scientific Books and Journals [208]; Stainton's The Fine Art of Copy editing [254]; and Tarutz's Technical Editing [270]. Some interesting techniques for revising a sentence by analysing its structure are presented by Lanham in Revising Prose [175]. 2.3. Technical Writing Guides Several guides to mathematical writing are available. Halmos's essay "How to Write Mathematics" [121] is essential reading for every mathematician; it contains much sound advice not found elsewhere. Halmos's "automathography" [127] includes insight into mathematical writing, editing and refereeing; it begins with the sentence "I like words more than numbers, and I always did." Transcripts of a lecture course called "Mathematical Writing" that was given by Knuth in 1987 at Stanford are collected in Mathematical Writing [164], which I highly recommend. 
This manual contains many anecdotes and insights related by Knuth and his guest lecturers, including Knuth's battle with the copy editors at Scientific American and his experiences in writing the book Concrete Mathematics [116]. Other very useful guides are Flanders's article [80] for authors who write in the journal American Mathematical Monthly; Gillman's booklet Writing Mathematics Well [104] on preparing manuscripts for Mathematical Association of America journals; Steenrod's essay "How to Write Mathematics" [256]; Krantz's wide-ranging A Primer of Mathematical Writing [167]; and Swanson's guide Mathematics into Type [267] for mathematical copy editors and authors. Knuth's book on T£JX [161] contains much general advice on how to typeset mathematics, and an old guide to this subject is The Printing of Mathematics [55]. 2.3. TECHNICAL WRITING GUIDES Most books and papers on mathematical writing, including this one, are aimed primarily at graduate students and advanced undergraduate students. Maurer [197] gives advice on mathematical writing aimed specifically at undergraduate students, covering a number of basic issues omitted elsewhere. Guides to writing in other scientific disciplines often contain much that is relevant to the mathematical writer; an example is the book by Pechenik [219], which is aimed at biology students. General guides to scientific writing that I recommend are those by Barrass [14], [15], Cooper [62], Ebel, Bliefert and Russey [76], Kirkman [153], O'Connor [209] (this is a revised and extended version of an earlier book by O'Connor and Woodford [210]), and Turk and Kirkman [280]. The book edited by Woodford [300] contains three examples of short papers in both original and revised forms, with detailed annotations. Particularly informative and pleasant to read are Booth's Communicating in Science [36] and Day's How to Write and Publish a Scientific Paper [68]. The journal IEEE Transactions on Professional Communication publishes papers on many aspects of technical communication, including how to write papers and give talks. A selection of 63 papers from this and other journals is collected in Writing and Speaking in the Technology Professions: A Practical Guide [18]. How to Do It [180] contains 47 chapters that give advice for medical doctors, but many of them are of general interest to scientists. Chapter titles include "Write a Paper", "Referee a Paper", "Attract the Reader", "Review a Book", "Use an Overhead Projector", and "Apply for a Research Grant". Many of the chapters originally appeared in the British Medical Journal. Van Leunen's A Handbook for Scholars [283] is a unique and indispensable guide to the mechanics of scholarly writing, covering reference lists, quotations, citations, footnotes and style. This is the place to look if you want to know how to prepare a difficult reference or quotation (what date to list for a reprint of a work from a previous century, or how to punctuate a quotation placed in mid-sentence). There is also an appendix on how to prepare a CV. Luey's Handbook for Academic Authors [185] offers much useful advice to the writer of an academic book. O'Connor has written a book about how to edit and manage scientific books and journals [207]. Thirty-one essays discussing how writing is being used to teach mathematics in undergraduate courses are contained in Using Writing to Teach Mathematics [259]. 
A useful source for examples of expository mathematical writing is the annotated bibliography of Gaffney and Steen [87], which contains more than 1100 entries. Finally, Pemberton's book How to Find Out in Mathematics [220] tells you precisely what the title suggests. It includes information on mathematical dictionaries (including interlingual ones) and encyclopedias, mathematical histories and biographies, and mathematical societies, periodicals and abstracts. Although it appeared in 1969, the book is still worth consulting.

2.4. General Reading

The three books by Zinsser [302], [303], [304] are highly recommended; all are informative and beautifully written. In Writing with a Word Processor [302] Zinsser summarizes his experience in moving to a computer from his trusty typewriter. His book Writing to Learn contains chapters on "Writing Mathematics" and "Writing Physics and Chemistry"; they explain how writing can be used in the teaching of these subjects and give examples of good writing. Michener's Writer's Handbook [204] provides insight into how this prodigious writer worked. The reader is led through the development of parts of two of Michener's books (one fiction, one nonfiction), from early drafts to proofs to the published versions. Mitchell [205] gives hints on writing with a computer, with good examples of how to revise drafts. Valuable insight into the English language—its history, its eccentricities, and its uses—is provided by Bryson [42], Crystal [66] and Potter [229].

Answers to the Questions at the Start of the Chapter

1. The plural of modulus is moduli.

2. The Concise Oxford Dictionary gives only the spelling parametrize, but the Longman Dictionary of the English Language, Merriam-Webster's Collegiate Dictionary and the Oxford English Dictionary give both parameterize and parametrize.

3. From the Collins English Dictionary: "gigaflop... n. Computer technol. a measure of processing speed, consisting of a thousand million floating-point operations a second. [C20 ...]".

4. From the entry for Abelian group in the Collins English Dictionary: "Niels Henrik Abel (1802-29), Norwegian mathematician".

5. Mutatis mutandis means "with necessary changes" (The Chambers Dictionary).

6. Procrustes was "a villainous son of Poseidon in Greek mythology who forces travelers to fit into his bed by stretching their bodies or cutting off their legs" (Merriam-Webster's Collegiate Dictionary).

7. From the Collins English Dictionary (usage note after especial): "Especial and especially have a more limited use than special and specially. Special is always used in preference to especial when the sense is one of being out of the ordinary ... Where an idea of pre-eminence or individuality is involved, either especial or special may be used."

8. From the Longman Dictionary of the English Language, all three words being labelled adj, informal: mind-bending means "at the limits of understanding or credibility", mind-blowing means "1 of or causing a psychic state similar to that produced by a psychedelic drug 2 mentally or emotionally exhilarating; overwhelming", mind-boggling means "causing great surprise or wonder".

Chapter 3
Mathematical Writing

Suppose you want to teach the "cat" concept to a very young child. Do you explain that a cat is a relatively small, primarily carnivorous mammal with retractile claws, a distinctive sonic output, etc.? I'll bet not.
You probably show the kid a lot of different cats, saying "kitty" each time, until it gets the idea. To put it more generally, generalizations are best made by abstraction from experience.
— RALPH P. BOAS, Can We Make Mathematics Intelligible? (1981)

A good notation should be unambiguous, pregnant, easy to remember: it should avoid harmful second meanings, and take advantage of useful second meanings; the order and connection of signs should suggest the order and connection of things.
— GEORGE POLYA, How to Solve It (1957)

We have not succeeded in finding or constructing a definition which starts out "A Bravais lattice is ..."; the sources we have looked at say "That was a Bravais lattice."
— CHARLES KITTEL, Introduction to Solid State Physics (1971)

Notation is everything.
— CHARLES F. VAN LOAN, FFTs and the Sparse Factorization Idea (1992)

The mathematical writer needs to be aware of a number of matters specific to mathematical writing, ranging from general issues, such as choice of notation, to particular details, such as how to punctuate mathematical expressions. In this chapter I begin by discussing some of the general issues and then move on to specifics.

3.1. What Is a Theorem?

What are the differences between theorems, lemmas, and propositions? To some extent, the answer depends on the context in which a result appears. Generally, a theorem is a major result that is of independent interest. The proof of a theorem is usually nontrivial. A lemma³ is an auxiliary result—a stepping stone towards a theorem. Its proof may be easy or difficult. A straightforward and independent result that is worth encapsulating but that does not merit the title of a theorem may also be called a lemma. Indeed, there are some famous lemmas, such as the Riemann-Lebesgue Lemma in the theory of Fourier series and Farkas's Lemma in the theory of constrained optimization. Whether a result should be stated formally as a lemma or simply mentioned in the text depends on the level at which you are writing. In a research paper in linear algebra it would be inappropriate to give a lemma stating that the eigenvalues of a symmetric positive definite matrix are positive, as this standard result is so well known; but in a textbook for undergraduates it would be sensible to formalize this result. It is not advisable to label all your results theorems, because if you do so you miss the opportunity to emphasize the logical structure of your work and to direct attention to the most important results. If you are in doubt about whether to call a result a lemma or a theorem, call it a lemma. The term proposition is less widely used than lemma and theorem and its meaning is less clear. It tends to be used as a way to denote a minor theorem. Lecturers and textbook authors might feel that the modest tone of its name makes a proposition appear less daunting to students than a theorem. However, a proposition is not, as one student thought, "a theorem that might not be true". A corollary is a direct or easy consequence of a lemma, theorem or proposition. It is important to distinguish between a corollary, which does not imply the parent result from which it came, and an extension or generalization of a result. Be careful not to over-glorify a corollary by failing to label it as such, for this gives it false prominence and obscures the role of the parent result.

³The plural of lemma is lemmata, or, more commonly, lemmas.
How many results are formally stated as lemmas, theorems, propositions or corollaries is a matter of personal style. Some authors develop their ideas in a sequence of results and proofs interspersed with definitions and comments. At the other extreme, some authors state very few results formally. A good example of the latter style is the classic book The Algebraic Eigenvalue Problem [296] by Wilkinson, in which only four titled theorems are given in 662 pages. As Boas [33] notes, "A great deal can be accomplished with arguments that fall short of being formal proofs." A fifth kind of statement used in mathematical writing is a conjecture—a statement that the author thinks may be true but has been unable to prove or disprove. The author will usually have some strong evidence for the veracity of the statement. A famous example of a conjecture is the Goldbach conjecture (1742), which states that every even number greater than 2 is the sum of two primes; this is still unproved. One computer scientist (let us call him Alpha) joked in a talk "This is the Alpha and Beta conjecture. If it turns out to be false I would like it to be known as Beta's conjecture." However, it is not necessarily a bad thing to make a conjecture that is later disproved: identifying the question that the conjecture aims to answer can be an important contribution. A hypothesis is a statement that is taken as a basis for further reasoning, usually in a proof—for example, an induction hypothesis. Hypotheses that stand on their own are uncommon; two examples are the Riemann hypothesis and the continuum hypothesis.

3.2. Proofs

Readers are often not very interested in the details of a proof but want to know the outline and the key ideas. They hope to learn a technique or principle that can be applied in other situations. When readers do want to study the proof in detail they naturally want to understand it with the minimum of effort. To help readers in both circumstances, it is important to emphasize the structure of a proof, the ease or difficulty of each step, and the key ideas that make it work. Here are some examples of the sorts of phrases that can be used (most of these are culled from proofs by Parlett in [217]).

The aim/idea is to ...
Our first goal is to show that ...
Now for the harder part.
The trick of the proof is to find ...
... is the key relation.
The only, but crucial, use of ... is that ...
To obtain ... a little manipulation is needed.
The essential observation is that ...

When you omit part of a proof it is best to indicate the nature and length of the omission, via phrases such as the following.

It is easy/simple/straightforward to show that ...
Some tedious manipulation yields ...
An easy/obvious induction gives ...
After two applications of ... we find ...
An argument similar to the one used in ... shows that ...

You should also strive to keep the reader informed of where you are in the proof and what remains to be done. Useful phrases include

First, we establish that ...
Our task is now to ...
Our problem reduces to ...
It remains to show that ...
We are almost ready to invoke ...
We are now in a position to ...
Finally, we have to show that ...

The end of a proof is often marked by the halmos symbol □ (see the quote on page 24). Sometimes the abbreviation QED (Latin: quod erat demonstrandum = which was to be demonstrated) is used instead. There is much more to be said about writing (and devising) proofs. References include Franklin and Daoud [85], Garnier and Taylor [101], Lamport [173], Leron [177] and Polya [228].
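In LaTeX the distinctions above map directly onto theorem environments. The following is a minimal sketch, assuming the amsthm package; the environment names, numbering scheme and labels are illustrative, not prescriptions from the text. Note that amsthm's proof environment appends the halmos symbol automatically.

    \documentclass{article}
    \usepackage{amsthm}
    % Number lemmas and corollaries on the same counter as theorems,
    % within sections, so the logical structure stays visible.
    \newtheorem{theorem}{Theorem}[section]
    \newtheorem{lemma}[theorem]{Lemma}
    \newtheorem{corollary}[theorem]{Corollary}
    \begin{document}
    \begin{lemma}\label{lem:step}
    An auxiliary result, stated separately because it is a stepping
    stone towards Theorem~\ref{thm:main}.
    \end{lemma}
    \begin{theorem}\label{thm:main}
    The major result of independent interest.
    \end{theorem}
    \begin{corollary}
    An easy consequence of Theorem~\ref{thm:main}, labelled as such
    so that it does not acquire false prominence.
    \end{corollary}
    \begin{proof}
    It remains to show that ...  % amsthm ends the proof with a box
    \end{proof}
    \end{document}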
3.3. The Role of Examples

A pedagogical tactic that is applicable to all forms of technical writing (from teaching to research) is to discuss specific examples before the general case. It is tempting, particularly for mathematicians, to adopt the opposite approach, but beginning with examples is often the more effective way to explain (see Boas's article [33] and the quote from it at the beginning of this chapter, a quote that itself illustrates this principle!). A good example of how to begin with a specific case is provided by Strang in Chapter 1 of Introduction to Applied Mathematics [262]:

The simplest model in applied mathematics is a system of linear equations. It is also by far the most important, and we begin this book with an extremely modest example: [display omitted]

After some further introductory remarks, Strang goes on to study in detail both this 2 × 2 system and a particular 4 × 4 system. General n × n matrices appear only several pages later. Another example is provided by Watkins's Fundamentals of Matrix Computations [289]. Whereas most linear algebra textbooks introduce Gaussian elimination for general matrices before discussing Cholesky factorization for symmetric positive definite matrices, Watkins reverses the order, giving the more specific but algorithmically more straightforward method first. An exercise in a textbook is a form of example. I saw a telling criticism in one book review that complained "The first exercise in the book was pointless, so why do the others?" To avoid such criticism, it is important to choose exercises and examples that have a clear purpose and illustrate a point. The first few exercises and examples should be among the best, to gain the reader's confidence. The same reviewer complained of another book that "it hides information in exercises and contains exercises that are too difficult." Whether such criticism is valid depends on your opinion of what are the key issues to be transmitted to the reader and on the level of the readership. Again, it helps to bear such potential criticism in mind when you write.

3.4. Definitions

Three questions to be considered when formulating a definition are "why?", "where?" and "how?" First, ask yourself why you are making a definition: is it really necessary? Inappropriate definitions can complicate a presentation and too many can overwhelm a reader, so it is wise to imagine yourself being charged a large sum for each one. Instead of defining a square matrix A to be contractive with respect to a norm ‖·‖ if ‖A‖ < 1, which is not a standard definition, you could simply say "A with ‖A‖ < 1" whenever necessary. This is easy to do if the property is needed on only a few occasions, and saves the reader having to remember what "contractive" means. For notation that is standard in a given subject area, judgement is needed to decide whether the definition should be given. Potential confusion can often be avoided by using redundant words. For example, if ρ(A) is not obviously the spectral radius of the matrix A you can say "the spectral radius ρ(A)". The second question is "where?" The practice of giving a long sequence of definitions at the start of a work is not recommended. Ideally, a definition should be given in the place where the term being defined is first used. If it is given much earlier, the reader will have to refer back, with a possible loss of concentration (or worse, interest). Try to minimize the distance between a definition and its place of first use.
It is not uncommon for an author to forget to define a new term on its first occurrence. For example, Steenrod uses the term "grasshopper reader" on page 6 of his essay on mathematical writing [256], but does not define it until it occurs again on the next page. To reinforce notation that has not been used for a few pages you may be able to use redundancy. For example, "The optimal steplength α* can be found as follows." This implicit redefinition either reminds readers what α* is, or reassures them that they have remembered it correctly. Finally, how should a term be defined? There may be a unique definition or there may be several possibilities (a good example is the term M-matrix, which can be defined in at least fifty different ways [23]). You should aim for a definition that is short, expressed in terms of a fundamental property or idea, and consistent with related definitions. As an example, the standard definition of a normal matrix is a matrix A ∈ ℂ^{n×n} for which A*A = AA* (where * denotes the conjugate transpose). There are at least 70 different ways of characterizing normality [119], but none has the simplicity and ease of use of the condition A*A = AA*. By convention, if means if and only if in definitions, so do not write "The graph G is connected if and only if there is a path from every node in G to every other node in G." Write "The graph G is connected if there is a path from every node in G to every other node in G" (and note that this definition can be rewritten to omit the symbol G). It is common practice to italicize the word that is being defined: "A graph is connected if there is a path from every node to every other node." This has the advantage of making it perfectly clear that a definition is being given, and not a result. This emphasis can also be imparted by writing "A graph is defined to be connected if ...", or "A graph is said to be connected if ...." If you have not done so before, it is instructive to study the definitions in a good dictionary. They display many of the attributes of a good mathematical definition: they are concise, precise, consistent with other definitions, and easy to understand. Definitions of symbols are usually made with a simple equality, perhaps preceded by the word "let" if they are in-line, as in "let q(x) = ax^2 + bx + c." Various other notations, such as :=, have been devised to give emphasis to a definition. If you use one of these special notations you must use it consistently, otherwise the reader may not know whether a straightforward equality is meant to be a definition.

3.5. Notation

Consider the following extract.

Let Ĥ_k = Q_k^H H̃_k Q_k, partition X = [X_1, X_2] and let X̃ = range(X_1). Let U* denote the nearest orthonormal matrix to X_1 in the 2-norm.

These two sentences are full of potentially confusing notation. The distinction between the hat and the tilde in Ĥ_k and H̃_k is slight enough to make these symbols difficult to distinguish. The symbols X and X̃ are also too similar for easy recognition. Given that X_1 is used, it would be more consistent to give X̃ a subscript 1. The name Ĥ_k is unfortunate, because H is being used to denote the conjugate transpose, and it might be necessary to refer to Ĥ_k^H! Since A* is a standard synonym for A^H, the use of a superscripted asterisk to denote optimality is confusing. As this example shows, the choice of notation deserves careful thought. Good notation strikes a balance among the possibly conflicting aims of being readable, natural, conventional, concise, logical and aesthetically pleasing.
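As an aside on mechanics: in LaTeX the definitional conventions described above are easy to follow systematically. Here is a minimal sketch, assuming the mathtools package for the \coloneqq symbol; the example definitions themselves are illustrative, not taken from the text.

    \documentclass{article}
    \usepackage{mathtools}  % provides \coloneqq for ":="
    \begin{document}
    % Italicize the term being defined, so the reader can see at a
    % glance that this is a definition and not a result.
    A graph is \emph{connected} if there is a path from every node
    to every other node.

    % An in-line symbol definition introduced by ``let''; if you
    % prefer a marked equality, use \coloneqq instead of = -- but
    % then use it for every definition, not just for some.
    Let $q(x) \coloneqq ax^2 + bx + c$.
    \end{document}

Whichever markup you choose, the point is consistency: \emph for every defined term, and \coloneqq either for every symbol definition or for none.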
As with definitions, the amount of notation should be minimized. Although there are 26 letters in the alphabet and nearly as many again in the Greek alphabet, our choice diminishes rapidly when we consider existing connotations. Traditionally, ε and δ denote small quantities; i, j, k, m and n are integers (or i or j the imaginary unit); λ is an eigenvalue; and π and e are fundamental constants; π is also used to denote a permutation. These conventions should be respected. But by modifying and combining eligible letters we widen our choice. Thus γ and A yield, for example, γ̂, γ̃, γ′, Â, Ã, Ā, A′. Particular areas of mathematics have their own notational conventions. For example, in numerical linear algebra lower case Greek letters represent scalars, lower case roman letters represent column vectors, and upper case Greek or roman letters represent matrices. This convention was introduced by Householder [143]. In his book on the symmetric eigenvalue problem [217], Parlett uses the symmetric letters A, H, M, T, U, V, W, X, Y to denote symmetric matrices and the symmetric Greek letters Λ, Θ, Φ, Δ to denote diagonal matrices. Actually, the roman letters printed above are not symmetric because they are slanted, but Parlett's book uses a sans serif mathematics font that yields the desired symmetry. Parlett uses this elegant, but restrictive, convention to good effect. We can sometimes simplify an expression by giving a meaning to extreme cases of notation. Consider the following display. [display omitted] There are really only two cases: i > j and i < j. This structure is reflected, and the display made more compact, if we define the empty product to be 1 and write [display omitted] (Here, I have put "if" before each condition, which is optional in this type of display.) Incidentally, note that in a matrix product the order of evaluation needs to be specified: ∏_{i=1}^{n} A_i could mean A_1 A_2 ⋯ A_n or A_n A_{n−1} ⋯ A_1. Notation also plays a higher level role in affecting the way a method or proof is presented. For example, the n × n matrix multiplication C = AB can be expressed in terms of scalars, or at the matrix-vector level, where B = [b_1, b_2, ..., b_n] is a partition into columns. One of these two viewpoints may be superior, depending on the circumstances. A deeper example is provided by the fast Fourier transform (FFT). The discrete Fourier transform (DFT) is a product y = F_n x, where F_n is the unitary Vandermonde matrix with (r, s) element ω^{(r−1)(s−1)} (1 ≤ r, s ≤ n), and ω = exp(−2πi/n). The FFT is a way of forming this product in O(n log n) operations. It is traditionally expressed through equations such as the following (copied from a numerical methods textbook): [display omitted] The language of matrix factorizations can be used to give a higher level description. If n = 2m, the matrix F_n can be factorized as

    F_n = [ I_m    Ω_m ] [ F_m   0  ] Π_n,
          [ I_m   −Ω_m ] [  0   F_m ]

where Π_n is a permutation matrix and Ω_m = diag(1, ω, ..., ω^{m−1}). This factorization shows that an n-point DFT can be computed from two n/2-point transforms, and this reduction is the gist of the radix-2 FFT. The book Computational Frameworks for the Fast Fourier Transform by Van Loan [284], from which this factorization is taken, shows how, by using matrix notation, the many variants of the FFT can be unified and made easier to understand. An extended example of how notation can be improved is given by Gillman in the appendix titled "The Use of Symbols: A Case Study" of Writing Mathematics Well [104]. Gillman takes the proof of a theorem by Sierpinski (1933) and shows how simplifying the notation leads to a better proof. Knuth set his students the task of simplifying Gillman's version even further, and four solutions are given in [164, §21].
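The radix-2 factorization displayed above is easy to check numerically. Here is a minimal sketch in Python with NumPy (my own illustration, not from the text); it uses the unnormalized DFT matrix, dropping the scaling that makes F_n unitary, and takes Π_n to be the even-odd (perfect shuffle) permutation.

    import numpy as np

    def dft_matrix(n):
        # (r, s) element omega^((r-1)(s-1)) in the 1-based indexing of
        # the text, i.e. omega^(r*s) for 0-based r, s.
        omega = np.exp(-2j * np.pi / n)
        r = np.arange(n)
        return omega ** np.outer(r, r)

    n = 8
    m = n // 2
    omega = np.exp(-2j * np.pi / n)

    Fn = dft_matrix(n)
    Fm = dft_matrix(m)
    Omega = np.diag(omega ** np.arange(m))  # diag(1, w, ..., w^(m-1))
    I = np.eye(m)
    Z = np.zeros((m, m))

    # Pi_n: even-odd permutation, mapping x to (x_even, x_odd).
    Pi = np.zeros((n, n))
    Pi[np.arange(m), np.arange(0, n, 2)] = 1
    Pi[np.arange(m, n), np.arange(1, n, 2)] = 1

    # F_n = [I_m, Omega_m; I_m, -Omega_m] * diag(F_m, F_m) * Pi_n:
    # an n-point DFT from two n/2-point DFTs plus O(n) extra work.
    rhs = np.block([[I, Omega], [I, -Omega]]) @ np.block([[Fm, Z], [Z, Fm]]) @ Pi
    assert np.allclose(Fn, rhs)
    print("radix-2 factorization verified for n =", n)

Applying the same splitting recursively to F_m is what yields the O(n log n) operation count.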
Mathematicians are always searching for better notation. Knuth [163] describes two notations that he and his students have been using for many years and that he thinks deserve widespread adoption. One is a notation for the Stirling numbers. The other is Iverson's convention, in which a bracketed proposition [P] denotes 1 if P is true and 0 otherwise.

Consider the sentence "It is easy to see that f(x, y) > 0 for x > y." In words, this sentence is read as "It is easy to see that f(x, y) is greater than zero for x greater than y." The first > translates to "is greater than" and the second to "greater than", so there is a lack of parallelism, which the reader may find disturbing. A simple cure is to rewrite the sentence:

It is easy to see that f(x, y) > 0 when x > y.
It is easy to see that if x > y then f(x, y) > 0.

3.9. Dos and Don'ts of Mathematical Writing

Punctuating Expressions

Mathematical expressions are part of the sentence and so should be punctuated. In the following display, all the punctuation marks are necessary. (The second displayed equation might be better moved in-line.)

The three most commonly used matrix norms in numerical analysis are particular cases of the Hölder p-norm [display omitted]

Otiose Symbols

Do not use mathematical symbols unless they serve a purpose. In the sentence "A symmetric positive definite matrix A has real eigenvalues" there is no need to name the matrix unless the name is used in a following sentence. Similarly, in the sentence "This algorithm has t = log_2 n stages", the "t =" can be omitted unless t is defined in this sentence and used immediately. Watch out for unnecessary parentheses, as in the phrase "the matrix (A − λI) is singular."

Placement of Symbols

Avoid starting a sentence with a mathematical expression, particularly if a previous sentence ended with one, otherwise the reader may have difficulty parsing the sentence. For example, "A is an ill-conditioned matrix" (possible confusion with the word "A") can be changed to "The matrix A is ill-conditioned." Separate mathematical symbols by punctuation marks or words, if possible, for the same reason.

Bad: If x > 1 f(x) < 0.
Fair: If x > 1, f(x) < 0.
Good: If x > 1 then f(x) < 0.

Bad: Since p^{-1} + q^{-1} = 1, ‖·‖_p and ‖·‖_q are dual norms.
Good: Since p^{-1} + q^{-1} = 1, the norms ‖·‖_p and ‖·‖_q are dual.

Bad: It suffices to show that ‖H‖_p = n^{1/p}, 1 ≤ p ≤ ∞.

> Capable to do. Although this is logically correct, convention requires that we say "capable of doing".

> We have the possibility to obtain an asymptotic series for the solution. We do not normally say "possibility to". Better is It is possible to obtain ... (passive voice), or, shorter, We can obtain ....

> This result was proved already in [5]. Already should be deleted or replaced by earlier or previously. Alternative: This result has already been proved in [5].

> The solution has been known since ten years. This type of construction occurs in those European languages in which one word serves for both for and since. In this example, since should be replaced by for.

> This approach permits to exploit the convexity of f. The phrase should be permits us to exploit (active voice) or permits exploitation of (passive voice). Or, depending on the context, it may be acceptable to shorten the sentence to This approach exploits the convexity of f.

> To our experience .... The correct phrase is In our experience.

> We invoke again Theorem 4.1. This sentence is correct, but does not sound quite right to a native speaker of English.
Better is We invoke Theorem 4.1 again or Once again we invoke Theorem 4.1.

> The method is easy to use and performant. There is no word performant (even though the verb converge, for example, produces the adjective convergent). "Performs well" is probably the intended meaning.

> In the next section we give some informations about the network of processor used. Here, the problem is with the plurality of nouns: it should be information (an uncountable noun) and processors.

Part of the process of thinking in English is to write your research notes in English from the start and to annotate the books and papers you study in English.

5.2. Reading and Analysing Other Papers

Read as many well-written papers in your field as you can. Ask a friend or colleague for recommendations. Analyse the following aspects.

Vocabulary. What kinds of words occur frequently in technical writing? Look up words that you don't know in a dictionary. For technical terms you may need to use a mathematical or scientific dictionary or encyclopedia. Note the range of applicability of particular words.

Synonyms. Notice different ways of saying the same thing and different ways of ordering sentences. Use what you learn to avoid monotony in your writing.

Collocations. These are groups of words that commonly appear together. For example, feeble and fragile are synonyms for weak, but weak has more meanings, and while we readily say "weak bound" we never say "feeble bound" or "fragile bound". As another example, we say "uniquely determined" or "uniquely specified" but not "uniquely fixed" or "uniquely decided". Build up a list of the collocations you find in mathematical writing.

Idioms. Idioms are expressions whose meanings cannot be deduced from the words alone, but are established by usage. Here are some examples of idioms that are sometimes found in technical writing (and more commonly in speaking), with the idiom on the left and a definition on the right.

By and large: Taking everything into account.
End up: Reach a state eventually.
In that: In so far as.
It goes without saying: Something so obvious that it needn't be said.
On the other hand: From the other point of view.
On the whole: In general, ignoring minor details.
Over and above: In addition to.
Rule of thumb: A rule based on experience and estimation rather than precise calculation.
Start from scratch: To start from the beginning with no help.
Trial and error: Attempting to achieve a goal by trying different possibilities to find one that works.

Errors in the use of idioms tend to be very conspicuous. It is good advice to avoid idioms until you are sure how to use them correctly.

5.3. Distinctions

Satisfy, Verify. These words can be difficult to distinguish for some nonnative speakers, who often incorrectly use verify for satisfy. In mathematics, verify means to establish the truth of a statement or equation, and is a synonym for check; it is the mathematician who verifies. On the other hand, a quantity satisfies an equation if it makes it true. Thus we write "We now verify that x is a global maximizing point of f" but "We have to show that x satisfies the sufficient conditions for a global maximizer."

5.4. Articles

Some languages either do not have articles (words such as "the", "a" and "an") or use them in a different way than in English, so it is difficult for speakers of these languages to use articles correctly in English. The rules of article use are complicated.
Swan [266] explains them and identifies two of the most important. (1) Do not use the (with plural or uncountable nouns) to talk about things in general. Examples: "Mathematics is interesting" (not "The mathematics is interesting"); "Indefinite integrals do not always have closed form solutions" (not "The indefinite integrals do not always have the closed form solutions"). (2) Do not use singular countable nouns without articles. Examples: "the derivative is", "a derivative is", but not "derivative is". In certain circumstances an article is optional. The sentences "A matrix with the property (3.2) is well conditioned" and "A matrix with property (3.2) is well conditioned" are both correct. Mistakes in the use of articles are undesirable, but they do not usually obscure the meaning of a sentence.

5.5. Ordinal Numbers

Here are examples of how to describe the position of a term in a sequence relative to a variable k: the kth, (k ± 1)st, (k ± 2)nd, (k ± 3)rd and (k ± 4)th terms (the ordinal numbers being zeroth, first, second, third, fourth, ...). Generally, to describe the term in position k ± i for a constant i, you append to (k ± i) the ending of the ordinal number for position i (th, st, or nd), which can be found in a dictionary or book of grammar.

5.6. Negatives

A double negative results when two words with negative meanings are used together. Double negatives are commonly used in some languages (for example, French and Spanish) as a way of expressing a single negative idea. In English, however, two negative words combine to give a positive meaning. Double negatives are sometimes used for special effect, but they should be avoided in technical writing. Examples of double negatives are

> We do not know nothing about the location of the roots. (Literally means "We know something." Replace by We know nothing or We do not know anything.)

> The method hasn't never failed to work in our experience. (Literally means "The method has failed." Replace never by ever or hasn't by has.)

5.7. Constructions

Certain constructions are common in mathematical writing. You may find it helpful to make a list for reference, beginning with the following entries; each construction on the left is followed by an example on the right.

Let ... be: Let f be a continuous function.
If ... then: If a > −1 then the integral exists.
Suppose (that) ... is/are: Suppose g is differentiable. Suppose that A and B have no eigenvalue in common.
We define ... to be: We define a problem to be stable if ...
It is easy to see/show that: It is easy to show that the error decays as t increases.
From ... we have: From (5.2) we have the inequality ...
By substituting ... into ... we obtain: By substituting (1.9) into (7.3) we obtain ...
A lower bound for: A lower bound for h can be obtained from ...
Without loss of: Without loss of generality we can assume that x > 0.

5.8. Connecting Words and Phrases

In this section I give examples of the use of words and phrases that connect statements. Most of the examples are followed by comments on the degree of emphasis; note, however, that the emphasis imparted sometimes depends on the context in which the word or phrase appears, so extrapolation from these examples should be done with care. Mastering these connectives, and the differences between them, is an important part of learning to write technical English. This section is loosely based on [78, pp. 191-194].

Combinations

Statement a: Direct methods are used to solve linear systems.
Statement b: Iterative methods are used to solve linear systems.
and: Direct methods and iterative methods are used to solve linear systems. (No emphasis on either type of method.)

both: Both direct and iterative methods are used to solve linear systems. (Similar to and.)

also: Direct methods, and also iterative methods, are used to solve linear systems. Direct methods are used to solve linear systems, as also are iterative methods. (Slight emphasis on direct methods.)

as well as: Direct methods, as well as iterative methods, are used to solve linear systems. (Similar to also.)

not only ... but also: Linear systems can be solved not only by direct methods but also by iterative methods. (Emphasizes that there is more than one possibility.)

apart from/in addition to: Apart from (in addition to) direct methods, iterative methods are used to solve linear systems. (Emphasizes that there is more than one type of method; slightly more emphasis on direct methods than also but less than not only ... but also.)

moreover/furthermore: The name of Gauss is attached to the most well-known method for solving linear systems, Gaussian elimination. Moreover (furthermore), a popular iterative technique also takes his name: the Gauss-Seidel method. (Stresses the statement after moreover/furthermore.)

Implications or Explanations

Statement a: The problem has a large condition number.
Statement b: The solution is sensitive to perturbations in the data.

as/because/since: As (because, since) the problem has a large condition number, the solution is sensitive to perturbations in the data. The solution is sensitive to perturbations in the data, as (because, since) the problem has a large condition number.

due to: The sensitivity of the solution to perturbations in the data is due to the ill condition of the problem. (More emphatic than as.)

in view of/owing to/on account of: In view of (owing to, on account of)⁷ the ill condition of the problem, the solution is sensitive to perturbations in the data. (More emphatic than as.)

⁷Due to would be incorrect here; see §4.14.

given: Given the ill condition of the problem, the solution is necessarily sensitive to perturbations in the data. (Inevitable result of the stated condition.)

it follows that: The problem has a large condition number. It follows that the solution is sensitive to perturbations in the data. (Puts more emphasis on the first statement than as.)

consequently/therefore/thus: The problem has a large condition number and consequently (therefore, thus) the solution is sensitive to perturbations in the data. (Intermediate between as and it follows that. Consequently and therefore are preferable to thus at the beginning of a sentence.)

Modifications and Restrictions

Statement a: Runge-Kutta methods are widely used for solving non-stiff differential equations.
Statement b: For stiff differential equations, methods based on backward differentiation formulae (BDF) are preferred.

alternatively: If the differential equations are non-stiff, Runge-Kutta methods can be used; alternatively, if the differential equations are stiff, BDF methods are preferred.

although: Although Runge-Kutta methods are widely used for non-stiff differential equations, BDF methods are preferred when the differential equations are stiff. (More emphasis on BDF methods than alternatively.)

though: Runge-Kutta methods are widely used for non-stiff differential equations, though BDF methods are preferred when the differential equations are stiff.
(Though is weaker than although, and it tends to be used inside a sentence rather than at the beginning. In this example though could be replaced by although, which would give greater emphasis to the BDF methods.)

but: If the differential equations are non-stiff, Runge-Kutta methods can be used, but if the differential equations are stiff, BDF methods are preferred. (Similar to although.)

whereas: BDF methods are used for stiff differential equations, whereas Runge-Kutta methods are used for non-stiff equations.

by contrast: Runge-Kutta methods are widely used for non-stiff differential equations. By contrast, for stiff equations BDF methods are the methods of choice.

except for: Except for stiff differential equations, for which BDF methods are preferred, Runge-Kutta methods are widely used. (Clearly defined limitation or restriction.)

however⁸/on the other hand: Runge-Kutta methods are widely used for solving non-stiff differential equations. However (on the other hand), for stiff differential equations BDF methods are preferred. (Note that however and on the other hand are not always interchangeable. On the other hand is applicable only when there are two possibilities, corresponding to our two hands! This example is similar to although and though, but it merely joins the two statements.)

nevertheless: BDF methods are much less well known than Runge-Kutta methods. Nevertheless, there is great demand for BDF codes. (The second statement is true even though the first statement is true. It would not be correct to replace however by nevertheless in the previous example, although these two words are sometimes interchangeable.)

despite/in spite of: Despite (in spite of) the stiff nature of the differential equations that arise in his chemical reaction problems, Professor X prefers to use his favourite Runge-Kutta code. (Even though his differential equations are stiff, Professor X uses his Runge-Kutta code.)

⁸However can be used with another meaning. "However Runge-Kutta methods are used" means "no matter how Runge-Kutta methods are used".

instead of/rather than: For stiff differential equations we use BDF methods instead of (rather than) Runge-Kutta methods. Instead of using (rather than use) our usual Runge-Kutta code we turned to a BDF code because we thought the differential equations might be stiff. (Rather than and instead of are generally interchangeable but their meaning is different: rather than implies a conscious choice, whereas instead of merely states an alternative.)

Conditions

Statement a: The indefinite integral does not have a closed form solution.
Statement b: Numerical integration provides an approximation to the definite integral.

if: If the integral cannot be evaluated in closed form, numerical integration should be used to obtain an approximation.

unless: Unless the integral can be evaluated in closed form, numerical integration should be used to obtain an approximation. (Using the converse of the logical condition for if.)

whether or not: Whether or not a closed form exists, numerical integration will always provide an approximation to the integral. (A numerical approximation can be obtained independent of whether a closed form exists for the integral.)

provided (providing) that: Provided (providing) that a closed form solution does not exist, the student may resort to numerical integration. (More restrictive than if. The that following provided or providing can often be omitted, as it can in this sentence.)
Emphasis

Statement: Gaussian elimination with row interchanges does not break down; a zero pivot is a welcome event for it signals that the column is already in triangular form and so no operations need be performed during that step of the reduction.

indeed: Gaussian elimination with row interchanges does not break down; indeed, if a zero pivot occurs it signals that the column is already in triangular form and so no operations need be performed during that step of the reduction. (Introducing a further observation that builds on the first one.)

actually: A zero pivot in Gaussian elimination with row interchanges is actually a welcome event, for it signals that the column is already in triangular form and so no operations need be performed on that step of the reduction. (Here, actually emphasizes the truth of the statement. It could be replaced by indeed, but in the first example indeed could not be replaced by actually; this is because both words have several slightly different meanings.)

clearly: A zero pivot in Gaussian elimination with row interchanges signals that the column is already in triangular form and that no operations need be performed on that step of the reduction, so the method clearly (obviously, certainly) cannot break down.

5.9. Spelling

Many English words have alternative spellings. These alternatives fall broadly into two classes. The first class contains those words that are spelled differently in British and American English. Table 5.1 illustrates some of the main differences. A special case worthy of mention is the informal abbreviation for mathematics: maths (British English), math (American English). Obviously, you should use British spellings or American spellings but not a mixture in the same document.

Table 5.1. British versus American spelling.

British spelling / American spelling
behaviour / behavior
catalogue / catalog or catalogue
centre / center
defence / defense
grey / gray
manoeuvre / maneuver
marvellous / marvelous
modelled / modeled
modelling / modeling
skilful / skillful
speciality / specialty

The second class of words is those that have alternative spellings in British English. Examples:

embed, imbed.

learnt, learned. The past tense of the verb learn. The advantage of learnt is that it avoids confusion with the other meaning of learned, which is "having much knowledge acquired by study" (and which is pronounced differently from the first meaning).

spelt, spelled. The past tense of the verb spell.

In the last two examples, the -ed ending is the form used in American English. Other examples are

acknowledgement / acknowledgment
benefited / benefitted
encyclopaedia / encyclopedia
focused / focussed
judgement / judgment

There is a host of words, mostly verbs, that can take an -ise or -ize ending. Examples are criticize and minimize. The -ize ending is used in American English, while the -ise ending tends to be preferred in British English (even though The Concise Oxford Dictionary gives prominence to the -ize form). A number of words, including the following ones, take only an -ise ending: advise, arise, comprise, compromise, concise, devise, disguise, exercise, expertise, likewise, otherwise, precise, premise, reprise, revise, supervise, surmise, surprise, treatise. Verbs ending in -yse (such as analyse and catalyse) are spelt -yse in British English and -yze in American English.
Several plurals have alternative spellings:

appendices / appendixes
formulae / formulas
indices / indexes
lemmata / lemmas
vertices / vertexes

There are also pairs of words that have different meanings in British English but for which one of the pair is often (or always) used with both meanings in American English. Examples: ensure and insure (insure is frequently used for ensure in American English), practice and practise (see §4.14). These differences are subtle, and mastering them is not vital to producing clear and effective prose (native speakers also find them confusing). For a good explanation of the reasons for the often haphazard spelling of English, and the reasons for the differences between British and American spelling, I recommend the book by Bryson [42]. Finally, it is important to be aware of words that have very similar spellings or pronunciations, but different meanings. Examples:

accept (agree to receive) / except (but, excluding)
adapt (modify) / adopt (take up, accept)
advice (noun) / advise (verb)
affect (verb: influence) / effect (noun: result, verb: bring about) (see also §4.14)
complement (the rest) / compliment (flattering remark)
dependant (noun) / dependent (adjective)
device (noun: scheme) / devise (verb: invent)
discreet (careful) / discrete (not continuous)
precede (go before) / proceed (continue)
principal (main) / principle (rule)
sign / sine (trigonometric function)
stationary / stationery (materials for writing)

5.10. Keeping It Simple

The best way to avoid making errors is to keep your writing simple. Use short words and sentences and avoid complicated constructions. Such writing may not be elegant, but it is likely to be unambiguous and readily understood. As your knowledge of English improves you can be more ambitious with your sentence structure and vocabulary. Here is an example giving simple and complicated versions of the same paragraph.

Simple: We note that if the transformation matrix H_p has large elements then A_p is likely to have much larger elements than A_{p−1}. Therefore we can expect large rounding errors in the pth stage, in which case the matrix F_p will have large norm.

Complicated: This conclusion is reinforced if we observe that a transformation with a matrix H_p having large elements is likely to give an A_p with much larger elements than A_{p−1}, and the rounding errors in this stage are therefore likely to be comparatively large, leading to an F_p with a large norm.

5.11. Using a Dictionary

In addition to a bilingual dictionary, you should buy a monolingual English dictionary and a thesaurus (preferably hardback ones, as you will be using them a lot!). Most bilingual dictionaries do not provide enough detail or wide enough coverage for the writer of scientific English. Also, using a monolingual dictionary will help you to think in English. Instead of, or in addition to, a general-purpose dictionary, you may want to acquire a dictionary written for advanced learners of English, of which there are several. These dictionaries have several notable features. They

• describe a core vocabulary of contemporary English;
• use simple language in their definitions (the Longman dictionary mentioned below uses a special defining vocabulary of 2000 words);
• give guidance on grammar, style and usage;
• provide examples illustrating typical contexts;
• show pronunciation;
• in some cases, show allowable places to divide a word when it must be split at the end of a line. (This information can also be found in special-purpose dictionaries of spelling and word division.)
Three such dictionaries described as "outstanding" by Quirk and Stein [232] are the Longman Dictionary of Contemporary English [181], Collins Cobuild English Dictionary [59] and the Oxford Advanced Learner's Dictionary of Current English [212]. Another dictionary that you may find useful is Collins Plain English Dictionary [61], which has very easy to read definitions. Whereas a dictionary provides meanings for a word, a thesaurus lists alternatives for it that have approximately the same meaning (synonyms). Most thesauruses are arranged alphabetically, like a dictionary. In preparation for using your dictionary and thesaurus you should learn the abbreviations they use (these are usually listed at the front) and make sure you understand the grammatical terms noun, adjective, adverb and (transitive or intransitive) verb. When you are looking for the right word to express an idea, pick whatever words you already know (or that you find in a bilingual dictionary) and look them up in the thesaurus. Then look up in the dictionary the definitions of the synonyms you find and try to decide which is the most appropriate word. This may take some time. It is worth making a note to summarize the search you conducted, as you may later want to retrace your steps. Watch out for "false friends"—two words in different languages that are very similar but have different meanings. Examples:

French: actuellement = at present, currently. Cf. actually.
Italian: eventuale = possible. Cf. eventual.
German: bekommen = get, receive. Cf. become.

Part of the task of choosing a word is choosing the correct part of speech. Suppose you write, as a first attempt, "The interested Jacobian matrices are those with large, positive dominate eigenvalues." The two words most likely to be wrong are interested and dominate, as they can take several different forms. The Concise Oxford Dictionary (COD) says dominate is a verb, one meaning of which is "have a commanding influence on". The word we are looking for is an adjective, as it describes a property of eigenvalues. The previous dictionary entry is dominant, an adjective meaning most influential or prevailing. This is the correct word. The word interested is the correct part of speech: it is an adjective, as it should be since it describes the Jacobian matrices. The COD defines interested as meaning "having a private interest; not impartial or disinterested". It is therefore incorrect to talk about an interested Jacobian matrix. The word we require is interesting, another adjective meaning "causing curiosity; holding the attention". This example indicates how useful a dictionary can be if you are unsure of vocabulary and grammar. It is a good idea to look up in the dictionary every nontrivial word you write, to check spelling, meaning and part of speech. Do this after you have written a paragraph or section, so as not to interrupt your train of thought. Using analogy to reason about English vocabulary works often, but not always. The noun indication is related to the verb indicate, and the pattern (noun, verb) = (-ion, -ate) is common. There are exceptions, however, and the one that most often causes trouble to the mathematical writer is the pair perturbation and perturb—there is no verb perturbate. You will not be able to write perfect English simply by using a dictionary, because even the learner's dictionaries cannot tell you everything you need to know.
The best way to learn the subtler aspects of English is to ask someone more fluent in English than yourself to comment on what you have written. If you are not used to writing in English it is almost obligatory to obtain such advice before you submit a paper for publication. Even better is to have a fluent speaker as a co-author. Make sure that you learn from the corrections and suggestions you receive, and keep a note of common mistakes, so as to avoid them.

5.12. Punctuation

There can be differences in punctuation between one language and another. In English, a decimal point separates the integer part of a number from its fractional part (π ≈ 3.141) but in some European languages a comma is used instead (π ≈ 3,141). In English, a comma is used to indicate thousands (2,135), but in French, until quite recently, a full stop was used for this purpose (2.135). (Note that a full stop in British English is a period in American English.) Some examples of different sentence punctuation follow, where □ denotes a sentence or phrase in the given language.

Question: English: □? Greek: □; Japanese (romanized): □ ka. Spanish: ¿□?
Quotation: English: "□" French, Italian: «□» German: „□" Swedish: "□" Spanish: «□» or »□«
Semicolon: English: □; Greek: □· (but now little used)

> Is there any unnecessary repetition?
> Can you convert a sentence from the passive to the active voice?
> Is every claim fully supported?
> Are the mathematical arguments and equations correct?
> Is the notation consistent? Can it be simplified or made more logical?
> Have quotations, references and numerical results been copied into the paper correctly?
> Is due reference made to the work of other authors (beware of "citation amnesia")?
> Are equations, results and the reference list properly numbered? Are all the cross-references and bibliographic citations correct?

A draft of this book contained the phrase "chop sentences mercifully" instead of "chop sentences mercilessly"!

Figure 7.1. Check-list for revising.

7.2. Examples of Prose

> Abstract—This paper discusses the aims and methods of the FTEsol project and in this context discusses the architecture and design of the control system being produced as the focus of the project.

Comments. (1) The phrases in this context and as the focus of the project serve no useful purpose and can be deleted. (2) As mentioned in §6.6, it is not a good idea to begin an abstract with This paper. A complete rewrite is needed: "The aims and methods of the FTEsol project are discussed, together with the architecture and design of the control system being produced." This version avoids the repeated discusses in the original. A more direct alternative, if the use of we is allowed, is "We discuss the aims and methods of the FTEsol project ..."

> Nsolve finds numerically the 5 complex solutions to this fifth-order equation.

Comments. There is a lack of parallelism in 5 and fifth, which the reader may find disturbing. I would write "the five complex solutions to this fifth-order equation".

> Tables give a systematic and orderly arrangement of items of information. Tabular layout has the particular virtue of juxtaposing items in two dimensions for easy comparison and contrast. Tables eliminate tedious repetition of words, phrases and sentence patterns that can instead be put at the tops of columns and the sides of rows in the table. Although tables do not make much impact by visual display, it is possible, by careful arrangements, to emphasize and highlight particular items or groups of information.

Comments.
This is the first paragraph of a section on tables from a manual on technical writing. The first sentence reads like a dictionary definition and is surely not telling readers anything they do not know already. The ideas in the third and fourth sentences are not clearly expressed. Revised version: Tables juxtapose items in two dimensions for easy comparison and contrast. Their row and column labels save the tedious repetition of words that would be necessary if the information were presented in textual form. Although tables lack the visual impact of graphs, information can be grouped and highlighted by careful use of rules and white space.

> In order to pinpoint the requirements for an effective microchip development environment sufficiently to definitively obtain answers to the above questions, it is essential to be able to interview a wide variety of microchip developers who are knowledgeable and experienced in such matters.

Comments. (1) In order is superfluous. (2) To definitively obtain is a split infinitive (to-adverb-verb) that is more naturally written as to obtain definitive answers. It is probably better to replace the whole phrase by to answer. (3) Essential to be able to is probably better replaced by necessary to.

> Due to the advances in computer graphics and robotics, a new interest in geometric investigations has now arisen within computer science focusing mainly on the computational aspects of geometry, forming the research field known as computational geometry.

Comments. A new interest ... has now arisen is a passive phrase that is easily improved. At the same time, we can remove the awkward and incorrect due to (see §4.14). The phrase focusing ... aspects of geometry seems unnecessary. My revised version: Advances in computer graphics and robotics have stimulated a new interest in geometric investigation within computer science, forming the research field known as computational geometry.

> In terms of fractals, a straight line has a dimension of one, an irregular line has a dimension of between one and two, and a line that is so convoluted as to completely fill a plane has a dimension approaching the dimension of the plane, namely a dimension of two. Fractal dimensions assign numbers to the degree of convolution of planar curves.

Comments. This extract is taken from a model paper given in a book about writing a scientific paper. The previous two sentences had also been about fractals. The phrase "in terms of fractals" is unnecessary and the order of these two sentences should be reversed. "Fractal dimensions" do not "assign numbers" but are numbers. It is not clear, grammatically, whether the "namely a dimension of two" applies to the convoluted line or the plane. The split infinitive "to completely fill" can be replaced by "to fill". My rewritten version: Fractal dimensions are numbers that measure the degree of convolution of planar curves. A straight line has dimension one, an irregular line has a dimension between one and two, and a line that is so convoluted as to fill a plane has a dimension approaching two, which is the dimension of the plane.

> Data flow analysis determines the treatment of every parameter and COMMON variable by every subprogram with sufficient precision that nonportable parameter passing practices can be detected.

Comments. This sentence is difficult to understand on its first reading. The phrase by every subprogram delays the punch-line and the everys also delay comprehension.
It is not clear whether "with sufficient precision" applies to the subprograms or the analysis. In fact, the latter was intended. A better version is "Data flow analysis determines the treatment of parameters and COMMON variables with sufficient precision that nonportable parameter passing practices can be detected."

> [12] reports an eigenproblem from the automobile industry. The eigenvalues of interest are those ones having real part greater than zero.

Comments. A citation makes a weak start to a sentence and jolts the eye. In the second sentence ones is unnecessary and the sentence can be made more concrete. Revised version: Jones reports an eigenproblem from the automobile industry [12]. The eigenvalues of interest are those lying in the right half plane.

> Command names have been defined as two letter sequences because it is believed that users prefer to avoid verbosity.

Comments. Who is the believer? It is believed is better replaced by we believe, or evidence suggests, or our experience has shown, preferably with a reference. A hyphen is needed in two-letter sequences, otherwise the meaning is two sequences of letters.

> When dealing with sets of simple figures, a basic problem is the determination of containment relations between elements of the set.

Comments. When dealing is a dangling participle: we are not told who is dealing with the sets. A better version results on omitting when dealing and changing the determination of to to determine. Even better is A basic problem for sets of simple figures is to determine containment relations between elements of the set.

> In the function, exp(x) is first tested for overflow. If it does, then inf is returned.

Comments. "If it does" refers to "tested for overflow" rather than "overflows", which the intended meaning requires. The second sentence can be replaced by "If overflow is detected then inf is returned."

> It is anticipated that the early versions of the system will provide definitive enough information that it will be reasonable to design with some assurance a variety of other systems which should be broader in scope.

Comments. (1) Anticipated should be expected. This usage is described as "avoided by careful writers and speakers of English" in the Collins English Dictionary [60] but is so frequent that it may one day become accepted. Anticipate means to take action against (The enemy had anticipated our move). (2) There are not degrees of definitiveness. (3) Reasonable should probably be possible. (4) There is a wicked which. The last phrase could be replaced by systems of broader scope.

> As far as the minimum eigenvalues of the other boundary element matrices are concerned they can only be small if the value of k is close to a corresponding element of S_p.

Comments. "As far as ... is/are concerned" is a phrase more appropriate to speech than writing. This sentence can be shortened considerably: "The minimum eigenvalues of the other boundary element matrices can be small only if k is close to a corresponding element of S_p."

> This object is achieved by utilizing a set of properties which the signal is known or is hypothesized as possessing.

Comments. This passive sentence can be shortened and made active: "We achieve this aim by using properties that we know or hypothesize the signal to possess." This simplified form shows that the sentence says little. The paragraph in which the sentence appears should be rewritten.
> The main purpose of any scientific article is to convey in the fewest number of words the ideas, procedures and conclusions of an investigator to the scientific community. Whether or not this admirable aim is accomplished depends to a large extent on how skillful the author is in assembling the words of the English language.

Comments. These are the opening two sentences of a medical journal editorial titled "Use, Misuse and Abuse of Language in Scientific Writing". The italics are those of Gregory [117], who points out that the italicized words can be omitted without loss of meaning.

> Mathematica was found to be a suitable environment in which to perform the computational experiments.

Comments. The following rewrite is much shorter, avoids the passive voice, and takes for granted the "suitability": "We carried out our computational experiments in Mathematica."

> These observations simply imply that nearby orbits separate from the orbit of $\gamma$ after many iterations of the map G. Hardy and Littlewood (1979) prove a classical theorem that is useful in this context.

Comments. There is nothing intrinsically wrong with this extract, but the astute writer may wish to rewrite to remove two minor infelicities. First, the near repetition in the phrase "simply imply" could distract the reader. Second, the eye naturally tends to read "iterations of the map G. Hardy"—symbols can cause trouble at the end of a sentence as well as at the beginning!

7.3. Examples Involving Equations

> Theorem. Let $A \in \mathbb{R}^{m \times n}$ be a given matrix, and let $A = U \Sigma V^T$ be a singular value decomposition of $A$. Then the problem $\max\{\, \mathrm{Re}\,\mathrm{trace}(AQ) : Q \in \mathbb{R}^{n \times n}$ is orthogonal$\,\}$ has the solution $Q = V U^T$, and the value of the maximum is $\sigma_1(A) + \cdots + \sigma_n(A)$, where $\sigma_i(A)$ denotes the $i$th singular value of $A$.

> Let $A$ be $n \times n$. Show that if for any Hermitian matrix $H$, $\mathrm{trace}(HA) = 0$, then $A = 0$.

Comments. This question is ambiguous because any can mean whichever or at least one in everyday English. As Halmos [121] recommends, any should always be replaced by each or every in mathematical writing.

> Suppose now that the assumption a = 1 fails.

Comments. An assumption does not fail or succeed: it is either invalid or valid. Better wording might be Suppose now that the condition $a = 1$ is not satisfied, or Suppose now that $a \ne 1$.

> Introduction. Throughout this paper $\|A\|_F$ denotes the Frobenius norm $(\sum_{i,j} a_{ij}^2)^{1/2}$ of a real-valued matrix $A$ and $A^+$ denotes the Moore–Penrose pseudo-inverse of $A$. We define the set ... Given $A, B \in \mathbb{R}^{m \times n}$ we are interested in the orthogonal Procrustes problem in its pure rotation form: ...

Comments. This is a weak start. It can be improved by stating the problem in the first sentence. We can delay the definition of $A^+$ until it is first used. "Real-valued" can be shortened to "real", but we can dispense with it altogether by using the $\mathbb{R}^{m \times n}$ notation. The "Given" phrase does not read well—interest in the mathematical problem is surely independent of being given matrices $A$ and $B$. Revised first sentence: The pure rotation form of the orthogonal Procrustes problem is to minimize $\|A - BQ\|_F$ over all orthogonal $Q$ with $\det(Q) = 1$, where $\|A\|_F = (\sum_{i,j} a_{ij}^2)^{1/2}$ is the Frobenius norm. The writer should now go on to mention applications in which this problem arises, explain what is known about its solution, and state the purpose of the paper.

> Theorem. Let $A$ be an $n \times p$ complex matrix with rank $p$. We define the $p \times p$ positive definite Hermitian matrix $S = A^* A$ and the $n \times p$ matrix $Q = A S^{-1/2}$. Let $U$ be the set of all $n \times p$ orthonormal matrices. Then the following is true: ...

Comments.
(1) The rank assumption implies $p \le n$. It is clearer, though less concise, to say "Let $A$ be an $n \times p$ complex matrix with $n \ge p = \mathrm{rank}(A)$." (2) There is no need to introduce the symbol $S$. If the existence of $(A^* A)^{-1/2}$ is thought not to be obvious (note the preferable slashed fraction in the exponent), it can be established in the proof. The symbol $U$ can also be dispensed with. (3) The inequality is an equality. The second sentence onwards can be simplified as follows: Let $Q = A(A^* A)^{-1/2}$. Then $Q$ is orthonormal and ...

> The optimality of the constant $\pi \log n/4$ in inequality (14.3) is due to Smith [10].

Comments. This sentence suggests that Smith made the constant optimal. Better is was first established by Smith.

> (From an abstract:) The bound is derived in the case of $k$ ($0 < k < p$) explanatory variables measured with error.

Comments. The intrusive inequalities and the all-purpose phrase "in the case of" can be removed, and the reader told, or reminded, what $p$ is, by writing "The derivation of the bound allows for any $k$ of the $p$ explanatory variables to be measured with error."

7.4. Examples from My Writing

Here are some examples from my own writing of how I improved drafts.

(1) Original first two sentences of paper:

> Summation of floating point numbers is a ubiquitous and fundamental operation in scientific computing. It is required when evaluating inner products, means, variances, norms, and all kinds of functions in nonlinear problems.

Improved version (shorter, less passive, more direct): Sums of floating point numbers are ubiquitous in scientific computing. They occur when evaluating inner products, means, variances, norms, and all kinds of nonlinear functions.

An alternative (avoids the dangling participle "when evaluating", at the cost of a more passive construction): Sums of floating point numbers occur everywhere in scientific computing: in the evaluation of inner products, means, variances, norms, and all kinds of nonlinear functions.

(2) Original:

> Here, instead of immediately feeding each correction $e_i$ back into the summation, the corrections are accumulated as $e = \sum_{i=1}^n e_i$ (by recursive summation) and then the global correction $e$ is added to the computed sum.

Improved version (omits the unnecessary mathematical notation): Here, instead of immediately feeding each correction back into the summation, the corrections are accumulated by recursive summation and then the global correction is added to the computed sum.

(3) Original first sentence of abstract:

> If a stationary iterative method is used to solve a linear system $Ax = b$ in floating point arithmetic how small can the method make the error and the residual?

Improved version (avoids the misleading if, more direct): How small can a stationary iterative method for solving a linear system $Ax = b$ make the error and the residual in the presence of rounding errors?

(4) Original first sentence of paper:

> A block algorithm in matrix computations is one that is defined in terms of operations on submatrices rather than matrix elements.

A copy editor removed the words one that is. This changes the meaning, since the sentence now states a property rather than gives a definition, but I felt that the shorter sentence was an improvement.

7.5. A Revised Proof

Gershgorin's theorem, a well-known theorem in numerical linear algebra, specifies regions in the complex plane in which the eigenvalues of a matrix must lie. Here is the theorem and a proof that is correct, but can be improved.
Theorem 1 (Gershgorin, 1931). The eigenvalues of $A \in \mathbb{C}^{n \times n}$ lie in the union of the $n$ disks in the complex plane

$D_i = \{\, z \in \mathbb{C} : |z - a_{ii}| \le \sum_{j \ne i} |a_{ij}| \,\}, \quad i = 1, \ldots, n.$

Proof. The proof is by contradiction. Let $\lambda$ be an eigenvalue of $A$ and $x$ an eigenvector associated with $\lambda$, and assume that $\lambda \notin D_i$ for $i = 1, \ldots, n$. Then from $Ax = \lambda x$ we have

$(\lambda - a_{ii}) x_i = \sum_{j \ne i} a_{ij} x_j, \quad i = 1, \ldots, n.$

Taking absolute values and applying the triangle inequality gives

$|\lambda - a_{ii}| |x_i| \le \sum_{j \ne i} |a_{ij}| |x_j|.$

Assume that $|x_k| = \max_i |x_i|$. Then, dividing the $k$th inequality by $|x_k|$, we have

$|\lambda - a_{kk}| \le \sum_{j \ne k} |a_{kj}|,$

showing that $\lambda$ is contained in the disk $\{\, \lambda : |\lambda - a_{kk}| \le \sum_{j \ne k} |a_{kj}| \,\}$, which is a contradiction.

Several different proofs of Gershgorin's theorem exist, and this one is the most elementary and direct. The presentation of the proof has several failings, though.

1. The proof is too detailed and contains some unnecessary equations and inequalities.

2. The proof by contradiction is unnecessary, since the assumption that is to be contradicted is not used in the proof.

3. The "assumption" on $k$ is really a definition of $k$ and is clearer if phrased as such.

4. In the last line of the proof the disk can be described by its name, $D_k$.

The following proof uses the same reasoning but is much more concise and no less clear.

Proof. Let $\lambda$ be an eigenvalue of $A$ and $x$ a corresponding eigenvector, and let $|x_k| = \max_i |x_i|$. From the $k$th equation in $Ax = \lambda x$ we have

$(\lambda - a_{kk}) x_k = \sum_{j \ne k} a_{kj} x_j,$

and since $|x_j|/|x_k| \le 1$ it follows that $\lambda$ belongs to the $k$th disk, $D_k$.

Of course a proof has to be written with the intended audience in mind. For an undergraduate text the revised proof is probably too concise and some intermediate steps could be added.

7.6. A Draft Article for Improvement

Below is a shortened version of an article that I wrote for an undergraduate mathematics magazine. I have introduced over twenty errors of various kinds, though most are relatively minor. How many can you spot? If you are an inexperienced writer, criticizing this "draft" will be a valuable exercise.

Numerical Linear Algebra in the Sky

In aerospace computations, transformations between different co-ordinate systems are accomplished using the direction cosine matrix (DCM), which is defined as the solution to a time dependent matrix differential equation. The DCM is 3 × 3 and exactly orthogonal, but errors in computing it lead to a loss of orthogonality. A simple remedy, first suggested in a research paper in 1969 is to replace the computed DCM by the nearest orthogonal matrix every few steps. These computations are done in real time by an aircrafts on-board computer so it is important that the amount of computation be kept to a minimum. One suitable method for computing a nearest orthogonal matrix is described in this Article.

We begin with the case of 1 × 1 matrices—scalars.

(a) Let $x_1$ be a nonzero real number and define the sequence:

$x_{k+1} = \frac{1}{2}(x_k + 1/x_k), \quad k = 1, 2, \ldots,.$

If you compute the first few terms on your calculator for different $x_1$ (e.g. $x_1 = 5$, $x_1 = -3$) you'll find that $x_k$ converges to $\pm 1$; the sign depending on the sign of $x_1$. Prove that this will always be the case (Hint: relate $x_{k+1} \pm 1$ to $x_k \pm 1$ and then divide this two relations). This result can be interpreted as saying that the iteration computes the nearest real number of modulus one to $x_1$.

(b) This scalar iteration can be generalized to matrices without loosing it's best approximation property. For a given nonsingular $X_1 \in R^{n \times n}$ define

$X_{k+1} = \frac{1}{2}\left(X_k + X_k^{-T}\right), \quad k = 1, 2, \ldots \quad (1)$

(This is one of those very rare situations where it really is necessary to compute a matrix inverse!) Here, $X_k^{-T}$ denotes the transpose of the inverse of $X_k$.
Natural questions to ask are: Is the iteration well defined (i.e., is $X_k$ always nonsingular)? Does it converge? If so, to what matrix? To investigate the last question suppose that $X_k \to X$. Then $X$ will satisfy $X = (X + X^{-T})/2$, or $X = X^{-T}$, or $X^T X = I$, thus $X$ is orthogonal! Moreover, $X$ is not just any orthogonal matrix. It is the nearest one to $X_1$ as shown by the following Theorem 1 where the norm is denoted by ... This is the matrix analogue of the property stated in (a).

Returning to the aerospace application, the attractive feature of iteration (1) is that if we don't wait to long before "re-orthogonalising" our computed iterates then just one or two applications of the iteration (1) will yield the desired accuracy with relatively little work.

Here are the corrections I would make to the article. (In repeating the exercise myself some time after preparing this section, I could not find all the errors!)

1. First paragraph: hyphenate time-dependent; comma after 1969; aircraft's; comma after on-board computer; article in lower case.

2. Second paragraph: no colon after sequence. In display, $\frac{1}{x_k}$ instead of $1/x_k$, and replace "...,." by "....". Comma after e.g., and a comma instead of the semicolon after $\pm 1$. These two relations; modulus 1.

3. Third paragraph: losing; its. Right parenthesis in display (1) is too large and full stop needed at end of display.

4. Fourth paragraph: third $X$ should be in mathematics font, not roman; $(X + X^{-T})/2$. The equation $X^T X = I$ should not be displayed and it should be followed by a semicolon instead of a comma. (The spacing in $X^T X$ should be tightened up—see page 192 for how to do this in LaTeX.) Comma after $X_1$; following theorem.

5. No need to number the theorem as it's the only one. It should begin with words: The matrix $X_1$ satisfies. $\mathbb{R}^{n \times n}$ instead of $R^{n \times n}$ (two changes). Comma at end of first display. "$I_n$" is inconsistent with "$I$" earlier: make both $I$. Denoted should be defined. In first sum of second display, $i = 1$. The parentheses are too large.

6. Last paragraph: too long. Wrong opening quotes. For consistency with generalized (earlier in article), spell as re-orthogonalizing. Logical omission: I haven't shown that the iteration converges, or given or referred to a proof of the theorem.

Chapter 8

Publishing a Paper

In the old days, when table making was a handcraft, some table makers felt that every entry in a table was a theorem (and so it is) and must be correct.... One famous table maker used to put in errors deliberately so that he would be able to spot his work when others reproduced it without his permission.
— PHILIP J. DAVIS, Fidelity in Mathematical Discourse: Is One and One Really Two? (1972)

The copy editor is a diamond cutter who refines and polishes, removes the flaws and shapes the stone into a gem. The editor searches for errors and inaccuracies, and prunes the useless, the unnecessary qualifiers and the redundancies. The editor adds movement to the story by substituting active verbs for passive ones, specifics for generalities.
— FLOYD K. BASKETTE, JACK Z. SISSORS and BRIAN S. BROOKS, The Art of Editing (1992)

Lotka's law states that the number of people producing $n$ papers is proportional to $1/n^2$.
— FRANK T. MANHEIM, The Scientific Referee (1975)

Memo from a Chinese Economic Journal: We have read your manuscript with boundless delight. If we were to publish your paper, it would be impossible for us to publish any work of lower standard.
And as it is unthinkable that in the next thousand years we shall see its equal, we are, to our regret, compelled to return your divine composition, and to beg you a thousand times to overlook our short sight and timidity.
— From Rotten Rejections (1990)

Once your paper has been written, how do you go about publishing it? In this chapter I describe the mechanics of the publication process, from the task of deciding where to submit the manuscript to the final stage of checking the proofs. I do not discuss how to decide whether your work is worth trying to publish, but Halmos offers some suggestions (particularly concerning what not to publish) in [124]. When to publish is an important question on which it is difficult to give general advice.

I recommend that you find out the history of some published papers. Authors are usually happy to explain the background to a paper. Current Contents (see §14.3) regularly carries articles describing the background to a "Citation Classic", which is a paper that has been heavily cited in the literature. A good example is the article by Buzbee [47] describing the story of the paper "On direct methods for solving Poisson's equations" [B. L. Buzbee, G. H. Golub, and C. W. Nielson. SIAM J. Numer. Anal., 7(4):627–656, 1970]. The article concludes with the following comments.

So, over a period of about 18 months, with no small amount of mathematical sleuthhounding, we completed this now-Classic paper. During that 18 months, we were tempted on several occasions to publish intermediate results. However, we continued to hold out for a full understanding, and, in the end, we were especially pleased that we waited until we had a comprehensive report.

I concentrate here on publishing in a refereed journal. Another important vehicle for publication is conference proceedings. These are more common in computer science than mathematics, and in computer science some conference proceedings are at least as prestigious as journals. It is important to realize that many conference proceedings and a few journals are not refereed, and that when you are considered for hiring or promotion, refereed publications will probably carry greater weight than unrefereed ones.

8.1. Choosing a Journal

There are more journals than ever to which you can send a scientific paper for publication, so how do you choose among them? The most important question to consider is which journals are appropriate given the content of the paper. This can be determined by looking at recent issues and reading the stated objectives of the journal, which are often printed in each issue. Look, too, at your reference list—any journals that are well represented are candidates for submission of your manuscript. Experts in your area will also be able to advise on a suitable journal.

Several other factors should be considered. One is the prestige and quality of the journal. These rather hard-to-judge attributes depend chiefly on the standard of the papers published, which in turn depends on the standard of the submissions. The higher quality journals tend to have lower acceptance rates, so publishing in these journals is more difficult. Acceptance rates are usually not published, but may be known to members of editorial boards. The figures sent by SIAM to its editors do not give acceptance rates, but they suggest that the rates for SIAM journals are usually between 30% and 50%, depending on the journal.
Gleser [106] states that "the major statistical journals receive many more manuscripts than they can eventually publish and, consequently, have a high rate of rejection", and he remarks that The Journal of the American Statistical Association rejects nearly 80% of all papers submitted.

One way to quantify the prestige and quality of a journal is to look at how often papers in that journal are cited in the literature [89], [90]. Such information is provided by the Journal Citation Reports published by the Institute for Scientific Information. A study of mathematics journals based on the 1980 report of citation statistics is given by Garfield [92]; his article identifies the fifty most-cited mathematics journals, the most-cited papers from the most-cited journals, and the journals with the highest impact factor (a measure of how often an average article is cited [100]). Based on the 1980 data, the journals with the ten highest impact factors are, from highest to lowest, Comm. Pure Appl. Math., Ann. Math., Adv. in Math., SIAM Review, Acta Math.—Djursholm, Invent. Math., SIAM J. Numer. Anal., Stud. Appl. Math., Duke Math. J., Math. Program.

The circulation of a journal should also be considered. If you publish in a journal with a small circulation your paper may not be as widely read as you would like. Relatively new journals from less-established publishers are likely to have small circulations, especially in the light of the budget restrictions imposed on many university libraries. SIAM publishes circulation information in the first issue of the year of each journal; see Table 8.1. The circulation of SIAM Review is so high because every SIAM member receives it. For journals published electronically and not requiring a subscription, circulation may be of less concern.

Table 8.1. Circulation figures for some SIAM journals.

  Journal                        Distribution per issue, 1997   Issues per year
  SIAM J. Appl. Math.                      2485                       6
  SIAM J. Comput.                          2069                       6
  SIAM J. Control Optim.                   1965                       6
  SIAM J. Math. Anal.                      1612                       6
  SIAM J. Matrix Anal. Appl.               1659                       4
  SIAM J. Numer. Anal.                     2453                       6
  SIAM J. Optim.                           1450                       4
  SIAM Review                             11531                       4
  SIAM J. Sci. Comput.                     2150                       6

The audience for your paper depends very much on the journal you choose. For example, a paper about numerical solution of large, sparse eigenvalue problems could be published in the SIAM Journal on Scientific Computing, where it would be seen by a broad range of workers in scientific computing; in the SIAM Journal on Matrix Analysis and Applications, whose readership is more biased towards pure and applied linear algebra; or in the Journal of Computational Physics, whose readership is mainly physicists and applied mathematicians, many of whom need to solve practical eigenvalue problems.

Other factors to consider when choosing a journal are the delays, the first of which is in refereeing. How long you have to wait to receive referee reports varies among journals. It depends on how much time the journal allows referees, how efficient an editor is at prompting tardy referees, and, of course, it depends on the referees themselves (who usually act as referees for more than one journal). The other major delay is the delay in publication: the time from when a paper is accepted to when it appears in print. For a particular article, this delay can be calculated by comparing the date of final submission or acceptance (displayed for each article by most journals) with the cover date of the journal issue.
The publication delay depends on the popularity of the journal and the number of pages it publishes each year. A survey of publication delays for mathematics journals appears each year in the Notices of the American Mathematical Society ("Backlog of Mathematics Research Journals")—it makes interesting reading. The publication delay also depends, partly, on the author. Ervin Rodin, the editor-in-chief of Computers and Mathematics with Applications, explains in an editorial [238] some of the reasons for delays. Four that are not specific to this particular journal are that the figures or graphs are not of high enough quality, the references are not given in full detail, the equations are inconsistently numbered, and the proofs are not returned promptly.

Finally, if your paper has been prepared in TeX (see Chapter 13) you might prefer to send it to a journal that typesets directly from author-supplied TeX source; as well as saving on proofreading this sometimes brings the benefit of extra free reprints (nowadays most journals provide some free reprints for all papers—as few as 15 or as many as 100).

8.2. Submitting a Manuscript

Before submitting your manuscript (strictly, it is a paper only after it has been accepted for publication) you should read carefully the instructions for authors that are printed in each issue of your chosen journal. Most of the requirements are common sense and are similar for each journal. Take particular note of the following points.

1. To whom should the manuscript be submitted? Usually it should be sent to the editor-in-chief, but some journals allow manuscripts to be sent directly to members of the editorial board; judgement is required in deciding whether to take advantage of this option (it may be quicker, since the manuscript skips the stage where an editor-in-chief selects an associate editor). Usually, the editorial addresses are printed in each issue of a journal. Look at a recent issue, as editors and their addresses can change.

2. Enclose a covering letter that states to which journal the manuscript is being submitted—this is not obvious if the organization in question has several journals, as does SIAM. State the address for correspondence if there is more than one author. Usually only the designated author receives correspondence, proofs and reprint order forms. If your address will change in the foreseeable future, say so, even if the change is only temporary. (This is particularly important at the typesetting stage, after a paper has been accepted, because proofs must be dealt with quickly.)

3. How many copies of the manuscript are required? SIAM requires five. If the destination is abroad, send them by air mail; don't use surface mail, which can take several weeks. Even if a paper is rejected, the manuscript is not usually returned. You should keep a copy, particularly as you may need it at the proofreading stage.

4. Always submit single-sided copies (not double-sided) and fasten them with a staple to avoid pages being lost (the referees can easily remove the staple if they wish). Provide your full address on the manuscript (some authors forget). Dating the manuscript may help to prick the conscience of a tardy editor or referee.

5. Give key words and the Mathematics Subject Classifications and Computing Reviews classification, if these are required by the journal. It is a good policy to include them as a matter of course.
You may also include at this stage a "running head"—a shortened title that appears at the tops of pages in the published paper. The running head should not exceed about 50 characters.

6. If your manuscript cites any of your unpublished work it will help the referees if you enclose a copy of that work, particularly if the manuscript relies heavily on it. Doing so avoids delays that might result from the referees asking to see the unpublished work.

7. If you are using LaTeX, double-check that the cross-references and citations are correct. Adding a new equation or reference at a late stage and not running LaTeX twice (or three times if you are using BibTeX—see Table 13.2) can result in incorrect numbering. I have seen citation errors of this type persist in a published journal article. Also, if the journal accepts TeX papers, use the style files or macros provided by the publisher; you may still be able to convert the paper to the required format once it has been accepted.

8. Most journals require that any material submitted for review and publication has not been published elsewhere. Papers that have appeared in preliminary form in conference proceedings are usually an exception to this rule. SIAM, for instance, requires that papers that have appeared in conference proceedings or in print anywhere in an abbreviated form be significantly revised before they are submitted to a SIAM journal. If your paper has already appeared in published form you must make this clear when the paper is submitted, and you must indicate so by a footnote on the first page.

9. Before putting your manuscript in the envelope, check that no pages are missing from any of the copies. Again, delays will result if one of the copies is incomplete.

10. You should receive an acknowledgement of receipt of the manuscript within four weeks of submitting it. If you do not, write to ask whether the manuscript was received.

8.3. The Refereeing Process

I will explain how the refereeing process works for SIAM journals (this discussion is partly based on an article by Gear [103]). Procedures for most other journals are similar. If a manuscript is submitted to "The Editor" at the SIAM office, it is logged by SIAM and a letter of acknowledgement is sent to the author, giving a manuscript number that should be quoted in future correspondence. The manuscript is then passed on to the editor-in-chief, who assigns it to a member of the editorial board (this stage can take a few weeks). SIAM (or in some cases, the editor-in-chief) mails the submission, the covering letter, and a Manuscript Transmittal Sheet to the chosen editor. The editor writes to two or more people asking them to referee the paper, suggesting a deadline about six weeks from the time they receive the paper. The editor may send a sheet of "instructions for referees". Figure 8.1 contains an extract from the SIAM instructions (many of the SIAM journals also have more specific instructions), which indicates again the points to consider before submitting a manuscript. When all the referee reports have been received the editor decides the fate of the paper, informs the author, and notifies SIAM. (In some journals, including some of those published by SIAM and The Institute of Mathematics and Its Applications, the editor makes a recommendation to the editor-in-chief, who then writes to the author, possibly not naming the editor.) The paper can be accepted, accepted subject to changes, returned to the author for a substantial revision, or rejected.
If a referee report is not received on time the editor reminds the referee that the report is due. After six months of inactivity the manuscript is classed as "flagged" by SIAM, and the editor is urged by SIAM to expedite the current stage of the refereeing process. If you have had no response after six months you are quite entitled to contact SIAM or the editor-in-chief to enquire whether any progress has been made in refereeing the paper. Papers are sometimes mislaid or lost and your enquiry will at least reveal whether this has happened.

Few papers are accepted in their original state, if only because of minor typographical errors (typos). When preparing a revised manuscript in response to referee reports it is important to address all the points raised by the referees. In the covering letter for the revised version these points might be summarized, with an indication for each one of the action (if any) that was taken for the revision. If you do not act on some of the referees' recommendations you need to explain why. It greatly helps the editor if you explain which parts of the paper have been changed, as it is an irksome task to compare two versions of a paper to see how they differ. When you submit a revised manuscript it is a good idea to mark on the front page "Revised manuscript for journal X" together with the date. This will prevent the revised version from being confused with the original. When a revised paper is received, the editor may ask the referees to look at it or may make a decision without consulting them.

The most important criterion for acceptance of a publication is originality and correctness. Clear exposition and consistent notation are also required. All papers should open with an introduction to orient the reader and explain the purpose of the paper.

You are asked to prepare an unsigned report, in duplicate. We ask that you keep the report formal and impersonal so that the editor can forward it to the author. A specific recommendation for acceptance or rejection should be excluded from the report. The following checkpoints are suggested for explicit consideration:

• Is the paper consistent with editorial objectives?
• Is the work correct and original or of wide appeal, if a review paper?
• Is its presentation clear and well organized?
• Is the notation well conceived and consistent?
• How does the paper relate to current literature?
• Are the references complete, relevant, and accurate?
• Does the title accurately characterize the paper?
• Does the abstract properly summarize the paper without being too vague?
• Does the introduction relate the paper to contemporary work and explain the purpose of the paper?
• Are equation numbers and figure numbers consistent?

When the manuscript fails to meet some explicit requirement, what material should the author develop to improve the presentation?

Cover Letter. Please return your report with a cover letter stating your recommendation concerning disposition of the paper. We ask that you justify a recommendation of acceptance as well as one of rejection, and please send the cover letter, report, and manuscript to the editor who requested this review.

Figure 8.1. Extract from SIAM instructions for referees.

Keep in mind that the editor and referees are usually on your side. They
are mostly busy people and would like nothing more than to be able to read your paper, quickly realize that it is correct and deserves publishing, and make that recommendation or decision. Anything you can do to help them is to your benefit. A major advantage of writing in a clear, concise fashion is that your papers may be refereed and edited more quickly!

8.4. How to Referee

The main task of a referee is to help an editor to decide whether a paper is suitable (or will be suitable, after revision) for publication in a journal. Opinions vary on precisely what a referee should do (see the references below), and different referees go about the task in different ways. It is useful to think of the refereeing process as comprising two stages, even though these are often combined. The first stage is an initial scrutiny in which the referee forms an overall view of the paper, without reading it in detail. The question to be considered is whether the paper is original enough and of sufficient interest to the journal's readers to merit publication assuming that the paper is free of errors. If a negative conclusion is reached, then there is no need for the referee to check the mathematical details. If the overview reveals a paper potentially worth publishing then more detailed study is required, including consideration of the questions listed in Figure 8.1.

How carefully the referee should check the mathematics depends partly on the nature of the paper. A paper applying a standard technique to a new problem may need less meticulous checking than one developing a new method of analysis. The referee's time is better concentrated on looking at the key ideas and steps in proofs rather than the low-level details, as this is the way that errors are most likely to be spotted. The referee has to examine all facets of a paper and decide which lines of investigation will be the most fruitful in coming to a decision about the paper's merit. Experienced referees learn a lot from small clues. If one lemma is imprecisely stated or proved, perhaps other results need to be carefully checked. If an important relevant paper is not cited, perhaps the author is not fully conversant with the existing literature. An unreasonable assumption, perhaps hidden in a piece of analysis, calls into question the value of a result and may lead a referee to an immediate recommendation of rejection.

Some particular advice on refereeing follows.

1. If you are not willing or able to provide a report by the date requested by the editor, return the manuscript immediately. Alternatively, give the editor a date by which you can guarantee a report and ask if that would be acceptable.

2. You can save a lot of time by taking a global view of a paper before starting to examine the details. Twenty minutes spent coming to the suspicion that a paper appears flawed may enable you to pinpoint the errors much sooner than if you read the paper from start to finish, checking every line.

3. Make your recommendation in a cover letter to the editor, but not in the report itself. The report should contain the reasoning that supports your recommendation.

4. It is not necessary for you to summarize the paper if your summary would be similar to the abstract. If you can give a different and perhaps more perceptive overview of the work it will be of much use to the editor.

5.
In considering changes to suggest for improving a paper that you think merits publication, you can ask many of the same questions that you would ask when writing and revising your own work. In particular, consider whether the notation, organization, length and bibliography are suitable.

6. If your recommendation is rejection, do not spend much time listing minor errors, typographical or otherwise, in the report (unless you wish, for example, to help an author whose first language is not English improve his or her grammar and spelling). Concentrate on describing the major flaws. Always try to offer some positive comments, though—imagine how you would feel on reading the report if the paper were yours.

7. Always be polite and avoid the use of language that can be interpreted as offensive or over-critical. In particular, avoid using unnecessary adjectives. Thus say "incorrect" rather than "totally incorrect" and "unwarranted" rather than "completely unwarranted".

8. Building a reputation as an efficient, conscientious and perceptive referee is worthwhile for several reasons. You will probably receive important papers to referee, thereby finding out about the work before most other researchers. As a trusted referee, your recommendations will help to influence the direction of a journal. You may even be invited onto editorial boards because of your reputation as a referee. Reputations are not made, though, by providing shallow reports of the form "well written paper, interesting results, recommend publication" when other referees point out serious weaknesses and recommend rejection. As Lindley [179] puts it, "A sound dismissal is harder to write than an advocacy of support."

For further discussion of the refereeing process I recommend the papers by Gleser [106], Lindley [179], Manheim [193], Parberry [216], Smith [250] (intended for "applied areas of computer science and engineering") and Thompson [272]. See also [164, §§15–17] and the collection Publish or Perish [142], which includes chapters titled "The Refereeing Process", "The Editor's Viewpoint" and "The Publisher's Viewpoint".

8.5. The Role of the Copy Editor

After a manuscript is accepted for publication it goes to a copy editor. The copy editor of journal papers has three main aims: to do limited rewriting or reorganizing of material in order to make the paper clear and readable; to edit for correctness of grammar, syntax and consistency; and to impose the house style of the journal (a fairly mechanical process). The copy editor tries to make only essential changes and to preserve the author's style.

When you see a copy-marked manuscript for the first time your reaction might be one of horror at the mass of pencilled changes. Most of these will be instructions to the printer to set the paper in the house style: instructions on the fonts to use, spacing in equations, placement of section headings, and a host of other details. There may also be some changes to your wording and grammar. The editor will have had a reason for making each change. The editor will probably have made improvements that you overlooked, such as finding words that are unnecessary, improving the punctuation, and correcting inconsistencies (copy editors keep track of words or phrases that have odd spellings, capitalization or hyphenation and make sure that you use them consistently).
If you are unhappy with any of the changes a copy editor has made you can attach an explanatory note to the proof, or, if you are sure an error was made, reverse the change on the proof. Copy editors are always willing to reconsider changes and will pay attention to the author's views in cases of disagreement.

8.6. Checking the Proofs

Some time after your manuscript has been accepted, and after you have received and signed a copyright transferral form, you will receive page or galley proofs (galleys are sheets that have not been broken up into pages—if the journal typesets in TeX the galley stage may be nonexistent). You are asked to check these and return the marked proofs within a short period, often two days. For some journals, the original copy-edited manuscript is enclosed, so that you can see which changes the copy editor marked. If the marked manuscript is not enclosed you are at a disadvantage and you should check against your own copy of the manuscript, particularly for omissions, which can be very difficult to spot when you read only the proofs. The proofs you receive are usually photocopies, so if you see imperfections such as blotches and faint characters these might not be present on the original copy.

A thorough check of the proofs requires one read-through in which you do a line by line comparison with the marked manuscript, and another in which you read the proofs by themselves "for meaning". In addition to checking line by line, do a more global check of the proofs: look at equation numbers to make sure they are in sequence with no omissions, check for consistency of the copy editing and typesetting (typefaces, spacing, etc.) and make sure the running heads are correct. As mentioned above, most journals print the date the manuscript was first received and the date it was accepted. Check these dates, as they may help to establish priority if similar work is published by other researchers. Some specific errors to check for are shown in Figure 8.2. If your paper contains program listings they should be checked with extra care, as printers find them particularly difficult to typeset.

• Unmatched parentheses: $a = (b + c/2$.
• Wrong font: a = (b + c)/2.
• Missing words or phrases.
• Misspelt or wrong words (e.g., complier for compiler, special for spectral).
• Repeated words.
• Missing punctuation symbols (particularly commas).
• Incorrect hyphenation. If your manuscript contains a word hyphenated at the end of a line, the copy editor and printer may not know whether the hyphen is a permanent one or a temporary one induced by the line break.
• Widow: short last line of a paragraph appearing at the top of a page. Or "widow word": last word of a paragraph appearing on a line by itself (can be cured in TeX by binding the last two words together with a tie).
• O (capital Oh) for 0 (zero), l (lower case ell) for 1, wrong kind of asterisk (∗ instead of *).
• Bad line breaks in mathematical equations.
• Incorrectly formatted displayed equations (e.g., poor alignment in a multiline display).
• Change in meaning of words or mathematics resulting from copy editor's rewriting.
• Missing mathematical symbols (e.g., $\eta \|Lx - b\|$ instead of $\eta = \|Lx - b\|$).
• Misplaced mathematical symbols or wrong kind: subscript should be superscript, $R^{m \times n}$ instead of $\mathbb{R}^{m \times n}$, $A^*$ instead of $A^T$, $|A\|$ instead of $\|A\|$.
• Errors in numbers in tables.
• Incorrect citation numbers.
E.g., if a new reference [4] is added, every citation [n] must be renumbered to [n + 1] for n ≥ 4, but this is easily overlooked.

Figure 8.2. Errors to check for when proofreading.
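Two of the hazards in Figure 8.2, incorrect equation numbers and incorrect citation numbers, are largely avoidable at source if you let the typesetting system generate the numbers. Here is a minimal LaTeX sketch of the idea; the label eq:main, the citation key jones97 and the bibliography file refs.bib are invented for illustration, and a refs.bib containing that entry must exist for the file to process cleanly.

\documentclass{article}
\begin{document}
\begin{equation}\label{eq:main}
Ax = b
\end{equation}
Equation (\ref{eq:main}) is studied by Jones \cite{jones97}.
\bibliographystyle{plain}
\bibliography{refs}  % the entry with key jones97 lives in refs.bib
\end{document}

Adding a new equation or reference then changes no hand-typed numbers; recall, though, the warning in point 7 of §8.2 that LaTeX must be run again (and BibTeX re-run) before the numbers settle.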
Common errors in typeset program listings include incorrect spacing and indentation and undesired line breaks. It is best to have a listing set from camera-ready copy, if possible.

Copy editors on mathematical journals often have mathematical qualifications, but they may still introduce errors in attempts to clarify, so read very carefully. For example, in one set of proofs that I received the phrase "stable in a sense weaker than" had been changed to "stable, and in a sense, weaker than", which altered the meaning. Since the original may have lacked clarity I changed the phrase to "stable in a weaker sense than". In every set of proofs I have received, I have found mistakes to correct. If you receive apparently perfect proofs, perhaps you are not proofreading carefully enough!

As an indication of how hard it can be to spot typographical errors I offer the following story. Abramowitz and Stegun's monumental Handbook of Mathematical Functions [2] was first published in 1964 and various corrections have been incorporated into later printings. As late as 1991, a new error was discovered: the right-hand side of equation (26.3.16) should read $Q(h) - P(k)$ instead of $P(h) - Q(k)$ [242]. This error was spotted when a Fortran subroutine from the NAG library that calculates probabilities behaved incorrectly; the error was traced to the incorrect formula in Abramowitz and Stegun's book.

I have spotted typographical errors in virtually every part of journal papers, including the title. The first sentence of one paper says that "One of the best unknown methods ... was developed by Jacobi." In one book preface the author thanks colleagues for helping him to produce a "final prodiuct" that is more accurate than he could have managed on his own. In the introduction of another, the author says that it would "be hard to underestimate" the importance of the subject of the book. If you still need convincing of the importance of careful proofreading, consider the book on sky diving that was hurriedly recalled so that an erratum slip could be added. It read "On Page 8, line 7, 'State zip code' should read 'Pull rip cord'."

Corrections should be marked using the standard proofreading symbols; these will be listed on a sheet that accompanies the proofs. A list of proofreading symbols is given in Figure 8.3. In my experience it is possible to deviate slightly from the official conventions as long as the marking is clear. But make sure to mark corrections in the margin, which is the only place a printer looks when examining marked proofs; marks in the text should specify only the location of the correction. Take care to answer any queries raised by the copy editor (usually marked Au. in the margin). (The American Mathematical Society advises its editors to avoid asking "Is it this or that?" because many mathematicians are likely to answer "yes".)

Figure 8.3. Proofreading symbols. [The figure tabulates the standard marks, showing for each the symbol to be used in the margin and in the text: separate; wrong font (w.f.); capital (cap.); small capital (sm. cap.); bold face (b.f.); broken type; new paragraph; no paragraph; run in; thin space; 1-em space; equalize space; less space; close up; raise or lower the enclosed characters; insert hyphen; quotation marks; 1-em dash; move to left or right; superior or inferior letter or figure; straighten lines; change to indicated type style; lowercase; insert; correct vertical alignment; and let it remain as set.]

Some journals require the printer's errors to be marked in one colour and the author's changes in another. Even if the copy editor doesn't ask, it is advisable to check the reference list to see if any unpublished or "to appear" references can be updated. Don't be afraid of writing notes to the copy editor (I often use yellow stick-on notes), particularly to praise their often under-appreciated work. You should try to restrict changes you make on the proofs to corrections. If you make other changes you may have to pay for the extra typesetting costs. It is usually possible to add any vital, last-minute remarks (such as a mention of related work that has appeared since you submitted your manuscript) in a paragraph headed "Note Added in Proof" at the end of the paper.

At about the same time as you receive the proofs you will receive a reprint order form and, depending on the publisher, an invitation to pay page charges. Page charges cover the cost of typesetting the article and payment is usually optional for mathematics journals. Even if you request only the free reprints you still need to complete and return the reprint order form.

8.7. Author-Typeset TeX

I now focus on papers that are typeset in TeX by the author (TeX itself is discussed in Chapter 13). Many journals provide macros for use with TeX, LaTeX or AMS-LaTeX that produce output in the style of the journal. These are usually available from the journal Web page or by electronic mail from the editors or publishers.

In the same way that a computer programmer can write programs that are difficult to understand, an author can produce TeX source that is badly structured and contains esoteric macros, even though it is syntactically correct. Such TeX source is difficult to modify and this can lead to errors being introduced. If you intend to provide TeX source you should try to make it understandable. Watch out for precarious comments, such as those in the following example (the % symbol in TeX signifies that the rest of the line is a comment and should not be printed).

The widely used IEEE standard arithmetic has $\beta=2$. % ANSI/IEEE Standard 754-1985
Double precision has $t=53$, $L=-1021$, $U=1024$, and
$u = 2^{-53} \approx 1.11 \times 10^{-16}$. % IEEE arithmetic uses round to even.

This produces the output
The widely used IEEE standard arithmetic has β = 2. Double precision has t = 53, L = −1021, U = 1024, and u = 2⁻⁵³ ≈ 1.11 × 10⁻¹⁶.

Suppose that, as these lines are edited on the computer, they are reformatted (either automatically, or upon the user giving a reformatting command) to give

The widely used IEEE standard arithmetic has $\beta=2$. % ANSI/IEEE Standard 754-1985 Double
precision has $t=53$, $L=-1021$, $U=1024$, and $u = 2^{-53} \approx 1.11 \times 10^{-16}$. % IEEE
arithmetic uses round to even.

As the comment symbols now act on different text, this results in the incorrect output

The widely used IEEE standard arithmetic has β = 2. precision has t = 53, L = −1021, U = 1024, and u = 2⁻⁵³ ≈ 1.11 × 10⁻¹⁶. arithmetic uses round to even.

Because of this possibility I try to keep comments in a separate paragraph. So I would format the above example as

The widely used IEEE standard arithmetic has $\beta=2$.
Double precision has $t=53$, $L=-1021$, $U=1024$, and
$u = 2^{-53} \approx 1.11 \times 10^{-16}$.
% ANSI/IEEE Standard 754-1985
% IEEE arithmetic uses round to even.

Similar difficulties may arise if you edit with a line length of more than 80 characters (the standard screen width on many computers). A different text editor might wrap characters past the 80th position onto new lines, or, worse, truncate the lines; in either case, the meaning of the TeX source is changed.

Errors can be introduced in the transmission of TeX source by email. Characters such as ~, ", {, } may be interchanged because of incompatibilities between the ASCII character set and other character sets used by certain computers. To warn of translation you could include test lines such as

%% Exclamation \! Dollar \$ Acute accent \' Double quote \"
%% Percent \% Left paren \( Hash (number) \# Ampersand \& Right paren \)

at the top of your file. Some mail systems object to lines longer than 72 characters. Some interpret a line beginning with the word from as being part of a mail message header and corrupt the line. Thus

from which it follows that

might be converted to

>from: which it follows that

which would be printed in TeX as

¿from: which it follows that

Another possible problem arises when ASCII files are transferred between Unix machines and DOS and Windows machines, since Unix terminates lines with a line-feed character, whereas DOS and Windows use a carriage-return line-feed pair. Public domain utilities, with names such as dos2unix and unix2dos, are available for converting between one format and the other.

The conclusions from this discussion are twofold. (1) You should be aware of the potential problems and guard against them. Limit lines to 80 characters (or 72 characters if you will be mailing the file and are unsure which mail systems will be used), keep comment lines separate from the main text, and prepare source that is easy to read and understand. (2) Read proofs of papers that are typeset from your source with just as much care as those that are re-typeset. Between submission and printing many errors can, potentially, be introduced.

If your paper is prepared in LaTeX and you use nonstandard packages, make sure that you send the packages with the source. If you wish to avoid sending multiple files (which can be inconvenient by email), you can use the filecontents environment to put everything in one LaTeX file. Suppose your paper uses the package path. Then you can insert
\begin{filecontents}{path.sty}
contents of path.sty
\end{filecontents}

before the \documentclass command. When the file is run through LaTeX, the file path.sty is created if it does not exist; otherwise, a warning message is printed. Any number of filecontents environments can be included.
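As a concrete illustration, here is a minimal sketch of a complete file built this way. The package name path comes from the example above; the body of the filecontents environment shown is a placeholder, not the real contents of path.sty.

\begin{filecontents}{path.sty}
\ProvidesPackage{path}
% ... the real contents of path.sty go here ...
\end{filecontents}
\documentclass{article}
\usepackage{path}
\begin{document}
Body of the paper.
\end{document}

When this file is processed, LaTeX first writes path.sty to disk (provided no file of that name already exists) and then reads it back in through \usepackage in the usual way, so the recipient needs nothing beyond the single file.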
8.8. Copyright Issues

The responsibility for obtaining permission to reproduce copyrighted material rests with the author, rather than the publisher. If you want to quote more than about 250 words from a copyrighted source, you need to obtain permission. Copyright law is vague about the exact number of words allowed; if in doubt, obtain permission. Permission is also required for previously published figures and tables if they are to be reproduced in their original form. For substantially altered figures or tables or paraphrased quoted material, permission is not required but a citation of the original source must be included. All permissions must be obtained before a manuscript is submitted for publication.

Write to the copyright owners (usually the publisher) with complete information about the material you would like to borrow and complete information about the book or paper you are writing. Specify whether you are applying for permission for print publication or electronic publication or both. It is usually helpful to send a duplicate letter since many copyright owners will grant permission directly on the letter and will retain the extra copy. In most cases permission is granted. Note that new permissions are usually required for new editions of a book. To be safe, always cite the original source of any material reproduced from another publication, whether or not permission is necessary. It is also wise to check with your publisher (or check your contract) if you wish to reproduce material from your own previously published material.

When quoting text from another source, a convenient way to credit that source is to include publication information along with the quote, rather than in a footnote. When acknowledging permission to reproduce figures or tables, however, include the permission and original sources in the caption or a footnote. Some publishers may require exact credit lines; be sure to follow their instructions word for word.

8.9. A SIAM Journal Article—From Acceptance to Publication

What happens once your paper has been accepted for publication in one of SIAM's journals? The following description answers this question. You may be surprised at how many times your paper is checked for errors! This description also explains why you should not expect to be able to make late changes to the paper after you have dealt with the proofs. Note that this description may not be typical of the processes used by other publishers, especially those less involved with electronic publication.

TeX Papers

When the editor's acceptance letter arrives at the SIAM publications office, an editorial assistant processes the acceptance and sends acceptance correspondence to the author. Acceptance correspondence includes a request that the author send the TeX source file for the paper to SIAM immediately, or immediately after the SIAM style macros have been applied. Once received, the TeX file is "pre-TeXed" (macro verification and minor editorial changes). The electronic art is included, any non-electronic art is scanned, and the paper is printed out again. This new printout is then copy edited.

After the paper is edited it goes to a TeX compositor (who may or may not be an in-house SIAM staffperson) for correction of the TeX file: the editing changes are made to the file, the file is re-run, and "first proofs" are printed. The paper may be proofread at this point. If many corrections are still necessary, a second round of "first pass" corrections may be done and new "first proofs" printed. Proofreading by the SIAM staff consists of checking the edited manuscript against the proofs to ensure that the requested edits were correctly incorporated into the file. The paper is not re-read. In effect, except for author corrections, the paper is completed at this point.

First proofs are then mailed to the author, who is asked to return the marked proofs or a list of changes within 48 hours of receipt. Email or faxing of changes is encouraged. After the author's changes are returned to SIAM, a SIAM staffperson incorporates the necessary corrections into the TeX file, the file is re-run, and "second proofs" are printed. These are proofed by a SIAM staffperson. As long as no typesetting or major editorial errors are found, the text of the paper is considered final at this point.

SIAM's on-line services manager assigns the paper to the issue of the journal that is currently being filled. The volume, issue, year, and page numbers are added to the TeX file. (The year is the year of electronic publication, which is the definitive publication date; the printed volume may appear in a later year.) Then the final PostScript file is generated, as well as dvi and PDF files. Finally, the files are published (posted on SIAM's Web server) as part of SIAM Journals Online (http://epubs.siam.org), in the appropriate issue. Issues are filled according to the print pagination budget and are (subsequently) printed and mailed according to the same budget.

Non-TeX Papers

When the editorial assistant sends acceptance correspondence to an author who has not already indicated that TeX source is available for the paper, the author is asked to confirm whether or not a TeX file is available. If there is no TeX file, the paper is copy edited and sent to a TeX compositor to be keyed into LaTeX. The compositor prints the corrected paper ("first proofs"); the proofs are checked against the edited manuscript and sent to the author for proofreading. At this time the compositor sends the LaTeX files to SIAM, where the remainder of the production process will be completed. The author is asked to return corrections to SIAM within 48 hours of receipt of the proofs. The rest of the process is the same as for TeX papers.

A Brief History of Scholarly Publishing (extract)

50,000 B.C. Stone Age publisher demands that all manuscripts be double-spaced, and hacked on one side of stone only.
1483 Invention of ibid.
1507 First use of circumlocution.
1859 "Without whom" is used for the first time in list of acknowledgments.
1916 First successful divorce case based on failure of author to thank his wife, in the foreword of his book, for typing the manuscript.
1928 Early use of ambiguous rejection letter, beginning, "While we have many good things to say about your manuscript, we feel that we are not now in position ..."
1962 Copy editors' anthem "Revise or Delete" is first sung at national convention. Quarrel over hyphenation in second stanza delays official acceptance.

— DONALD D. JACKSON, in Science with a Smile (1992)
— DONALD D. JACKSON, in Science with a Smile (1992)

Publication Peculiarities

What is the record for the greatest number of authors of a refereed paper, where the authors are listed on the title page? My nomination is

P. Aarnio et al. Study of hadronic decays of the Z0 boson. Physics Letters B, 240(1,2):271-282, 1990. DELPHI Collaboration.

This paper has 547 authors from 29 institutions. The list of authors and their addresses occupies three journal pages. Papers with over 1,000 authors almost certainly exist, but the authors of these monster collaborations are usually listed in an appendix.

For the shortest titles, I offer

Charles A. McCarthy. c_p. Israel J. Math., 5:249-271, 1967.

Norman G. Meyers and James Serrin. H = W. Proc. Natl. Acad. Sciences USA, 51:1055-1056, 1964.

The latter title has the virtue of forming a complete sentence with subject, verb and object. The only title I have seen that contains the word OK is

Thomas F. Fairgrieve and Allan D. Jepson. O.K. Floquet multipliers. SIAM J. Numer. Anal., 28(5):1446-1462, 1991.

The next paper is famous in physics. Dyson explains that "Bethe had nothing to do with the writing of the paper but allowed his name to be put on it to fill the gap between Alpher and Gamow" [75].

R. A. Alpher, H. Bethe, and G. Gamow. The origin of chemical elements. Physical Review, 73(7):803-804, 1948.

The only paper I know that has an animal as coauthor is

J. H. Hetherington and F. D. C. Willard. Two-, three-, and four-atom exchange effects in bcc 3He. Physical Review Letters, 35(21):1442-1443, 1975.

The story of how his cat (Felis domesticus) Chester, sired by Willard, came to be a coauthor is described by Hetherington in [291, pp. 110-111].

Chapter 9
Writing and Defending a Thesis

1987: Student writes Ph.D. thesis completely in verbatim environment.
— DAVID F. GRIFFITHS and DESMOND J. HIGHAM, Great Moments in LaTeX History (1997) [118]

Calvin: I think we've got enough information now, don't you?
Hobbes: All we have is one "fact" you made up.
Calvin: That's plenty. By the time we add an introduction, a few illustrations, and a conclusion, it will look like a graduate thesis.
— CALVIN, Calvin and Hobbes by Bill Watterson (1991)

Remember to begin by writing the easiest parts first... It is surprising how many people believe that a thesis... should be written in the order that it will be printed and subsequently read.
— ESTELLE M. PHILLIPS and D. S. PUGH, How to Get a PhD (1994)

Virtually all that has been said in Chapters 6 and 7 about writing and revising a paper applies to theses. (The term "dissertation" is synonymous with thesis, and is preferred by some.) In this chapter I give some specific advice that takes into account the special nature and purposes of a thesis written for an advanced degree. Much of the discussion applies to undergraduate-level projects, too.

9.1. The Purpose of a Thesis

The purpose of a thesis varies with the type of degree (Master's or Ph.D.) and the institution. The thesis might have to satisfy one or more of the following criteria:

• show that the student has read and understood a body of research literature,
• provide evidence that the student is capable of carrying out original research,
• show that the student has carried out original research,
• represent a significant contribution to the field.

It is worth checking what is expected by your institution.

9.2. Content

A thesis differs from a paper in several ways.

1. A thesis must be self-contained.
Whereas a paper may direct the reader to another reference for details of a method, experimental results, or further analysis of a problem, a thesis must stand on its own as a complete account of the author's work on the subject of investigation.

2. A thesis is formatted like a book, broken into chapters rather than sections.

3. A thesis may include more than one topic, whereas a paper usually focuses on one.

4. A thesis is usually longer than an average paper, making good organization particularly important.

5. The primary readers of a thesis (possibly the only readers) are its judges, and they will read it with at least as much care as do the referees of a paper.

Since there is less pressure to save space than when writing a paper for a journal, you should generally include details in a thesis. It is important to demonstrate understanding of the subject, and phrases such as "it is easily shown that" and "we omit the proof" used in the presentation of original results may seem suspicious when you have no track record in the subject (the examiners may, of course, ask for such gaps to be filled during the oral examination). Trying to anticipate the examiners' questions should help you to decide what and how much to say on each topic. The thesis should not be padded with unnecessary material (many theses are too long), but results that would not normally be published can be included, perhaps in an appendix, either because they might be of use to future workers or because you might want to refer to them in a paper based on the thesis. There is no "ideal" number of pages beyond which a thesis gains respectability, and indeed there is great variation in the length of theses among different subjects and even within a subject. The supervisor (UK) or thesis advisor (US) can offer advice about a suitable length.

A thesis has a fairly rigid structure. In the first one or two chapters the problem being addressed must be clearly described and put into context. You are expected to demonstrate a sound knowledge and understanding of the existing work on the topic by providing a critical survey of the relevant literature. If there is more than one possible approach to the problem, the choice of method must be justified. For a computational project the method developed or investigated in the thesis would normally be compared experimentally with the major alternatives. At the end of the thesis, conclusions must be carefully drawn and the overall contribution of the thesis assessed. It is a good idea to identify open problems and future directions for research, since being able to do so is one of the attributes required of a researcher. Note that a thesis does not necessarily have to present major new or improved results; in many cases the key requirement is the development and communication of original ideas using sound techniques.

When you write a thesis you are usually relatively inexperienced at technical writing, so it is important to avoid inadvertently committing plagiarism (see §6.16). If you copy text word for word from another source you must put it in quotation marks and cite the source. If you find yourself copying, or paraphrasing, someone else's proof of a theorem, ask yourself if you need to give the proof—if it is not your own work, will it add anything to the thesis? Examiners will be particularly alert to the possibility of plagiarism, so be careful to avoid committing this sin.

9.3. Presentation

Each institution has rules about the presentation of a thesis.
Page, font and margin sizes, line spacing (often required to be double or one and a half times the standard spacing) and the form of binding may all be tightly regulated, and non-conforming theses may be rejected on submission. The opening pages will be required to follow a standard format, typically comprising the following items (some of which will be optional).

1. A title page, listing the author, title, department, type of degree, and year (and possibly month) of submission.
2. A declaration that the work has not been used in another degree submission.
3. A statement on copyright and the ownership of intellectual property rights.
4. A list of notation.
5. A brief statement of the author's research career.
6. Acknowledgements and dedications.
7. Table of contents.
8. List of figures.
9. List of tables.
10. Abstract.

The abstract may need to be repeated on a separate application form. Once the degree has been obtained, the abstract is likely to be entered into a database such as Dissertation Abstracts International or Index to Theses (for theses from universities in Great Britain and Ireland, and available on the Web for registered users at http://www.theses.com). The opening pages are also the place to indicate which parts of the thesis (if any) have already been published, and which parts are joint work.

Your library will contain previous successful theses, which you can inspect to check the required format—but bear in mind that rules of presentation can and do change. It is likely that a LaTeX package will be available at your institution for typesetting theses in the official style.

I recommend producing an index for the thesis, although this practice is not common. A well-prepared index (see §13.4) can be a significant aid to examiners and readers.

When should you start to write the thesis? My advice is to start sooner rather than later. In the early months of study in which you become familiar with the problem and the literature you can begin to draft the first few chapters. You should also start immediately to collect references for the bibliography—it is difficult in the later stages to hunt for half-remembered references. One reason for making an early start on writing the background and survey material is that at this stage you will be enthusiastic—later, you may know this material so well that it seems dull and boring. I encourage my students to write up their work in LaTeX as they progress through the period of study, so that when the time comes to produce the thesis much of the writing has been done. Since most students now typeset their own theses, this approach allows them to learn the typesetting system when they are least stressed, rather than in the last hectic months. If the thesis work has progressed rapidly, they may be in the pleasant position of having one or more papers already written, upon which they can base the thesis.

Unlike a published paper, a thesis will not be read by a copy editor or proofreader. It is therefore particularly important that you thoroughly read and check the thesis before it is submitted. Your thesis advisor should read and comment on the thesis, and it is worthwhile recruiting fellow students as readers, too; even if they are not specialists in the area they should be able to offer useful suggestions for improvement.

9.4. The Thesis Defence

The oral defence of a thesis takes different forms in different countries.
For example, in the UK the candidate answers questions posed by the examiners, but does not usually give a formal presentation, whereas in other European countries it is more common for the candidate to give a presentation followed by questions from the jury. The number of examiners also varies greatly between countries.

Perhaps the most important piece of advice applicable to all forms of defence is to read the thesis beforehand. The defence may take place weeks or months after you submitted the thesis, and in the meantime you may forget exactly what material you included in the thesis and where it is located. To be properly prepared you need to know the thesis inside out.

One of the purposes of the defence is for the examiners to satisfy themselves that you (not someone else) did the work you claim to have done and that you understand it. As well as asking straightforward questions about the thesis, they may therefore ask you why you took the approaches you did and to justify assumptions and amplify arguments. You can also expect questions that explore your knowledge of the literature outside the immediate area of the thesis, as the examiners gauge your general familiarity with the research area.

It is important that you listen to questions carefully and answer the question that is asked, not some other question. When you are under pressure it is easy to misunderstand what the examiners ask you. If you do not understand a question, say so, and the question will be repeated or rephrased.

If you give a formal presentation (typically 40-50 minutes long) you should aim to give an overview of the research area and the work you have done and not to go too deeply into the details. The examiners will want to see that you appreciate the context and significance of your work and that you are aware of problems remaining for future research. Consult Chapters 10 and 11 for practical advice on writing and giving the talk.

Finally, note that an examiner who has carefully read your thesis and attended the defence should know enough about you to write a reference for your job applications. The examiners may even be able to offer advice on where to seek employment.

Oral Examination Procedure (extract)

1. Before beginning the examination, make it clear to the examinee that his whole professional career may turn on his performance. Stress the importance and formality of the occasion. Put him in his proper place at the outset.
2. Throw out your hardest question first. (This is very important. If your first question is sufficiently difficult or involved, he will be too rattled to answer subsequent questions, no matter how simple they may be.) ...
9. Every few minutes, ask him if he is nervous ...
11. Wear dark glasses. Inscrutability is unnerving.
12. Terminate the examination by telling the examinee, "Don't call us; we will call you."
— S. D. MASON, in A Random Walk in Science (1973)

Chapter 10
Writing a Talk

My recommendations amount to this ... Make your lecture simple (special and concrete); be sure to prove something and ask something; prepare, in detail; organize the content and adjust to the level of the audience; keep it short, and, to be sure of doing so, prepare it so as to make it flexible.
— PAUL R. HALMOS, How to Talk Mathematics (1974)

I always find myself obliged, if my argument is of the least importance, to draw up a plan of it on paper and fill in the parts by recalling them to mind, either by association or otherwise.
— MICHAEL FARADAY, quoted in [271, p. 98]

An awful slide is one which contains approximately a million numbers (and we've left our opera glasses behind). An awful lecture slide is one which shows a complete set of engineering drawings and specifications for a super-tanker.
— KODAK LIMITED, Let's Stamp Out Awful Lecture Slides (1979)

10.1. What Is a Talk?

In this chapter I discuss how to write a mathematical talk. By talk I mean a formal presentation that is prepared in advance, such as a departmental seminar or a conference talk, but not one of a series of lectures to students. In most talks the speaker writes on a blackboard or displays pre-written transparencies on an overhead projector. (From here on I will refer to transparencies as slides, since this term is frequently used and is easier to write and say.) I will restrict my attention to slides, which are the medium of choice for most speakers at conferences, but much of what follows is applicable to the blackboard. I will assume that the speaker uses the slides as a guide and speaks freely. Reading a talk word for word from the slides should be avoided; one of the few situations where it may be necessary is if you have to give a talk in a foreign language with which you are unfamiliar (Kenny [149] offers some advice on how to do this).

A talk has several advantages over a written paper [50].

1. Understanding can be conveyed in ways that would be considered too simplified or lacking in rigour for a journal paper.
2. Unfinished work, or negative results that might never be published, can be described.
3. Views based on personal experience are particularly effective in a talk.
4. Ideas, predictions and conjectures that you would hesitate to commit to paper can be explained and useful feedback obtained from the audience.
5. A talk is unique to you—no one else could give it in exactly the same way. A talk carries your personal stamp more strongly than a paper.

Given these advantages, and the way in which written information is communicated in a talk, it is not surprising that writing slides differs from writing a paper in several respects.

1. Usually, less material can be covered in a talk than in a corresponding paper, and fewer details need to be given.
2. Particular care must be taken to explain and reinforce meaning, notation and direction, for a listener is unable to pause, review what has gone before, or scan ahead to see what is coming.
3. Some of the usual rules of writing can be ignored in the interest of rapid comprehension. For example, you can write non-sentences and use abbreviations and contractions freely.
4. Within reason, what you write can be imprecise and incomplete—and even incorrect. These tactics are used to simplify the content of a slide, and to avoid excessive detail. Of course, to make sure that no confusion arises you must elaborate and explain the hidden or falsified features as you talk through the slide.

10.2. Designing the Talk

The first step in writing a talk is to analyse the audience. Decide what background material you can assume the listeners already know and what material you will have to review. If you misjudge the listeners' knowledge, they could find your talk incomprehensible at one extreme, or slow and boring at the other. If you are unsure of the audience, prepare extra slides that can be included or omitted depending on your impression as you go through the talk and on any questions received.
The title of your talk should not necessarily be the same as the one you would use for a paper, because your potential audience may be very different from that for a paper. To encourage non-specialists to attend the talk, keep technical terms and jargon to a minimum. I once gave a talk titled "Exploiting Fast Matrix Multiplication within the Level 3 BLAS" in a context where non-experts in my area were among the potential audience. I later found out that several people did not attend because they had not heard of BLAS and thought they would not gain anything from the talk, whereas the talk was designed to be understandable to them. A better title would have been the more general "Exploiting Fast Matrix Multiplication in Matrix Computations".

A controversial title that you would be reluctant to use for a paper may be acceptable for a talk. It will help to attract an audience and you can qualify your bold claims in the lecture. Make sure, though, that the content lives up to the title.

It is advisable to begin with a slide containing your name and affiliation and the title of your talk. This information may not be clearly or correctly enunciated when you are introduced, and it does no harm to show it again. The title slide is an appropriate place to acknowledge co-authors and financial support.

Because of the fixed path that a listener takes through a talk, the structure of a talk is more rigid than that of a paper. Most successful talks follow the time-honoured format "Tell them what you are going to say, say it, then tell them what you said." Therefore, at the start of the talk it is usual to outline what you are going to say: summarize your objectives and methods, and (perhaps) state your conclusions. This is often done with the aid of an overview slide but it can also be done by speaking over the title slide. The aim is to give the listeners a mental road-map of the talk.

Figure 10.1. Contents slide: the triangle points to the next topic. (The example slide lists the topics Introduction and Motivation; Deriving Partitioned Algorithms; Block LU Factorization and Matrix Inversion; Exploiting Fast Matrix Multiplication.)

You also need to sprinkle signposts through the talk, so that the listeners know what is coming next and how far there is to go. This can be done orally (example: "Now, before presenting some numerical examples and my overall conclusions, I'll indicate how the result can be generalized to a wider class of problems"), or you can break the talk into sections, each with its own title slide. Another useful technique is to intersperse the talk with contents slides that are identical apart from a mark that highlights the topic to be discussed next; see Figure 10.1.

Carver [102] recommends lightening a talk by building in multiple entry points, at any of which the listener can pick up the talk again after getting lost. An entry point might be a new topic, problem or method, or an application of an earlier result that does not require an understanding of the result's proof. Multiple exit points are also worth preparing if you are unsure about how the audience will react. They give you the option of omitting chunks of the talk without loss of continuity. A sure sign that you should exercise this option is if you see members of the audience looking at their watches, or, worse, tapping them to see if they have stopped!

An unusual practice worth considering is to give a printed handout to the audience.
This might help the listeners to keep track of complicated definitions and results and save them taking notes, or it might give a list of references mentioned in the talk. A danger of this approach is that it may be seen as presuming the audience cannot take notes themselves and are interested enough in the work to want to take away a permanent record. Handouts can, alternatively, be made available for interested persons to pick up after the talk. (To save space, all the boxes that surround the example slides in this chapter are made just tall enough to hold the slide's content.)

10.3. Writing the Slides

Begin the talk by stating the problem, putting it into context, and motivating it. This initial scene-setting is particularly important since the audience may well contain people who are not experts in your area, or who are just beginning their research careers.

The most common mistake in writing a talk is to put too much on the individual slides. The maxim "less is more" is appropriate, because a busy, cluttered slide is hard for the audience to assimilate and may divert their attention from what you are saying. Since a slide is a visual aid, it should contain the core of what you want to say, but you can fill in the details and explanations as you talk through the slide. (If you merely read the slide, it could be argued that you might as well not be there!) There are various recommendations about how many lines of type a slide should contain: a maximum of 7-8 lines is recommended by Kenny [149] and a more liberal 8-10 lines by Freeman et al. [86]. These are laudable aims, but in mathematical talks speakers often use 20 or more lines, though not always to good effect.

A slide may be too long for two reasons: the content is too expansive and needs editing, or too many ideas are expressed. Try to limit each slide to one main idea or result. More than one may confuse the audience and weaken the impact of the points you try to make. A good habit is to put a title line at the top of each slide; if you find it hard to think of an appropriate title, the slide can probably be improved, perhaps by splitting it into two.

Don't present a detailed proof of a theorem, unless it is very short. It is far better to describe the ideas behind the proof and give just an outline. Most people go to a talk hoping to learn new ideas, and will read the paper if they want to see the details.

When a stream of development stretches over several slides, the audience might wish to refer back to an earlier slide from a later one. To prevent this you can replicate information (an important definition or lemma, say) from one slide to another. A related technique is to build up a slide gradually, by using overlays, or simply by making each slide in a series a superset of the previous one. (The latter effect can be achieved by covering the complete slide with a sheet of paper, and gradually revealing the contents; but be warned that many people find this peek-a-boo style irritating. I do not recommend this approach, but, if you must use it, cover the slide before it goes on the projector, not after.) Overlays are best handled by taping them together along one side, and flipping each one over in turn, since otherwise precise alignment is difficult.

If you think you will need to refer back to an earlier slide at some particular point, insert a duplicate slide. This avoids the need to search through the pile of used slides. It is worth finding out in advance whether two projectors will be available.
If so, you will have less need to replicate material because you can display two slides at a time.

It is imperative to number your slides, so that you can keep them in order at all times. At the end of the talk the slides will inevitably be jumbled and numbers help you to find a particular slide for redisplaying in answer to questions. I put the number, the date of preparation, and a shortened form of the title of the talk on the header line of each slide.

When you write a slide, aim for economy of words. Chop sentences mercilessly to leave the bare minimum that is readily comprehensible. Here are some illustrative examples.

Original: It can be shown that ∂||Ax|| = {A^T dual(Ax)}.
Shorter: Can show ∂||Ax|| = {A^T dual(Ax)}.
Even shorter: ...

Bad:  Knuth\cite{knut86}, giving Knuth[161].
Good: Knuth \cite{knut86}, giving Knuth [161].
Good: Knuth~\cite{knut86}, giving Knuth [161].

The "Knuth[161]" form is a common mistake. The last form avoids a line break just before the citation.

TeX assumes that a full stop ends a sentence unless it follows a capital letter. Therefore you must put a control space after a full stop if the full stop does not mark the end of the sentence. Thus p. 12 is typed as p.\ 12, and cf. Smith (1988) as cf.\ Smith (1988). A less obvious example: MATLAB (The MathWorks, Inc.) is typed as \textsc{Matlab} (The MathWorks, Inc.)\ since otherwise TeX will put extra space after the right parenthesis. In the references in a thebibliography environment there is no need to use control spaces, because this environment redefines the full stop so that it does not give end-of-sentence spacing.

On the other hand, if a sentence-ending full stop follows an uppercase letter you must specify that the full stop ends the sentence. In LaTeX this can be done by inserting the \@ command before the full stop, as in the sentence from page 11

There is also an appendix on how to prepare a CV\@.

The \@ command is needed more often than you might realise—in this book there are over 10 occurrences of the command.

13.3. BibTeX

BibTeX, written by Oren Patashnik, is a valuable aid to preparing reference lists with LaTeX. To use BibTeX you need first to find or create a bib file that comprises a database of papers containing those you wish to cite. BibTeX reads a LaTeX aux (auxiliary) file and constructs a sorted reference list in a bbl file, making use of the bib file. This reference list is read and processed by LaTeX. A diagram showing how LaTeX interacts with BibTeX and MakeIndex is given in Figure 13.1 (MakeIndex is described in the next section). In this section I go into some detail on the use of BibTeX because it tends not to be emphasized by the books on LaTeX and yet is a tool that can benefit every serious LaTeX user.

A bib file is an ASCII file maintained by the user (in the same way as a tex file). As an example, references [158] and [161] are expressed in the BibTeX file used for this book as follows. (My bib file uses abbreviations, described below, for the journal, publisher and address fields, but in this example I give the fields explicitly, for simplicity.)

@article{knut79,
  author  = "Donald E. Knuth",
  title   = "Mathematical Typography",
  journal = "Bull. Amer. Math. Soc. (New Series)",
  volume  = 1,
  number  = 2,
  pages   = "337-372",
  year    = 1979
}

@book{knut86,
Knuth", title = "The {\TeX book}", publisher = "Addison-Wesley", address = "Reading, Massachusetts", year = 1986, pages = "ix+483", isbn = "0-201-13448-9" } The first part of the last sentence was typed as As an example, references \cite{knut79} and \cite{knut86} are expressed in \BibTeX\ format as Once you have built up a few entries, creating new ones is quick, because you can copy and modifying existing ones. There are three advantages to using BmTj^X. 1. By specifying the appropriate bibliography style (bst file) from the many available it is trivial to alter the way in which the references are formatted. Possibilities include BiBTj^ X's standard plain, abbrv and alpha formats, illustrated, respectively, by 23 My bib file uses abbreviations (described below) for the journal, publisher and address fields, but in this example I give the fields explicitly, for simplicity. TEX AND $TEX Figure 13.1. Interaction between FlgX, BroT£]X and Makelndex. The log, big and ilg files contain error messages and statistics summarizing the run. The aux file contains cross-referencing information. The sty, bst and 1st files determine the styles in which the document, reference list and index are produced. [1] Donald E. Knuth. Mathematical typography. Bulletin Amer. Math. Soc. (New Series), l(2):337-372, 1979. [1] D. E. Knuth. Mathematical typography. Bulletin Amer. Math. Soc. (New Series), l (2):337-372, 1979. [Knu79] Donald E. Knuth. Mathematical typography. Bulletin Amer. Math. Soc. (New Series), l(2):337-372, 1979. With the alpha format, citations in the text use the alphanumeric label constructed by BiBl^X ([Knu79] in this example). The unsrt format is the same as plain except it lists the entries in order of first citation instead of alphabetical order (as is required, for example, by the journal Computers and Mathematics with Applications). I#-TEX 13.3. BiBTEX packages and corresponding BlBTf^X style files are also available that produce citations and bibliographies conforming to the Harvard system (see §6.11). In this book I am using my own modification of the is-plain style by Nelson Beebe, which itself is a modification of the plain style to add support for ISSN and ISBN fields and for formatting a pages field for books. 2. To keep the reference lists of working papers up to date (as technical reports become journal papers, for example) it is necessary only to update the master bib file and rerun IM]gX and BlBTgX on each paper. If BiBT^X were not used, the reference list in each paper would have to be updated manually. Using BmTgX also saves on storage, for the bbl files can be deleted once typesetting is complete and reconstructed when required by running BlBT^X. 3. Since BlBTgX is widely used and available on a wide range of machines it is possible for people to exchange and share databases. Two large collections of bib files deserve particular note. (a) BibNet is maintained by Stefano Foresti, Nelson Beebe and Eric Grosse. It has the URL ftp://ftp.math.utah.edu/pub/bibnet, and is also available from netlib (see §14.1), specifically from http://netlib.bell-labs.com/netlib/bibnetfaq.html The bib files in BibNet include ones containing all the publications by particular authors (e.g., Gene Golub), and all (or many of) the publications that have appeared in particular journals (e.g., SI AM Review}. 
(b) The Collection of Computer Science Bibliographies (which includes bibliographies on mathematics) is maintained by Alf-Christian Achilles at http://liinwww.ira.uka.de/bibliography/index.html This large collection of bibliographies (which includes all those in BibNet) has an excellent Web interface and powerful search facilities.

It is, of course, advisable to check the accuracy of any entries that you have not created yourself, before using them.

Aside from these BibTeX-specific reasons, there are other reasons for keeping a personalized computer database of references. If you record every paper you read then a computer search allows you to check whether or not you have read a given paper. This is a very handy capability to have, as once you have read more than a couple of hundred papers it becomes difficult to remember their titles and authors. Also, you can put comments in the database to summarize your thoughts about papers. Some BibTeX users include an annotate field in their bib files. Although standard BibTeX style files ignore this field, other style files are available that reproduce it, and they can be used to prepare annotated bibliographies. In my BibTeX book entries I include an isbn field.

Here are some tips on using BibTeX.

1. To make it easier to navigate a bib file with your editor, keep the entries in alphabetical order by author, and use positioning lines of the form

-Md-Me-

to indicate the start of a group of authors—this example marks the start of the authors whose last names begin with "Me", there being no last names beginning "Md".

2. Although it is tempting to save time by abbreviating bib file entries—typing initials instead of full author names (when they are given in the original reference), and omitting journal part numbers or institution addresses for technical reports—my experience is that it is false economy, because these details are often required eventually: for example, by a copy editor who queries an incomplete reference on page proofs. It is worth spending the extra time to create fully comprehensive entries. Note that if names are typed with no space between the initials (e.g., author = "D.E. Knuth"), BibTeX produces only the first initial. You should therefore always leave a space between initials (D. E. Knuth), as is standard practice in typesetting.

3. Various conventions are in use for choosing keys for bib file entries (they appear on the first line of the entry and are specified in the \cite command). My method is to use the first four letters of the author's last name followed by the last two digits of the year (hence knut79 and knut86 in the examples above). If there are two authors I use the first two letters of each author's last name, and if three or more, the first letter of the first three or four authors' last names. Multiple papers in the same year by the same authors are distinguished by extra letters 'a', 'b' and so on. This method has proved effective for bib files with a few thousand entries. Note that a key does not have to be of the form author:gnus, as some people have presumed on reading Lamport's whimsical examples involving gnus [172]!

4. BibTeX allows the use of abbreviations.
For example, instead of typing

publisher = "Society for Industrial and Applied Mathematics",
address   = "Philadelphia",

you can type

publisher = pub-SIAM,
address   = pub-SIAM:adr,

where the following string definitions appear prior to these lines in the bib file (typically at the start of the file):

@STRING{pub-SIAM = "Society for Industrial and Applied Mathematics"}
@STRING{pub-SIAM:adr = "Philadelphia"}

The use of abbreviations saves typing and ensures consistency between fields that should be identical in different entries. A bib file mrabbrev.bib containing string definitions for all the standard journal abbreviations used in Mathematical Reviews is part of AMS-LaTeX and is also available from BibNet.

5. If you wish to include the URL of a file available on the Web, put it in a URL field. For example,

URL = "ftp://ftp.ma.man.ac.uk/pub/narep/narep306.ps.gz"

Although URL fields are not supported by the standard bst files, they are used in some BibNet databases and facilitate the creation of hypertext links from a bib file to the actual papers.

6. When writing a book or thesis you often wish to print out a draft chapter together with a bibliography comprising just the references cited in that chapter. This can be achieved using the LaTeX package chapterbib by Donald Arseneau, available from the CTAN archives.

Various tools are available to help maintain BibTeX databases (see [109, §13.4] for details). These include tools to sort databases, search them, syntax check and pretty print them (the program bibclean [17] even checks ISBN and ISSN fields to see whether the checksum is correct), and extract from them the entries cited in a set of aux files (so as to create a bib file containing only the entries used in a particular document). Many of the utilities are available from BibNet. Some of these tasks are easy to do oneself using AWK, which is an interpreted programming language available on most Unix systems [4]. For example, the command line

awk 'BEGIN{ RS="" } /Riemann/' my.bib

searches the file my.bib in the current directory and prints all the entries that contain the word Riemann (assuming that records are separated by a blank line). Note that this AWK call prints complete bib entries, not just the lines on which the word occurs, as a grep search would.

13.4. Indexing and MakeIndex

An index to a book (or thesis, or report) has three main purposes.

1. To provide easy access to all the significant information.
2. To reveal relationships.
3. To reveal omissions.

A good index is therefore much more than a table of contents. But it is much less than a list of every important word, since it records useful information, not just key words. While a printed book is necessarily expressed in a linear order, the index is not constrained by ordering and can therefore reveal links between different parts of the book and bring together topics described in the text with varying terminology. A good index saves time for the reader as a result of what it does not contain: if a topic is not present in the index the reader can be sure that it is not covered in a significant way in the text. An index should contain surprises—pointers to passages that the reader might overlook when scanning the book and its table of contents. It should anticipate the various ways in which a reader might search for a topic, by including it under multiple entries, where appropriate. For example, block LU factorization might be listed under block, factorization, LU, and, in a book not about matrices, under matrix.
Since decomposition is a commonly used synonym for factorization, an entry "decomposition, see factorization" would also be appropriate in this example.

One source of index entries is section and subsection headings, since these provide the framework of the text. Entries should be nouns or nouns preceded by adjectives. Any conventions used in the index must be explained in a note at the beginning. For example, you might use "t" after a page number to denote reference to a table, and "f" for a figure. If many names are to be indexed, it is worth creating separate name and subject indexes, as in this book. One reason for indexing names is to enable the reader to find where a particular paper in the bibliography is referenced, assuming, of course, that the author's name is mentioned at the point of reference.

A common mistake is to produce an index entry with too many page locators. If there are more than about five page locators, subentries should be introduced to help the reader pinpoint the information required. For example, the index entry

norm, 119-121, 123, 135, 159, 180

is much better broken down into, for example,

norm
    absolute, 119, 121
    dual, 120
    elliptic, 180
    Hölder inequality, 123
    spectral radius, relation with, 135
    unitarily invariant, 159

In the following example the subentries serve little purpose because they all have the same page number:

LU factorization
    definition, 515
    existence, 515
    uniqueness, 515

This example should be collapsed into the single entry "LU factorization, 515", which is just as useful for a reader searching for information about the LU factorization.

Choose as main headings the word that the reader is most likely to look under. Thus

equations, displaying, 54

is better than

displaying equations, 54

This example is formatted as it should be if there are no other subentries of "equations". In the examples below, the subentry is assumed to be one of several and so appears on a separate line. In subentries, use connectives to clarify the meaning of the entries. The entry

slides
    number, 138

could refer to a discussion on how to number the slides or on how many to produce. Adding the word "of" avoids the ambiguity:

slides
    number of, 138

It can be useful to add the word "of" even in unambiguous cases to make the entry read smoothly from subentry to heading:

words
    order of, 60

In traditional typesetting, indexing was a task to be done once a book was at the page-proof stage, and was often performed under severe time pressure. Nowadays, authors typesetting their own books by computer can index earlier in the production process, making use of indexing software.

MakeIndex is a C program, written by Pehong Chen [56], [109, Chap. 12] with advice from Leslie Lamport, that makes an index for a LaTeX document. The user has to place \index commands in the LaTeX source that define the name and location of the items to be indexed. If a \makeindex command is placed in the preamble (before \begin{document}) then LaTeX writes the index entries, together with the page numbers on which they occur, to an idx file. This is read by the MakeIndex program, which processes and sorts the information, producing an ind file that generates the index when included in the LaTeX document (see Figure 13.1). MakeIndex provides various options in the \index command to support standard indexing requirements, such as subentries, page ranges and cross-references to other entries.
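To make the cycle concrete, here is a minimal sketch of a document set up for MakeIndex; the file name and the two index entries are illustrative only (the makeidx package supplies the \printindex command):

\documentclass{book}
\usepackage{makeidx}
\makeindex                  % write index entries to the idx file
\begin{document}
Block LU factorization\index{factorization!block LU} is one topic.
The dual norm\index{norm!dual} is another.
\printindex                 % incorporate the ind file made by MakeIndex
\end{document}

If this file is called paper.tex, running LaTeX on it produces paper.idx; the command makeindex paper then sorts the entries into paper.ind, which \printindex incorporates on the next LaTeX run.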
Here is how the beginning of one sentence from page 187 was typed:

\item It is easy to prepare transparencies\index{slides!preparing in \TeX}
with \TeX\ if the paper

The exclamation mark in the \index command denotes the beginning of a subentry. Multiple indexes (such as name and subject indexes) can be produced with the aid of the index package by David M. Jones, available from ftp://theory.lcs.mit.edu/pub/tex/index/

Here are some tips on indexing in LaTeX.

1. Insert the index entry immediately following the word to be indexed, on the same line and with no spaces before the \index command (as in the example above). This ensures that the correct page reference is produced and avoids unwanted spaces appearing in the output.

2. If the scope of the item being indexed is more than one sentence, so that the scope may be broken over a page, index the item as a page range. For example, this list of tips is contained within the commands

\index{LaTeX@\LaTeX!indexing in|(}
\index{indexing!in latex@in \LaTeX|(}
\index{indexing!in latex@in \LaTeX|)}
\index{LaTeX@\LaTeX!indexing in|)}

The |( and |) strings serve to delimit the range of the index command.

3. See entries can be produced by commands of the form

\index{dots|see{ellipsis}}

To produce see also entries in an analogous way you can use the following definition, adapted from that for \see in makeidx.sty:

\newcommand\seealso[2]{\emph{see also} #1}

Place all see and see also index entries together, to make it easier to edit them and check for consistency. I suggest placing them after the last item in the book to be indexed (ideally, just before the bibliography); this ensures that see also appears after the page references for an entry.

4. Do not leave the task of indexing to the very last stage. For, in inserting the \index entries, you are likely to introduce errors (of spacing, at least) and so a further round of proofreading will be needed after the indexing stage.

AWK tools for indexing are described in [4, §5.3] and [22]; these tools do not support subentries. A simple and elegant way to construct key word in context (KWIC) indexes using AWK is also described in [4]. A KWIC index lists each word in the context of the line in which it is found; the list is sorted by word and arranged so that the key words line up. One of the main uses of KWIC indexes is to index titles of papers.

For an interesting example of an index, see Halmos's I Want to Be a Mathematician [127]. Priestley [230] says "If index writing has not bloomed into an art form in I Want to Be a Mathematician, it has at least taken a quantum leap forward. From 'academic titles, call me mister' to 'Zygmund, A., at faculty meetings' this one is actually worth reading."

13.5. Further Sources of Information

The best (and the most humorous) introduction to LaTeX is Learning LaTeX [118] by Griffiths and D. J. Higham. A much longer and more detailed book that is very handy for reference is A Guide to LaTeX2e [166] by Kopka and Daly. Lamport's LaTeX: A Document Preparation System [172] is the "official" guide to LaTeX. For those still using the obsolete LaTeX 2.09, Carlisle and Higham [53] explain the advantages to be gained by upgrading to LaTeX2e. For technical details of LaTeX, BibTeX and MakeIndex, and descriptions of the many available packages, see The LaTeX Companion [109] by Goossens, Mittelbach and Samarin.
The LaTeX Graphics Companion [110] by Goossens, Rahtz and Mittelbach is the most comprehensive and up-to-date reference on producing graphics with LaTeX and PostScript. If you are a really serious LaTeX or TeX user you will want to study Knuth's The TeXbook [161], the "bible" of TeX, or another advanced reference such as Salomon's The Advanced TeXbook [244]. BibTeX is described in all the LaTeX textbooks mentioned above, but most comprehensively in The LaTeX Companion [109]. An article by Knuth [158], the beginning of whose abstract is quoted on page 86, offers many insights into mathematical typesetting and type design, and describes early versions of TeX and METAFONT (METAFONT [160] is Knuth's system for designing typefaces).

The Comprehensive TeX Archive Network (CTAN) is a network of ftp servers that hold up-to-date copies of all the public domain versions of TeX, LaTeX, and related macros and programs. The three main sites are

ftp.dante.de, http://www.dante.de/
ftp.tex.ac.uk, http://www.tex.ac.uk/tex-archive
tug2.cs.umb.edu, http://tug2.cs.umb.edu/ctan/

which are located in Germany, England and Massachusetts, USA, respectively. There are many mirror sites around the world, details of which may be obtained from the TeX Users Group (TUG) Web pages. The organization of TeX files is the same on each site and starts at ./tex-archive. To search a CTAN site during an anonymous ftp session type the command

quote site index string

where string is a Unix regular expression (a filename optionally containing wildcards) on which to search.

The TUG runs courses and conferences on TeX and produces a journal called TUGboat. It also produces a newsletter for members called TeX and TUG News. Contact details for TUG are given in Appendix D. The UK TeX Users Group, based in the UK, also organizes meetings and produces a newsletter (called Baskerville). It cooperates with TUG and supports the UK TeX archive (the UK node of CTAN). More information is available on the Web at http://www.tex.ac.uk/UKTUG/ or via email to uktug-enquiries@uk.ac.tex

Excellent advice on preparing an index is given in The Chicago Manual of Style [58] and in Bonura's The Art of Indexing [35]. The collection Indexers on Indexing [130] contains articles on many different aspects of indexing that originally appeared in The Indexer, the journal of the Society of Indexers (UK). This society is involved in awarding indexing prizes; the 1975 Wheatley Medal was awarded for the index (by Margaret D. Anderson) to the first edition of [45]. Other good references are Words into Type [249] and Copy-Editing [45].

Chapter 14
Aids and Resources for Writing and Research

The library is the mathematician's laboratory.
— PAUL R. HALMOS, I Want to Be a Mathematician: An Automathography in Three Parts (1985)

It's a library, honey—kind of an early version of the World Wide Web.
— From a cartoon by ED STEIN

Once you master spelling anonymous, you can roam around the public storage areas on computers on the Internet just as you explore public libraries.
— TRACY LAQUEY and JEANNE C. RYER, The Internet Companion (1993)

Just as footnotes and a bibliography trace an idea's ancestors, citation indexing traces an idea's offspring.
— KEVIN KELLY, in SIGNAL: Communication Tools for the Information Age (1988)

14.1. Internet Resources

A huge variety of information and software is available over the Internet, the worldwide combination of interconnected computer networks.
The location of a particular object is specified by a URL, which stands for "Uniform Resource Locator". Examples of URLs are

http://www.netlib.org/index.html
ftp://ftp.netlib.org

The first example specifies a World Wide Web server (http = hypertext transfer protocol) together with a file in hypertext format (html = hypertext markup language), while the second specifies an anonymous ftp (file transfer protocol) site. In any URL, the site address may, optionally, be followed by a filename that specifies a particular file.

The best way of accessing information on the Internet is with a World Wide Web browser, such as Netscape Navigator or Microsoft Internet Explorer. These browsers have intuitive interfaces, making them very easy to learn. For downloading files an alternative is to use an ftp program to carry out anonymous ftp. Anonymous ftp is a special form of ftp in which you log on as user anonymous and need not type a password (though, by convention, you are supposed to type your email address to indicate who you are). Table 14.1 lists some of the file types you may encounter when ftp'ing files. For more details about the Internet and how to access it see on-line information, or one of the many books on the subject, such as Krol [168].

Table 14.1. Standard file types.

Suffix      Type    Explanation
.bib        ASCII   BibTeX source.
.bst        ASCII   BibTeX style file.
.dvi        binary  TeX output. Use dvips to convert to PostScript.
.gif, .tif  binary  Image file formats (Graphics Interchange Format, Tagged Image File Format) [37].
.gz         binary  Compressed. Use gunzip or gzip -d to recover the original file.
.ist        ASCII   MakeIndex style file.
.pdf        binary  Portable Document Format (PDF), developed by Adobe Systems, Inc. Can be read using the Adobe Acrobat software.
.ps         ASCII   PostScript file.
.shar       ASCII   "Shell archive" collection of files. Use sh to extract files.
.sty        ASCII   LaTeX style file.
.tar        binary  "Tape archive" collection of files. Use tar -xvf or pax -r to extract the contents.
.tex        ASCII   TeX source.
.txt        ASCII   Text file.
.uu         ASCII   Re-coded form of binary file, suitable for mailing. Use uudecode to recover the original binary file.
.Z          binary  Compressed. Use uncompress to recover the original file.
.z          binary  Compressed by an older algorithm. Use unpack to recover the original file.
.zip        binary  Compressed by PKZIP. Use an unzip program to recover the original file.

Newsgroups

The news system available on many computer networks contains a large number of newsgroups to which users contribute messages. The newsgroups frequently carry announcements of new software and software updates. On a Unix system, type man rn or man nn for information on how to read news. Newsgroups of general interest to mathematicians are sci.math and its more specialized cousins such as sci.math.research and sci.math.symbolic, and comp.text.tex for TeX information. For many newsgroups a FAQ document of Frequently Asked Questions is available.

Digests

Various magazines are available by email. These collect questions, answers and announcements submitted to a particular email address. For example, NA-Digest is a weekly magazine about numerical analysis [73]; send mail to [email protected] for information on how to subscribe.

Netlib

Netlib is an electronic repository of public domain mathematical software for the scientific computing community. In particular, it houses the various -PACK program libraries, such as EISPACK, LINPACK, MINPACK and LAPACK, and the collected algorithms of the Association for Computing Machinery (ACM). Netlib has been in existence since 1985 and can be accessed by email, ftp or the Web.
In addition to providing mathematical software, netlib provides the facility to download technical reports from certain institutions, to download software and errata for textbooks, and to search the SIAM membership list (via the whois command). Background on netlib is given in an article by Dongarra and Grosse [72] (see also [40]) and news of the system is published regularly in NA-Digest and SIAM News (received by every personal and institutional member of SIAM). To obtain a catalogue of the contents of netlib send an email message with body send index to [email protected] or [email protected]. Alternatively, netlib can be accessed over the Web at the address http://www.netlib.org/index.html. Copies of netlib exist at various other sites throughout the world.

e-MATH

The AMS runs a computer service, e-MATH, with many features. For example, it allows you to obtain pointers to reviews in Mathematical Reviews (1985 to present), to download the list of Mathematics Subject Classifications, to download articles (in one of several formats, including dvi, PostScript and TeX) from the Bulletin of the American Mathematical Society and other journals, to access lists of employment and post-doctoral opportunities, and to search the combined membership list of the AMS, the Mathematical Association of America (MAA), SIAM, and the American Mathematical Association of Two-Year Colleges (AMATYC). e-MATH is best accessed via its Web interface at http://www.ams.org/. Status reports on e-MATH are published in the Notices of the American Mathematical Society in the "Inside the AMS" column (every personal and institutional member of the AMS receives this journal).

14.2. Library Classification Schemes

The two main classification schemes used in libraries are the Library of Congress Classification and the Dewey Decimal Classification.

The Library of Congress Classification was developed in the early 1900s for the collections of the Library of Congress in the US. The main classes are denoted by single capital letters, the subclasses by two capital letters, and divisions of the subclasses by integers, which themselves can be subdivided beyond the decimal point. Mathematics is subclass QA of the science class Q; an outline of this subclass is given in Table 14.2. Every book is identified by a call number. For example, the first edition of this book has the call number QA42.H54, where 42 is the subdivision of class QA described as "Communication of mathematical information, language, authorship" and H54 is the author number.

The Dewey Decimal Classification was first introduced in 1876 and is used by most libraries in the UK. It divides knowledge into ten different broad subject areas called classes, numbered 000, 100, ..., 900. Class 500 covers Natural Sciences and Mathematics, and subclass 510 Mathematics. Table 14.3 gives an outline of subclass 510.

Both schemes were developed to classify the mathematics of the nineteenth century, so some modern areas of mathematics fit into them rather awkwardly. Variation is possible in the way the schemes are used in different libraries. For example, in the John Rylands University Library of Manchester the unassigned sections 517 and 518 are used for analysis and numerical analysis, respectively. My experience is that because of the vagaries of the schemes and the differing opinions of librarians who choose classifications, books are often not classified in the way you would expect.
Table 14.2. Outline of Library of Congress Classification subclass QA.

1-99      General mathematics
101-145   Elementary mathematics, arithmetic
150-272   Algebra
273-274   Probabilities
276-280   Mathematical statistics
281       Interpolation
292       Sequences
295       Series
297-299   Numerical analysis
300-433   Analysis
401-433   Analytical problems used in the solution of physical problems
440-699   Geometry (including topology)
801-939   Analytical mechanics

Table 14.3. Outline of Dewey Decimal Classification subclass 510 (21st edition, 1996).

510*  Mathematics
511   General principles of mathematics
512   Algebra, number theory
513   Arithmetic
514   Topology
515   Analysis
516   Geometry
517   Unassigned
518   Unassigned
519   Probabilities and applied mathematics

*Section 510 covers the general works of the entire subclass.

If you are looking for a specific book, searching for it by author and title (or by ISBN) in an on-line catalogue is usually the best way of locating it.

14.3. Review, Abstract and Citation Services

When you need to find out what work has been done in a particular area or by a particular author, or need to track down an incomplete reference, you should consult one of the reviews or citation collections. The main ones are as follows.

Mathematical Reviews (MR) is run by the American Mathematical Society (AMS) and was first published in 1940. Each month the journal publishes short reviews of recently published papers drawn from approximately 2000 scholarly publications. Each review either is written by one of the nearly 12,000 reviewers or is a reprint of the paper's abstract. The reviews are arranged in accordance with the Mathematics Subject Classifications (see §6.7). MR is particularly useful for finding details of a paper in a journal that your library does not receive—based on the review you can decide whether to order the paper via the inter-library service. Sometimes you will see an entry in a reference list containing a term such as MR 31 #1635. This means that the article in question was reviewed in volume 31 of MR, as review number 1635.

The AMS also produces Current Mathematical Publications (CMP), which is essentially a version of MR that contains the bibliographic records but not the reviews. However, CMP is much more up to date than MR: it is issued every three weeks and contains a list of items received by the MR office, most of which will eventually be reviewed in MR.

Computing Reviews (CR) plays a role for computer science similar to the one MR plays for mathematics. It is published by the Association for Computing Machinery and uses its own classification scheme (see §6.7). The other major abstracting journal for computer science is Computer and Control Abstracts; it has wider coverage than CR.

Current Contents (CC), from the Institute for Scientific Information (ISI), Philadelphia, is a weekly list of journal contents pages (similar in size to the US TV Guide). The Physical Sciences edition is the one in which mathematics and computer science journals appear. Each issue of CC is arranged by subject area and contains an index of title words. Each issue also contains an article by Eugene Garfield, the founder of ISI; these articles often report citation statistics, such as most-cited papers in particular subject areas.

The Science Citation Index (SCI), also from the ISI, records reference lists of papers in such a way that the question "which papers cite a given paper?" can be answered.
Approximately 3300 journals are indexed at present, across all science subjects (the total number of scholarly science journals is of the order of 25,000). The SCI began in the early 1960s and covers the period from 1945 to the present [88], [91]. The SCI provides a means for finding newer papers that were influenced by older ones, whereas searching reference lists takes you in the opposite direction. For example, suppose we are interested in the reference

W. KAHAN, Further remarks on reducing truncation errors, Comm. ACM, 8 (1965), p. 40.

If we look under "KAHAN W" in the five-year cumulation 1975-1979 Citation Index, and then under the entry for his 1965 paper, we find four papers that include Kahan's in their reference lists. For each of these citing papers the first author, journal, volume, starting page number and year of publication are given. The full bibliographic data for these papers can be found in the SCI Source Index. Looking up these four papers in later indexes we find further references on the topic of interest.

If you can remember the title of a paper but not the author, the SCI Permuterm Subject Index can help. This is a key word index in which every significant word in each article title is paired with every other significant word from the same title. Under each pair of key words is a list of relevant authors; their papers may be found in the Source Index. As an example, if all we know of Kahan's article are the words "truncation" and "errors" and the year of publication, we can find the full details of the article from the five-year cumulation 1965-1969 Permuterm Subject Index and the corresponding Source Index. Garfield's article "How to Use the Science Citation Index" [99] gives detailed examples of the use of the SCI and is well worth reading.

Electronic versions of the SCI can be used to search for papers with given key words in the title, abstract or indexing fields, to obtain a list of papers by an author, and to obtain a list of an author's cited works, showing the number of times and where each work has been cited. A limitation of the SCI is that it records only the first author of a cited paper, so citations to a paper by "Smith and Jones" will benefit Smith's citation count but not Jones's.

The ISI has a Web page at http://www.isinet.com/. It contains some of Garfield's past articles about citation indexing and gives details about electronic access to the ISI products.

Zentralblatt für Mathematik und ihre Grenzgebiete (ZM), also titled Mathematical Abstracts, is another mathematical review journal. It was founded in 1931 and is published by Springer-Verlag and Fachinformationszentrum Karlsruhe. It uses the Mathematics Subject Classifications, and its coverage is almost identical to that of MR.

MR, SCI and ZM are available in computer-readable form on compact disc (CD-ROM) and in on-line databases. A number of databases, including the ISI Citation Indexes, can be accessed from BIDS (Bath Information and Data Services) at http://www.bids.ac.uk, which operates from the University of Bath in the UK. Most of the databases are available on an institutional-license basis, with most UK universities having a license.

14.4. Text Editors

As the use of computers in research and writing increases, we spend more and more time at the keyboard, much of it spent typing text with a text editor. Of all the various programs we use, the text editor is the one that generates the most extreme feelings: most people have a favourite editor and fervently defend it against criticism.
Under the Unix operating system two editors are by far the most widely used. The first, and oldest, is vi, which has the advantage that it is available on every Unix system. The second, and the more powerful, is Emacs. Not only does Emacs do almost everything you would expect of a text editor, but from within it you can run other programs and view and edit their output; rename, move and delete files; send and read electronic mail; and surf the Web. Some workstation users carry out nearly all their computing "within Emacs", leaving it running all the time.

Citation Facts and Figures

The Science Citation Index (1945-1988) contains about 33 million cited items. The most-cited paper is

O. H. Lowry, N. J. Rosebrough, A. L. Farr and R. J. Randall, Protein measurement with the Folin phenol reagent, J. Biol. Chem., 193:265-275, 1951.

which has 187,652 citations. The next most-cited paper (also on protein methods, as are all of the top three) has 59,759 citations. The six most-cited papers from Mathematics, Statistics and Computer Science are as follows; their rankings on the list of most-cited papers range from 24th to 297th.

1. D. B. Duncan, Multiple range and multiple F tests, Biometrics, 11:1-42, 1955. (8985 citations)
2. E. L. Kaplan and P. Meier, Nonparametric estimation from incomplete observations, J. Amer. Statist. Assoc., 53:457-481, 1958. (4756 citations)
3. D. W. Marquardt, An algorithm for least-squares estimation of nonlinear parameters, J. Soc. Indust. Appl. Math., 11:431-441, 1963. (3441 citations)
4. D. R. Cox, Regression models and life-tables, J. Royal Statist. Soc. Ser. B Metho., 34:187-220, 1972. (3392 citations)
5. R. Fletcher and M. J. D. Powell, A rapidly convergent descent method for minimization, Comput. J., 6:163-168, 1963. (1948 citations)
6. J. W. Cooley and J. W. Tukey, An algorithm for the machine calculation of complex Fourier series, Math. Comp., 19:297-301, 1965. (1845 citations)

In the period 1945-1988, 55.8% of the papers in the index were cited only once, and 24.1% were cited 2-4 times. 5767 papers were cited 500 or more times. References: [93], [94], [96], [97], [98].

There are various versions of Emacs, one of the most popular of which is GNU Emacs [51], [107]. (GNU stands for "GNU's Not Unix" and refers to a Unix-like operating system that is being built by Richard Stallman and his associates at the Free Software Foundation.) GNU Emacs is available for workstations and 386 (and above)-based PC-compatibles; other PC versions include Freemacs, MicroEmacs and Epsilon. GNU Emacs contains modes for editing special types of files, such as TeX, LaTeX and Fortran files. In these modes special commands are available; for example, in TeX mode one Emacs command will invoke TeX on the file in the current editing buffer. Appendix C contains a list of the 60+ most useful GNU Emacs commands and should be helpful to beginners and intermediate users.

For anyone who spends a lot of time typing at a computer, learning to touch type is essential. This need not be time-consuming, and can be done using one of the self-tutor programs available.

14.5. Spelling Checking, Filters and Pipes

Programs that check, and possibly correct, spelling errors are widely available. These are useful tools, even for the writer who spells well, because, as McIlroy has observed [199], most spelling errors are caused by errors in typing. The Unix spelling checker spell takes as input a text file and produces as output a list of possibly misspelled words. (More details on the Unix commands described in this section can be obtained from a Unix reference manual or from the on-line manual pages by typing man followed by the command name.)
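A minimal invocation might look like the following (the filename paper.txt is hypothetical):

spell paper.txt > suspects
wc -l suspects

The first command collects the suspect words, one per line, in the file suspects; the second counts how many there are.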
The list comprises words that are not in spell's dictionary and which cannot be generated from its entries by adding certain inflections, prefixes or suffixes; a special "stop list" prevents non-words such as beginer (begin + er) from being accepted. The development of spell is described in a fascinating article by McIlroy [199] and is summarized by Bentley in [20, Chap. 13]. When I ran an earlier version of this book through delatex (see below), followed by spell with the British spelling option, part of the output I obtained was as follows (the output is wrapped into columns here to save space):

bbl  beginer  blah  blocksize  bst  capitalized  cccc  co  comp  computerized  de  deci  delatex  dependant  der  diag  dispensible  dvi  ees  eg

The -ize endings are flagged as errors because spell expects you to use -ise endings when the British spelling option is in effect. The one genuine error revealed by this extract is dispensible, which should be dispensable.

Spell can be instructed to remove from its output words found in a supplemental list provided by the user. This way you can force spell to accept technical terms, acronyms and so on. Spell can be used to check a single word, mathematical, say, by typing at the command line

spell
mathematical
^d

(^d is obtained by holding down Ctrl and typing d). In this case there is no output because the word is recognized by spell.

If you use GNU Emacs, you can call the Unix spell program from within the editor. The command Esc-$ checks the word under the cursor and Esc-x spell-buffer checks the spelling of the whole buffer. In each case you are given the opportunity to edit an unrecognized word and replace all occurrences with the corrected version.

A problem with spell checking TeX documents is that most TeX commands will be flagged as errors. When working in Unix the solution is to run the file through detex, or delatex for LaTeX documents, before passing it to spell; these filters strip the file of all TeX and LaTeX commands, respectively. (The delatex filter is available from netlib in the typesetting directory; see §14.1.) An alternative is simply to have the spelling checker learn the TeX and LaTeX command names as though they were valid words.

It is important to realize that spelling checkers will not identify misspellings that are themselves words, such as form for from (a common error in published papers), except for expect, or conversation for conservation. For this reason, bigger does not necessarily mean better for dictionaries used by spelling checkers. Peterson [225] investigates the relationship between dictionary size and the probability of undetected typing errors; he recommends that "word lists used in spelling programs should be kept small."

Spelling correction programs are also available on various computers. These not only flag unrecognized words, but present guesses of the correct spelling. They look for errors such as transposition of letters, or a single letter extra, missing or incorrect (research mentioned by Peterson in [224] finds these to be the cause of 80% of all spelling errors). The suggested corrections can be amusing: for example, one spelling corrector I have used suggests dunce for Duncan and turkey for Tukey.

Ispell is an interactive spelling checker and corrector available for Unix and DOS systems. When invoked with a filename it displays each word in the file that does not appear in its dictionary and offers a list of "near misses" and guesses of the correct word. You can accept one of the suggested words or type your own replacement. An Ispell interface exists for GNU Emacs.
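The filtering and checking steps described above combine into a single pipeline. A minimal sketch, assuming delatex accepts a filename argument and that your version of spell takes -b for the British spelling option (both the filename paper.tex and the flag should be checked against your manual pages):

delatex paper.tex | spell -b

Many versions of Ispell offer a similar effect directly through a TeX mode, commonly invoked as ispell -t paper.tex, which skips over TeX and LaTeX commands rather than flagging them.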
You can do limited searching of spell's dictionary using the look command, which finds all words with a specified prefix. Thus

look comp | grep ion$

displays all words that begin with comp- and end in -ion.

It is interesting to examine the frequency of word usage in your writing. Under Unix this can be done with the following pipe, where file is a filename:

cat file | deroff -w | tr A-Z a-z | sort | uniq -c | sort -rn

The deroff -w filter divides the text into words, one per line (at the same time removing any troff commands that may be present), tr A-Z a-z converts all words to lower case, sort sorts the list, uniq -c converts repeated lines into a single line preceded by a count of how many times the line occurred, and sort -rn sorts on the numeric count field in reverse order (largest to smallest). Applying this pipe to an earlier version of this book, the first twenty-five lines of output began

1533 the
 709 a
 696 of
 690 to
 613 is

followed, in decreasing order of count, by the words and, in, ndex, it, be, for, that, are, mph, ite, by, as, egin, nd, this, i, you, not, or, with. (The non-words ndex, mph, ite, egin, and nd are left-over fragments of LaTeX commands, which appear since I did not use delatex in the pipe.) It is worth examining word frequency counts to see if you are overusing any words. As far as I am aware, this particular count does not reveal any abnormalities in my word usage.

The Unix diff command takes two files and produces a list of changes that would have to be made to the first file to make it the same as the second. The changes are expressed in a syntax similar to that used in the ed text editor. If you use the -c option (diff -c) then the three lines before and after each change are printed to show the context. The main use of diff for the writer is to see how two versions of a file (a current and an earlier draft, say) differ. If your co-author updates the source file for a TeX paper, you can use diff to see what changes have been made.

Another useful filter is the wc command, which counts the lines, words and characters in a file. When I ran the source for an almost-final draft of this book through wc (using the command wc *.tex, since the source is contained in several .tex files) I obtained a final line of output reporting 17,312 lines and 87,478 words. Many of these words are LaTeX instructions that do not result in a printed word, so this is an overestimate of the actual word count. When I ran the source through delatex before sending it to wc the word count dropped to 73,302.
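Both counts can be obtained in one pass. A minimal sketch, assuming (as with most Unix filters) that delatex reads its standard input when given no filename:

wc -w *.tex
cat *.tex | delatex | wc -w

The first command counts the words in each source file and reports a total; the second strips the LaTeX commands before counting, giving the smaller, more realistic figure.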
14.6. Style Checkers

Programs exist that try to check the style of your text. Various commercial programs are available for PCs, and style checking programs have been available for Unix machines since the late 1970s. An article by Cherry [57] describes several of them: these include style, which "reads a document and prints a summary of readability indices, sentence length and type, word usage, and sentence openers", and diction, which "prints all sentences in a document containing phrases that are either frequently misused or indicate wordiness".

One of the readability formulas used by style is the Kincaid formula, which assigns the reading grade (relative to the US schooling system)

11.8 × (average syllables per word) + 0.39 × (average words per sentence) − 15.59.

The formula was derived by a process that involved measuring how well a large sample of US Navy personnel understood Navy technical manuals. (It is a contractual requirement of the US Department of Defense that technical manuals achieve a particular reading measure.)

In his book The Art of Plain Talk [81, Chap. 7], Flesch proposes the following score for measuring the difficulty of a piece of writing:

s = 0.1338 × (average words per sentence) + 0.0645 × (average affixes per 100 words) − 0.0659 × (average personal references per 100 words) − 0.75.

The score is usually between 0 and 7, and Flesch classifies the scores in unit intervals from s < 1, "very easy", to s > 6, "very difficult". He states that comics fall into the "very easy" class and scientific journals into the "very difficult" class. Klare [155] explains that "Prior to Flesch's time, readability was a little-used word except in educational circles, but he made it an important concept in most areas of mass communication."

The limitations of readability indices are well known and are recognized by their inventors [155], [198]. For example, the Kincaid and Flesch formulas are invariant under permutations of the words of a sentence. More generally, readability formulas measure style and not clarity, content or organization. For the writer, a readability formula is best regarded as a rough means for rating a draft to see whether it is suitable for the intended audience.
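As a rough worked example of the Kincaid formula (the numbers are invented for illustration), a draft averaging 1.5 syllables per word and 20 words per sentence is assigned the grade

11.8 × 1.5 + 0.39 × 20 − 15.59 = 17.70 + 7.80 − 15.59 = 9.91,

that is, roughly tenth-grade reading level. Shortening the sentences to an average of 15 words lowers the grade to 11.8 × 1.5 + 0.39 × 15 − 15.59 = 7.96, which illustrates how strongly sentence length alone drives the index.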
AT&T's Bell Laboratories markets The Writer's Workbench, an extensive system that incorporates style, diction and various other programs [186]. One of these is double, which checks for occurrences of a word twice in succession, possibly on different lines. Repeated words are hard for a human proofreader to detect.

Hartley [133] obtained suggestions from nine colleagues about how to improve a draft of his paper [132] and compared them with the suggestions generated by The Writer's Workbench. He concluded that "Text-editing programs can deal well with textual issues (perhaps better than humans) but humans have prior knowledge and expertise about content which programs currently lack."

Knuth ran a variety of sample texts through diction and style [164, §40], including technical writing, Wuthering Heights and Grimm's Fairy Tales. He found that his book of commentaries on Chapter 3, verse 16 of each book of the Bible [162] was given a significantly lower reading grade level than the other samples, and concluded that we tend to write more simply when writing outside our own field.

Appendix A

The Greek Alphabet

Capital  Lower case  English name
Α        α           alpha
Β        β           beta
Γ        γ           gamma
Δ        δ           delta
Ε        ε           epsilon
Ζ        ζ           zeta
Η        η           eta
Θ        θ, ϑ        theta
Ι        ι           iota
Κ        κ           kappa
Λ        λ           lambda
Μ        μ           mu
Ν        ν           nu
Ξ        ξ           xi
Ο        ο           omicron
Π        π           pi
Ρ        ρ, ϱ        rho
Σ        σ, ς        sigma
Τ        τ           tau
Υ        υ           upsilon
Φ        φ, ϕ        phi
Χ        χ           chi
Ψ        ψ           psi
Ω        ω           omega

Appendix B

Summary of TeX and LaTeX Symbols

This appendix is based on symbols.tex version 3.2 by David Carlisle, available from the CTAN Archives.

Table B.1. Accents.
\'{o}  \`{o}  \^{o}  \"{o}  \~{o}  \={o}  \.{o}  \u{o}  \v{o}  \H{o}  \t{oo}  \c{o}  \d{o}  \b{o}  \r{o}

Table B.2. Dotless letters for use with accents.
\i  \j

Table B.3. Math mode accents.
\acute{a}  \bar{a}  \breve{a}  \check{a}  \ddot{a}  \dot{a}  \grave{a}  \hat{a}  \tilde{a}  \vec{a}  \widehat{a}  \widetilde{a}

Table B.4. Foreign symbols.
\oe  \OE  \ae  \AE  \aa  \AA  \o  \O  \l  \L  \ss

Table B.17. AMS Delimiters.
\ulcorner  \urcorner  \llcorner  \lrcorner

Table B.18. AMS Arrows.
\dashrightarrow  \leftleftarrows  \Lleftarrow  \leftarrowtail  \leftrightharpoons  \circlearrowleft  \upuparrows  \downharpoonleft  \leftrightsquigarrow  \rightleftarrows  \rightarrowtail  \rightleftharpoons  \circlearrowright  \downdownarrows  \downharpoonright  \dashleftarrow  \leftrightarrows  \twoheadleftarrow  \looparrowleft  \curvearrowleft  \Lsh  \upharpoonleft  \multimap  \rightrightarrows  \twoheadrightarrow  \looparrowright  \curvearrowright  \Rsh  \upharpoonright  \rightsquigarrow

Table B.19. AMS Negated Arrows.
\nleftarrow  \nLeftarrow  \nleftrightarrow  \nrightarrow  \nRightarrow  \nLeftrightarrow

Table B.20. AMS Greek.
\digamma  \varkappa

Table B.21. AMS Hebrew.
\beth  \gimel  \daleth

Table B.22. AMS Miscellaneous.
\triangledown  \circledS  \nexists  \Game  \varnothing  \blacksquare  \sphericalangle  \diagup  \hslash  \square  \angle  \mho  \Bbbk  \blacktriangle  \blacklozenge  \complement  \vartriangle  \lozenge  \measuredangle  \Finv  \backprime  \blacktriangledown  \bigstar  \eth

Table B.23. AMS Binary Operators.
\dotplus  \Cup  \doublebarwedge  \boxdot  \ltimes  \rightthreetimes  \circleddash  \centerdot  \smallsetminus  \barwedge  \boxminus  \boxplus  \rtimes  \curlywedge  \Cap  \veebar  \boxtimes  \divideontimes  \leftthreetimes  \curlyvee  \circledcirc

Table B.24. AMS Binary Relations.
\leqq  \eqslantless  \lessapprox  \lessdot  \lessgtr  \lesseqqgtr  \risingdotseq  \lesssim  \approxeq  \lll  \lesseqgtr  \doteqdot  \fallingdotseq  \backsimeq  \Subset  \precsim  \vartriangleleft  \vDash  \smallsmile  \bumpeq  \precapprox  \trianglelefteq  \Vvdash  \smallfrown  \Bumpeq  \geqslant  \gtrsim  \gtrdot  \gtrless  \gtreqqless  \circeq  \thicksim  \supseteqq  \sqsupset  \curlyeqsucc  \succapprox  \trianglerighteq  \shortmid  \between  \varpropto  \therefore  \eqslantgtr  \gtrapprox  \ggg  \gtreqless  \eqcirc  \triangleq  \thickapprox  \Supset  \succcurlyeq  \succsim  \vartriangleright  \Vdash  \shortparallel  \pitchfork  \backepsilon  \because  \blacktriangleleft

Table B.25. AMS Negated Binary Relations.
\nless  \nleqq  \lvertneqq  \nprec  \precnapprox  \nmid  \ntriangleleft  \subsetneq  \varsubsetneqq  \ngeqslant  \gneqq  \gnapprox  \nsucceq  \ncong  \nvDash  \ntrianglerighteq  \supsetneq  \varsupsetneqq  \nleq  \lneq  \lnsim  \npreceq  \nsim  \nvdash  \ntrianglelefteq  \varsubsetneq  \ngtr  \ngeqq  \gvertneqq  \nsucc  \succnsim  \nshortparallel  \nVDash  \nsupseteq  \varsupsetneq  \nleqslant  \lneqq  \lnapprox  \precnsim  \nshortmid  \nsubseteq  \subsetneqq  \ngeq  \gneq  \gnsim  \succnapprox  \nparallel  \ntriangleright  \nsupseteqq  \supsetneqq

Table B.26. Math Alphabets.
Command                Required package
\mathrm{ABCdef}
\mathit{ABCdef}
\mathnormal{ABCdef}
\mathcal{ABC}
\mathcal{ABC}          euscript with option mathcal
\mathfrak{ABCdef}      eufrak
\mathbb{ABC}           amsfonts or amssymb

Appendix C

GNU Emacs—The Sixty+ Most Useful Commands

^x means hold down the control key and press x. Esc-x means press Esc, release it, then press x (or hold down the meta key and press x). Use the tab key for filename completion in the minibuffer.
"x ~c ~x u ~n ~a "f Esc-f Esc-< "v "1 "d exit undo "g ~z General panic key— aborts current situation suspend Emacs (f g to restart) Cursor Motion next line ~P beginning of line ~e ~b forward character forward word Esc-b beginning of buffer Esc-> next page Esc-v redraw screen, centring line Deletion delete next character Esc-d "x "f "x i "x "s "x s ~x "w Esc-x previous line end of line back character back word end of buffer previous page delete next word Files load file read file into current buffer save file save all buffers write named file write-region save region 235 ~x b ~x ~k Buffers switch to another buffer kill buffer "x "b list all buffers Search and Replace ~s incremental search forward* ~r incremental search backward* 0 Esc- /, query replace * terminate with Esc (leave point on found item) or "g (point remains where it was at start of search). Cut and Paste (region = from mark to point) ~k kill to end of line ~k ~k kills to end of line then next new line "w kill region Esc-w copy region to kill ring "y yank from kill ring Esc-y yank pop (use after "y) ~@ or ~ space set mark ~x "x swap point and mark ~u ~@ goto previous mark Esc-h mark paragraph "x h mark buffer "t Esc-1 ~x ~u ~h 1 Esc-q Transposing, Capitalizing, Help, etc. transpose characters Esc-t transpose words lowercase word Esc-u uppercase word uppercase region ~x "1 lowercase region ~h k insert literal describe key (help) show last 100 chars typed ~h t Emacs tutorial reformat paragraph ~x *t transpose lines Windows 'x 2 divide screen into two windows "x o switch to other window s x 1 current window becomes only window Mail "x m 'c ~c "c ~s mail buffer send mail, select other buffer send mail, leave mail buffer open ~x ( ~x ) ~x e Macros start recording keyboard macro end recording keyboard macro play keyboard macro Shell Esc-! Esc-1 Esc-x shell shell with region as input start shell buffer This page intentionally left blank Appendix D Mathematical and Other Organizations American Mathematical Society P.O. Box 6248 Providence, Rhode Island 02940-6248 USA tel: 800-321-4AMS (4267) or 401-455-4000 fax: 401-331-3842 email: amsQmath. ams. org Web: http://www.ams.org/ Canadian Mathematical Society 577 King Edward, Suite 109 POB 450, Station A Ottawa, Ontario KIN 6N5, Canada tel: 613-562-5702 fax: 613-565-1539 email: of f iceOcms .math.ca Web: http://camel.math.ca/CMS/ Edinburgh Mathematical Society University of Edinburgh James Clerk Maxwell Building Mayfield Road Edinburgh EH9 3JZ Scotland email: [email protected] Web: http://www.maths.ed.ac.uk/~chris/ems/ 239 European Mathematical Society EMS Secretariat: Mrs. T. Makelainen, Department of Mathematics P.O. Box 4 (Hallituskatu 15) FIN-00014 University of Helsinki Finland email: [email protected] Web: http:/ The Institute of Mathematics and Its Applications Catherine Richards House 16 Nelson Street Southend-on-Sea Essex SSI 1EF England tel: 01702 354020 fax: 01702 354111 email: postQima.org.uk Web: http: London Mathematical Society Burlington House Piccadilly London W1V ONL England tel: 0171 437 5377 fax: 0171 439 4629 email: ImsOlms .ac.uk Web: http://www.1ms.ac.uk/ Mathematical Association 259 London Road Leicester LE2 3BE England tel: 0116 2703877 fax: 0116 2703877 MATHEMATICAL AND OTHER ORGANIZATIONS Mathematical Association of America 1529 Eighteenth Street, NW Washington, D.C. 20036-1385 USA tel: 202-387-5200 fax: 202-265-2384 email: [email protected] Web: http://www.maa.org For member services: The MAA Service Center P.O. Box 91112 Washington, D.C. 
tel: 800-331-1622, 301-617-7800
fax: 301-206-9789

The Society for Industrial and Applied Mathematics
3600 University City Science Center
Philadelphia, Pennsylvania 19104-2688, USA
tel: 215-382-9800
fax: 215-386-7999
email: [email protected]
Web: http://www.siam.org/

TeX Users Group
P.O. Box 1239
Three Rivers, CA 93271-1239, USA
tel: 209-561-0112
fax: 209-561-4584
email: [email protected]
Web: http://www.tug.org/

Appendix E

Winners of Prizes for Expository Writing

This appendix is based on lists supplied by the Mathematical Association of America. The American Mathematical Society also awards prizes for mathematical writing; full details are available on the e-MATH Web page (see §14.1).

Winners of the Chauvenet Prize

Named after William Chauvenet (1820-1870), a professor of mathematics in the United States Navy, this prize is awarded for a "noteworthy paper published in English, such as will come within the range of profitable reading for a member of the Mathematical Association of America." The first twenty-four prize-winning papers (Bliss (1924)-Zalcman (1974)) are collected in the two-volume Chauvenet Papers [1] (which, usefully, are indexed).

1925 Gilbert Ames Bliss, Algebraic functions and their divisors, Ann. Math., 26, 1924, pp. 95-124.
1929 T. H. Hildebrandt, The Borel theorem and its generalizations, Bull. Amer. Math. Soc., 32, 1926, pp. 423-474.
1932 G. H. Hardy, An introduction to the theory of numbers, Bull. Amer. Math. Soc., 35, 1929, pp. 778-818.
1935 Dunham Jackson, Series of orthogonal polynomials, Ann. Math., 2 (34), 1933, pp. 527-545; Orthogonal trigonometric sums, Ann. Math., 2 (34), 1933, pp. 799-814; The convergence of Fourier series, Amer. Math. Monthly, 1934.
1938 G. T. Whyburn, On the structure of continua, Bull. Amer. Math. Soc., 42, 1936, pp. 49-73.
1941 Saunders MacLane, Modular fields, Amer. Math. Monthly, 47 (5), 1940, pp. 259-274; Some recent advances in algebra, Amer. Math. Monthly, 46 (1), 1939, pp. 3-19.
1944 Robert H. Cameron, Some introductory exercises in the manipulation of Fourier transforms, National Mathematics Magazine, 1941, pp. 331-356.
1947 Paul R. Halmos, The foundations of probability, Amer. Math. Monthly, 51 (9), 1944, pp. 493-510.
1950 Mark Kac, Random walk and the theory of Brownian motion, Amer. Math. Monthly, 54 (7), 1947, pp. 369-391.
1953 E. J. McShane, Partial orderings and Moore-Smith limits, Amer. Math. Monthly, 59 (1), 1952, pp. 1-11.
1956 R. H. Bruck, Recent advances in the foundations of Euclidean plane geometry, Amer. Math. Monthly, 52 (7, Part II), 1955, pp. 2-17.
1960 Cornelius Lanczos, Linear systems in self-adjoint form, Amer. Math. Monthly, 65 (9), 1958, pp. 665-679.
1963 Philip J. Davis, Leonhard Euler's integral: A historical profile of the gamma function, Amer. Math. Monthly, 66 (10), 1959, pp. 849-869.
1964 Leon A. Henkin, Are logic and mathematics identical?, Science, 138 (3542), 1962, pp. 788-794. Jack K. Hale and Joseph P. LaSalle, Differential equations: Linearity vs. nonlinearity, SIAM Rev., 5 (3), 1963, pp. 249-272.
1966 No award
1967 Guido L. Weiss, Harmonic analysis, MAA Stud. Math., 3, 1965, pp. 124-178.
1968 Mark Kac, Can one hear the shape of a drum?, Amer. Math. Monthly, 73 (4, Part II), Slaught Papers No. 11, 1966, pp. 1-23.
1969 No award
1970 Shiing-Shen Chern, Curves and surfaces in Euclidean space, MAA Stud. Math., 1967, pp. 16-56.
1971 Norman Levinson, A motivated account of an elementary proof of the prime number theorem, Amer. Math. Monthly, 76 (2), 1969, pp. 225-245.
1972 Jean François Treves, On local solvability of linear partial differential equations, Bull. Amer. Math. Soc., 76, 1970, pp. 552-571.
1973 Carl D. Olds, The simple continued fraction expansion of e, Amer. Math. Monthly, 77 (9), 1970, pp. 968-974.
1974 Peter D. Lax, The formation and decay of shock waves, Amer. Math. Monthly, 79 (3), 1972, pp. 227-241.
1975 Martin D. Davis and Reuben Hersh, Hilbert's 10th problem, Scientific American, 229 (5), November 1973, pp. 84-91.
1976 Lawrence Zalcman, Real proofs of complex theorems (and vice versa), Amer. Math. Monthly, 81 (2), 1974, pp. 115-137.
1977 W. Gilbert Strang, Piecewise polynomials and the finite element method, Bull. Amer. Math. Soc., 79 (6), 1973, pp. 1128-1137.
1978 Shreeram S. Abhyankar, Historical ramblings in algebraic geometry and related algebra, Amer. Math. Monthly, 83 (6), 1976, pp. 409-448.
1979 Neil J. A. Sloane, Error-correcting codes and invariant theory: New applications of a nineteenth-century technique, Amer. Math. Monthly, 84 (2), 1977, pp. 82-107.
1980 Heinz Bauer, Approximation and abstract boundaries, Amer. Math. Monthly, 85 (8), 1978, pp. 632-647.
1981 Kenneth I. Gross, On the evolution of noncommutative harmonic analysis, Amer. Math. Monthly, 85 (7), 1978, pp. 525-548.
1982 No award
1983 No award
1984 R. Arthur Knoebel, Exponentials reiterated, Amer. Math. Monthly, 88 (4), 1981, pp. 235-252.
1985 Carl Pomerance, Recent developments in primality testing, Mathematical Intelligencer, 3 (3), 1981, pp. 97-105.
1986 George Miel, Of calculations past and present: The Archimedean algorithm, Amer. Math. Monthly, 90 (1), 1983, pp. 17-35.
1987 James H. Wilkinson, The perfidious polynomial, in Gene H. Golub, ed., Studies in Numerical Analysis, vol. 24 of Studies in Mathematics, The Mathematical Association of America, Washington, D.C., 1984, pp. 1-28.
1988 Steve Smale, On the efficiency of algorithms of analysis, Bull. Amer. Math. Soc., 13 (2), 1985, pp. 87-121.
1989 Jacob Korevaar, Ludwig Bieberbach's conjecture and its proof by Louis de Branges, Amer. Math. Monthly, 93 (7), 1986, pp. 505-514.
1990 David Allen Hoffman, The computer-aided discovery of new embedded minimal surfaces, Mathematical Intelligencer, 9, 1987.
1991 W. B. R. Lickorish and Kenneth C. Millett, The new polynomial invariants of knots and links, Math. Mag., 61 (1), 1988, pp. 3-23.
1992 Steven G. Krantz, What is several complex variables?, Amer. Math. Monthly, 94 (3), 1987, pp. 236-256.
1993 J. M. Borwein, P. B. Borwein, and D. H. Bailey, Ramanujan, modular equations, and approximations to Pi or how to compute one billion digits of Pi, Amer. Math. Monthly, 96 (3), 1989, pp. 201-219.
1994 Barry Mazur, Number theory as gadfly, Amer. Math. Monthly, 98 (7), 1991, pp. 593-610.
1995 Donald G. Saari, A visit to the Newtonian N-body problem via elementary complex variables, Amer. Math. Monthly, 97 (2), 1990, pp. 105-119.
1996 Joan Birman, New points of view in knot theory, Bull. Amer. Math. Soc., 28, 1993, pp. 253-287.
1997 Tom Hawkins, The birth of Lie's theory of groups, The Mathematical Intelligencer, 1994, pp. 6-17.

Winners of the Lester R. Ford Award

An award for articles in the American Mathematical Monthly.

1965 R. H. Bing, Spheres in E³, Amer. Math. Monthly, 71 (4), 1964, pp. 353-364. Louis Brand, A division algebra for sequences and its associated operational calculus, Amer. Math. Monthly, 71 (7), 1964, pp. 719-728. R. G. Kuller, Coin tossing, probability, and the Weierstrass approximation theorem, Math. Mag., 37, 1964, pp. 262-265.
R. D. Luce, The mathematics used in mathematical psychology, Amer. Math. Monthly, 71 (4), 1964, pp. 364-378. Hartley Rogers, Jr., Information theory, Math. Mag., 37, 1964, pp. 63-78. Elmer Tolsted, An elementary derivation of the Cauchy, Hölder, and Minkowski inequalities from Young's inequality, Math. Mag., 37, 1964, pp. 2-12.
1966 C. B. Allendoerfer, Generalizations of theorems about triangles, Math. Mag., 38, 1965, pp. 253-259. Peter D. Lax, Numerical solution of partial differential equations, Amer. Math. Monthly, 72 (2, Part II), Slaught Papers No. 10, 1965, pp. 74-84. Marvin Marcus and Henryk Minc, Permanents, Amer. Math. Monthly, 72 (6), 1965, pp. 577-591.
1967 Wai-Kai Chen, Boolean matrices and switching nets, Math. Mag., 39, 1966, pp. 1-8. D. R. Fulkerson, Flow networks and combinatorial operations research, Amer. Math. Monthly, 73 (2), 1966, pp. 115-138. Mark Kac, Can one hear the shape of a drum?, Amer. Math. Monthly, 73 (4, Part II), Slaught Papers No. 11, 1966, pp. 1-23. M. Z. Nashed, Some remarks on variations and differentials, Amer. Math. Monthly, 73 (4, Part II), Slaught Papers No. 11, 1966, pp. 63-76. P. B. Yale, Automorphisms of the complex numbers, Math. Mag., 39, 1966, pp. 135-141.
1968 Frederic Cunningham, Jr., Taking limits under the integral sign, Math. Mag., 40, 1967, pp. 179-186. W. F. Newns, Functional dependence, Amer. Math. Monthly, 74 (8), 1967, pp. 911-920. Daniel Pedoe, On a theorem in geometry, Amer. Math. Monthly, 74 (6), 1967, pp. 627-640. Keith L. Phillips, The maximal theorems of Hardy and Littlewood, Amer. Math. Monthly, 74 (6), 1967, pp. 648-660. F. V. Waugh and Margaret W. Maxfield, Side-and-diagonal numbers, Math. Mag., 40, 1967, pp. 74-83. Hans J. Zassenhaus, On the fundamental theorem of algebra, Amer. Math. Monthly, 74 (5), 1967, pp. 485-497.
1969 Harley Flanders, A proof of Minkowski's inequality for convex curves, Amer. Math. Monthly, 75 (6), 1968, pp. 581-593. George E. Forsythe, What to do till the computer scientist comes, Amer. Math. Monthly, 75 (5), 1968, pp. 454-462. M. F. Neuts, Are many 1-1 functions on the positive integers onto?, Math. Mag., 41, 1968, pp. 103-109. Pierre Samuel, Unique factorization, Amer. Math. Monthly, 75 (9), 1968, pp. 945-952. Hassler Whitney, The mathematics of physical quantities, I and II, Amer. Math. Monthly, 75 (2,3), 1968, pp. 115-138 and 227-256. Albert Wilansky, Spectral decomposition of matrices for high school students, Math. Mag., 41, 1968, pp. 51-59.
1970 Henry L. Alder, Partition identities—from Euler to the present, Amer. Math. Monthly, 76 (7), 1969, pp. 733-746. Ralph P. Boas, Inequalities for the derivatives of polynomials, Math. Mag., 42, 1969, pp. 165-174. W. A. Coppel, J. B. Fourier—on the occasion of his two hundredth birthday, Amer. Math. Monthly, 76 (5), 1969, pp. 468-483. Norman Levinson, A motivated account of an elementary proof of the prime number theorem, Amer. Math. Monthly, 76, 1969, pp. 225-245. John Milnor, A problem in cartography, Amer. Math. Monthly, 76, 1969, pp. 1101-1112. Ivan Niven, Formal power series, Amer. Math. Monthly, 76, 1969, pp. 871-889.
1971 Jean A. Dieudonné, The work of Nicholas Bourbaki, Amer. Math. Monthly, 77 (2), 1970, pp. 134-145. George E. Forsythe, Pitfalls in computation, or why a math book isn't enough, Amer. Math. Monthly, 77 (9), 1970, pp. 931-956. Paul R. Halmos, Finite-dimensional Hilbert spaces, Amer. Math. Monthly, 77 (5), 1970, pp. 457-464. Eric Langford, A problem in geometric probability, Math. Mag., 43, 1970, pp. 237-244.
P. V. O'Neil, Ulam's conjecture and graph reconstructions, Amer. Math. Monthly, 77 (1), 1970, pp. 35-43. Olga Taussky, Sums of squares, Amer. Math. Monthly, 77 (8), 1970, pp. 805-830.
1972 G. D. Chakerian and L. H. Lange, Geometric extremum problems, Math. Mag., 44, 1971, pp. 57-69. P. M. Cohn, Rings of fractions, Amer. Math. Monthly, 78 (6), 1971, pp. 596-615. Frederic Cunningham, Jr., The Kakeya problem for simply connected and for star-shaped sets, Amer. Math. Monthly, 78 (2), 1971, pp. 114-129. W. J. Ellison, Waring's problem, Amer. Math. Monthly, 78 (1), 1971, pp. 10-36. Leon Henkin, Mathematical foundations for mathematics, Amer. Math. Monthly, 78 (5), 1971, pp. 463-487. Victor Klee, What is a convex set?, Amer. Math. Monthly, 78 (6), 1971, pp. 616-631.
1973 Jean A. Dieudonné, The historical development of algebraic geometry, Amer. Math. Monthly, 79 (8), 1972, pp. 827-866. Samuel Karlin, Some mathematical models of population genetics, Amer. Math. Monthly, 79 (7), 1972, pp. 699-739. Peter D. Lax, The formation and decay of shock waves, Amer. Math. Monthly, 79 (3), 1972, pp. 227-241. Thomas L. Saaty, Thirteen colorful variations on Guthrie's four-color conjecture, Amer. Math. Monthly, 79 (1), 1972, pp. 2-43. Lynn A. Steen, Conjectures and counterexamples in metrization theory, Amer. Math. Monthly, 79 (2), 1972, pp. 113-132. R. L. Wilder, History in the mathematics curriculum: Its status, quality, and function, Amer. Math. Monthly, 79 (5), 1972, pp. 479-495.
1974 Patrick Billingsley, Prime numbers and Brownian motion, Amer. Math. Monthly, 80 (10), 1973, pp. 1099-1115. Garrett Birkhoff, Current trends in algebra, Amer. Math. Monthly, 80 (7), 1973, pp. 760-782. Martin Davis, Hilbert's tenth problem is unsolvable, Amer. Math. Monthly, 80 (3), 1973, pp. 233-269. I. J. Schoenberg, The elementary cases of Landau's problem of inequalities between derivatives, Amer. Math. Monthly, 80 (2), 1973, pp. 121-158. Lynn A. Steen, Highlights in the history of spectral theory, Amer. Math. Monthly, 80 (4), 1973, pp. 359-381. Robin J. Wilson, An introduction to matroid theory, Amer. Math. Monthly, 80 (5), 1973, pp. 500-525.
1975 Raymond Ayoub, Euler and the zeta function, Amer. Math. Monthly, 81 (10), 1974, pp. 1067-1086. James Callahan, Singularities and plane maps, Amer. Math. Monthly, 81 (3), 1974, pp. 211-240. Donald E. Knuth, Computer science and its relation to mathematics, Amer. Math. Monthly, 81 (4), 1974, pp. 323-343. Johannes C. C. Nitsche, Plateau's problems and their modern ramifications, Amer. Math. Monthly, 81 (9), 1974, pp. 945-968. S. K. Stein, Algebraic tiling, Amer. Math. Monthly, 81 (5), 1974, pp. 445-462. Lawrence Zalcman, Real proofs of complex theorems (and vice versa), Amer. Math. Monthly, 81 (2), 1974, pp. 115-137.
1976 M. L. Balinski and H. P. Young, The quota method of apportionment, Amer. Math. Monthly, 82 (7), 1975, pp. 701-730. E. A. Bender and J. R. Goldman, On the applications of Möbius inversion in combinatorial analysis, Amer. Math. Monthly, 82 (8), 1975, pp. 789-803. Branko Grünbaum, Venn diagrams and independent families of sets, Math. Mag., 48, 1975, pp. 12-23. J. E. Humphreys, Representations of SL(2,p), Amer. Math. Monthly, 82 (1), 1975, pp. 21-39. J. B. Keller and D. W. McLaughlin, The Feynman integral, Amer. Math. Monthly, 82 (5), 1975, pp. 451-465. J. J. Price, Topics in orthogonal functions, Amer. Math. Monthly, 82 (6), 1975, pp. 594-609.
1977 Shreeram S. Abhyankar, Historical ramblings in algebraic geometry and related algebra, Amer. Math. Monthly, 83 (6), 1976, pp. 409-448.
Joseph B. Keller, Inverse problems, Amer. Math. Monthly, 83 (2), 1976, pp. 107-118. D. S. Passman, What is a group ring?, Amer. Math. Monthly, 83 (3), 1976, pp. 173-185. James P. Jones, Diahachiro Sato, Hideo Wada, and Douglas Wiens, Diophantine representation of the set of prime numbers, Amer. Math. Monthly, 83 (6), 1976, pp. 449-464. J. H. Ewing, W. H. Gustafson, P. R. Halmos, S. H. Moolgavkar, W. H. Wheeler, and W. P. Ziemer, American mathematics from 1940 to the day before yesterday, Amer. Math. Monthly, 83 (7), 1976, pp. 503-516.
1978 Ralph P. Boas, Jr., Partial sums of infinite series, and how they grow, Amer. Math. Monthly, 84 (4), 1977, pp. 237-258. Louis H. Kauffman and Thomas F. Banchoff, Immersions and mod-2 quadratic forms, Amer. Math. Monthly, 84 (3), 1977, pp. 168-185. Neil J. A. Sloane, Error-correcting codes and invariant theory: New applications of a nineteenth-century technique, Amer. Math. Monthly, 84 (2), 1977, pp. 82-107.
1979 Bradley Efron, Controversies in the foundations of statistics, Amer. Math. Monthly, 85 (4), 1978, pp. 231-246. Ned Glick, Breaking records and breaking boards, Amer. Math. Monthly, 85 (1), 1978, pp. 2-26. Kenneth I. Gross, On the evolution of noncommutative harmonic analysis, Amer. Math. Monthly, 85 (7), 1978, pp. 525-548. Lawrence A. Shepp and Joseph B. Kruskal, Computerized tomography: The new medical X-ray technology, Amer. Math. Monthly, 85 (6), 1978, pp. 420-439.
1980 Desmond P. Fearnley-Sander, Hermann Grassmann and the creation of linear algebra, Amer. Math. Monthly, 86 (10), 1979, pp. 809-817. David Gale, The game of hex and the Brouwer fixed-point theorem, Amer. Math. Monthly, 86 (10), 1979, pp. 818-826. Karel Hrbacek, Nonstandard set theory, Amer. Math. Monthly, 86 (8), 1979, pp. 659-677. Cathleen S. Morawetz, Nonlinear conservation equations, Amer. Math. Monthly, 86 (4), 1979, pp. 284-287. Robert Osserman, Bonnesen-style isoperimetric inequalities, Amer. Math. Monthly, 86 (1), 1979, pp. 1-29.
1981 R. Creighton Buck, Sherlock Holmes in Babylon, Amer. Math. Monthly, 87 (5), 1980, pp. 335-345. Bruce H. Pourciau, Modern multiplier rules, Amer. Math. Monthly, 87 (6), 1980, pp. 433-452. Alan H. Schoenfeld, Teaching problem-solving skills, Amer. Math. Monthly, 87 (10), 1980, pp. 794-805. Edward R. Swart, The philosophical implications of the four-color problem, Amer. Math. Monthly, 87 (9), 1980, pp. 697-707. Lawrence A. Zalcman, Offbeat integral geometry, Amer. Math. Monthly, 87 (3), 1980, pp. 161-175.
1982 Philip J. Davis, Are there coincidences in mathematics?, Amer. Math. Monthly, 88 (5), 1981, pp. 311-320. R. Arthur Knoebel, Exponentials reiterated, Amer. Math. Monthly, 88 (4), 1981, pp. 235-252.
1983 Robert F. Brown, The fixed point property and Cartesian products, Amer. Math. Monthly, 89 (9), 1982, pp. 654-678. Tony Rothman, Genius and biographers: The fictionalization of Evariste Galois, Amer. Math. Monthly, 89 (2), 1982, pp. 84-106. Robert S. Strichartz, Radon inversion—variations on a theme, Amer. Math. Monthly, 89 (6), 1982, pp. 377-384 and 420-423 (solutions of problems).
1984 Judith V. Grabiner, Who gave you the epsilon? Cauchy and the origins of rigorous calculus, Amer. Math. Monthly, 90 (3), 1983, pp. 185-194. Roger Howe, Very basic Lie theory, Amer. Math. Monthly, 90 (9), 1983, pp. 600-623. John Milnor, On the geometry of the Kepler problem, Amer. Math. Monthly, 90 (6), 1983, pp. 353-365. Joel Spencer, Large numbers and unprovable theorems, Amer. Math. Monthly, 90 (10), 1983, pp. 669-675.
William C. Waterhouse, Do symmetric problems have symmetric solutions?, Amer. Math. Monthly, 90 (6), 1983, pp. 378-387.
1985 John D. Dixon, Factorization and primality tests, Amer. Math. Monthly, 91 (6), 1984, pp. 333-352. Donald G. Saari and John B. Urenko, Newton's method, circle maps, and chaotic motion, Amer. Math. Monthly, 91 (1), 1984, pp. 3-17.
1986 Jeffrey C. Lagarias, The 3x+1 problem and its generalizations, Amer. Math. Monthly, 92 (1), 1985, pp. 3-23. Michael E. Taylor, Review of Lars Hörmander's "The Analysis of Linear Partial Differential Operators, I and II", Amer. Math. Monthly, 92 (10), 1985, pp. 745-749.
1987 Stuart S. Antman, Review of Ann Hibner Koblitz's "A convergence of lives—Sophia Kovalevskaia: Scientist, Writer, Revolutionary", Amer. Math. Monthly, 93 (2), 1986, pp. 139-144. Joan Cleary, Sidney A. Morris and David Yost, Numerical geometry—numbers for shapes, Amer. Math. Monthly, 93 (4), 1986, pp. 260-275. Howard Hiller, Crystallography and cohomology of groups, Amer. Math. Monthly, 93 (10), 1986, pp. 765-779. Jacob Korevaar, Ludwig Bieberbach's conjecture and its proof by Louis de Branges, Amer. Math. Monthly, 93 (7), 1986, pp. 505-514. Peter M. Neumann, Review of Harold M. Edwards' "Galois Theory", Amer. Math. Monthly, 93 (5), 1986, pp. 407-411.
1988 James F. Epperson, On the Runge example, Amer. Math. Monthly, 94 (4), 1987, pp. 329-341. Stan Wagon, Fourteen proofs of a result about tiling a rectangle, Amer. Math. Monthly, 94 (7), 1987, pp. 601-617.
1989 Richard K. Guy, The strong law of small numbers, Amer. Math. Monthly, 95 (8), 1988, pp. 697-712. Gert Almkvist and Bruce Berndt, Gauss, Landen, Ramanujan, the arithmetic-geometric mean, ellipses, π and the "Ladies Diary", Amer. Math. Monthly, 95 (7), 1988, pp. 585-608.
1990 Jacob E. Goodman, Janos Pach and Chee K. Yap, Mountain climbing, ladder moving and the ring-width of a polygon, Amer. Math. Monthly, 96 (6), 1989, pp. 494-510. Doron Zeilberger, Kathy O'Hara's constructive proof of the unimodality of the Gaussian polynomials, Amer. Math. Monthly, 96 (7), 1989, pp. 590-602.
1991 Marcel Berger, Convexity, Amer. Math. Monthly, 97 (8), 1990, pp. 650-678. Ronald L. Graham and Frances Yao, A whirlwind tour of computational geometry, Amer. Math. Monthly, 97 (8), 1990, pp. 687-701. Joyce Justicz, Edward R. Scheinerman and Peter M. Winkler, Random intervals, Amer. Math. Monthly, 97 (10), 1990, pp. 881-889.
1992 Clement W. H. Lam, The search for a finite projective plane of order 10, Amer. Math. Monthly, 98, 1991, pp. 305-318.
1994 Bruce C. Berndt and S. Bhargava, Ramanujan—For lowbrows, Amer. Math. Monthly, 100, 1993, pp. 644-656. Reuben Hersh, Szeged in 1934, Amer. Math. Monthly, 100, 1993, pp. 219-230. Leonard Gillman, An axiomatic approach to the integral, Amer. Math. Monthly, 100, 1993, pp. 16-25. Joseph H. Silverman, Taxicabs and sums of two cubes, Amer. Math. Monthly, 100, 1993, pp. 331-340. Dan Velleman and István Szalkai, Versatile coins, Amer. Math. Monthly, 100, 1993, pp. 26-33.
1995 Fernando Q. Gouvêa, A marvelous proof, Amer. Math. Monthly, 101, 1994, pp. 203-222. Jonathan L. King, Three problems in search of a measure, Amer. Math. Monthly, 101, 1994, pp. 609-628. I. Kleiner and N. Movshovitz-Hadar, The role of paradoxes in the evolution of mathematics, Amer. Math. Monthly, 101, 1994, pp. 963-974. William C. Waterhouse, A counterexample for Germain, Amer. Math. Monthly, 101, 1994, pp. 140-150.
1996 Martin Aigner, Turán's graph theorem, Amer. Math. Monthly, 102, 1995, pp. 808-816.
Sheldon Axler, Down with determinants!, Amer. Math. Monthly, 102, 1995, pp. 139-154. John Oprea, Geometry and the Foucault pendulum, Amer. Math. Monthly, 102, 1995, pp. 515-522.
1997 Robert G. Bartle, Return to the Riemann integral, Amer. Math. Monthly, 103, 1996, pp. 625-632. A. F. Beardon, Sums of powers of integers, Amer. Math. Monthly, 103, 1996, pp. 201-213. John Brillhart and Patrick Morton, A case study in mathematical research: The Golay-Rudin-Shapiro sequence, Amer. Math. Monthly, 103, 1996, pp. 854-869.

Winners of the George Pólya Award

An award for articles in the College Mathematics Journal—formerly the Two-Year College Mathematics Journal.

1977 Julian Weisglass, Small groups: An alternative to the lecture method, 7 (1), 1976, pp. 15-20. Anneli Lax, Linear algebra, a potent tool, 7 (2), 1976, pp. 3-15.
1978 Allen H. Holmes, Walter J. Sanders and John W. LeDuc, Statistical inference for the general education student—it can be done, 8 (4), 1977, pp. 223-230. Freida Zames, Surface area and the cylinder area paradox, 8 (4), 1977, pp. 207-211.
1979 Richard L. Francis, A note on angle construction, 9 (2), 1978, pp. 75-80. Richard Plagge, Fractions without quotients: Arithmetic of repeating decimals, 9 (1), 1978, pp. 11-15.
1980 Hugh F. Ouellette and Gordon Bennett, The discovery of a generalization, 10 (2), 1979, pp. 100-106. Robert Nelson, Pictures, probability and paradox, 10 (3), 1979, pp. 182-190.
1981 Gulbank D. Chakerian, Circles and spheres, 11 (1), 1980, pp. 26-41. Dennis D. McCune, Robert G. Dean and William D. Clark, Calculators to motivate infinite composition of functions, 11 (3), 1980, pp. 189-195.
1982 John A. Mitchem, On the history and solution of the four-color problem, 12 (2), 1981, pp. 108-116. Peter L. Renz, Mathematical proof: What it is and what it ought to be, 12 (2), 1981, pp. 83-103.
1983 Douglas R. Hofstadter, Analogies and metaphors to explain Gödel's theorem, 13 (2), 1982, pp. 98-114. Paul R. Halmos, The thrills of abstraction, 13 (4), September 1982, pp. 243-251. Warren Page and V. N. Murty, Nearness relations among measures of central tendency and dispersion, Part 1, 13 (5), 1982, pp. 315-327.
1984 Ruma Falk and Maya Bar-Hillel, Probabilistic dependence between events, 14 (3), 1983, pp. 240-247. Richard J. Trudeau, How big is a point?, 14 (4), 1983, pp. 295-300.
1985 Anthony Barcellos, The fractal geometry of Mandelbrot, 15 (2), 1984, pp. 98-114. Kay W. Dundas, To build a better box, 15 (1), 1984, pp. 30-36.
1986 Philip J. Davis, What do I know? A study of mathematical self-awareness, 16 (1), 1985, pp. 22-41.
1987 Constance Reid, The autobiography of Julia Robinson, 17 (1), 1986, pp. 3-21. Irl C. Bivens, What a tangent line is when it isn't a limit, 17 (2), 1986, pp. 133-143.
1988 Dennis M. Luciano and Gordon M. Prichett, From Caesar ciphers to public-key cryptosystems, 18 (1), 1987, pp. 2-17. V. Frederick Rickey, Isaac Newton: Man, myth and mathematics, 18 (5), 1987.
1989 Edward Rozema, Why should we pivot in Gaussian elimination?, 19 (1), 1988, pp. 63-72. Beverly L. Brechner and John C. Mayer, Antoine's necklace or how to keep a necklace from falling apart, 19 (4), 1988, pp. 306-320.
1990 Israel Kleiner, Evolution of the function concept: A brief survey, 20 (4), 1989, pp. 282-300. Richard D. Neidinger, Automatic differentiation and APL, 20 (3), 1989, pp. 238-251.
1991 William B. Gearhart and Harris S. Shultz, The function sin x/x, 21 (2), 1990, pp. 90-99. Mark Schilling, The longest run of heads, 21 (3), 1990, pp. 196-207.
1993 William Dunham, Euler and the Fundamental Theorem of Algebra, 22 (4), 1991. Howard Eves, Two surprising theorems on Cavalieri Congruence, 22 (2), 1991.
1994 Charles W. Groetsch, Inverse problems and Torricelli's law, 24 (3), 1993, pp. 210-217. Dan Kalman, Six ways to sum a series, 24 (5), 1993, pp. 402-421.
1995 Anthony P. Ferzola, Euler and differentials, 25 (2), 1994. Paulo Ribenboim, Prime number records, 25 (4), 1994.
1996 John H. Ewing, Can we see the Mandelbrot set?, 26 (2), 1995, pp. 90-99. James G. Simmonds, A new look at an old function, e^(iθ), 26 (1), 1995, pp. 6-10.
1997 Chris Christensen, Newton's method for resolving affected equations, 27 (5), 1996, pp. 330-340. Leon Harkleroad, How mathematicians know what computers can't do, 27 (1), 1996, pp. 37-42.

Winners of the Carl B. Allendoerfer Award

An award for articles in Mathematics Magazine.

1977 Joseph A. Gallian, The search for finite simple groups, 49 (4), 1976, pp. 163-179. B. L. van der Waerden, Hamilton's discovery of quaternions, 49 (5), 1976, pp. 227-234.
1978 Geoffrey C. Shephard and Branko Grünbaum, Tilings by regular polygons, 50 (5), 1977, pp. 227-247. David A. Smith, Human population growth: Stability or explosion, 50 (4), 1977, pp. 186-197.
1979 Doris W. Schattschneider, Tiling the plane with congruent pentagons, 51 (1), 1978, pp. 29-44. Bruce C. Berndt, Ramanujan's Notebooks, 51 (3), 1978, pp. 147-164.
1980 Ernst Snapper, The three crises in mathematics: Logicism, intuitionism, and formalism, 52 (4), 1979, pp. 207-216. Victor L. Klee, Jr., Some unsolved problems in plane geometry, 52 (3), 1979.
1981 Stephen B. Maurer, The king chicken theorems, 53 (2), 1980, pp. 67-80. Donald E. Sanderson, Advanced plane topology from an elementary standpoint, 53 (2), 1980, pp. 81-89.
1982 J. Ian Richards, Continued fractions without tears, 54 (4), 1981, pp. 163-171. Marjorie Senechal, Which tetrahedra fill space?, 54 (5), 1981, pp. 227-243.
1983 Donald O. Koehler, Mathematics and literature, 55 (2), 1982, pp. 81-95. Clifford H. Wagner, A generic approach to iterative methods, 55 (5), 1982, pp. 259-273.
1984 Judith Grabiner, The changing concept of change: The derivative from Fermat to Weierstrass, 56 (4), 1983, pp. 195-206.
1985 Philip D. Straffin, Jr. and Bernard Grofman, Parliamentary coalitions: A tour of models, 57 (5), 1984, pp. 259-274. Frederick S. Gass, Constructive ordinal notation systems, 57 (3), 1984, pp. 131-141.
1986 Bart Braden, The design of an oscillating sprinkler, 58 (1), 1985, pp. 29-38. Saul Stahl, The other map coloring theorem, 58 (3), 1985, pp. 131-145.
1987 Israel Kleiner, The evolution of group theory, 59 (4), 1986, pp. 195-215. Paul Zorn, The Bieberbach conjecture, 59 (3), 1986, pp. 131-148.
1988 Steven Galovich, Products of sines and cosines, 60 (2), 1987, pp. 105-113. Bart Braden, Pólya's geometric picture of complex contour integrals, 60 (5), 1987, pp. 321-327.
1989 Judith V. Grabiner, The centrality of mathematics in the history of western thought, 61 (4), 1988, pp. 220-230. W. B. R. Lickorish and Kenneth C. Millett, The new polynomial invariants of knots and links, 61 (1), 1988, pp. 3-23.
1990 Fan K. Chung, Martin Gardner, and Ronald L. Graham, Steiner trees on a checkerboard, 62 (2), 1989, pp. 83-96. Thomas Archibald, Connectivity and smoke-rings: Green's second identity in its first fifty years, 62 (4), 1989, pp. 219-237.
1991 Ranjan Roy, The discovery of the series formula for π by Leibniz, Gregory and Nilakantha, 63 (5), 1990, pp. 291-306.
1992 Israel Kleiner, Rigor and proof in mathematics: A historical perspective, 64 (5), 1991, pp. 291-314. G. D. Chakerian and David Logothetti, Cube slices, pictorial triangles, and probability, 64 (4), 1991, pp. 219-241.
1994 Joan Hutchinson, Coloring ordinary maps, maps of empires, and maps of the moon, 66 (4), 1993, pp. 211-226.
1995 Lee Badger, Lazzarini's lucky approximation of π, 67 (2), 1994. Tristan Needham, The geometry of harmonic functions, 67 (2), 1994.
1996 Judith Grabiner, Descartes and problem-solving, 1995, pp. 83-97. Daniel J. Velleman and Gregory S. Call, Permutations and combination locks, 68, 1995, pp. 243-253.
1997 Colm Mulcahy, Plotting and scheming with wavelets, 69, December 1996, pp. 323-343. Lin Tan, Group of rational points on the unit circle, 69, June 1996.

Winners of the MAA Book Prize, renamed Beckenbach Book Prize (1986)

1984 Charles Robert Hadlock, Field Theory and Its Classical Problems, Carus Monograph No. 19, 1978.
1986 Edward Packel, The Mathematics of Games and Gambling, MAA New Mathematical Library Series, 1981.
1989 Thomas M. Thompson, From Error-Correcting Codes through Sphere Packings to Simple Groups, Carus Monograph No. 21.
1994 Steven George Krantz, Complex Analysis: The Geometric Viewpoint, Carus Mathematical Monographs, 1990.
1996 Constance Reid, The Search for E. T. Bell, Also Known as John Taine, Mathematical Association of America, 1993, x+372 pp. ISBN 0-88385-508-9.

Winners of the Merten M. Hasse Prize (established 1986)

1987 Anthony Barcellos, The fractal geometry of Mandelbrot, The College Mathematics Journal, 15, 1984.
1989 Irl C. Bivens, What a tangent line is when it isn't a limit, The College Mathematics Journal, 17, 1986.
1991 Barry A. Cipra, An introduction to the Ising model, Amer. Math. Monthly, 94 (10), 1987, pp. 937-959.
1993 J. M. Borwein, P. B. Borwein, and D. H. Bailey, Ramanujan, modular equations, and approximations to Pi or how to compute one billion digits of Pi, Amer. Math. Monthly, 96 (3), 1989, pp. 201-219.
1995 Andrew J. Granville, Zaphod Beeblebrox's brain and the fifty-ninth row of Pascal's triangle, Amer. Math. Monthly, 99, 1992, pp. 318-331.
1997 Jonathan King, Three problems in search of a measure, Amer. Math. Monthly, 101, August-September 1994.

Glossary

abstract. A brief, self-contained summary of the contents of a paper that appears by itself at the beginning of the paper. Also a brief (written) summary of the contents of a talk.

ACM. The Association for Computing Machinery.

acknowledgements. A section preceding the references (or a footnote, or a final paragraph) in which the author thanks people or organizations for help, advice or financial support for the work described.

AMS. The American Mathematical Society.

AMS-TeX. A macro package for TeX that makes it easier to typeset mathematical papers with TeX. It gives new structures for displaying mathematical equations and comes with a special font of mathematical symbols.

AMS-LaTeX. A package for LaTeX that incorporates the features of AMS-TeX.

anonymous ftp. A form of ftp in which the user logs on as user anonymous and need not type a password (though, by convention, the user's electronic mail address is typed as the password).

archive. In Unix, a single file that contains a set of other files (e.g., as manipulated with the tar command). Or a collection of software that is located at a particular Internet address and can be accessed by anonymous ftp.

ASCII. American Standard Code for Information Interchange.
A coding system in which letters, digits, punctuation symbols and control characters are represented in seven bits by a number from 0 to 127. An eighth bit is often added to allow extra characters.

bibliography. A list of publications on a particular topic, or the reference list of a book.

BibTeX. A program that cooperates with LaTeX in the preparation of reference lists. It makes use of bib files, which are databases of references in BibTeX format.

citation. A reference in the text to a publication or other source, usually one that is listed in the references.

compositor. The person who typesets the text (especially in traditional printing).

Computing Reviews Classification System. A classification system for computer science. An example of an entry is G.1.3 [Numerical Analysis]: Numerical Linear Algebra—sparse and very large systems.

conjecture. A statement that the author believes to be true but for which a proof or disproof has not been found.

copy editor. A person who prepares a manuscript for typesetting by checking and correcting grammar, punctuation, spelling, style, consistency and other details.

corollary. A direct or easy consequence of a lemma, theorem or proposition.

Current Contents. A publication from the Institute for Scientific Information that provides a weekly list of journal contents pages. The Physical Sciences edition is the one in which mathematics and computer science journals appear.

CTAN. The Comprehensive TeX Archive Network: a network of ftp servers that hold up-to-date copies of all the public domain versions of TeX, LaTeX, and related macros and programs.

DOS. Disk Operating System. Usually refers to MS-DOS (Microsoft Disk Operating System), which is a computer operating system for personal computers (PCs).

Festschrift (or festschrift). (German) A collection of writings published in honour of a scholar.

folio. A printed page number. Also a sheet of a manuscript.

ftp. File transfer protocol. A protocol for file transfer between different computers on the Internet network. Also a program for transferring files using this protocol.

Harvard system. A system of citation by author name and year, e.g., "see Knuth (1986)".

hypertext. On-line text with pointers to other text. For example, a paper provided on a Web page in hypertext format may allow you to link directly to references in the bibliography that are themselves available on the Web.

hypothesis. A statement taken as a basis for further reasoning.

IMA. The Institute of Mathematics and Its Applications, Southend-on-Sea, England. Also the Institute for Mathematics and Its Applications, University of Minnesota, Minneapolis, USA.

Internet. The worldwide network of interconnected computer networks. It provides electronic mail, file transfer, news, remote login, and other services.

ISBN. International Standard Book Number.

ISSN. International Standard Serial Number.

LaTeX. A macro package for TeX that simplifies the production of papers, books and letters, and emphasizes the logical structure of a document. It permits automatic cross-referencing and has commands for drawing pictures.

lemma. An auxiliary result needed in the proof of a theorem or proposition. May also be an independent result that does not merit the title theorem.

LMS. The London Mathematical Society.

MAA. The Mathematical Association of America.

macro. In computing, a shorthand notation for specifying a sequence of operations.
MakeIndex. A program that makes an index for a LaTeX document.
manuscript. Literally, a handwritten document. More generally, any unpublished document, particularly one submitted for publication.
managing editor. A person who is in charge of the editorial activities of a publication and who supervises a group of editors.
Mathematical Reviews. A monthly review publication run by the American Mathematical Society (AMS) and first published in 1940. Each listed paper is accompanied by a review or a reprint of the paper's abstract.
Mathematics Subject Classifications. A classification scheme published in Mathematical Reviews that divides mathematics into 61 sections numbered between 00 and 94, further divided into many subsections. A typical entry is 65F05 (direct methods for solving linear systems).
netlib. An electronic repository of public domain mathematical software for the scientific computing community.
offprint. See reprint.
page charges. Charges levied by a publisher to offset the cost of publishing an article. In mathematics journals payment is usually optional.
PDF. Portable Document Format (PDF), developed by Adobe Systems, Inc. and based on PostScript. Can be read using the Adobe Acrobat software.
peer review. Refereeing done by peers of the author (people working in the same area). Should perhaps be called "peer refereeing", but "peer review" is standard.
poster. A display of graphics and text that summarizes a piece of work. It usually comprises sheets of paper attached to a poster board.
PostScript. A page description language developed by Adobe Systems, Inc. Now a standard format in which to provide documents on the World Wide Web.
proceedings. A collection of papers describing the work presented at a conference or workshop. Also may be a title for a journal: for example, The Proceedings of the American Mathematical Society.
proofreading. The process of checking proofs for errors (usually by comparing them with an original) and marking the errors with standard proofreading symbols.
proofs. Typeset material ready for checking and correction.
proposition. Same meaning as a theorem (but possibly regarded as a lesser result).
referee. A person who advises an editor on the suitability of a manuscript for publication.
references. The list of publications cited in the text, or those publications themselves.
reprint. A separate printing of an article that appeared in a book or journal. Often, a limited number are supplied free of charge to the author.
reviewer. A person who reviews previously published or completed work. Sometimes used incorrectly as a synonym for referee.
running head. An abbreviated title that appears in the headline of pages in a published paper.
Science Citation Index. A publication from the Institute for Scientific Information that records all papers that reference an earlier paper, across all science subjects. It covers the period from 1945 to the present.
SIAM. The Society for Industrial and Applied Mathematics.
technical report. A document published by an organization for external circulation, usually as part of a series.
TeX. A system for computer typesetting of mathematics, developed by Donald Knuth at Stanford University. Also used as a verb: "to TeX a paper" is to typeset the paper in TeX.
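As a concrete illustration (ours, not the glossary's), the fragment below is the kind of marked-up input meant here; the markup names the mathematics, and TeX supplies the fonts, symbol spacing, and layout of the displayed equation:

    % A fragment of LaTeX input for a displayed equation.
    \[
      \int_0^1 x^2 \, dx = \frac{1}{3}
    \]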
theorem. A major result of independent interest.
thesaurus. A list of words in which each word is followed by a list of words of similar meaning or sense. The main list may be arranged by meaning (Roget's Thesaurus) or alphabetically (most other thesauruses).
title. "The fewest possible words that adequately describe the contents of a paper, book, poster, etc." [68]
Unix. A computer operating system developed at Bell Laboratories. Widely used on workstations and supercomputers.
URL. Uniform resource locator. A URL is the address of an object on the World Wide Web.
widow. A short last line of a paragraph appearing at the top of a page.
World Wide Web. The handbook to the Web browser Netscape explains that "The World Wide Web (WWW or Web) is one facet of the Internet consisting of client and server computers handling multimedia ...".

Bibliography

[1] J. C. Abbott, editor. The Chauvenet Papers: A Collection of Prize-Winning Expository Papers in Mathematics, Volumes 1 and 2. Mathematical Association of America, Washington, D.C., 1978.
[2] Milton Abramowitz and Irene A. Stegun, editors. Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables, volume 55 of Applied Mathematics Series. National Bureau of Standards, Washington, D.C., 1964. Reprinted by Dover, New York.
[3] Forman S. Acton. Numerical Methods That Work. Harper and Row, New York, 1970. xviii+541 pp. Reprinted by Mathematical Association of America, Washington, D.C., with new preface and additional problems, 1990. ISBN 0-88385-450-3.
[4] Alfred V. Aho, Brian W. Kernighan, and Peter J. Weinberger. The AWK Programming Language. Addison-Wesley, Reading, MA, USA, 1988. x+210 pp. ISBN 0-201-07981-X.
[5] D. J. Albers and G. L. Alexanderson, editors. Mathematical People: Profiles and Interviews. Birkhäuser, Boston, 1985.
[6] The American Heritage College Dictionary. Third edition, Houghton Mifflin, Boston, 1997. xxxiv+1630 pp. ISBN 0-395-67161-2.
[7] The American Heritage Dictionary of the English Language. Third edition, Houghton Mifflin, Boston, 1996. xliv+2140 pp. ISBN 0-395-44895-6.
[8] American Mathematical Society. AMS-LaTeX Version 1.1 User's Guide. Providence, RI, USA, 1991.
[9] Robert R. H. Anholt. Dazzle 'em with Style: The Art of Oral Scientific Presentation. W. H. Freeman, New York, 1994. xiii+200 pp. ISBN 0-7167-2583-5.
[10] Anonymous. Next slide please. Nature, 272(5656):743, 1978.
[11] Don Aslett. Is There a Speech Inside You? Writer's Digest Books, Cincinnati, Ohio, 1989. 135 pp. ISBN 0-89879-361-0.
[12] David H. Bailey. Twelve ways to fool the masses when giving performance results on parallel computers. Supercomputer Rev., August:54-55, 1991. Also in Supercomputer, Sept. 1991, pp. 4-7.
[13] Sheridan Baker. The Practical Stylist. Sixth edition, Harper and Row, New York, 1985. xii+290 pp. ISBN 0-06-040439-6.
[14] Robert Barrass. Scientists Must Write: A Guide to Better Writing for Scientists, Engineers and Students. Chapman and Hall, London, 1978. xiv+176 pp. ISBN 0-412-15430-7.
[15] Robert Barrass. Students Must Write: A Guide to Better Writing in Course Work and Examinations. Methuen, London, 1982. ix+149 pp. ISBN 0-416-33620-5.
[16] Floyd K. Baskette, Jack Z. Sissors, and Brian S. Brooks. The Art of Editing. Fifth edition, Macmillan, New York, 1992. viii+518 pp. ISBN 0-02-306295-9.
[17] Nelson H. F. Beebe. Bibliography prettyprinting and syntax checking. TUGboat, 14(4):395-419, 1993.
[18] David F. Beer, editor. Writing and Speaking in the Technology Professions: A Practical Guide. IEEE Press, New York, 1992. ISBN 0-87942-284-X.
[19] Albert H. Beiler. Recreations in the Theory of Numbers: The Queen of Mathematics Entertains. Dover, New York, 1966. xviii+349 pp. ISBN 0-486-21096-0.
[20] Jon L. Bentley. Programming Pearls. Addison-Wesley, Reading, MA, USA, 1986. viii+195 pp. ISBN 0-201-10331-1.
[21] Jon L. Bentley. More Programming Pearls: Confessions of a Coder. Addison-Wesley, Reading, MA, USA, 1988. viii+207 pp. ISBN 0-201-11889-0.
[22] Jon L. Bentley and Brian W. Kernighan. Tools for printing indexes. Electronic Publishing, 1(1):3-17, 1988. Also Computer Science Technical Report No. 128, AT&T Bell Laboratories, Murray Hill, NJ, October 1986.
[23] Abraham Berman and Robert J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1994. xx+340 pp. Corrected republication, with supplement, of work first published in 1979 by Academic Press. ISBN 0-89871-321-8.
[24] Andre Bernard, editor. Rotten Rejections: A Literary Companion. Penguin, New York, 1990. 101 pp. ISBN 0-14-014786-1.
[25] Theodore M. Bernstein. The Careful Writer: A Modern Guide to English Usage. Atheneum, New York, 1965. xviii+487 pp. ISBN 0-689-70555-7.
[26] Theodore M. Bernstein. Miss Thistlebottom's Hobgoblins: The Careful Writer's Guide to the Taboos, Bugbears and Outmoded Rules of English Usage. Farrar, Straus and Giroux, New York, 1971. 260 pp. Reprinted by Simon and Schuster, New York, 1984. ISBN 0-671-50404-5.
[27] Theodore M. Bernstein. Dos, Don'ts and Maybes of English Usage. Barnes and Noble, New York, 1977. 250 pp. ISBN 0-88029-944-4.
[28] Theodore M. Bernstein. Punctuation. IEEE Trans. Prof. Commun., PC-20(1):38-44, 1977. Reprinted from [25].
[29] Cicely Berry. Your Voice and How to Use It Successfully. Harrap, London, 1975. 160 pp. ISBN 0-245-52886-5.
[30] Ambrose Bierce. Write it Right: A Little Blacklist of Literary Faults. The Union Library Association, New York, 1937. Reprinted in [26, pp. 209-253].
[31] Lennart Björck, Michael Knight, and Eleanor Wikborg. The Writing Process: Composition Writing for University Students. Second edition, Studentlitteratur, Lund, Sweden and Chartwell Bratt, Bromley, Kent, England, 1990. ISBN 91-44-28222-2 and 0-86238-300-5.
[32] Bloomsbury Thesaurus. Bloomsbury, London, 1993. xxx+1569 pp. ISBN 0-7475-1226-4.
[33] Ralph P. Boas. Can we make mathematics intelligible? Amer. Math. Monthly, 88:727-731, 1981.
[34] Béla Bollobás, editor. Littlewood's Miscellany. Cambridge University Press, 1986. 200 pp. First published in 1953 by Methuen as A Mathematician's Miscellany. ISBN 0-521-33702-X.
[35] Larry S. Bonura. The Art of Indexing. Wiley, New York, 1994. xxii+233 pp. ISBN 0-471-01449-4.
[36] Vernon Booth. Communicating in Science: Writing a Scientific Paper and Speaking at Scientific Meetings. Second edition, Cambridge University Press, 1993. xvi+78 pp. ISBN 0-521-42915-3.
[37] L. A. Brankin and A. M. Mumford. File formats for computer graphics: Unraveling the confusion. Technical Report TR1/92, NAG Ltd., Oxford, June 1992.
[38] Mary Helen Briscoe. Preparing Scientific Illustrations: A Guide to Better Posters, Presentations, and Publications. Second edition, Springer-Verlag, New York, 1996. xii+204 pp. ISBN 0-387-94581-4.
[39] William Broad and Nicholas Wade. Betrayers of the Truth: Fraud and Deceit in Science. Oxford University Press, 1982. 256 pp. ISBN 0-19-281889-9.
[40] Shirley V. Browne, Jack J. Dongarra, Stan C. Green, Keith Moore, Thomas H. Rowan, and Reed C. Wade. Netlib services and resources. Report ORNL/TM-12680, Oak Ridge National Laboratory, Oak Ridge, TN, USA, April 1994. 42 pp.
[41] Bill Bryson. The Penguin Dictionary of Troublesome Words. Second edition, Penguin, London, 1987. 192 pp. ISBN 0-14-051200-4.
[42] Bill Bryson. Mother Tongue: English and How It Got That Way. Avon Books, New York, 1990. 270 pp. ISBN 0-380-71543-0.
[43] Robert Burchfield. Unlocking the English Language. Faber and Faber, London, 1989. xv+202 pp. ISBN 0-571-14416-0.
[44] David M. Burton. Elementary Number Theory. Allyn and Bacon, Boston, 1980. ix+390 pp. ISBN 0-205-06978-9.
[45] Judith Butcher. Copy-Editing: The Cambridge Handbook for Editors, Authors and Publishers. Third edition, Cambridge University Press, 1992. xii+471 pp. ISBN 0-521-40074-0.
[46] Tony Buzan. Use Your Head. BBC Publications, London, 1982. 156 pp. ISBN 0-563-16552-9.
[47] Bill Buzbee. Poisson's equation revisited. Current Contents, 36:8, 1992.
[48] George D. Byrne. How to improve technical presentations. SIAM News, 20:10-11, January 1987.
[49] Florian Cajori. A History of Mathematical Notations. Two Volumes Bound as One. Volume I: Notations in Elementary Mathematics. Volume II: Notations Mainly in Higher Mathematics. Dover, New York, 1993. xxviii+820 pp. Reprint of works originally published in 1928 and 1929 by The Open Court Publishing Company, Chicago. ISBN 0-486-67766-4.
[50] James Calnan and Andras Barabas. Speaking at Medical Meetings: A Practical Guide. Second edition, William Heinemann Medical, London, 1981. xii+184 pp. ISBN 0-433-05001-2.
[51] Debra Cameron and Bill Rosenblatt. Learning GNU Emacs. O'Reilly & Associates, Sebastopol, CA, 1991. xxvii+411 pp. ISBN 0-937175-84-6.
[52] G. V. Carey. Mind the Stop: A Brief Guide to Punctuation with a Note on Proof-Correction. Second edition, Penguin, London, 1958. 126 pp. Reprinted 1976. ISBN 0-14-051072-9.
[53] David P. Carlisle and Nicholas J. Higham. LaTeX2e: Should you upgrade to it? SIAM News, 29(1):12, 1996.
[54] The Chambers Dictionary. Chambers, Edinburgh, 1993. xviii+2062 pp. ISBN 0-550-10255-8.
[55] T. W. Chaundy, P. R. Barrett, and Charles Batey. The Printing of Mathematics: Aids for Authors and Editors and Rules for Compositors and Readers at the University Press, Oxford. Oxford University Press, 1954.
[56] Pehong Chen and Michael A. Harrison. Index preparation and processing. Software—Practice and Experience, 18(9):897-915, 1988.
[57] Lorinda L. Cherry. Writing tools. IEEE Trans. Communications, COM-30(1):100-105, 1982.
[58] The Chicago Manual of Style. Fourteenth edition, University of Chicago Press, Chicago and London, 1993. ix+921 pp. ISBN 0-226-10389-7.
[59] Collins Cobuild English Dictionary. Collins, London, 1995. xxxix+1951 pp. ISBN 0-00-370941-8.
[60] Collins English Dictionary. Third edition, HarperCollins, Glasgow, 1991. xxxi+1791 pp. ISBN 0-00-433286-5.
[61] Collins Plain English Dictionary. HarperCollins, London, 1996. 758 pp.
[62] Bruce M. Cooper. Writing Technical Reports. Penguin, London, 1964. 188 pp. Reprinted 1986. ISBN 0-14-020676-0.
[63] Michael Crichton. Medical obfuscation: Structure and function. The New England Journal of Medicine, 293(24):1257-1259, 1975.
[64] Francis Crick. The double helix: A personal view. Nature, 248:766-769, 1974.
[65] H. Crowder, R. S. Dembo, and J. M. Mulvey. On reporting computational experiments with mathematical software. ACM Trans. Math. Software, 5:193-203, 1979.
[66] David Crystal. The English Language. Penguin, London, 1988. 124 pp. ISBN 0-14-022730-X.
[67] P. J. Davis. Fidelity in mathematical discourse: Is one and one really two? Amer. Math. Monthly, 79:252-263, 1972.
[68] Robert A. Day. How To Write and Publish a Scientific Paper. Fourth edition, Cambridge University Press, and Oryx Press, Phoenix, Arizona, 1994. xiv+223 pp. ISBN 0-521-55898-0.
[69] Robert A. Day. Scientific English: A Guide for Scientists and Other Professionals. Second edition, Oryx Press, Phoenix, Arizona, 1995. xii+148 pp. ISBN 0-89774-989-8.
[70] J. T. Dillon. The emergence of the colon: An empirical correlate of scholarship. American Psychologist, 36(8):879-884, 1981.
[71] Bernard Dixon. Sciwrite. Chemistry in Britain, 9(1):70-72, 1973.
[72] Jack J. Dongarra and Eric Grosse. Distribution of mathematical software via electronic mail. Comm. ACM, 30(5):403-407, 1987.
[73] Jack J. Dongarra and Bill Rosener. NA-NET: Numerical analysis NET. Technical Report CS-91-146, Department of Computer Science, University of Tennessee, Knoxville, September 1991. 21 pp.
[74] Susan Dressel and Joe Chew. Authenticity beats eloquence. IEEE Trans. Prof. Commun., PC-30:82-83, 1987.
[75] Freeman Dyson. George Green and physics. Physics World, 6(8):33-38, 1993.
[76] Hans F. Ebel, Claus Bliefert, and William E. Russey. The Art of Scientific Writing: From Student Reports to Professional Publications in Chemistry and Related Fields. VCH Publishers, New York, 1987. ISBN 0-89573-645-4.
[77] Anne Eisenberg. Guide to Technical Editing: Discussion, Dictionary, and Exercises. Oxford University Press, New York, 1992. ix+182 pp. ISBN 0-19-506306-6.
[78] J. R. Ewer and G. Latorre. A Course in Basic Scientific English. Longman, Harlow, Essex, 1969. ISBN 0-582-52009-6.
[79] John Ewing, editor. A Century of Mathematics Through the Eyes of the Monthly. Mathematical Association of America, Washington, D.C., 1994. xi+323 pp. ISBN 0-88385-459-7.
[80] Harley Flanders. Manual for Monthly authors. Amer. Math. Monthly, 78:1-10, 1971.
[81] Rudolf Flesch. The Art of Plain Talk. Harper and Brothers, New York, 1946. xiii+210 pp.
[82] G. E. Forsythe. Suggestions to students on talking about mathematics papers. Amer. Math. Monthly, 64:16-18, 1957. Reprinted in [79].
[83] H. W. Fowler. A Dictionary of Modern English Usage. Second edition, Oxford University Press, 1968. Revised by Sir Ernest Gowers.
[84] H. W. Fowler and F. G. Fowler. The King's English. Third edition, Oxford University Press, 1931. 382 pp. Reprinted 1990. ISBN 0-19-881330-9.
[85] James Franklin and Albert Daoud. Introduction to Proofs in Mathematics. Prentice-Hall, Englewood Cliffs, NJ, USA, 1988. vii+175 pp. ISBN 0-13-474313-X.
[86] Daniel H. Freeman, Jr., Maria Elena Gonzalez, David C. Hoaglin, and Beth A. Kilss. Presenting statistical papers. The American Statistician, 37(2):106-110, 1983.
[87] Matthew P. Gaffney and Lynn Arthur Steen. Annotated Bibliography of Expository Writing in the Mathematical Sciences. Mathematical Association of America, Washington, D.C., 1976. xi+282 pp. ISBN 0-88385-422-8.
[88] Eugene Garfield. "Science Citation Index"—A new dimension in indexing. Science, 144(3619):649-654, 1964.
[89] Eugene Garfield. Citation analysis as a tool in journal evaluation. Science, 178:471-479, 1972.
[90] Eugene Garfield. Significant journals of science. Nature, 264:609-615, 1976.
[91] Eugene Garfield. Citation Indexing: Its Theory and Application in Science, Technology, and Humanities. John Wiley, New York, 1979. xxi+274 pp. ISBN 0-471-02559-3.
[92] Eugene Garfield. Journal citation studies. 36. Pure and applied mathematics journals: What they cite and vice versa. Current Contents, 5(15):5-10, 1982.
[93] Eugene Garfield. The 100 most-cited papers ever and how we select citation classics. Current Contents, 23, 1984.
[94] Eugene Garfield. The Awards of Science and Other Essays. Essays of an Information Scientist: 1984. ISI Press, Philadelphia, 1985.
[95] Eugene Garfield. Journal editors awaken to the impact of citation errors. How we control them at ISI. Current Contents, 41:5-13, 1990.
[96] Eugene Garfield. The most-cited papers of all time, SCI 1945-1988. Part 1A. The SCI top 100—will the Lowry method ever be obliterated? Current Contents, 7:3-14, 1990.
[97] Eugene Garfield. The most-cited papers of all time, SCI 1945-1988. Part 1B. Superstars new to the SCI top 100. Current Contents, 8:3-13, 1990.
[98] Eugene Garfield. The most-cited papers of all time, SCI 1945-1988. Part 3. Another 100 from the Citation Classics Hall of Fame. Current Contents, 34:3-13, 1990.
[99] Eugene Garfield. How to use the science citation index (SCI). In SCI Science Citation Index 1991. Guide and List of Source Publications, ISI Press, Philadelphia, PA, 1991, pages 26-33. Reprinted from Current Contents, 9:5-14, 1983.
[100] Eugene Garfield. The impact factor. Current Contents, 34(25):3-7, 1994.
[101] Rowan Garnier and John Taylor. 100% Mathematical Proof. Wiley, Chichester, UK, 1996. viii+317 pp. ISBN 0-471-96199-X.
[102] Robert V. Garver. Presenting the peer paper. IEEE Trans. Prof. Commun., PC-23(1):18-22, 1980.
[103] C. William Gear. Inside SIAM's mysterious journal publication process. SIAM News, 24(2):6, March 1991.
[104] Leonard Gillman. Writing Mathematics Well: A Manual for Authors. The Mathematical Association of America, Washington, D.C., 1987. ix+49 pp. ISBN 0-88385-443-0.
[105] Leonard Gillman. Paul Halmos's expository writing. In Paul Halmos: Celebrating 50 Years of Mathematics, John H. Ewing and F. W. Gehring, editors, Springer-Verlag, Berlin, 1991, pages 33-48.
[106] Leon J. Gleser. Some notes on refereeing. The American Statistician, 40(4):310-312, 1986.
[107] GNU Emacs Manual, Emacs Version 19. Free Software Foundation, 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. Available on-line with the GNU Emacs distribution.
[108] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Third edition, Johns Hopkins University Press, Baltimore, MD, USA, 1996. xxvii+694 pp. ISBN 0-8018-5413-X (hardback), 0-8018-5414-8 (paperback).
[109] Michel Goossens, Frank Mittelbach, and Alexander Samarin. The LaTeX Companion. Addison-Wesley, Reading, MA, USA, 1993. xxx+528 pp. ISBN 0-201-54199-8.
[110] Michel Goossens, Sebastian Rahtz, and Frank Mittelbach. The LaTeX Graphics Companion: Illustrating Documents with TeX and PostScript. Addison-Wesley, Reading, MA, USA, 1997. xxv+554 pp. ISBN 0-201-85469-4.
[111] Karen Elizabeth Gordon. The Transitive Vampire: A Handbook of Grammar for the Innocent, the Eager, and the Doomed. Revised and expanded edition, Times Books, New York, 1984. x+149 pp.
[112] Karen Elizabeth Gordon. The New Well-Tempered Sentence: A Punctuation Handbook for the Innocent, the Eager, and the Doomed. Revised and expanded edition, Ticknor and Fields, New York, 1993. x+148 pp. ISBN 0-395-62883-0.
[113] Calvin R. Gould. The overhead projector. IEEE Trans. Prof. Commun., PC-15(1):2-6, 1972.
[114] Calvin R. Gould. Visual aids—how to make them positively legible. IEEE Trans. Prof. Commun., PC-16(2):35-38, 1973.
[115] Sir Ernest Gowers. The Complete Plain Words. Third edition, Penguin, London, 1986. vi+288 pp. Revised by Sidney Greenbaum and Janet Whitcut. ISBN 0-14-051199-7.
[116] Ronald L. Graham, Donald E. Knuth, and Oren Patashnik. Concrete Mathematics: A Foundation for Computer Science. Second edition, Addison-Wesley, Reading, MA, USA, 1994. xiii+657 pp. ISBN 0-201-55802-5.
[117] Martin W. Gregory. The infectiousness of pompous prose. Nature, 360:11-12, 1992.
[118] David F. Griffiths and Desmond J. Higham. Learning LaTeX. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1997. 84 pp. ISBN 0-89871-383-8.
[119] R. Grone, C. R. Johnson, E. M. Sa, and H. Wolkowicz. Normal matrices. Linear Algebra and Appl., 87:213-225, 1987.
[120] Jerrold W. Grossman. Paul Erdős: The master of collaboration. In The Mathematics of Paul Erdős II, Ronald L. Graham and Jaroslav Nešetřil, editors, Springer-Verlag, Berlin, 1997, pages 467-475.
[121] Paul R. Halmos. How to write mathematics. Enseign. Math., 16:123-152, 1970. Reprinted in [257] and [245].
[122] Paul R. Halmos. Finite-Dimensional Vector Spaces. Springer-Verlag, New York, 1974. viii+200 pp. ISBN 0-387-90093-4.
[123] Paul R. Halmos. How to talk mathematics. Notices Amer. Math. Soc., 21(3):155-158, 1974. Reprinted in [245].
[124] Paul R. Halmos. What to publish. Amer. Math. Monthly, 82(1):14-17, 1975. Reprinted in [245].
[125] Paul R. Halmos. A Hilbert Space Problem Book. Second edition, Springer-Verlag, Berlin, 1982.
[126] Paul R. Halmos. Think it gooder. The Mathematical Intelligencer, 4(1):20-21, 1982.
[127] Paul R. Halmos. I Want to Be a Mathematician: An Automathography in Three Parts. Springer-Verlag, New York, 1985. xv+421 pp. ISBN 0-88385-445-7.
[128] Paul R. Halmos. Some books of auld lang syne. In A Century of Mathematics in America, Part I, Peter Duren, Richard A. Askey, and Uta C. Merzbach, editors, American Mathematical Society, Providence, RI, USA, 1988, pages 131-174.
[129] Sven Hammarling and Nicholas J. Higham. How to prepare a poster. SIAM News, 29(4):20, 19, May 1996.
[130] Leonard Montague Harrod, editor. Indexers on Indexing: A Selection of Articles Published in The Indexer. R. R. Bowker, London, 1978. x+430 pp. ISBN 0-8352-1099-5.
[131] Horace Hart. Hart's Rules for Compositors and Readers at the University Press Oxford. Thirty-ninth edition, Oxford University Press, 1983. xi+182 pp. First edition 1893. ISBN 0-19-212983-X.
[132] James Hartley. Eighty ways of improving instructional text. IEEE Trans. Prof. Commun., PC-24:17-27, 1981.
[133] James Hartley. The role of colleagues and text-editing programs in improving text. IEEE Trans. Prof. Commun., PC-27:42-44, 1984.
[134] James Hartley. Designing Instructional Text. Second edition, Kogan Page, London, 1985. 175 pp. ISBN 0-85038-943-7.
[135] Edward F. Hartree. Ethics for authors: A case history of acrosin. Perspectives in Biology and Medicine, 20:82-91, 1976.
[136] J. B. Heaton and N. D. Turton. Longman Dictionary of Common Errors. Longman, Harlow, Essex, 1987. ISBN 0-582-96410-5.
[137] John L. Hennessy and David A. Patterson. Computer Architecture: A Quantitative Approach. Morgan Kaufmann, San Mateo, CA, USA, 1990. xxviii+594+appendices pp. ISBN 1-55860-188-0.
[138] A. J. Herbert. The Structure of Technical English. Longman, Harlow, Essex, 1965. ISBN 0-582-52523-3.
[139] Desmond J. Higham. More commandments of good writing. Manuscript, Department of Mathematics and Computer Science, University of Dundee, UK, November 1992.
[140] Nicholas J. Higham. Algorithm 694: A collection of test matrices in MATLAB. ACM Trans. Math. Software, 17(3):289-305, September 1991.
[141] Nicholas J. Higham. Which dictionary for the mathematical scientist? IMA Bulletin, 30(5/6):81-88, 1994.
[142] Philip J. Hills, editor. Publish or Perish. Peter Francis Publishers, Berrycroft, Cambridgeshire, UK, 1987. 186 pp. ISBN 1-8701-6700-7.
[143] Alston S. Householder. The Theory of Matrices in Numerical Analysis. Blaisdell, New York, 1964. xi+257 pp. Reprinted by Dover, New York, 1975. ISBN 0-486-61781-5.
[144] Kenneth E. Iverson. A Programming Language. Wiley, New York, 1962.
[145] Donald D. Jackson. A brief history of scholarly publishing. In [292, pp. 133-134].
[146] R. H. F. Jackson, P. T. Boggs, S. G. Nash, and S. Powell. Guidelines for reporting results of computational experiments. Report of the ad hoc committee. Math. Prog., 49:413-425, 1991.
[147] J. L. Kelley. Writing mathematics. In Paul Halmos: Celebrating 50 Years of Mathematics, John H. Ewing and F. W. Gehring, editors, Springer-Verlag, Berlin, 1991, pages 91-96.
[148] Kevin Kelly, editor. SIGNAL: Communication Tools for the Information Age. Harmony Books, Crown Publishers, New York. ISBN 0-517-57084-X.
[149] Peter Kenny. A Handbook of Public Speaking for Scientists and Engineers. Adam Hilger, Bristol, 1982. xi+181 pp. ISBN 0-85274-553-2.
[150] G. A. Kerkut. Choosing a title for a paper. Comp. Biochem. Physiol., 47A(1):1, 1983.
[151] Brian W. Kernighan and Lorinda L. Cherry. A system for typesetting mathematics. Comm. ACM, 18(3):151-157, 1975.
[152] Lester S. King. Medical writing number 7: The opening sentence. J. Amer. Medical Assoc., 202(6):535-536, 1967.
[153] John Kirkman. Good Style: Writing for Science and Technology. E & FN Spon (Chapman and Hall), London, 1992. viii+221 pp. ISBN 0-419-17190-8.
[154] Charles Kittel. Introduction to Solid State Physics. Fourth edition, Wiley, New York, 1971. xv+766 pp. ISBN 0-471-49021-0.
[155] George R. Klare. The Measurement of Readability. Iowa State University Press, Ames, IA, USA, 1963.
[156] G. Norman Knight. Book indexing in Great Britain: A brief history. The Indexer, 6(1):14-18, 1968. Reprinted in [130, pp. 9-13].
[157] Donald E. Knuth. The Art of Computer Programming. Addison-Wesley, Reading, MA, USA, 1973-1981. Three volumes.
[158] Donald E. Knuth. Mathematical typography. Bulletin Amer. Math. Soc. (New Series), 1(2):337-372, 1979.
[159] Donald E. Knuth. The Art of Computer Programming, Volume 2, Seminumerical Algorithms. Second edition, Addison-Wesley, Reading, MA, USA, 1981. xiii+688 pp. ISBN 0-201-03822-6.
[160] Donald E. Knuth. The METAFONTbook. Addison-Wesley, Reading, MA, USA, 1986. xi+361 pp. ISBN 0-201-13444-6.
[161] Donald E. Knuth. The TeXbook. Addison-Wesley, Reading, MA, USA, 1986. ix+483 pp. ISBN 0-201-13448-9.
[162] Donald E. Knuth. 3:16 Bible Texts Illuminated. A-R Editions, Madison, WI, 1991. 268 pp. ISBN 0-89579-252-4.
[163] Donald E. Knuth. Two notes on notation. Amer. Math. Monthly, 99(5):403-422, 1992.
[164] Donald E. Knuth, Tracy Larrabee, and Paul M. Roberts. Mathematical Writing. MAA Notes Number 14. Mathematical Association of America, Washington, D.C., 1989. 115 pp. Also Report STAN-CS-88-1193, Department of Computer Science, Stanford University, Stanford, CA, USA, January 1988. ISBN 0-88385-063-X.
[165] Kodak Limited. Let's stamp out awful lecture slides. Kodak publication S-22(H), April 1979.
[166] Helmut Kopka and Patrick W. Daly. A Guide to LaTeX2e: Document Preparation for Beginners and Advanced Users. Second edition, Addison-Wesley, Wokingham, England, 1995. x+554 pp. ISBN 0-201-42777-X.
[167] Steven G. Krantz. A Primer of Mathematical Writing: Being a Disquisition on Having Your Ideas Recorded, Typeset, Published, Read, and Appreciated. American Mathematical Society, Providence, RI, USA, 1997. xv+223 pp. ISBN 0-8218-0635-1.
[168] Ed Krol. The Whole Internet User's Guide & Catalog. Second edition, O'Reilly & Associates, Sebastopol, CA, USA, 1994. xxv+543 pp. ISBN 1-56592-063-5.
[169] Marcel C. LaFollette. Stealing into Print: Fraud, Plagiarism, and Misconduct in Scientific Publishing. University of California Press, Berkeley, CA, 1992. viii+293 pp. ISBN 0-520-07831-4.
[170] David Lambuth et al. The Golden Book on Writing. Penguin, 1964. xiv+81 pp. ISBN 0-14-046263-5.
[171] Leslie Lamport. Document production: Visual or logical? Notices Amer. Math. Soc., 34:621-624, 1987.
[172] Leslie Lamport. LaTeX: A Document Preparation System. User's Guide and Reference Manual. Second edition, Addison-Wesley, Reading, MA, USA, 1994. xvi+272 pp. ISBN 0-201-52983-1.
[173] Leslie Lamport. How to write a proof. Amer. Math. Monthly, 102(7):600-608, 1995.
[174] Kenneth K. Landes. A scrutiny of the abstract. II. Bull. Amer. Assoc. Petroleum Geologists, 50(9):1992, 1966.
[175] Richard A. Lanham. Revising Prose. Third edition, Macmillan, New York, 1992. xi+123 pp. ISBN 0-02-367445-8.
[176] Tracey LaQuey and Jeanne C. Ryer. The Internet Companion: A Beginner's Guide to Global Networking. Addison-Wesley, Reading, MA, USA, 1993. x+196 pp. ISBN 0-201-62224-6.
[177] Uri Leron. Structuring mathematical proofs. Amer. Math. Monthly, 90(3):174-185, 1983.
[178] Xia Li and Nancy B. Crane. Electronic Styles: A Handbook for Citing Electronic Information. Information Today, Inc., Medford, NJ, USA, 1996. xviii+213 pp. ISBN 1-57387-027-7.
[179] Dennis V. Lindley. Refereeing. The Mathematical Intelligencer, 6(2):56-60, 1984.
[180] Stephen Lock, editor. How to Do It. Second edition, British Medical Association, London, 1985. ISBN 0-7279-0186-9.
[181] Longman Dictionary of Contemporary English. Third edition, Longman, Harlow, Essex, 1995. xxii+1668 pp. ISBN 0-582-23750-5.
[182] Longman Dictionary of the English Language. New edition, Longman, Harlow, Essex, 1991. xxv+1890 pp. ISBN 0-582-07038-4.
[183] Harry Lorayne. How to Develop a Super-Power Memory. Signet, New York, 1974. xii+180 pp. ISBN 0-451-12941-5.
[184] Harry Lorayne and Jerry Lucas. The Memory Book. Wyndham, London, 1974. 207 pp. ISBN 0-352-39856-6.
[185] Beth Luey. Handbook for Academic Authors. Third edition, Cambridge University Press, 1995. ISBN 0-521-49892-9.
[186] Nina H. Macdonald, Lawrence T. Frase, Patricia S. Gingrich, and Stacey A. Keenan. The Writer's Workbench: Computer aids for text analysis. IEEE Trans. Communications, COM-30(1):105-110, 1982.
[187] A. J. MacGregor. Graphics simplified: Charts and graphs. Scholarly Publishing, 8(2):151-164, 1977.
[188] A. J. MacGregor. Graphics simplified: Preparing charts and graphs. Scholarly Publishing, 8(3):257-274, 1977.
[189] N. J. Mackintosh, editor. Cyril Burt: Fraud or Framed? Oxford University Press, 1995. vii+156 pp. ISBN 0-19-852336-X.
[190] Donald S. MacQueen. Using Numbers in English: A Reference Guide for Swedish Speakers Including Basic Terminology for Describing Graphs. Studentlitteratur, Lund, Sweden and Chartwell Bratt, Bromley, Kent, England, 1990. ISBN 91-44-31921-5 and 0-86238-264-5.
[191] John Maddox. Must science be impenetrable? Nature, 305(6):477-478, 1983.
[192] Thomas Mallon. Stolen Words: Forays Into the Origins and Ravages of Plagiarism. Penguin, London, 1989. xiv+300 pp. ISBN 0-14-014440-4.
[193] Frank T. Manheim. The scientific referee. IEEE Trans. Prof. Commun., PC-18(3):190-195, 1975.
[194] S. D. Mason. Oral examination procedure. In [235, pp. 160-161].
[195] Diane L. Matthews. The scientific poster: Guidelines for effective visual communication. Technical Communication, 37(3):225-232, 1990.
[196] Thomas H. Maugh, II. Poster sessions: A new look at scientific meetings. Science, 184:1361, June 1974.
[197] Stephen B. Maurer. Advice for undergraduates on special aspects of writing mathematics. PRIMUS (Problems Resources and Issues in Mathematics Undergraduate Studies), 1(1):9-28, March 1991.
[198] Glenda M. McClure. Readability formulas: Useful or useless? IEEE Trans. Prof. Commun., PC-30:12-15, 1987.
[199] M. Douglas McIlroy. Development of a spelling list. IEEE Trans. Communications, COM-30:91-99, 1982.
[200] N. David Mermin. What's wrong with these equations? Physics Today, April:9, 1988. Reprinted in [202].
[201] N. David Mermin. What's wrong with this Lagrangean? Physics Today, April:9, 1988. Reprinted with postscript in [202].
[202] N. David Mermin. Boojums All the Way Through: Communicating Science in a Prosaic Age. Cambridge University Press, 1990. xxi+309 pp. ISBN 0-521-38880-5.
[203] Merriam-Webster's Collegiate Dictionary. Tenth edition, Merriam-Webster, Springfield, MA, USA, 1993. 1559 pp. ISBN 0-87779-708-0.
[204] James A. Michener. James A. Michener's Writer's Handbook: Explorations in Writing and Publishing. Random House, New York, 1992. ix+182 pp. ISBN 0-679-74126-7.
[205] Joan P. Mitchell. The New Writer: Techniques for Writing Well with a Computer. Microsoft Press, Redmond, WA, USA, 1987. viii+245 pp. ISBN 1-55615-029-6.
[206] R. D. Nelson, editor. The Penguin Dictionary of Mathematics. Second edition, Penguin, London, 1998. 350 pp. ISBN 0-14-051342-6.
[207] Maeve O'Connor. Editing Scientific Books and Journals. Pitman Medical, Tunbridge Wells, Kent, UK, 1978. ISBN 0-272-79517-8.
[208] Maeve O'Connor. How to Copyedit Scientific Books and Journals. ISI Press, Philadelphia, PA, 1986. ix+150 pp. ISBN 0-89495-064-9.
[209] Maeve O'Connor. Writing Successfully in Science. Chapman and Hall, London, 1991. xi+229 pp. ISBN 0-412-44630-8.
[210] Maeve O'Connor and F. Peter Woodford. Writing Scientific Papers in English. Pitman Medical, Tunbridge Wells, Kent, 1977. vii+108 pp. ISBN 0-272-79515-1.
[211] D. P. O'Leary, G. W. Stewart, and J. S. Vandergraft. Estimating the largest eigenvalue of a positive definite matrix. Math. Comp., 33:1289-1292, 1979.
[212] Oxford Advanced Learner's Dictionary of Current English. Fifth edition, Oxford University Press, 1995. x+1428 pp. ISBN 0-19-431422-7.
[213] The Concise Oxford Dictionary of Current English. Ninth edition, Oxford University Press, 1995. xxi+1673 pp. ISBN 0-19-861319-9.
[214] The New Shorter Oxford English Dictionary. Oxford University Press, 1993. xxvii+3801 pp. ISBN 0-19-861134-X.
[215] The Oxford English Dictionary. Second edition, Oxford University Press, 1989. ISBN 0-19-861186-2.
[216] Ian Parberry. A guide for new referees in theoretical computer science. ftp://ftp.unt.edu/ian/guides/referee/manuscript.ps, 1994.
[217] Beresford N. Parlett. The Symmetric Eigenvalue Problem. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1998. xxiv+398 pp. Unabridged, amended version of book first published by Prentice-Hall in 1980. ISBN 0-89871-402-8.
[218] Eric Partridge. Usage and Abusage: A Guide to Good English. Penguin, London, 1973. 381 pp. ISBN 0-14-051024-9.
[219] Jan A. Pechenik. A Short Guide to Writing About Biology. HarperCollins, New York, 1987. xiv+194 pp. ISBN 0-673-39232-5.
[220] John E. Pemberton. How to Find Out in Mathematics. Second edition, Pergamon, London, 1969.
[221] Carol Rosenblum Perry. The Fine Art of Technical Writing. Blue Heron Publishing, Hillsboro, OR, USA, 1991. 112 pp. ISBN 0-936085-24-X.
[222] H. Petard. A brief dictionary of phrases used in mathematical writing. Amer. Math. Monthly, 73:196-197, 1966. Reprinted in [79].
[223] Ivars Peterson. Searching for new mathematics. SIAM Review, 33:37-42, 1991.
[224] James L. Peterson. Computer Programs for Spelling Correction: An Experiment in Program Design. Number 96 in Lecture Notes in Computer Science. Springer-Verlag, Berlin, 1980.
[225] James L. Peterson. A note on undetected typing errors. Comm. ACM, 29(7):633-637, 1986.
[226] Estelle M. Phillips and D. S. Pugh. How to Get a PhD: A Handbook for Students and Their Supervisors. Second edition, Open University Press, Buckingham, UK, 1994. xiv+203 pp. ISBN 0-335-19214-9.
[227] George Piranian. Say it better. The Mathematical Intelligencer, 4(1):17-19, 1982.
[228] George Polya. How to Solve It: A New Aspect of Mathematical Method. Second edition, Doubleday, New York, 1957. xxi+253 pp.
[229] Simeon Potter. Our Language. Penguin, London, 1976.
[230] W. M. Priestley. Paul Halmos: Words more than numbers. In Paul Halmos: Celebrating 50 Years of Mathematics, John H. Ewing and F. W. Gehring, editors, Springer-Verlag, Berlin, 1991, pages 49-69.
[231] D. A. Pyke. Referee a paper. In [180, pp. 215-219].
[232] Randolph Quirk and Gabriele Stein. English in Use. Longman, Harlow, Essex, 1990. 262 pp. ISBN 0-582-06613-1.
[233] Random House Unabridged Dictionary. Second edition, Random House, New York, 1993. xxxix+2478+32 pp. ISBN 0-679-42917-4.
[234] Random House Webster's College Dictionary. Second edition, Random House, New York, 1997. xxxii+1535 pp. ISBN 0-679-45570-1.
[235] A Random Walk in Science. An anthology compiled by Robert L. Weber and edited by Eric Mendoza. The Institute of Physics, Bristol and London, 1973. xvii+206 pp. ISBN 0-85498-027-X.
[236] Jerome Irving Rodale. The Synonym Finder. Warner Books, New York, 1978. 1361 pp. Completely revised by Laurence Urdang and Nancy LaRoche. ISBN 0-446-37029-0.
[237] Patsy Rodenburg. The Right to Speak: Working with the Voice. Methuen, London, 1992. xiv+306 pp. ISBN 0-413-66130-X.
[238] Ervin Y. Rodin. Speed of publication—an editorial. Computers Math. Applic., 24(4):1-2, 1992.
[239] Charles G. Roland. Thoughts about medical writing XXXVII. Verify your references. Anesthesia and Analgesia ... Current Researches, 55(5):717-718, 1976.
[240] Richard Rubinstein. Digital Typography: An Introduction to Type and Composition for Computer System Design. Addison-Wesley, Reading, MA, USA, 1988. ISBN 0-201-17633-5.
[241] Kjell Erik Rudestam and Rae R. Newton. Surviving Your Dissertation: A Comprehensive Guide to Content and Process. Sage Publications, Newbury Park, CA 91320, USA, 1992. xi+221 pp. ISBN 0-8039-4563-9.
[242] Stephan M. Rudolfer and Peter C. Watson. Table errata. Math. Comp., 59(206):727, 1992.
[243] William Safire. Fumblerules: A Lighthearted Guide to Grammar and Good Usage. Dell Publishing, New York, 1990. 152 pp. ISBN 0-440-21010-0.
[244] David Salomon. The Advanced TeXbook. Springer-Verlag, New York, 1995. xx+491 pp. ISBN 0-387-94556-3.
[245] Donald E. Sarason and Leonard Gillman, editors. P. R. Halmos. Selecta: Expository Writing. Springer-Verlag, New York, 1983. xix+304 pp. ISBN 0-387-90756-4.
[246] David Louis Schwartz. How to be a published mathematician without trying harder than necessary. In The Journal of Irreproducible Results: Selected Papers, George H. Scherr, editor, third edition, 1986, page 205.
[247] Steven Schwartzman. The Words of Mathematics: An Etymological Dictionary of Mathematical Terms Used in English. Mathematical Association of America, Washington, D.C., 1994. vii+261 pp. ISBN 0-88385-511-9.
[248] Steven Schwartzman. Number words in English. College Mathematics Journal, 26(3):191-195, 1995.
[249] Marjorie E. Skillin, Robert M. Gay, and other authorities. Words Into Type. Third edition, Prentice-Hall, Englewood Cliffs, NJ, USA, 1974. xx+583 pp. ISBN 0-13-964262-5.
[250] Alan Jay Smith. The task of the referee. IEEE Computer, 23(4):65-71, 1990.
[251] Michael D. Spivak. The Joy of TeX: A Gourmet Guide to Typesetting with the AMS-TeX Macro Package. Second edition, American Mathematical Society, Providence, RI, USA, 1990.
[252] Elsie Myers Stainton. A bag for editors. Scholarly Publishing, 8(2):111-119, 1977.
[253] Elsie Myers Stainton. The uses of dictionaries. Scholarly Publishing, 11(3):229-241, 1980.
[254] Elsie Myers Stainton. The Fine Art of Copyediting. Columbia University Press, New York, 1991. xi+126 pp. ISBN 0-231-06961-8.
[255] De Witt T. Starnes and Gertrude E. Noyes. The English Dictionary from Cawdrey to Johnson 1604-1755. University of North Carolina Press, Chapel Hill, NC, USA, 1946. New edition with an introduction and a select bibliography by Gabriele Stein, John Benjamins Publishing Company, Amsterdam and Philadelphia, 1991. ISBN 90-272-4544-4.
[256] Norman E. Steenrod. How to write mathematics. In [257, pp. 1-17].
[257] Norman E. Steenrod, Paul R. Halmos, Menahem M. Schiffer, and Jean A. Dieudonné. How to Write Mathematics. American Mathematical Society, Providence, RI, USA, 1973.
[258] David Sternberg. How to Complete and Survive a Doctoral Dissertation. St. Martin's Press, New York, 1981. 231 pp. ISBN 0-312-39606-6.
[259] Andrew Sterrett, editor. Using Writing to Teach Mathematics. MAA Notes Number 16. The Mathematical Association of America, Washington, D.C., 1990. xvii+139 pp. ISBN 0-88385-066-4.
[260] Hans J. Stetter. Analysis of Discretization Methods for Ordinary Differential Equations. Springer-Verlag, Berlin, 1973. xvi+388 pp. ISBN 3-540-06008-1.
[261] G. W. Stewart. Introduction to Matrix Computations. Academic Press, New York, 1973. xiii+441 pp. ISBN 0-12-670350-7.
[262] Gilbert Strang. Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley, MA, USA, 1986. ix+758 pp. ISBN 0-9614088-0-4.
[263] William Strunk, Jr. and E. B. White. The Elements of Style. Third edition, Macmillan, New York, 1979. xvii+92 pp. ISBN 0-02-418200-1.
[264] John Swales. Writing Scientific English. Thomas Nelson, London, 1971.
[265] John Swales. Episodes in ESP: A Source and Reference Book on the Development of English for Science and Technology. Prentice Hall International, Hemel Hempstead, Hertfordshire, UK, 1988. ISBN 0-13-283383-2.
[266] Michael Swan. Practical English Usage. Second edition, Oxford University Press, 1995. xxx+658 pp. ISBN 0-19-431197-X.
[267] Ellen Swanson. Mathematics into Type: Copy Editing and Proofreading of Mathematics for Editorial Assistants and Authors. Revised edition, American Mathematical Society, Providence, RI, USA, 1979. x+90 pp. ISBN 0-8218-0053-1.
[268] J. J. Sylvester. Explanation of the coincidence of a theorem given by Mr Sylvester in the December number of this journal, with one stated by Professor Donkin in the June number of the same. Philosophical Magazine, (Fourth Series) 1:44-46, 1851. Reprinted in [269, pp. 217-218].
[269] The Collected Mathematical Papers of James Joseph Sylvester, volume 1 (1837-1853). Cambridge University Press, 1904. xii+650 pp.
[270] Judith A. Tarutz. Technical Editing: The Practical Guide for Editors and Writers. Addison-Wesley, Reading, MA, USA, 1992. ISBN 0-201-56356-8.
[271] John Meurig Thomas. Michael Faraday and the Royal Institution. Adam Hilger, Bristol, UK, 1991. xii+234 pp. ISBN 0-7503-0145-7.
[272] Robert C. Thompson. Author vs. referee: A case history for middle level mathematicians. Amer. Math. Monthly, 90(10):661-668, 1983.
[273] Martin Tompa. Figures of merit. Research Report RC 14211 (#63576), IBM Thomas J. Watson Research Center, Yorktown Heights, New York, November 1988.
[274] Jerzy Trzeciak. Writing Mathematical Papers in English: A Practical Guide. Gdansk Teachers' Press, Gdansk, Poland, 1993. 48 pp. ISBN 83-85694-02-1.
[275] Edward R. Tufte. The Visual Display of Quantitative Information. Graphics Press, Cheshire, CT, USA, 1983. 197 pp.
[276] Edward R. Tufte. Envisioning Information. Graphics Press, Cheshire, CT, USA, 1990. 126 pp.
[277] Edward R. Tufte. Visual Explanations: Images and Quantities, Evidence and Narrative. Graphics Press, Cheshire, CT, USA, 1997. 158 pp. ISBN 0-9613921-2-6.
[278] Kate L. Turabian. A Manual for Writers of Term Papers, Theses, and Dissertations. Sixth edition, The University of Chicago Press, Chicago and London, 1996. ix+308 pp. ISBN 0-226-81627-3.
[279] Christopher Turk. Effective Speaking: Communicating in Speech. E & FN Spon (Chapman and Hall), London, 1985. ix+275 pp. ISBN 0-419-13030-6.
[280] Christopher Turk and John Kirkman. Effective Writing: Improving Scientific, Technical and Business Communication. Second edition, E & FN Spon (Chapman and Hall), London, 1989. 277 pp. ISBN 0-419-14660-1.
[281] Barry T. Turner. Effective Technical Writing and Speaking. Second edition, Business Books, London, 1978. xiii+220 pp. ISBN 0-220-66344-0.
[282] Adrian Underhill. Use Your Dictionary: A Practice Book for Users of Oxford Advanced Learner's Dictionary of Current English and Oxford Student's Dictionary of Current English. Oxford University Press, 1980. 56 pp. ISBN 0-19-431104-X.
[283] Mary-Claire van Leunen. A Handbook for Scholars. Revised edition, Oxford University Press, New York, 1992. xi+348 pp. ISBN 0-19-506954-4.
[284] Charles F. Van Loan. Computational Frameworks for the Fast Fourier Transform. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1992. xiii+273 pp. ISBN 0-89871-285-8.
[285] Charles F. Van Loan. FFTs and the sparse factorization idea (abstract). Linear Algebra and Appl., 162-164:717, 1992.
[286] Jan Venolia. Write Right! A Desk Drawer Digest of Punctuation, Grammar and Style. David St John Thomas Publisher, Nairn, Scotland, 1986. 126 pp. ISBN 0-946537-57-7.
[287] Keith Waterhouse. On Newspaper Style. Viking, London, 1989. 250 pp. ISBN 0-670-82626-X.
[288] Keith Waterhouse. English our English (and How to Sing It). Viking, London, 1991. xxvii+147 pp. ISBN 0-670-83269-3.
[289] David S. Watkins. Fundamentals of Matrix Computations. Wiley, New York, 1991. xiii+449 pp. ISBN 0-471-54601-1.
[290] J. D. Watson and F. H. C. Crick. Molecular structure of nucleic acids: A structure for deoxyribose nucleic acid. Nature, 171(4356):737-738, 1953. April 25.
[291] Robert L. Weber, editor. More Random Walks in Science. Bristol and London, 1982. xv+208 pp. ISBN 0-85498-040-7.
[292] Robert L. Weber, editor. Science with a Smile. Institute of Physics Publishing, Bristol and Philadelphia, 1992. 452 pp. ISBN 0-7503-0211-9.
[293] Webster's New World College Dictionary. Third edition, Macmillan, New York, 1997. xxxvi+1588 pp. ISBN 0-02-861674-X.
[294] Webster's Third New International Dictionary of the English Language. Merriam-Webster, Springfield, MA, USA, 1986. 110+2662 pp. ISBN 0-87779-201-1, 0-87779-206-2.
[295] Herbert S. Wilf. TeX: A non-review. Amer. Math. Monthly, 93:309-315, 1986.
[296] J. H. Wilkinson. The Algebraic Eigenvalue Problem. Oxford University Press, 1965. xviii+662 pp. ISBN 0-19-853403-5 (hardback), 0-19-853418-3 (paperback).
[297] L. Pearce Williams, editor. The Selected Correspondence of Michael Faraday: Volume 1, 1812-1848. Cambridge University Press, 1971. ISBN 0-521-07908-X.
[298] Frederick T. Wood, Roger H. Flavell, and Linda M. Flavell. Current English Usage. Macmillan, London, 1989. v+329 pp. ISBN 0-333-27840-2.
[299] F. Peter Woodford. Sounder thinking through clearer writing. Science, 156(3776):743-745, 1967.
[300] F. Peter Woodford, editor. Scientific Writing for Graduate Students: A Manual on the Teaching of Scientific Writing. Council of Biology Editors, Bethesda, MD, USA, 1986. x+187 pp. ISBN 0-914340-06-9.
[301] John D. Woolsey. Combating poster fatigue: How to use visual grammar and analysis to effect better visual communications. Trends in Neurosciences, 12(9):325-332, 1989.
[302] William Zinsser. Writing with a Word Processor. Harper and Row, New York, 1983. viii+117 pp. ISBN 0-06-091060-7.
[303] William Zinsser. Writing to Learn. Harper and Row, New York, 1988. x+256 pp. ISBN 0-06-091576-5.
[304] William Zinsser. On Writing Well: An Informal Guide to Writing Nonfiction. Fourth edition, HarperCollins, New York, 1990. xiii+288 pp. ISBN 0-06-272027-9.

Name Index

"Kindly look her up in my index, Doctor," murmured Holmes without opening his eyes.
— ARTHUR CONAN DOYLE, A Scandal in Bohemia (1891)

A suffix "t" after a page number denotes a table, "f" a figure, "n" a footnote, and "q" a quotation at the opening of a chapter.

Abramowitz, Milton, 138
Achilles, Alf-Christian, 199
Acton, Forman S., 3, 55
Anderson, Margaret D., 207
Anholt, Robert R. H., 183
Arseneau, Donald, 201
Aslett, Don, 178
Bailey, David H., 90
Baker, Sheridan, 2, 9
Barabas, Andras, 174, 178
Barrass, Robert, 11
Baskette, Floyd K., 125q
Beebe, Nelson H. F., 199
Beiler, Albert H., 3
Belsley, D. A., 81
Bentley, Jon L., 94, 218
Bernstein, Theodore M., 9, 41
Berry, Cicely, 178
Bierce, Ambrose, 1q
Bliefert, Claus, 11, 59q
Boas, Ralph P., 15q, 17, 18
Bonura, Larry S., 207
Booth, Vernon, 11, 76, 178
Bouhours, Dominique, 35q
Briscoe, Mary Helen
Broad, William, 105
Brooks, Brian S., 125q
Bryson, Bill, 9, 12, 44, 45n, 59q, 71
Buchholz, W., 82
Burchfield, Robert, 8
Burt, Cyril, 105
Burton, David M., 3
Butcher, Judith, 10
Buzan, Tony, 178
Buzbee, Bill, 126
Byrne, George D., 178
Cajori, Florian, 24
Calnan, James, 174, 178
Calvin, 147q, 179q
Carey, G. V., 9, 51
Carlisle, David P., 206, 225
Chen, Pehong, 204
Cherry, Lorinda L., 221
Chew, Joe, 171q
Choi, Man-Duen, 82
Cockeram, Henry, 5q
Conan Doyle, Arthur, 289
Cooper, Bruce M., 11
Crick, Francis H. C., 96, 96n
Crystal, David, 12
Daly, Patrick W., 206
Daoud, Albert, 18
Davis, Philip J., 125q
Day, Robert A., 9, 11
de Morgan, Augustus, 24
Dillon, J. T., 81
Dixon, Bernard, 77q, 107q
Dongarra, Jack J., 212
Dressel, Susan, 171q
Dyson, Freeman, 146
Ebel, Hans F., 11, 59q
Eisenberg, Anne, 10
Ewer, J. R., 76
Faraday, Michael, 155q, 171q
Flanders, Harley, 10
Flavell, Linda M., 9
Flavell, Roger H., 9
Flesch, Rudolf, 221
Foresti, Stefano, 199
Forsythe, G. E., 178
Fowler, F. G., 9, 45
Fowler, H. W., 9, 45, 46
Franklin, James, 18
Freeman, Jr., Daniel H., 159
Gaffney, Matthew P., 11
Garfield, Eugene, 127, 215, 216
Garnier, Rowan, 18
Garver, Robert V., 158, 178
Gautschi, Walter, 81, 88
Gear, C. William, 130
Gillman, Leonard, 10, 23, 53
Gleser, Leon J., 127, 135
Golub, Gene H., 3, 199
Goossens, Michel, 206
Gordon, Karen Elizabeth, 9, 51
Gould, Calvin R., 178
Gowers, Sir Ernest, 9, 45n
Gregory, Martin W., 115
Griffiths, David F., 147q, 185q, 206
Grosse, Eric, 199, 212
Halmos, Paul R., 3, 5q, 10, 24, 38, 48, 53, 80, 102, 107q, 108, 117, 126, 155q, 178, 206, 209q
Hartley, James, 91, 94, 222
Hennessy, John L., 84
Herbert, A. J., 75
Hetherington, J. H., 146
Higham, Desmond J., 1q, 147q, 185q, 206
Higham, Nicholas J., 206
Hobbes, 147q, 179q
Householder, Alston S., 22
Iverson, Kenneth E., 23
Jackson, Donald D., 145
Jones, David M., 195, 204
Kac, Marc, 82
Kahan, W., 81, 215
Kelley, J. L., 35q, 80
Kelly, Kevin, 209q
Kenny, Peter, 156, 159, 171q, 174, 178
Kerkut, G. A., 80
King, Lester S., 87
Kirkman, John, 11, 55
Kittel, Charles, 15q
Klare, George R., 221
Knuth, Donald E., 3, 10, 23, 41, 45n, 84, 86, 91n, 94, 108, 185q, 186, 206, 222
Kopka, Helmut, 206
Krantz, Steven G., 10
Krol, Ed, 210
Kuh, E., 81
LaFollette, Marcel C., 104
Lambuth, David, 9
Lamport, Leslie, 18, 186, 190, 201, 204, 206
Landes, Kenneth K., 77q
Lanham, Richard A., 10
LaQuey, Tracy, 209q
Latorre, G., 76
Leron, Uri, 18
Lindley, Dennis V., 135
Lindsey, Charles H., 81
Littlewood, J. E., 77q, 107q
Lorayne, Harry, 178
Luey, Beth, 9, 11
MacGregor, A. J., 94
MacQueen, Donald S., 76
Mallon, Thomas, 104
Manheim, Frank T., 125q, 135
Marquardt, Donald W., 103
Mason, S. D., 152q
Matthews, Diane L., 182
Maugh, Thomas H., II, 179q
Maurer, Stephen B., 11
McIlroy, M. Douglas, 41, 218
Mermin, N. David, 31, 41
Messing, J., 98
Michener, James A., 5q, 12
Miller, Webb, 81
Mitchell, Joan P., 12
Mittelbach, Frank, 206
Moler, Cleve B., 81
Newton, Rae R., 153
O'Connor, Maeve, 10, 11
O'Leary, Dianne P., 80, 84, 88
Ockendon, J. R., 97
Parberry, Ian, 135
Parlett, Beresford N., 3, 17, 22, 81
Partridge, Eric, 9
Patashnik, Oren
Patterson, David A., 84
Pechenik, Jan A., 11
Pemberton, John E., 12
Perry, Carol Rosenblum, 9
Peterson, Ivars, 79
Peterson, James L., 219
Phillips, Estelle M., 147q, 153
Piranian, George, 53
Polya, George, 15q, 18
Potter, Simeon, 12
Priestley, W. M., 206
Pugh, D. S., 147q, 153
Pyke, D. A., 96n
Quintilian, 35q
Quirk, Randolph, 8, 72
Rahtz, Sebastian, 206
Rodale, Jerome Irving, 9
Rodenburg, Patsy, 178
Rodin, Ervin Y., 128
Roget, Peter Mark, 5q
Rudestam, Kjell Erik, 153
Russey, William E., 11, 59q
Ryer, Jeanne C., 209q
Safire, William, 9
Salomon, David, 206
Samarin, Alexander, 206
Santoro, Nicola, 84
Schwartzman, Steven, 8
Shaw, George Bernard, 59q
Sidney, Jeffrey B., 84
Sidney, Stuart J., 84
Sissors, Jack Z., 125q
Smith, Alan Jay, 135
Spivak, Michael D., 186
Stainton, Elsie Myers, 8, 10, 77q
Stallman, Richard, 218
Steen, Lynn Arthur, 11
Steenrod, Norman E., 10
Stegun, Irene A., 138
Stein, Ed, 209q
Stein, Gabriele, 8, 72
Steinberg, David, 153
Stetter, Hans J., 59q
Stewart, G. W., 3, 80, 88
Strang, Gilbert, 3, 18
Strassen, Volker, 81
Strunk, Jr., William, 9, 84
Swales, John, 76
Swan, Michael, 62, 75
Swanson, Ellen, 10
Swift, Dean, 293
Sylvester, J. J., 83
Tarutz, Judith A., 10
Taylor, John, 18
Thompson, Robert C., 135
Tompa, Martin, 84
Trzeciak, Jerzy, 76
Tufte, Edward R., 91, 94, 181
Turabian, Kate L., 10, 47
Turk, Christopher, 11, 55, 178
Underhill, Adrian, 75
Urrutia, Jorge, 84
van Leunen, Mary-Claire, 11, 35q, 36, 77q, 96, 98, 101, 104
Van Loan, Charles F., 3, 15q, 23, 81
Van Zandt, Timothy, 163
Vandergraft, J. S., 88
Vieira, J., 98
Wade, Nicholas, 105
Waterhouse, Keith, 9
Watkins, David S., 19
Watson, James D., 96, 96n
Watterson, Bill, 147, 179
Welsch, R. E., 81
White, E. B., 9
Wilf, Herbert S., 185q
Wilkinson, James H., 17, 82
Wood, Frederick T., 9
Woodford, F. Peter, 11
Woolsey, John D., 179q, 182
Zinsser, William, 2, 9, 12, 35q, 57, 107q

Subject Index

At the laundress's at the Hole in the Wall in Cursitor's Alley up three pair of stairs ... you may speak to the gentleman, if his flux be over, who lies in the flock bed, my index maker.
— DEAN SWIFT,27 A Further Account of the Most Deplorable Condition of Mr Edmund Curll, Bookseller, Since His Being Poisoned on the 28th March (1716)
27 Quoted in [156].

A suffix "t" after a page number denotes a table, "f" a figure, "n" a footnote, and "q" a quotation at the opening of a chapter. Definitions of technical terms are found in the glossary (Appendix E), which is not indexed here.

a or an, 36
a or the, 30
abbreviations, 36-37: introducing, 37
abstract, 85-86: citing references in, 85; generic, 86; mathematics in, 85; "this paper proves", 105
acknowledgements, 96-97
acronym, 36
active voice, 37-38
adjective, 37, 39-40
adverb, 39-40
affect versus effect, 44
-al and -age words, 40
alphabet: choosing notation from, 21; Greek, 223; longest, 146; order of, 83-85; spotlight factor, 84
alternate versus alternative, 44
ambiguous "this" and "it", 40
American Mathematical Society: Bulletin, 212; Notices, 128, 212
American Statistical Association: Journal, 127
AMS subject classifications, 87
AMS-LaTeX, 187, 201
AMS-TeX, 186
and, comma before final, 51-52
anonymous ftp, 210
apostrophe, 52-53
appendix, 97
articles (the, a, an), 30, 62-63
audience, analysing, 78-79, 157
author list
AWK, 202, 205
bastard enumeration, 46
bibclean, 202
BibNet, 199, 201, 202
BibTeX, 130, 196-202: abbreviations, 201; annotated bibliographies, 200; bibliography style, 197; BibNet, 199, 201, 202; Collection of Computer Science Bibliographies, 199; databases, maintaining, 202; databases, sharing, 199; keys, choosing, 200-201; URL field, 201
BIDS (Bath Information and Data Services), 216
book, date for reference list, 102
both, 49
brackets, in expressions, 32
Bulletin of the American Mathematical Society, 212
capitalization, 41, 102: of word after colon, 41
cf., 37
citation: by name and year, 94; by number, 94; Harvard system, 94, 95; including author's name, 94; indexing, 215-216; placement of, 94
Collection of Computer Science Bibliographies, 199
collocations, 61
colon: capitalizing word after, 41; in TeX, 190; in title, 81
comma, 51: before final "and", 51-52; serial, 51-52
commandments: of giving a talk, 171, 177; of good writing, 39
compare to versus compare with, 44
Comprehensive TeX Archive Network, see CTAN
comprise versus compose, 44
computational results, reporting, 89-90
Computer and Control Abstracts, 215
Computing Reviews, 215: Classification System, 87
conclusions, 96
conference proceedings, 126
conjecture, 17
connecting words and phrases, 48-49, 64-69
consistency, 41-42
constitute, 44
constructions, common in mathematical writing, 63-64
contractions, 42
copy editor, role of, 135-136
copy marking, 135
copyright, 143
corollary, 16
criticism, constructive, 2
cross-references, in LaTeX, 187
CTAN (Comprehensive TeX Archive Network), 163, 201, 206-207, 225
Current Contents, 126, 215
Current Mathematical Publications, 215
dangling participle, 43, 114
dangling theorem, 102
dashes, 188
dating work, 85, 192
definitions, 19-21: "if" in, 20; redundant, 20
delatex, 219
delay in publication, 128
delay in refereeing, 128
deroff, 220
detex, 219
Dewey Decimal Classification, 212-213, 214t
diction, 221, 222
dictionary, 6-8, 54, 72-74: American Heritage College, 7; American Heritage Dictionary of the English Language, 6; bilingual, 72; Chambers, 7; Collins Cobuild English Dictionary, 72; Collins English, 7; Collins Plain English Dictionary, 72; Concise Oxford, 7; learner's, 72; Longman Dictionary of Contemporary English, 72; Longman Dictionary of the English Language, 7; Merriam-Webster's Collegiate, 7; New Shorter Oxford English, 6; Oxford Advanced Learner's Dictionary of Current English, 72; Oxford English, 6; Random House Unabridged, 6; Random House Webster's College, 7; using, 72-74; Webster's New World College, 7; Webster's Third New International, 6
diff, 220
digests, 210
dots, see ellipsis
double, 222
double negatives, 63
due to versus owing to, 44
e-MATH, 87, 212
e.g., 36
effect versus affect, 44
either, 58
electronic mail, see email
ellipsis, 32, 191: at end of sentence, 33
em-dash, 188
Emacs, 216, 218-219: commands, 235-237
email, 187, 210: corrupting TeX source, 141; line length, 141
en-dash, 188
English language examination: IELTS, 75; TOEFL, 75
English language, thinking in, 60
English usage, guides to, 9, 75
enumeration, 46
epsf macros, 189
equations: displaying, 27-28; line breaks in displayed, 28; numbering, 103; punctuation of, 29; referencing numbered, 31; which to display, 27
essentially, 40
examples: before general case, 18; role of, 18-19
exclamation mark, 52
exercises, in textbook, 19
expressions, punctuation of, 29
false friends, 73
false if, 46
FAQ (frequently asked questions), 210
file transfer protocol, see ftp
file types, 211t
font, sans serif, 162
footnotes, 103
for example, 36
fraud, 105
Free Software Foundation, 218
ftp, 187
    anonymous, 210
full stop, 51, 74
functions, mathematical
    in LaTeX, 191
    in roman font, 32
galley proofs, 136
Ghostscript, 189n
glossary, 263-268
GNU Emacs, 218-220
    commands, 235-237
good writing, definition of, 1
Greek alphabet, 223
grep, 202, 220
halmos (□), 18, 24
hanging theorem, 102
Harvard system, 94, 95
hyphen, suspensive, 48
hyphenation, 47-48, 188
hypothesis, 17
"I" versus "we", 57
i.e., 36
idiomatic phrases, 60
idioms, 61
IELTS, 75
if
    false, 46
    in definitions, 20
iff, 37
    inventor, 24
impact factor, 127
index
    KWIC, 205
    purpose of, 202
indexing, 202-206
    AWK tools for, 205
    in LaTeX, 204-205
    index package, 204
    main headings, choosing, 203-204
    MakeIndex, 204-205
    maximum number of page locators per entry, 203
    multiple entries for one topic, 202
    multiple indexes, 203
    subentries, using connectives, 204
Institute for Scientific Information (ISI), 127, 215
integer, 50
International Standard Book Number, see ISBN
International Standard Serial Number, see ISSN
Internet, 210-212
introduction, 87-89
    first sentence of, 87
ISBN, 101, 200, 202
-ise and -ize endings, 42, 70
Ispell, 219
ISSN, 102, 202
its or it's, 42
journal
    Chinese economic, rejection from, 125
    choosing, 126-129
    circulation figures, 127
    impact factor, 127
    papers in TeX, 128
    publication delays, 128
    refereeing delays, 128
    submitting a manuscript, 129-130
Journal Citation Reports, 127
key words, 87
Kincaid formula (readability), 221
KWIC (key word in context) index, 205
LaTeX, 130, 186-207, see also TeX
    \@ (to mark end of sentence), 196
    \date, 192
    \frac, 191
    amstex package, 187
    chapterbib package, 201
    checking cross references and citations, 130
    eqnarray environment, 195
    filecontents environment, 142
    filenames and internet addresses, typing, 189
    graphics and graphicx packages, 189
    importing PostScript figures, 189
    index package, 204
    indexing in, 204-205
    line spacing, wider, 109
    lists, overuse of, 193
    path package, 189
    picture environment, 189
    seminar package, 163
    sequence of invocation with BibTeX and MakeIndex, 193
    showlabels package, 190
    slides document class, 163
    symbols, 225-233
lemma, 16
less versus fewer, 44
library classification schemes, 212-213
Library of Congress Classification, 212-213, 214t
like, 49
linking words, 48-49, 64-69
look, 220
MakeIndex, 204-205
Mathematical Abstracts, 216
mathematical functions
    in LaTeX, 191
    in roman font, 32
Mathematical Reviews, 83, 102, 201, 212-215
mathematical writing, glossary, 33
Mathematics Subject Classifications, 87, 129, 212
METAFONT, 206
misspellings, common, 41, 42t
NA-Digest, 210, 212
Nature, 96
negatives, 63
netlib, 199, 212, 219
newsgroups, 210
notation, 21-24
    extreme cases, simplifying in, 22
    good, requirements for, 15, 21
    square bracket of logical condition [·], 23-24
Notices of the American Mathematical Society, 212
nth, etc., 32, 63
numbering mathematical objects, 103-104
numbers, spelling out, 50
only, 57
oral examination, procedure, 152
ordinal numbers, 63
organization and structure, 79-80
overhead projector, keystoning, 161
owing to versus due to, 44
paragraphs, 50-51
parallelism, 28-29
participle, dangling, 43, 114
passive voice, 37-38
period, 74, see also full stop
permissions, 143
Permuterm Subject Index, 215
perturb, 74
plagiarism, 104-105
poster, 180-183
    board size, 180
    definition, 180
    layout, 182-183
    size, 181
    title, 180-181
    transporting, 183
PostScript, 189, 211
practice versus practise, 44
problem, 49-50
program listing, errors in typesetting, 136
pronoun, personal, 57
proof
    emphasizing structure of, 17-18
    indicating nature of omission, 18
    marking the end of, 18
    note added in proof, 140
proofreading, 136-140
    errors to check for, 137f
    symbols, 138, 139f
proofs, see galley proofs
proposition, 16
psfig macros, 189
publishing
    what to publish, 126
    when to publish, 126
punctuation, 51-53
    in foreign languages, 74
    of mathematical expressions, 29
    of numbers, 74
QED, 18
quotation marks, 52, 192
readability formula
    Flesch formula, 221
    Kincaid formula, 221
    limitations of, 222
reason, 50
refereeing, how to, 133-135
refereeing process, 130-133
references
    author initials, 98
    author name, 99
    date to quote for book, 102
    errors in, 98, 99
    format of, 98, 197
    ordering of, 102
    publisher name, 101
    record full details, 101
    secondary sources, 98
    to items on the World Wide Web, 99-100
    using BibTeX, 196-202
rejection, from Chinese economic journal, 125
reordering words, 57-58
reprints, 129
revising a draft, 107-124
    check-list for, 110f
Roget's Thesaurus, 8
running head, 130
sans serif font, 162
satisfy versus verify, 62
saying what you mean, 53
scholarly publishing, brief history, 145
Science Citation Index, 215-217
§ (section), 89
semicolon, 51
    as list separator, 52
sentence
    first words, 53, 57, 77
    opening, 53, 57
serial comma, 51-52
SI prefixes, 93
SIAM
    electronic search of membership list, 212
    progress of an accepted article, 143-145
SIAM journals
    circulation figures, 128
    instructions for referees, 132
    refereeing process, 130
SIAM News, 212
significant, 50
simplification, 53-54
slides
    awful, definition of, 155
    duplicate, 160
    economy of words, 160
    hand written or typeset?, 162-163
    legibility, 161-162
    lines, number of, 159
    number of, 162
    overlays, 159, 163
    preparing in LaTeX, 163
    preparing in TeX, 187
    projecting from a computer, 163
    title slide, 157
sort, 220
speech
    pitch variation, 175
    speed of, 174-175
    volume of, 174
spell, 218-219
spelling, 69-71
    alternative forms, 69-71
    British versus American, 8, 40-41, 69-71, 70t
    checker, 74, 218-220
    common errors, 41, 42t
    corrector, 219
split infinitive, 112, 113
spotlight factor, 84
style, 221, 222
style checker, 74, 221-222
subject classifications, 87
submitting a manuscript, 129-130
    previously published material, 130
symbols
    ^, 27
    ε versus ∈, 32
    ∃, 26
    ∀, 26-27
    at end of sentence, 115
    at start of sentence, 29
    placement of, 29
    separate by words, 30
    TeX and LaTeX, 225-233
    unnecessary, 29
    versus words, 24-27
    year of first use in print, 25t
synonyms, 54-55
    notational, 30
tables, 90-94
    design of, 91
    row versus column orientation, 91
    versus graphs, 91
talk
    advantages over a paper, 156
    computer, slides projected from, 163
    differences from a paper, 156
    eye contact with audience, 175
    finishing, 176
    finishing on time, 175-176
    gestures, 175
    in a foreign language, 156
    microphone, 173
    multiple entry points, 158
    multiple exit points, 158
    nerves, 175
    notes for, 172
    overhead projector, use of, 173-174
    pauses, 175
    pointer, use of, 173
    practising, 172
    speech pitch variation, 175
    speed of, 174-175
    volume of, 174
    ten commandments, 177
    title, 157
    writing, 155
tense, 56
TeX, 186-207
    \big, \bigg, \Big, 188
    \ddots, 192
    \left, \right, 188
    \quad, \qquad, 194
    \vdots, 192
    \widehat, \widetilde, 191
    accents, 191
    author typesetting, 140-142
    colon, 190
    comments, precarious, 140
    control space (\␣), 196
    dashes, 188
    delimiters, 188
    ellipsis, 191
    errors introduced by email transmission, 141-142
    Greek letters in italic, 192
    key, choice in labels, 190
    macros for notation, 190
    new paragraph, unintentional, 193
    quotes, 192
    slashed fractions in text, 191
    source code readability, 193-194
    spaces, 196
    spacing in formulas, 192, 194-195
    symbols, 225-233
        confusable, 191t
    ties, 196
TeX Users Group (TUG), 207
text editors, 216-218
th, etc. (kth term), 32, 63
that is, 36
that versus which, 45
the or a, 30
theorem, 16-17
    dangling, 102
    hanging, 102
    "invalid", 105
theorem-like structures, how to number, 103
thesaurus, 8-9, 54, 72
    Roget's, 8
thesis
    defending, 151-152
    opening pages, format of, 150
    oral examination procedure, 152
    purpose of, 148
    writing, 148-151
this, ambiguous, 40
title, 80-83
    choice of words, 77
    line breaks in, 83
    of poster, 180-181
    of talk, 157
    shortest, 146
TOEFL, 75
touch typing, 218
tr, 220
transparencies, see slides
troff, 186
try and versus try to, 50
UK TeX Users Group, 207
uniq, 220
Unix, 202, 210
    delatex, 219
    deroff, 220
    detex, 219
    diff, 220
    grep, 202, 220
    look, 220
    sort, 220
    spell, 218
    troff, 186
    tr, 220
    uniq, 220
    we, 221
URL (Uniform Resource Locator), 210
    field in BibTeX, 201
verify versus satisfy, 62
vi (editor), 216
voice
    active, 37-38
    passive, 37-38
we, 221
"we" versus "I", 57
which versus that, 45
wicked which, 45, 109, 114
word frequency count, 220
word pyramid, 55
wordprocessing, technical, 186
words
    absolute, 37
    abstract, 54
    -age ending, 40
    -al ending, 40
    ambiguous, 49-50
    Anglo-Saxon, 54
    compound, 47-48
    concrete, 54
    connecting, 48-49, 64-69
    distinctions between, 44-45, 62
    elegant variation, 45-46
    French origin, 54
    hyphenation of, 47-48
    -ise and -ize endings, 42, 70
    Latin origin, 54
    linking, 48-49, 64-69
    misused, 49-50
    omission of, 50
    order of, 57-58
    redundant, to avoid confusion, 19
    versus symbols, 24-27
World Wide Web, 210
    referencing items on, 99-100
Writer's Workbench, 222
writing
    factorial technique, 108
    is difficult, 2
    plain, 2
    qualities needed for, 1
    spiral technique, 108
    ten commandments, 39
    to learn, 2
writing a talk, 155
Zentralblatt für Mathematik und ihre Grenzgebiete, 216
zeros, spelling of, 42
zip code versus rip cord, 138
{"url":"https://silo.pub/handbook-of-writing-for-the-mathematical-sciences-u-4243868.html","timestamp":"2024-11-13T18:27:45Z","content_type":"text/html","content_length":"459531","record_id":"<urn:uuid:1ba92a3d-3876-46cc-9788-9a4ba64bfaa6>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00014.warc.gz"}