“Training” Dogs

Turns out some animals get home the same way we do: by train! There are a few dogs in the Russian city of Moscow who know how to ride the subway. No one knows how the pups know when to get on and off at their regular stops, but somehow they do it. So if you’re ever lost on the Moscow Metro, just ask a dog which way to go — and remember to give him a bone to thank him.

Wee ones: If your dog rides the train to the city, then home, then to the city, then home…where does your dog go next?

Little kids: If 7 dogs get on the train, then 1 dog gets off, then 2 dogs get on, how many dogs are on the train now? Bonus: If 4 of those dogs bark at the train whistle, how many don’t?

Big kids: If in January 3 dogs ride the train every day, then in February there are 5 dogs in total, then in March there are 7 dogs…how many would ride in June to keep the pattern? Bonus: How many more dogs would need to join the ones riding in June for them to have 60 doggie paws in total?

Answers:
Wee ones: To the city.
Little kids: 8 dogs. Bonus: 4 dogs.
Big kids: 13 dogs, since it’s 3 months later and you add 2 dogs each month. Bonus: 2 more dogs, since you need 15 dogs to have 60 paws.
Increasing and Decreasing Intervals (Functions)

The intervals where a function is increasing are those in which the values of $X$ and $Y$ grow together: as $X$ increases, so does $Y$. The intervals where a function is decreasing are those in which $Y$ decreases while $X$ increases.

Practice questions:
- Determine the domain of the following function: a function describing the charging of a computer battery during use.
- Determine the domain of the following function: the function describes a student's grades throughout the year.
- Determine the domain of the following function: the function represents the weight of a person over a period of 3 years.
- Determine which domain corresponds to the described function: the function describes a person's energy level throughout the day.
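Read numerically, "increasing" means that consecutive sampled values of $Y$ grow as $X$ grows. A minimal Python sketch of that test follows; the function f(x) = (x − 1)² and the sample grid are illustrative choices, not taken from the exercises above.

```python
# Find the increasing and decreasing intervals of a sampled function
# by checking the sign of consecutive differences.

def monotone_intervals(xs, ys):
    """Return a list of (start_x, end_x, label) runs with label
    'increasing' or 'decreasing' (flat steps are lumped with decreasing)."""
    intervals = []
    run_start = 0
    run_sign = None
    for i in range(1, len(ys)):
        sign = 'increasing' if ys[i] > ys[i - 1] else 'decreasing'
        if run_sign is None:
            run_sign = sign
        elif sign != run_sign:
            intervals.append((xs[run_start], xs[i - 1], run_sign))
            run_start = i - 1
            run_sign = sign
    intervals.append((xs[run_start], xs[-1], run_sign))
    return intervals

# Example: f(x) = (x - 1)^2 decreases until x = 1, then increases.
xs = [i * 0.5 for i in range(-4, 9)]        # grid from -2.0 to 4.0
ys = [(x - 1) ** 2 for x in xs]
print(monotone_intervals(xs, ys))
# [(-2.0, 1.0, 'decreasing'), (1.0, 4.0, 'increasing')]
```

This is the same scan one performs on a graph from left to right, recording where the curve turns; a function with flat segments would need a third, "constant" label.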
All examples

File: Brief description

bind_member_functions.cpp: Shows how member functions can be used as system functions in odeint.
bind_member_functions_cpp11.cpp: Shows how member functions can be used as system functions in odeint with std::bind in C++11.
bulirsch_stoer.cpp: Shows the usage of the Bulirsch-Stoer method.
chaotic_system.cpp: Integrates the Lorenz system and calculates the Lyapunov exponents.
elliptic_functions.cpp: Calculates the elliptic functions using the Bulirsch-Stoer and Runge-Kutta-Dopri5 steppers with dense output.
fpu.cpp: The Fermi-Pasta-Ulam (FPU) example shows how odeint can be used to integrate lattice systems.
generation_functions.cpp: Shows skeletal code on how to implement your own factory functions.
harmonic_oscillator.cpp: Gives a brief introduction to odeint and shows the usage of the classical Runge-Kutta solvers.
harmonic_oscillator_units.cpp: Shows how Boost.Units can be used with odeint.
heun.cpp: Shows how a custom Runge-Kutta stepper can be created with odeint's generic Runge-Kutta method.
list_lattice.cpp: Example of a phase lattice integration using std::list as state type.
lorenz_point.cpp: Alternative way of integrating the Lorenz system by using a self-defined point3d data type as state type.
my_vector.cpp: Simple example showing how to get odeint to work with a self-defined vector type.
phase_oscillator_ensemble.cpp: Shows how globally coupled oscillators can be analyzed and how statistical measures can be computed during integration.
resizing_lattice.cpp: Shows the strength of odeint's memory management by simulating a Hamiltonian system on an expanding lattice.
simple1d.cpp: Integrates a simple one-dimensional ODE, showing the usage of integrate- and generate-functions.
solar_system.cpp: Shows the usage of the symplectic solvers.
stepper_details.cpp: Trivial example showing the usability of the several stepper classes.
stiff_system.cpp: Shows the usage of the stiff solvers using the Jacobian of the system function.
stochastic_euler.cpp: Implementation of a custom stepper, the stochastic Euler, for solving stochastic differential equations.
stuart_landau.cpp: Shows how odeint can be used with complex state types.
two_dimensional_phase_lattice.cpp: Shows how a two-dimensional phase-oscillator lattice works with odeint and how matrix types can be used as state types.
van_der_pol_stiff.cpp: Again shows the usage of the stiff solvers by integrating the van der Pol oscillator.
gmpxx/lorenz_gmpxx.cpp: Integrates the Lorenz system by means of an arbitrary-precision type.
mtl/gauss_packet.cpp: Shows how the MTL can easily be used with odeint.
mtl/implicit_euler_mtl.cpp: Shows the usage of the MTL implicit Euler method with a sparse matrix type.
thrust/phase_oscillator_ensemble.cu: Shows how globally coupled oscillators can be analyzed with Thrust and CUDA, employing the power of modern graphics devices.
thrust/phase_oscillator_chain.cu: Shows how chains of nearest-neighbor coupled oscillators can be integrated with Thrust and odeint.
thrust/lorenz_parameters.cu: Shows how ensembles of ordinary differential equations can be solved by means of Thrust to study the dependence of an ODE on some parameters.
thrust/relaxation.cu: Another example of the usage of Thrust.
ublas/lorenz_ublas.cpp: Shows how the ublas vector types can be used with odeint.
vexcl/lorenz_ensemble.cpp: Shows how VexCL, a framework for OpenCL computation, can be used with odeint.
openmp/lorenz_ensemble_simple.cpp: OpenMP Lorenz attractor parameter study with continuous data.
openmp/lorenz_ensemble.cpp: OpenMP Lorenz attractor parameter study with split data.
openmp/lorenz_ensemble_nested.cpp: OpenMP Lorenz attractor parameter study with nested vector_space_algebra.
openmp/phase_chain.cpp: OpenMP nearest-neighbour coupled phase chain with continuous state.
openmp/phase_chain_omp_state.cpp: OpenMP nearest-neighbour coupled phase chain with split state.
mpi/phase_chain.cpp: MPI nearest-neighbour coupled phase chain.
2d_lattice/spreading.cpp: Shows how a vector< vector< T > > can be used as state type for odeint and how a resizing mechanism of this state can be implemented.
quadmath/black_hole.cpp: Shows how gcc's libquadmath can be used with odeint; it provides a high-precision floating-point type which is adapted to odeint in this example.
molecular_dynamics.cpp: A very basic molecular dynamics simulation with the velocity Verlet method.
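The examples above are C++ programs; as a language-neutral sketch of the method behind harmonic_oscillator.cpp (the classical fourth-order Runge-Kutta scheme), here is a minimal hand-rolled RK4 step in Python. This only illustrates the numerical scheme and is not odeint's API.

```python
def rk4_step(f, t, y, h):
    """One classical Runge-Kutta (RK4) step for y' = f(t, y), y a list of floats."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def harmonic_oscillator(t, y):
    x, v = y
    return [v, -x]            # x'' = -x

t, h = 0.0, 0.01
y = [1.0, 0.0]                # start at x = 1, v = 0
for _ in range(628):          # 628 steps of 0.01: roughly one period 2*pi
    y = rk4_step(harmonic_oscillator, t, y, h)
    t += h
print(y)                      # close to the initial state [1.0, 0.0] again
```

odeint's runge_kutta4 stepper does the same update, but generically over user-chosen state types and algebras, which is what most of the examples in the table demonstrate.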
Forward volume magnetoacoustic spin wave excitation with micron-scale spatial resolution

The interaction between surface acoustic waves (SAWs) and spin waves (SWs) in a piezoelectric-magnetic thin film heterostructure yields potential for the realization of novel microwave devices and applications in magnonics. In the present work, we characterize magnetoacoustic waves in three adjacent magnetic micro-stripes made from CoFe + Ga, CoFe, and CoFe + Pt with a single pair of tapered interdigital transducers (TIDTs). The magnetic micro-stripes were deposited by focused electron beam-induced deposition and focused ion beam-induced deposition direct-writing techniques. The transmission characteristics of the TIDTs are leveraged to selectively address the individual micro-stripes. Here, the external magnetic field is continuously rotated out of the plane of the magnetic thin film and the forward volume SW geometry is probed with the external magnetic field along the film normal. Our experimental findings are well explained by an extended phenomenological model based on a modified Landau–Lifshitz–Gilbert approach that considers SWs with nonzero wave vectors. Magnetoelastic excitation of forward volume SWs is possible because of the vertical shear strain ɛ[xz] of the Rayleigh-type SAW.

Over the last decade, increasing attention has been paid to the resonant coupling between surface acoustic waves (SAWs) and spin waves (SWs).^1–3 On the one hand, magnetoacoustic interaction opens up the route toward energy-efficient SW excitation and manipulation in the field of magnonics.^4 On the other hand, magnetoacoustic interaction greatly affects the properties of the SAW, which, in turn, can be used to devise new types of microwave devices, such as magnetoacoustic sensors^5,6 or microwave acoustic isolators.^7–14 High flexibility in the design of these devices is possible since the properties of the SWs can be varied in a wide range of parameters.
For instance, the SW dispersion can be reprogrammed by external magnetic fields or electrical currents,^15,16 and more complex designs of the magnet geometry^17,18 or the use of multilayers^14,19–21 allow for multiple dispersion branches with potentially large nonreciprocal behavior. Conversely, the SAW–SW interaction can also be used as an alternative method to characterize magnetic thin films, SWs, and SAWs.^12,20,22,23 Design of future magnetoacoustic devices can benefit from the fact that SAW technology is well developed and already employed in manifold ways in our daily life.^24–27 Efficient excitation and detection of SAWs with metallic comb-shaped electrodes—so-called interdigital transducers (IDTs)—are possible on piezoelectric substrates. For example, acoustic delay lines with low insertion losses of about 6 dB at 4 GHz have been realized.^28 Fundamental limitations in the SAW excitation efficiency are mainly given by interaction with thermal phonons, spurious excitation of longitudinal acoustic waves in the air, and nonlinear effects at high input power.^27,29 So far, IDTs exciting SAWs homogeneously over the whole aperture have been used in resonant magnetoacoustic experiments. Apart from Refs. 30 and 31, these studies have been performed with an external magnetic field that was exclusively oriented in the plane of the magnetic thin film. Here, we experimentally demonstrate targeted magnetoacoustic excitation and characterization of SWs in the forward volume SW geometry with micrometer-scale spatial resolution. To do so, magnetoacoustic transmission measurements are performed with one pair of tapered interdigital transducers (TIDTs) at three different magnetic micro-stripes, as shown in Fig. 1. This study is carried out in different geometries in which the external magnetic field is tilted out of the plane of the magnetic thin film.
We demonstrate that magnetoelastic excitation of SWs is possible even if the static magnetization is parallel to the magnetic film normal—which is the so-called forward volume spin wave (FVSW) geometry—thanks to the vertical shear strain component ɛ[xz] of the Rayleigh-type SAW. The experimental results are simulated with an extended phenomenological model, which takes the arbitrary orientation of the external magnetic field and magnetization into account. The magnetic micro-stripes with lateral dimensions of about 20 × 40 µm^2 and different magnetic properties were deposited by focused electron beam-induced deposition (FEBID) and focused ion beam-induced deposition (FIBID). One particular advantage of using the direct-write approach^32,33 to fabricate the micro-stripes is the ease with which the magnetic properties can be tailored, such as the saturation magnetization.^34 Moreover, direct-write capabilities make the fabrication of complex 3D magnetic structures on the nanoscale possible. Applications in magnonics include, for instance, 3D nanovolcanoes with tunable higher-frequency eigenmodes,^35 2D and 3D magnonic crystals with SW bandgaps,^36,37 SW beam steering via a graded refractive index, and frustrated 3D magnetic systems.

A surface acoustic wave is a sound wave propagating along the surface of a solid material with evanescent displacement normal to the surface. The density, surface boundary conditions, and elastic, dielectric, and potentially piezoelectric properties of the material mainly determine if and which SAW mode can be launched. Typical SAW modes on homogeneous substrates show a linear dispersion with a constant propagation velocity of about c[SAW] = 3500 m/s.^27 We use a standard Y-cut Z-propagation LiNbO[3] substrate, which gives rise to a Rayleigh-type SAW.
On the substrate surface, this SAW mode causes a retrograde elliptical lattice motion in a plane defined by the SAW propagation direction and the surface normal.^27,40 An optical micrograph of the fabricated magnetoacoustic device is shown in Fig. 1. Rayleigh-type SAWs can be excited in a frequency range between $f_0 - \Delta f_{\mathrm{TIDT}}/2, \ldots, f_0 + \Delta f_{\mathrm{TIDT}}/2$, which corresponds to different positions of the TIDT along the length of its aperture W. To describe the magnetoacoustic transmission of the three different magnetic thin films, we extend the phenomenological model of Dreher et al.^30 and Küß et al.^12 in terms of magnetoacoustically excited SWs with nonzero wave vector and arbitrary orientation of the equilibrium magnetization direction, as detailed below.

A. Magnetoacoustic driving fields and SAW transmission

In the following, we use the (x, y, z) coordinate system shown in Fig. 2.^30 The x and z axes are parallel to the wave vector $k_{\mathrm{SAW}} = k\hat{x}$ of the SAW and normal to the plane of the magnetic micro-stripes, respectively. The equilibrium direction of the magnetization M and the orientation of the external magnetic field H are specified by the angles (θ[0], ϕ[0]) and (θ[H], ϕ[H]). Here, θ[0] and ϕ[0] are calculated by minimization of the static free energy. For that, we take the external magnetic field H, the thin film shape anisotropy along $\hat{z}$ with saturation magnetization M[s], and a small uniaxial in-plane anisotropy H[ani], which encloses an angle ϕ[ani] with the x axis, into account.^12,30 Because the characterized magnetic thin films are relatively thick^12 (d ≥ 24 nm), we neglect the surface anisotropy. The SAW–SW interaction can be described by effective dynamic magnetoacoustic driving fields, which exert a torque on the static magnetization.^41 The resulting damped precession of M is then determined by the Landau–Lifshitz–Gilbert (LLG) equation for small precession amplitudes. To this end, we introduce the rotated (1, 2, 3) Cartesian coordinate system in Fig. 2.
The 3-axis is parallel to M and the 2-axis is aligned in the film plane.^41 In this phenomenological model, it is assumed that the frequencies f and wave vectors k of SAW and SW are identical.^12,42 We assume that the magnon–phonon coupling strength is in the weak coupling regime, as discussed for the three micro-stripes in Appendix A. Furthermore, only magnetic films with small thicknesses |k|d ≪ 1 and homogeneous strain in the z-direction of the magnetic film are considered.^12,30 The effective magnetoacoustic driving field as a function of SAW power in the (1,2) plane can be written^12 as Here, ω = 2πf and c[SAW] are the angular frequency and propagation velocity of the SAW, w is the width of the acoustic beam, and R = 1.4 × 10^11 J/m^3 is a constant.^43 The normalized effective magnetoelastic driving fields $\tilde{h}_1$ and $\tilde{h}_2$ of a Rayleigh wave with strain components ɛ[kl=xx,zz,xz] ≠ 0 are^12,30 where b[1,2] are the magnetoelastic coupling constants for cubic symmetry of the ferromagnetic layer,^7,30 $\tilde{a}_{kl} = \varepsilon_{kl,0}/(|k||u_{z,0}|)$ are the normalized amplitudes of the strain, and ɛ[kl,0] are the complex amplitudes of the strain. Furthermore, u[z,0] is the amplitude of the lattice displacement in the z-direction. For the sake of simplicity, we neglect non-magnetoelastic interactions, such as magneto-rotation coupling,^12,22,44 spin-rotation coupling,^45–47 and gyromagnetic coupling.^48 In contrast to previous magnetoacoustic studies^10,12,20,22,23,42,49 where the equilibrium magnetization direction was aligned in the plane of the magnetic film (θ[0] = 90°), the strain component ɛ[zz] results in a modified driving field for geometries with θ[0] ≠ 90°. In the experiments, we characterize the SAW–SW interaction for the three geometries depicted in Fig. 3. The oop0-, oop45-, and oop90-geometries are defined by the azimuthal angle ϕ[H] of the external magnetic field H.
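The computation of the equilibrium angles (θ[0], ϕ[0]) by minimizing the static free energy, as described above, can be sketched numerically. The sketch below uses the three energy terms named in the text (Zeeman, thin-film shape anisotropy, uniaxial in-plane anisotropy) in normalized tesla units and a plain grid search; the field values are illustrative choices, not the paper's.

```python
import math

def free_energy(theta, phi, H, theta_H, phi_H, Ms, Hani, phi_ani):
    """Normalized static free energy (all fields as mu_0 * field, in tesla).

    Zeeman term + thin-film shape anisotropy + uniaxial in-plane anisotropy.
    theta is measured from the film normal (z), phi in the film plane.
    """
    zeeman = -H * (math.sin(theta_H) * math.sin(theta) * math.cos(phi - phi_H)
                   + math.cos(theta_H) * math.cos(theta))
    shape = 0.5 * Ms * math.cos(theta) ** 2
    uniax = -0.5 * Hani * (math.sin(theta) * math.cos(phi - phi_ani)) ** 2
    return zeeman + shape + uniax

def equilibrium_angles(H, theta_H, phi_H, Ms=1.0, Hani=0.005, phi_ani=0.0, n=360):
    """Brute-force grid minimizer for the equilibrium direction (theta_0, phi_0)."""
    best = None
    for i in range(n + 1):
        theta = math.pi * i / n
        for j in range(2 * n):
            phi = 2 * math.pi * j / (2 * n)
            e = free_energy(theta, phi, H, theta_H, phi_H, Ms, Hani, phi_ani)
            if best is None or e < best[0]:
                best = (e, theta, phi)
    return best[1], best[2]

# A field above the shape anisotropy (mu_0 H = 1.5 T > mu_0 Ms = 1 T) applied
# along the film normal saturates the film: theta_0 -> 0.
theta0, phi0 = equilibrium_angles(H=1.5, theta_H=0.0, phi_H=0.0)
print(math.degrees(theta0))   # -> 0.0 (film saturated along the normal)
```

Below saturation the same search returns a tilted equilibrium: for example, for mu_0 H = 0.5 T along the normal the minimum sits near θ[0] ≈ 60° (where cos θ[0] ≈ H/Ms), illustrating the unsaturated regime discussed for Fig. 5(b).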
Since the symmetry of the magnetoacoustic driving field h essentially determines the magnitude of the magnetoacoustic interaction, we will now discuss the orientation dependence of $|\mu_0 \tilde{h}(\theta_0)|$ for the Rayleigh wave strain components ɛ[xx], ɛ[zz], and ɛ[xz] separately, setting all other strain components equal to zero.^30 In Fig. 4, we show a polar plot of the normalized magnitude of the driving field $|\mu_0 \tilde{h}(\theta_0)|$, using $2 b_{1,2} \tilde{a}_{kl} = 1$ T and assuming no in-plane anisotropy (H[ani] = 0, ϕ[0] = ϕ[H]). First, it is interesting that magnetoelastic excitation of SWs in the FV-geometry (θ[0] = 0°) can be solely mediated by the driving fields of the shear component ɛ[xz]. Second, finite element method (FEM) eigenmode simulations reveal^50 that the strain component ɛ[zz] is phase shifted by π with respect to ɛ[xx]. Thus, the magnetoacoustic driving fields of ɛ[xx] and ɛ[zz] show a constructive superposition. Third, the SAW–SW helicity mismatch effect arises because of a ±π/2 phase shift of ɛ[xz] with respect to ɛ[xx].^8–12,23,30 Under an inversion of the SAW propagation direction (k → −k, or k[S21] → k[S12]), the phase shift changes its sign (π/2 → −π/2). For measurements in the in-plane geometry, the SAW–SW helicity mismatch effect is attributed to a superposition of driving fields caused by ɛ[xx] and ɛ[xz]. This is in contrast to the oop90-geometry (ϕ[0] = 90°), where the SAW–SW helicity mismatch effect is mediated by the strain components ɛ[zz] and ɛ[xz]. The magnetoacoustic driving field causes the excitation of SWs in the magnetic film. Thus, the power of the traveling SAW decays exponentially while propagating through the magnetic film with length l[f] and thickness d.
With respect to the initial power P[0], the absorbed power of the SAW is given by Eq. (3). The magnetic susceptibility tensor $\bar{\chi}$ describes the magnetic response to small time-varying magnetoacoustic fields and is calculated as described by Dreher et al.^30 for arbitrary equilibrium magnetization directions (θ[0], ϕ[0]). Besides the external magnetic field, exchange coupling, and uniaxial in-plane anisotropy, we also take into account the dipolar fields for SWs with k ≠ 0, which are given in Eq. (B1) in Appendix B. Finally, to directly simulate the experimentally determined relative change of the SAW transmission ΔS[ij] on the logarithmic scale, we use Eq. (4) for SAWs propagating parallel (k ≥ 0) and antiparallel (k < 0) to the x axis.

B. Spin wave dispersion

Resonant SAW–SW excitation is possible if the dispersion relations of SAW and SW intersect in the uncoupled state. The SW dispersion is obtained by setting $\det \bar{\chi}^{-1} = 0$ and taking the real part of the solution for small SW damping constants α. If we neglect the uniaxial in-plane anisotropy (H[ani] = 0, ϕ[0] = ϕ[H]), we obtain Eq. (5).^51 Here, γ is the gyromagnetic ratio, $G_0 = (1 - e^{-|k|d})/(|k|d)$, and $D = 2A/(\mu_0 M_s)$ with the magnetic exchange constant A. As an example, we calculate the SW resonance frequency f in Fig. 5(a) for the oop0-geometry as a function of the external magnetic field magnitude μ[0]H. The corresponding polar angle θ[0] of the equilibrium magnetization orientation is shown in Fig. 5(b). For the simulation, we use the parameters of the CoFe + Ga thin film in Table II together with ϕ[0] = 0°, k = 5.9 µm^−1, μ[0]M[s] = 1 T, and H[ani] = 0. Additionally, the resonance frequency f = 3 GHz of a SAW with k = 5.9 µm^−1 is depicted by the dashed line in Fig. 5(a). The dispersion f(μ[0]H) changes strongly with the polar angle θ[H] of the applied external magnetic field.
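In the fully saturated limit (θ[0] = 0°), the dispersion just discussed reduces to the standard thin-film forward-volume form. The sketch below evaluates that textbook formula with the Fig. 5 parameters; it is an approximation of the paper's full model (no anisotropy, simple G[0] dipolar factor), so the numbers are close to, but not exactly, the quoted ones.

```python
import math

MU0 = 4 * math.pi * 1e-7                    # vacuum permeability (T m / A)

def fvsw_frequency(mu0_H, mu0_Ms, k, d, g=2.18, D=24.7e-12):
    """Forward-volume SW frequency (Hz) of a saturated thin film.

    Textbook forward-volume dispersion for an in-plane wave vector k:
        omega^2 = omega_H * (omega_H + omega_M * (1 - G0)),
    with G0 = (1 - exp(-|k| d)) / (|k| d), omega_H including the exchange
    field D k^2, and all fields given as mu_0 * field in tesla.
    """
    gamma_over_2pi = g * 13.996e9           # Hz per tesla (g * mu_B / h)
    G0 = (1 - math.exp(-abs(k) * d)) / (abs(k) * d)
    mu0_H_eff = mu0_H - mu0_Ms + MU0 * D * k ** 2
    if mu0_H_eff <= 0:
        return None                         # film not saturated: formula invalid
    return gamma_over_2pi * math.sqrt(mu0_H_eff * (mu0_H_eff + mu0_Ms * (1 - G0)))

# Fig. 5 parameters: mu_0 Ms = 1 T, k = 5.9 per um, d = 24 nm.  The SW branch
# crosses the 3 GHz SAW line between mu_0 H = 1.05 T and 1.10 T, i.e. near the
# resonance field quoted in the text.
for mu0_H in (1.00, 1.05, 1.10):
    f = fvsw_frequency(mu0_H, 1.0, 5.9e6, 24e-9)
    print(mu0_H, None if f is None else round(f / 1e9, 2))
```

The steep slope of f(μ[0]H) just above saturation is what makes the FVSW resonance in this geometry so narrow in field.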
For the FVSW geometry θ[H] = 0°, the magnetic thin film is saturated (θ[0] = 0°) when the magnetic field overcomes the magnetic shape anisotropy, μ[0]H > μ[0]M[s], and resonant SAW–SW interaction is only possible at μ[0]H = 1.06 T. In contrast, for θ[H] = 0.9°, we expect magnetoacoustic interaction in a wide range μ[0]H ≈ 0.7, …, 1.0 T, where the dispersions of SAW and SW intersect. For this geometry and μ[0]H ≤ 1.5 T, the magnetic film is not fully saturated (θ[0] ≠ 0.9°). In contrast to previous magnetoacoustic studies performed with conventional IDTs,^10,12,20,22,23,31,42,49 here, we use “tapered” or “slanted” interdigital transducers (TIDTs)^52–55 to characterize SAW–SW interaction in three different magnetic thin micro-stripes in one run. Although the fingers of the TIDT are slanted, the SAW propagates dominantly parallel to the x axis in Fig. 1 because of the strong beam steering effect of the Y-cut Z-propagation LiNbO[3] substrate.^27,52 The linear change of the periodicity p(y) along the transducer aperture W results in a spatial dependence of the SAW resonance frequency f(y) = c[SAW]/p(y).^52 Thus, a TIDT has a wide transmission band and can be thought of as consisting of multiple conventional IDTs that are connected electrically in parallel.^54 To a good approximation, the frequency bandwidth of a conventional IDT is given by Δf[IDT] = 0.9f[0]/N and is constant for higher harmonic resonance frequencies. From the bandwidth Δf[TIDT] of the TIDT, the width of the acoustic beam w at constant frequency can be estimated^55 with

$w = W \, \Delta f_{\mathrm{IDT}} / \Delta f_{\mathrm{TIDT}}$. (7)

The TIDTs are fabricated out of Ti(5)/Al(70) (all thicknesses are given in units of nm) and have an aperture of W = 100 µm, the number of finger-pairs is N = 22, and the periodicity p(y) changes from 3.08 to 3.72 µm. As shown in Fig. 6(a), we operate the TIDT at the third harmonic resonance, which corresponds to a transmission band and SAW wavelength in the ranges of 2.69 GHz < f < 3.22 GHz and 1.06 µm < λ < 1.27 µm, respectively. According to Eq.
(7), we expect a width of the acoustic beam at constant frequency of w = 100 µm × (41 MHz/530 MHz) ≈ 7.7 µm. Moreover, Streibel et al. argue that internal acoustic reflections in the single-electrode structure used additionally lower w by a factor of about four.^55 Since λ is comparable to w, diffraction effects can be expected. These beam spreading losses are partly compensated by the beam steering effect and the frequency selectivity of the receiving transducer, which filters out the diffracted portions of the SAW.^55 The three different magnetic micro-stripes in Fig. 1 were deposited by direct-writing techniques between the two TIDTs, which are 800 µm apart. For details, we refer the readers to Appendix C. The compositions of the deposited magnetic films were characterized by energy-dispersive x-ray spectroscopy (EDX). The results are summarized in Table I. More details about the microstructure and magnetic properties of CoFe can be found in Refs. 34 and 56. For the microstructure of mixed CoFe–Pt deposits, we refer the readers to Ref. 57, in which results of a detailed investigation of the microstructural and magnetic properties of fully analogous Co–Pt deposits are presented. We determined the thicknesses d and the root mean square roughness of the samples CoFe + Ga (24 ± 2 nm), CoFe (72 ± 2 nm), and CoFe + Pt (70 ± 2 nm) by atomic force microscopy (AFM). The lengths and widths of all micro-stripes are identical, with l[f] = 40 µm and w[f] = 20 µm, except $w_f^{\mathrm{CoFe+Ga}} =$ 26 µm.

TABLE I. Compositions of the deposited magnetic films determined by EDX.

Sample      C     O     Fe    Co    Ga    Pt
CoFe + Pt   61.8  6.5   4.2   20.1  -     7.4
CoFe        26.2  6.9   12.4  54.5  -     -
CoFe + Ga   16.9  16.5  7.7   37.5  21.4  -

The SAW transmission of our delay line device was characterized by a vector network analyzer.
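The Eq. (7) beam-width estimate quoted above is straightforward to check: the TIDT aperture W maps linearly onto the transmission band, so a conventional-IDT bandwidth Δf[IDT] selects a strip of width W·Δf[IDT]/Δf[TIDT]. A quick sketch with the numbers given in the text (the fundamental f[0] ≈ 1 GHz is inferred from the third-harmonic band and is an assumption):

```python
# Beam width of a tapered IDT at fixed frequency, from the quoted numbers:
#   w = W * (Delta f_IDT / Delta f_TIDT)

W = 100e-6                  # TIDT aperture (m)
N = 22                      # number of finger pairs
f0 = 1.0e9                  # assumed fundamental resonance (Hz); 3rd harmonic ~3 GHz
df_idt = 0.9 * f0 / N       # conventional-IDT bandwidth, constant at harmonics (Hz)
df_tidt = 3.22e9 - 2.69e9   # TIDT transmission band, 2.69 ... 3.22 GHz

w = W * df_idt / df_tidt
print(round(df_idt / 1e6, 1), "MHz ->", round(w * 1e6, 1), "um")   # 40.9 MHz -> 7.7 um
```

This reproduces the ≈ 7.7 µm quoted in the text, before the factor-of-four reduction attributed to internal acoustic reflections.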
Owing to the low propagation velocity of the SAW, a time-domain gating technique was employed to exclude spurious signals,^58 in particular electromagnetic crosstalk. We use the relative change ΔS[ij] of the background-corrected SAW transmission signal to characterize SAW–SW coupling. Here, ΔS[ij] is the magnitude of the complex transmission signal with ij ∈ {21, 12}. In all measurements, the magnetic field is swept from −2 to 2 T.

A. Experimental results

In Fig. 6(b), we show the magnetoacoustic transmission ΔS[21] as a function of external magnetic field magnitude and frequency for the FVSW geometry (θ[H] ≈ 0°). Within the wide transmission band of the TIDT, the magnetoacoustic transmission ΔS[21](μ[0]H) clearly differs for the three different frequency sub-bands, each of which spatially addresses one of the three different magnetic micro-stripes. Both the maximum change of the transmission, with $\mathrm{Max}(\Delta S_{21}^{\mathrm{CoFe}}) > \mathrm{Max}(\Delta S_{21}^{\mathrm{CoFe+Pt}}) > \mathrm{Max}(\Delta S_{21}^{\mathrm{CoFe+Ga}})$, and the resonance fields are different for the three films. The small signals ΔS[21] ≠ 0 at frequencies corresponding to the gaps between the magnetic structures are attributed to diffraction effects. The apparent signal ΔS[21] at the edges of the transmission band is attributed to measurement noise. From Fig. 6(b), we identify the frequencies corresponding to the centers of the three magnetic films CoFe + Ga, CoFe, and CoFe + Pt as 2.78, 2.96, and 3.17 GHz, respectively. Further analysis is performed at these fixed frequencies. In Fig. 7, we show the magnetoacoustic transmission ΔS[21](μ[0]H, θ[H]) of all three films in the oop0-, oop45-, and oop90-geometry (see Fig. 3) as a function of external magnetic field magnitude μ[0]H and orientation θ[H] in the range of −90° ≤ θ[H] ≤ 90° with an increment of Δθ[H] = 3.6°. For almost all geometries, the magnetoacoustic response ΔS[21](μ[0]H, θ[H]) has a star-shaped symmetry, which was already observed by Dreher et al.
for Ni(50) thin films.^30 This symmetry results from magnetic shape anisotropy. The sharp resonances in Fig. 7 around θ[H] = 0° are studied in more detail in Fig. 8 in the range of −3.6° ≤ θ[H] ≤ 3.6° with Δθ[H] = 0.225°. For all three magnetic micro-stripes, SWs can be magnetoacoustically excited in the FVSW geometry (θ[H] = 0°), and the resonance fields μ[0]H[res](θ[H] = 0°) differ. Additionally, the symmetry of the magnetoacoustic resonances μ[0]H[res](θ[H]) changes for the geometries oop0, oop45, and oop90 and the different magnetic micro-stripes. In general, the resonance fields |μ[0]H[res]| decrease if |ϕ[H]| is increased from 0° to 90° (oop0–oop90). Moreover, the line symmetry with respect to θ[H] = 0° is broken, in particular for the oop45- and oop90-geometry.

B. Simulation and interpretation

To simulate the experimental results in Figs. 7 and 8 with Eq. (4), we first have to determine the saturation magnetizations M[s] of the different magnetic thin films. For this purpose, we compute Eq. (5) for the FVSW geometry (θ[H] = 0°, θ[0] = 0°). The relation M[s](H ≡ H[res]) is shown in Fig. 5(c) for all three magnetic films. Thereby, the frequency f and wave vector k of the SW are determined by the SAW, and we assume c[SAW] = 3200 m/s,^59 g = 2.18,^34 and D = 24.7 × 10^−12 A m.^34 Since the in-plane anisotropy H[ani] is expected to be small compared to the shape anisotropy, its impact on the resonance in the FVSW geometry is small, and we use H[ani] = 0. Under these assumptions, the relations M[s](H[res]) are almost identical for the three magnetic films. Together with the experimentally determined μ[0]H[res](θ[H] = 0°) in Fig. 8, the saturation magnetizations of CoFe + Ga, CoFe, and CoFe + Pt are determined to be 772, 1296, and 677 kA/m, respectively. For the simulations in Figs. 7 and 8, we use the parameters summarized in Table II.
The complex amplitudes of the normalized strain $\tilde{a}_{kl} = \varepsilon_{kl,0}/(|k||u_{z,0}|)$ are estimated from a COMSOL^50 finite element method (FEM) simulation. Since we do not know the elastic constants and density of the magnetic micro-stripes, we assume a pure LiNbO[3] substrate with a perfectly conducting overlayer of zero thickness. Thus, the real values of $\tilde{a}_{kl}$ might deviate from the assumed ones.^12 Furthermore, the normalized strain of the simulation was averaged over the thickness −d ≤ z ≤ 0. The values for the SW effective damping α, the magnetoelastic coupling for polycrystalline films^30 b[1] = b[2], and the small phenomenological uniaxial in-plane anisotropy (H[ani], ϕ[ani]) were adjusted to obtain a good agreement between experiment and simulation. Thereby, α includes Gilbert damping and inhomogeneous line broadening.^12 The phenomenological uniaxial in-plane anisotropy could be caused by substrate clamping effects or the patterning strategy of the FEBID/FIBID direct-write process. Note that the values of all these parameters listed in Table II are very reasonable.

TABLE II. Parameters used for the simulations.

                   CoFe + Ga   CoFe    CoFe + Pt
d (nm)             24          72      70
f (GHz)            2.78        2.96    3.17
M[s] (kA/m)        772         1296    677
α                  0.04        0.1     0.05
ϕ[ani] (deg)       −10         0       88
μ[0]H[ani] (mT)    1           5       10
$ãxx$              0.49        0.40    0.40
$ãzz$              −0.15       −0.10   −0.10
$ãxz$              0.13i       0.17i   0.17i
|b[1]| (T)         4           15      6

For all three magnetic micro-stripes, the qualitative agreement between simulation and experiment in Figs. 7 and 8 is good. For magnetoelastic interaction, SWs can be excited in the FVSW geometry (θ[H] = 0°) solely due to the vertical shear strain ɛ[xz], which causes a nonzero magnetoacoustic driving field, as discussed in Fig. 4. According to Eq. (2), the driving field mediated by ɛ[xx,zz] contributes for θ[H] ≠ 0°. In Fig.
8, the intensity of the resonances for θ[H] ≠ 0° is, therefore, more pronounced than for θ[H] = 0°. Because the driving fields mediated by the strain components ɛ[xx] and ɛ[zz] are in phase, SW excitation in one of the out-of-plane geometries can be even more efficient than in the in-plane geometry. The magnetoacoustic resonance fields of the three magnetic micro-stripes mainly differ due to differences in M[s] and d, which strongly affect the corresponding dipolar fields of a SW. As expected from the SW dispersion in Fig. 5(a), we observe for the CoFe + Ga film in Figs. 8(a) and 8(b) a narrow resonance at μ[0]H = 1.06 T for θ[H] = 0° and a wide resonance between μ[0]H ≈ 0.7, …, 1.0 T for θ[H] = 0.9°. The symmetry of the magnetoacoustic resonances μ[0]H[res](θ[H]) changes with the geometries oop0, oop45, and oop90 since the magnetic dipolar fields of the SW dispersion Eq. (5) depend on ϕ[0]. For CoFe + Pt, two resonances are observed in the oop0-geometry, whereas in the oop45- and oop90-geometry, confined oval-shaped resonances show up. This behavior can be modeled by assuming a uniaxial in-plane anisotropy with ϕ[ani] ≈ 90°. In the oop0-geometry, the resonance at the lower resonance fields can be attributed to the switching of the in-plane component of the equilibrium magnetization direction. In the oop45- and oop90-geometries, the resonance frequencies of the SWs are higher than the excitation frequency of the SAW for |θ[H]| > 0.7°. Thus, the magnetoacoustic response ΔS[21] is low for |θ[H]| > 0.7° in Figs. 8(o)–8(r). We attribute discrepancies between experiment and simulation to the following effects: The phenomenological model solely considers an in-plane uniaxial anisotropy; additional in-plane and out-of-plane anisotropies would result in a shift in the resonance fields. Furthermore, the strain is estimated by a simplified FEM simulation and assumed to be homogeneous along the thickness of the micro-stripe.
Moreover, we neglect magneto-rotation coupling,^12,22,44 spin-rotation coupling,^45–47 and gyromagnetic coupling.^48 These assumptions have an impact on the intensity and symmetry of the resonances. Finally, low-intensity spurious signals are caused by SAW diffraction effects, which are, for instance, observed in Figs. 8(m), 8(o), and 8(q) for |μ[0]H| > 1 T.

C. Nonreciprocal behavior

The nonreciprocal behavior of the magnetoacoustic wave in the oop0-, oop45-, and oop90-geometries is illustrated for CoFe + Ga in Fig. 9. If the magnetoacoustic wave propagates in inverted directions k[S21] and k[S12] (k and −k), the magnetoacoustic transmissions ΔS[21](μ[0]H, θ[H]) and ΔS[12](μ[0]H, θ[H]) differ for the oop45- and oop90-geometries. The qualitative agreement between experiment and simulation is also good with respect to nonreciprocity. The SAW–SW helicity mismatch effect, discussed in the theory section, causes ΔS[21](μ[0]H, θ[H]) ≠ ΔS[12](μ[0]H, θ[H]) in Fig. 9 and the broken line symmetry with respect to θ[H] = 0° in Figs. 8 and 9. So far, nonreciprocal magnetoacoustic transmission has only been observed in studies where the external magnetic field was aligned in the plane of the magnetic film (θ[H] = 90°).^8–12,23,30 The magnetoacoustic driving field in Eq. (2) is linearly polarized along the 1-axis for ϕ[0] = 0. Thus, no nonreciprocity due to the SAW–SW helicity mismatch effect is observed in the oop0-geometry. In contrast, the driving field has a helicity in the oop45- and oop90-geometries. Since this helicity is inverted under inversion of the propagation direction of the SAW (ɛ[xz,0] → −ɛ[xz,0]), nonreciprocal behavior shows up in the oop45- and oop90-geometries. In comparison to the experimental results, the simulation slightly underestimates the nonreciprocity.
This is mainly attributed to magneto-rotation coupling,^12,22,44 which can be modeled by a modulated effective coupling constant b[2,eff] and can result in an enhancement of the SAW–SW helicity mismatch effect.^12,22

In conclusion, we have demonstrated magnetoacoustic excitation and characterization of SWs with micrometer-scale spatial resolution using TIDTs. The magnetoacoustic response at different frequencies, which lie within the wide transmission band of the TIDT, can be assigned to the spatially separated CoFe + Ga, CoFe, and CoFe + Pt magnetic micro-stripes. SAW–SW interaction with micrometer-scale spatial resolution can have interesting implications for future applications in magnonics and the realization of new types of microwave devices, such as magnetoacoustic sensors^5,6,60 or microwave acoustic isolators.^14,19–21 For instance, giant nonreciprocal SAW transmission was observed in magnetic bilayers and proposed to build reconfigurable acoustic isolators.^14,19–21 In combination with TIDTs, acoustic isolators which show different nonreciprocal behavior in adjacent frequency bands could be realized. Furthermore, if two orthogonal delay lines are combined in a cross-shaped structure, the resolution of magnetoacoustic interaction of different magnetic micro-structures in two dimensions can potentially be achieved.^55,61 In addition, we extended the theoretical model of magnetoacoustic wave transmission^12,30 in terms of SWs with nonzero wave vector and arbitrary out-of-plane orientation of the static magnetization direction. This phenomenological model provides a good description of the experimental results for CoFe + Ga, CoFe, and CoFe + Pt magnetic micro-stripes in different geometries of the external magnetic field—including the FVSW geometry—in a qualitative way. We find that FVSWs can be magnetoelastically excited by Rayleigh-type SAWs due to the shear strain component ɛ[xz].
Moreover, magneto-rotation coupling,^12,22,44 spin-rotation coupling,^45–47 or gyromagnetic coupling^48 may contribute to the excitation of FVSWs. Since the SAW–SW helicity mismatch effect, which is related to ɛ[xz] and the effective coupling constant b[2,eff], is low in Ni thin films,^9,30,42,62,63 we expect a low excitation efficiency for FVSWs in Ni. In contrast to the previously discussed in-plane geometry, the strain component ɛ[zz] of Rayleigh-type waves plays an important role in the out-of-plane geometries and can result in enhanced SAW–SW coupling efficiency and an enhanced SAW–SW helicity mismatch effect.

This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project Nos. 391592414 and 492421737. M.H. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) through the Transregional Collaborative Research Center TRR 288 (Project A04) and through Project No. HU 752/16-1.

Conflict of Interest

The authors have no conflicts to disclose.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Following Ref. 64, we estimate the magnon–phonon coupling strength Ω for all films. To calculate the filling factor from the magnon and phonon overlap, we assume that the SAW extends to within one wavelength λ[SAW] into the substrate and that the magnon is uniform over the ferromagnetic film thickness. Table III lists the coupling strengths estimated in this way together with the magnon loss rates αω. For all films, the magnon–phonon coupling is in the weak coupling regime.

TABLE III.

            CoFe + Ga    CoFe     CoFe + Pt
Ω (MHz)     6            150      60
αω (MHz)    700          1900     1000

The effective dipolar fields in the (1,2,3) coordinate system for arbitrary equilibrium magnetization directions (θ[0], ϕ[0]) are taken from Ref. 51 by comparing Eq.
(23) with the Landau–Lifshitz equation. Here, m[1,2] are the precession amplitudes of the normalized magnetization m = M/M[s].

FEBID and FIBID are direct-write lithographic techniques used for the fabrication of samples of various dimensions, shapes, and compositions.^33 In FEBID/FIBID, the adsorbed molecules of a precursor gas injected into a SEM/FIB chamber are dissociated by the interaction with the electron/ion beam, forming the sample during the rastering process.^32 In the present work, the samples were fabricated in a dual beam SEM/FIB microscope (FEI, Nova NanoLab 600) equipped with a Schottky electron emitter. FEBID was employed to fabricate the CoFe and CoFe + Pt samples with the following electron beam parameters: 5 kV acceleration voltage, 1.6 nA beam current, 20 nm pitch, and 1 µs dwell time. The number of passes, i.e., the number of rastering cycles, was 1500. FIBID was used to prepare the CoFe + Ga sample with the following ion beam parameters: 30 kV acceleration voltage, 10 pA ion beam current, 12 nm pitch, 200 ns dwell time, and 500 passes. The precursor HFeCo[3](CO)[12] was employed to fabricate the CoFe and the CoFe + Ga samples,^65 while HFeCo[3](CO)[12] and (CH[3])[3]CH[3]C[5]H[4]Pt were used simultaneously to grow CoFe + Pt.^66 Standard FEI gas-injection systems (GIS) were used to flow the precursor gases into the SEM via capillaries with 0.5 mm inner diameter. The capillary–substrate surface distance was about 100 and 1000 µm for the HFeCo[3](CO)[12] and (CH[3])[3]CH[3]C[5]H[4]Pt GIS, respectively. The temperatures of the precursors were 64 and 44 °C for HFeCo[3](CO)[12] and (CH[3])[3]CH[3]C[5]H[4]Pt, respectively. The base pressure of the SEM was 5 × 10^−7 mbar, which rose to about 6 × 10^−7 mbar during CoFe and CoFe + Ga deposition and to about 2 × 10^−6 mbar during CoFe + Pt deposition.

D. A. V. I. A. V. , and A. A. , “ Magnon-phonon interactions in magnon spintronics (review article) Low Temp. Phys.
, and , “ Advances in coherent coupling between magnons and acoustic phonons APL Mater. , “ Acoustic control of magnetism toward energy-efficient applications Appl. Phys. Rev. A. A. A. V. , and , “ YIG magnonics J. Phys. D: Appl. Phys. , and , “ Magneto-surface-acoustic-waves microdevice using thin film technology: Design and fabrication process Sens. Actuators, A N. X. , and , “ Wide band low noise love wave magnetic field sensor system Sci. Rep. , “ Interaction of spin waves and ultrasonic waves in ferromagnetic crystals Phys. Rev. M. F. , “ Acoustic-surface-wave isolator Appl. Phys. Lett. , and , “ Nonreciprocal propagation of surface acoustic wave in Ni/LiNbO[3] Phys. Rev. B J. M. , and P. V. , “ Large nonreciprocal propagation of surface acoustic waves in epitaxial ferromagnetic/semiconductor hybrid structures Phys. Rev. Appl. , “ Highly nonreciprocal spin waves excited by magnetoelastic coupling in a Ni/Si bilayer Phys. Rev. Appl. , and , “ Nonreciprocal Dzyaloshinskii–Moriya magnetoacoustic waves Phys. Rev. Lett. , and , “ Nonreciprocal surface acoustic waves in multilayers with magnetoelastic and interfacial Dzyaloshinskii-Moriya interactions Phys. Rev. Appl. , and , “ Wide-band nonreciprocity of surface acoustic waves induced by magnetoelastic coupling with a synthetic antiferromagnet Phys. Rev. Appl. R. A. A. K. S. S. P. K. , and , “ Reconfigurable spin-wave nonreciprocity induced by dipolar interaction in a coupled ferromagnetic bilayer Phys. Rev. Appl. , and , “ Switchable giant nonreciprocal frequency shift of propagating spin waves in synthetic antiferromagnets Sci. Adv. , “ Review and prospects of magnonic crystals and devices with reprogrammable band structure J. Phys.: Condens. Matter , and , “ Towards ultraefficient nanoscale straintronic microwave devices Phys. Rev. B P. J. D. A. N. X. , and M. R. , “ Giant nonreciprocity of surface acoustic waves enabled by the magnetoelastic interaction Sci. Adv. 
, and , “ Nonreciprocal magnetoacoustic waves in dipolar-coupled ferromagnetic bilayers Phys. Rev. Appl. , and , “ Large surface acoustic wave nonreciprocity in synthetic antiferromagnets Appl. Phys. Express , and , “ Nonreciprocal surface acoustic wave propagation via magneto-rotation coupling Sci. Adv. , and , “ Symmetry of the magnetoelastic interaction of Rayleigh and shear horizontal magnetoacoustic waves in nickel thin films on LiTaO[3] Phys. Rev. Appl. C. K. Surface Acoustic Wave Devices for Mobile and Wireless Communications Academic Press San Diego, B. E. , and , “ Surface acoustic wave biosensors: A review Anal. Bioanal. Chem. A. R. D. A. , and , “ Surface acoustic wave (SAW) directed droplet flow in microfluidics for PDMS devices Lab Chip D. P. Surface Acoustic Wave Filters: With Applications to Electronic Communications and Signal Processing 2nd ed. C. H. S. , and , “ GHz-range low-loss wide band filter using new floating electrode type unidirectional transducers ,” in IEEE 1992 Ultrasonics Symposium. Proceedings ), Vol. 1, pp. R. C. , “ Problems encountered in high-frequency surface-wave devices ,” in IEEE 1974 Ultrasonics Symposium. Proceedings ), pp. M. S. , and S. T. B. , “ Surface acoustic wave driven ferromagnetic resonance in nickel thin films: Theory and experiment Phys. Rev. B J. Y. H. J. von Bardeleben , and , “ Surface-acoustic-wave-driven ferromagnetic resonance in (Ga,Mn)(As,P) epilayers Phys. Rev. B , and O. V. , “ Focused electron beam induced deposition meets materials science Microelectron. Eng. , and , “ Living up to its potential – Direct-write nanofabrication with focused electron beams J. Appl. Phys. S. A. A. V. K. Y. A. V. G. N. , and O. V. , “ Engineered magnetization and exchange stiffness in direct-write Co–Fe nanoelements Appl. Phys. Lett. O. V. N. R. A. V. S. A. K. Y. A. V. , and G. N. , “ Spin-wave eigenmodes in direct-write 3D nanovolcanoes Appl. Phys. Lett. , “ Plane-wave theory of three-dimensional magnonic crystals Phys. 
Rev. B Three-dimensional Magnonics: Layered, Micro- and Nanostructures , edited by Jenny Stanford Publishing van den Berg , and , “ Realisation of a frustrated 3D magnetic nanowire lattice Commun. Phys. J. M. de Teresa , and O. V. , “ Writing 3D nanomagnets using focused electron beams , “ On waves propagated along the plane surface of an elastic solid Proc. London Math. Soc. M. S. , and S. T. B. , “ Elastically driven ferromagnetic resonance in nickel thin films Phys. Rev. Lett. P. G. D. C. , and R. A. , “ Traveling surface spin-wave resonance spectroscopy using surface acoustic waves J. Appl. Phys. W. P. , “ A simple method of approximating surface acoustic wave power densities IEEE Trans. Sonics Ultrason. , “ Surface acoustic attenuation due to surface spin wave in ferro- and antiferromagnets AIP Conf. Proc. , and , “ Effects of mechanical rotation on spin currents Phys. Rev. Lett. , and , “ Mechanical generation of spin current by spin-rotation coupling Phys. Rev. B , and , “ Spin current generation using a surface acoustic wave generated via spin-rotation coupling Phys. Rev. Lett. , and , “ Observation of gyromagnetic spin wave resonance in NiFe films Phys. Rev. Lett. , and , “ Surface-acoustic-wave induced ferromagnetic resonance in Fe thin films and magnetic field sensing Phys. Rev. Appl. Comsol, COMSOL Multiphysics® v. 5.4, Stockholm, Sweden , see , “ Influence of the Dzyaloshinskii–Moriya interaction on the spin-wave spectra of thin films J. Phys.: Condens. Matter A. P. van den Heuvel , “ Use of rotated electrodes for amplitude weighting in interdigital surface-wave transducers Appl. Phys. Lett. , “ Design techniques for SAW filters using slanted finger interdigital transducers IEEE Trans. Ultrason. Ferroelectr. Freq. Control , “ Tapered transducers-design and applications ,” in 1998 IEEE Ultrasonics Symposium. Proceedings ), Vol. 1, pp. , and A. C.
, “ SAW tomography-spatially resolved charge detection by SAW in semiconductor structures for imaging applications ,” in 1999 IEEE Ultrasonics Symposium. Proceedings. International Symposium (Cat. No. 99CH37027) ), Vol. 1, p. M. K. I. Al Mamoori , and , “ Direct-write of free-form building blocks for artificial magnetic 3D lattices Sci. Rep. C. H. A. S. , and , “ Room temperature L1[0] phase transformation in binary CoPt nanostructures prepared by focused-electron-beam-induced deposition Grundlagen der vektoriellen Netzwerkanalyse 3rd ed. Rohde & Schwarz The propagation velocity of a Rayleigh-type SAW on a pure Y-cut Z-propagation LiNbO[3] substrate with a perfectly conducting overlayer of zero thickness is c[SAW] = 3404 m/s.^27 We assume that c[SAW] in the real piezoelectric-ferromagnetic heterostructure is slightly lowered^20 because of mass loading and different elastic constants of LiNbO[3] and the magnetic films. R. B. , and , “ Imaging of love waves and their interaction with magnetic domain walls in magnetoelectric magnetic field sensors Adv. Electron. Mater. , and , “ Fast surface acoustic wave-based sensors to investigate the kinetics of gas uptake in ultra-microporous frameworks ACS Sens. M. S. , and S. T. B. , “ Voltage controlled inversion of magnetic anisotropy in a ferromagnetic thin film at room temperature New J. Phys. , and , “ Power absorption in acoustically driven ferromagnetic resonance Appl. Phys. Lett. A. N. A. A. V. V. de Loubens Ben Youssef G. E. W. A. N. V. S. , and , “ Coherent long-range transfer of angular momentum between magnon Kittel modes by phonons Phys. Rev. B , and , “ Direct writing of CoFe alloy nanostructures by focused electron beam induced deposition from a heteronuclear precursor C. H. , and , “ Granular Hall sensors for scanning probe microscopy
Instrumental Convergence For Realistic Agent Objectives This post treats reward functions as “specifying goals”, in some sense. As I explained in Reward Is Not The Optimization Target, this is a misconception that can seriously damage your ability to understand how AI works. Rather than “incentivizing” behavior, reward signals are (in many cases) akin to a per-datapoint learning rate. Reward chisels circuits into the AI. That’s it! Summary of the current power-seeking theorems Give me a utility function, any utility function, and for most ways I could jumble it up—most ways I could permute which outcomes get which utility, for most of these permutations, the agent will seek power. This kind of argument assumes that (the set of utility functions we might specify) is closed under permutation. This is unrealistic, because practically speaking we reward agents based off of observed features of the agent’s environment. For example, Pac-Man eats dots and gains points. A football AI scores a touchdown and gains points. A robot hand solves a Rubik’s cube and gains points. But most permutations of these objectives are implausible because they’re high-entropy, they’re very complex, they assign high reward to one state and low reward to another state without a simple generating rule that grounds out in observed features. Practical objective specification doesn’t allow that many degrees of freedom in what states get what reward. I explore how instrumental convergence works in this case. I also walk through how these new results retrodict the fact that instrumental convergence basically disappears for agents with utility functions over action-observation histories. Consider the following environment, where the agent can either stay put or move along a purple arrow. From left to right, top to bottom, the states have labels Suppose the agent gets some amount of reward each timestep, and it’s choosing a policy to maximize its average per-timestep reward. 
Previous results tell us that for generic reward functions over states, at least half of them incentivize going right. There are two terminal states on the left, and three on the right, and 3 > 2; we conclude that at least 1/2 of objectives incentivize going right. But it’s damn hard to have so many degrees of freedom that you’re specifying a potentially independent utility number for each state.^1 Meaningful utility functions will be featurized in some sense—only depending on certain features of the world state, and of how the outcomes transpired, etc. If the featurization is linear, then it’s particularly easy to reason about power-seeking. Let be the feature vector for state , where the first entry is 1 iff the agent is standing on . The second and third entries represent and , respectively. That is, the featurization only records what shape the agent is standing on. Suppose the agent makes decisions in a way which depends only on the featurized reward of a state: , where expresses the feature coefficients. Then the relevant terminal states are only {triangle, circle, star}, and we conclude that 2/3 of coefficient vectors incentivize going right. This is true more precisely in the orbit sense: For every coefficient vector , at least^2 2/3 of its permuted variants make the agent prefer to go right. This particular featurization increases the strength of the orbit-level incentives—whereas before, we could only guarantee 1/2-strength power-seeking tendency, now we guarantee 2/3-level.^3^4 There’s another point I want to make in this tiny environment. From left to right, top to bottom, the states have labels. Suppose we find an environmental symmetry which lets us apply the original power-seeking theorems to raw reward functions over the world state.
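As a sanity check on that 2/3 claim, here is a brute-force enumeration. The environment details are my reconstruction (going left, the agent can end on the triangle or at the zero feature vector; going right, on the circle or the star), so treat this as a toy instance rather than the post's exact setup:

```python
from itertools import permutations

# Toy shape-featurized environment (my reconstruction of the details).
FEATURES = ("triangle", "circle", "star")
LEFT = [(0, 0, 0), (1, 0, 0)]    # zero feature vector, triangle
RIGHT = [(0, 1, 0), (0, 0, 1)]   # circle, star

def prefers_right(w):
    """Does an optimal agent with feature coefficients w go right?"""
    def best(options):
        return max(sum(wi * fi for wi, fi in zip(w, f)) for f in options)
    return best(RIGHT) > best(LEFT)

def orbit_fraction_right(w):
    """Fraction of w's permuted variants that prefer going right."""
    perms = list(permutations(w))
    return sum(prefers_right(p) for p in perms) / len(perms)

print(orbit_fraction_right((3.0, 1.0, 2.0)))     # 0.666... : the 2/3 claim
print(orbit_fraction_right((-3.0, -1.0, -2.0)))  # 0.0 : "hates everything"
```

The second call previews the failure mode discussed further down: if every feature coefficient is negative, the agent retreats to the zero feature vector on the left, no relabeling of features changes that, and orbit-level incentives vanish.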
Letting be a column vector with an entry of 1 at state and 0 elsewhere, in this environment, we have the symmetry enforced by Given a state featurization, and given that we know that there’s a state-level environmental symmetry , when can we conclude that there’s also feature-level power-seeking in the environment? Here, we’re asking “if reward is only allowed to depend on how often the agent visits each shape, and we know that there’s a raw state-level symmetry, when do we know that there’s a shape-feature embedding from (left shape feature vectors) into (right shape feature vectors)?” In terms of “what choice lets me access ‘more’ features?”, this environment is relatively easy—look, there are twice as many shapes on the right. More formally, we have: where the left set can be permuted two separate ways into the right set (since the zero vector isn’t affected by feature permutations). But I’m gonna play dumb and walk through to illustrate a more important point about how power-seeking tendencies are guaranteed when featurizations respect the structure of the environment. Consider the state . We permute it to be using (because ), and then featurize it to get a feature vector with 1 and 0 elsewhere. Alternatively, suppose we first featurize to get a feature vector with 1 and 0 elsewhere. Then we swap which features are which, by switching and . Then we get a feature vector with 1 and 0 elsewhere—the same result as above. The shape featurization plays nice with the actual nitty-gritty environment-level symmetry. More precisely, a sufficient condition for feature-level symmetries: (Featurizing and then swapping which features are which) commutes with (swapping which states are which and then featurizing).^5 And where there are feature-level symmetries, just apply the normal power-seeking theorems to conclude that there are decision-making tendencies to choose sets of larger features. In a different featurization, suppose the featurization is the agent’s coordinates. 
Given the start state, if the agent goes up, its reachable feature vector is just {(x=0, y=1)}, whereas the agent can induce (x=1, y=0) if it goes right. Therefore, whenever up is strictly optimal for a featurized reward function, we can permute that reward function’s feature weights by swapping the x- and y-coefficients ( and , respectively). Again, this new reward function is featurized, and it makes going right strictly optimal. So the usual arguments ensure that at least half of these featurized reward functions make it optimal to go right. But sometimes these similarities won’t hold, even when it initially looks like they “should”! The agent can induce the feature vectors if it goes left. However, it can induce if it goes right. There is no way of switching feature labels so as to copy the left feature set into the right feature set! There’s no way to just apply a feature permutation to the left set, and thereby produce a subset of the right feature set. Therefore, the theorems don’t apply, and so they don’t guarantee anything about how most permutations of every reward function incentivize some kind of behavior. On reflection, this makes sense. If , then there’s no way the agent will want to go right. Instead, it’ll go for the negative feature values offered by going left. This will hold for all permutations of this feature labelling, too. So the orbit-level incentives can’t hold. If the agent can be made to “hate everything” (all feature weights are negative), then it will pursue opportunities which give it negative-valued feature vectors, or at least strive for the oblivion of the zero feature vector. Vice versa for if it positively values all features.

Consider a deep RL training process where the agent’s episodic reward is featurized into a weighted sum of the different resources the agent has at the end of the game, with weight vector .
For simplicity, we fix an opponent policy and a learning regime (number of epochs, learning rate, hyperparameters, network architecture, and so on). We consider the effects of varying the reward feature coefficients .

Outcomes of interest: Game state trajectories.

AI decision-making function: returns the probability that, given our fixed learning regime and reward feature vector , the training process produces a policy network whose rollouts instantiate some trajectory .

What the theorems say:

If is the zero vector, the agent gets the same reward for all trajectories, and so gradient descent does nothing, and the randomly initialized policy network quickly loses against any reasonable opponent. No power-seeking tendencies if this is the only plausible parameter setting.

If only has negative entries, then the policy network quickly learns to throw away all of its resources and not collect any more. If and only if this has been achieved, the training process is indifferent to whether the game is lost. No real power-seeking tendencies if it’s only plausible that we specify a negative vector.

If has a positive entry, then policies learn to gather as much of that resource as possible. In particular, there aren’t orbit elements with positive entries but where the learned policy tends to just die, and so we don’t even have to check that the permuted variants of such feature vectors are also plausible. Power-seeking occurs. This reasoning depends on which kinds of feature weights are plausible, and so wouldn’t have been covered by the previous results.

Similar setup to StarCraft II, but now the agent’s episode reward is (Amount of iron ore in chests within 100 blocks of spawn after 2 in-game days) + (Same but for coal), where are scalars (together, they form the coefficient vector ). An iron ore block in Minecraft.

Outcomes of interest: Game state trajectories.
AI decision-making function: returns the probability that, given our fixed learning regime and feature coefficients , the training process produces a policy network whose rollouts instantiate some trajectory .

What the theorems say:

If is the zero vector, the analysis is the same as before. No power-seeking tendencies. In fact, the agent tends to not gain power because it has no optimization pressure steering it towards the few action sequences which gain the agent power.

If only has negative entries, the agent definitely doesn’t hoard resources in chests. Otherwise, there’s no real reward signal and gradient descent doesn’t do a whole lot due to sparsity.

If has a positive entry, and if the learning process is good enough, agents tend to stay alive. If the learning process is good enough, there just won’t be a single feature vector with a positive entry which tends to produce non-self-empowering policies.

The analysis so far is nice to make a bit more formally, but it isn’t really pointing out anything that we couldn’t have figured out pre-theoretically. I think I can sketch out more novel reasoning, but I’ll leave that to a future post.

Consider some arbitrary set of “plausible” utility functions over outcomes. If we have the usual big set of outcome lotteries (which possibilities are, in the view of this theory, often attained via “power-seeking”), and contains copies of some smaller set via environmental symmetries , then when are there orbit-level incentives within—when will most reasonable variants of utility functions make the agent more likely to select rather than ? When the environmental symmetries can be applied to the -preferring-variants, in a way which produces another plausible objective. Slightly more formally, if, for every plausible utility function where the agent has a greater chance of selecting than of selecting , we have the membership for all . This covers the totally general case of arbitrary sets of utility function classes we might use.
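The closure condition can be checked mechanically on a finite set of candidate objectives. Everything concrete below (the three outcomes a1, a2, b, the plausible set, and the symmetry phi) is made up for the sketch, not taken from the theorems:

```python
from itertools import permutations

# Made-up three-outcome environment: the "large" set A = {a1, a2}
# and the "small" set B = {b}.
OUTCOMES = ("a1", "a2", "b")

def prefers_B(u):
    # u is a dict mapping outcome -> utility
    return u["b"] > max(u["a1"], u["a2"])

def apply_sym(u, phi):
    # Push a utility function through a permutation of outcomes.
    return {phi[o]: u[o] for o in OUTCOMES}

def closed_under(plausible, phi):
    """Closure condition: every plausible B-preferring utility function
    stays inside the plausible set after applying the symmetry phi."""
    frozen = {tuple(sorted(u.items())) for u in plausible}
    return all(tuple(sorted(apply_sym(u, phi).items())) in frozen
               for u in plausible if prefers_B(u))

# A plausible set closed under relabeling: all assignments of {0, 1, 2}.
plausible = [dict(zip(OUTCOMES, vals)) for vals in permutations((0, 1, 2))]
phi = {"b": "a1", "a1": "b", "a2": "a2"}  # embeds B's outcome into A

print(closed_under(plausible, phi))  # True
print(sum(not prefers_B(u) for u in plausible), "of", len(plausible),
      "plausible utilities prefer the larger set A")  # 4 of 6
```

With the closure check passing, the counting comes out as the theorems predict: 4 of the 6 plausible utility functions prefer the larger set.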
(And, technically, “utility function” is decorative at this point—it just stands in for a parameter which we use to retarget the AI policy-production process.) The general result highlights how ≝ { plausible objective functions } affects what conclusions we can draw about orbit-level incentives. All else equal, being able to specify more plausible objective functions for which means that we’re more likely to ensure closure under certain permutations. Similarly, adding plausible -dispreferring objectives makes it harder to satisfy , which makes it harder to ensure closure under certain permutations, which makes it harder to prove instrumental convergence.

Structural assumptions on utility really do matter when it comes to instrumental convergence:

Setting                                                    Strength of instrumental convergence
u[aoh]                                                     Nonexistent
u[OH]                                                      Strong
State-based objectives (e.g. state-based reward in mdps)   Moderate

Environmental structure can cause instrumental convergence, but (the absence of) structural assumptions on utility can make instrumental convergence go away (for optimal agents). In particular, for the mdp case, I wrote: mdps assume that utility functions have a lot of structure: the utility of a history is time-discounted additive over observations. Basically, $u(h) = \sum_t \gamma^t r(o_t)$, for some $\gamma$ and reward function $r$ over observations. And because of this structure, the agent’s average per-timestep reward is controlled by the last observation it sees. There are exponentially fewer last observations than there are observation histories. Therefore, in this situation, instrumental convergence is exponentially weaker for reward functions than for arbitrary u[OH]. This is equivalent to a featurization which takes in an action-observation history, ignores the actions, and spits out time-discounted observation counts. The utility function is then over observations (which are just states in the mdp case).
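The first two table rows can be reproduced by brute force in a tiny deterministic environment. The construction below is entirely mine (the observation names, and the convention that a dead agent keeps emitting actions while its observations stay frozen), chosen so that the dying and surviving subtrees contain equally many AOHs but different numbers of observation histories:

```python
from itertools import permutations

# Two-step toy environment: "down" at step 1 kills the agent; its
# observations freeze at "dead", but it still emits a second action,
# so the dying subtree has as many AOHs as the living one.
AOHS = [
    ("down", "dead", "up", "dead"),
    ("down", "dead", "down", "dead"),
    ("up", "alive", "up", "sun"),
    ("up", "alive", "down", "moon"),
]

def observations(aoh):           # project out the actions
    return (aoh[1], aoh[3])

OHS = sorted(set(observations(a) for a in AOHS))  # only 3 distinct OHs

def survival_fraction(histories, alive):
    """Over all assignments of distinct utilities to `histories`, the
    fraction for which the optimal history is a surviving one."""
    wins = total = 0
    for perm in permutations(range(len(histories))):
        best = max(zip(perm, histories))[1]
        wins += alive(best)
        total += 1
    return wins / total

alive_aoh = lambda a: a[1] == "alive"
alive_oh = lambda o: o[0] == "alive"
print(survival_fraction(AOHS, alive_aoh))  # 0.5 : no convergence for u_aoh
print(survival_fraction(OHS, alive_oh))    # 2/3 : convergence for u_OH
```

Exactly half of the u[aoh] assignments make dying optimal, because the dying subtree mirrors the living one action-for-action; projecting out the actions collapses both dying AOHs into one observation history, and survival wins in 2/3 of the u[OH] assignments.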
Here, the symmetries can only be over states, and not histories, and no matter how expressive the plausible state-based-reward-set is, it can’t compete with the exponentially larger domain of the observation-history-based-utility-set , and so the featurization has limited how strong instrumental convergence can get by projecting the high-dimensional u[OH] into the lower-dimensional u[State]. But when we go from u[aoh] to u[OH], we’re throwing away even more information—information about the actions! This is also a sparse projection. So what’s up? When we throw away info about actions, we’re breaking some symmetries which made instrumental convergence disappear in the u[aoh] case. In any deterministic environment, there are equally many u[aoh] which make me want to go e.g. down (and, say, die) as which make me want to go up (and survive). This is guaranteed by symmetries which swap the value of an optimal aoh with the value of an aoh going the other way. If the agent cares not about its own action histories, but about its observation histories, there are just more ways to care about going up and being alive! Twice as many ways, in fact! But when we restrict the utility function to not care about actions, now you can only modify how it cares about observation histories. Here, the aoh environmental symmetry which previously ensured balanced statistical incentives no longer enjoys closure under , and so the restricted plausible set theorem no longer works, and instrumental convergence appears when restricting from u[aoh] to u[OH].

I thank Justis Mills for feedback on a draft.

From last time:
1. The results aren’t first-person: They don’t deal with the agent’s uncertainty about what environment it’s in.
2. Not all environments have the right symmetries
☆ But most ones we think about seem to
3. [DEL:Don’t account for the ways in which we might practically express reward functions.:DEL] (This limitation was handled by this post.)
I think it’s reasonably clear how to apply the results to realistic objective functions. I also think our objective specification procedures are quite expressive, and so the closure condition will hold and the results go through in the appropriate situations.

1. It’s not hard to have this many degrees of freedom in such a small toy environment, but the toy environment is pedagogical. It’s practically impossible to have full degrees of freedom in an environment with a trillion states. ⤴
2. “At least”, and not “exactly.” If is a constant feature vector, it’s optimal to go right for every permutation of (trivially so, since ’s orbit has a single element—itself). ⤴
3. Even under my more aggressive conjecture about “fractional terminal state copy containment”, the unfeaturized situation would only guarantee 3/5-strength orbit incentives, strictly weaker than 2/3-strength. ⤴
4. Certain trivial featurizations can decrease the strength of power-seeking tendencies, too. For example, if the featurization is 2-dimensional: , this will tend to produce 1:1 survive/die orbit-level incentives, whereas the incentives for raw reward functions may be 1,000:1 or stronger. ⤴
5. There’s something abstraction-adjacent about this result (proposition D.1 in the linked Overleaf paper). The result says something like “do the grooves of the agent’s world model featurization, respect the grooves of symmetries in the structure of the agent’s environment?”, and if they do, bam, sufficient condition for power-seeking under the featurized model. I think there’s something important here about how good world-model-featurizations should work, but I’m not sure what that is yet. I do know that “the featurization should commute with the environmental symmetry” is something I’d thought—in basically those words—no fewer than 3 times, as early as summer 2021, without explicitly knowing what that should even mean. ⤴
Mastering Formulas In Excel: Which Type Of Cell Reference Preserves The Cell Address In A Formula Mastering cell references in Excel is crucial for creating accurate and efficient formulas. When writing formulas, it's important to understand the different types of cell references and how they behave when copied to other cells. The type of cell reference that preserves the exact cell address in a formula is the absolute cell reference. Understanding when and how to use this type of cell reference can greatly improve your Excel skills and save you time in the long run. Key Takeaways • Mastering cell references in Excel is crucial for creating accurate and efficient formulas. • There are different types of cell references, including relative, absolute, and mixed references. • Absolute cell references preserve the exact cell address in a formula, making them useful for certain scenarios. • Understanding and applying the appropriate type of cell reference can greatly improve Excel skills and save time. • Using cell references effectively can streamline data analysis and reporting processes in Excel. Understanding Relative Cell References When working with formulas in Excel, it is essential to understand the concept of relative cell references. This type of cell reference is widely used in Excel and is important for creating dynamic and flexible formulas. A. Definition of relative cell references Relative cell references in Excel are cell addresses that change when a formula is copied to other cells. When you use a relative cell reference in a formula and then copy that formula to another cell, the reference will adjust based on its new location. For example, if you have a formula in cell C1 that adds the contents of cell A1 to the contents of cell B1 and then copy that formula to cell C2, the formula will adjust to add the contents of cell A2 to the contents of cell B2. B. 
Examples of how relative cell references adjust when copied to other cells • Example 1: If you have a formula in cell C1 that multiplies the value in cell A1 by the value in cell B1 (=A1*B1), when you copy the formula to cell C2, it will automatically adjust to multiply the value in cell A2 by the value in cell B2 (=A2*B2). • Example 2: If you have a formula in cell D1 that sums the values in cells A1 and B1 (=A1+B1), when you copy the formula to cell D2, it will adjust to sum the values in cells A2 and B2 (=A2+B2). Understanding relative cell references is crucial for working efficiently with Excel formulas. By grasping how these references adjust when copied to other cells, you can create formulas that dynamically update and adapt to changes in your dataset. Exploring Absolute Cell References When working with formulas in Excel, it is important to understand the different types of cell references. One type of cell reference that preserves the exact cell address in a formula is the absolute cell reference. Definition of absolute cell references An absolute cell reference is denoted by adding a dollar sign ($) before the column letter and row number in a cell reference. For example, if you want to make cell A1 an absolute reference, you would write it as $A$1. • Preventing cell address changes: The use of the dollar sign in an absolute cell reference prevents the cell address from changing when the formula is copied to other cells. This means that no matter where the formula is copied, the absolute cell reference will always point to the same cell. Examples of how absolute cell references do not change when copied to other cells Let's consider a simple example to demonstrate how absolute cell references do not change when copied to other cells. Suppose we have a formula in cell B1 that multiplies the value in cell A1 by 2, using an absolute cell reference for A1. 
Had we used a relative reference, copying this formula to cell B2 would adjust it to multiply the value in A2 by 2. However, because we used an absolute cell reference for A1, the formula still points to A1, even in cell B2. • Updated formula in B2: =$A$1*2 This demonstrates how the absolute cell reference preserves the exact cell address in the formula, regardless of where it is copied within the Excel sheet. Delving into Mixed Cell References When it comes to mastering formulas in Excel, understanding the concept of cell references is crucial. In addition to absolute and relative cell references, we also have mixed cell references, which combine aspects of both. A. Definition of mixed cell references Mixed cell references in Excel are a combination of absolute and relative references. In a mixed cell reference, either the row or the column is absolute, while the other is relative. B. Examples of how mixed cell references combine aspects of both relative and absolute references • Example 1: In the formula "=A$1+B2", the reference "A$1" is a mixed cell reference. The column reference "A" is relative, while the row reference "$1" is absolute. This means that when the formula is copied to another cell, the column reference will change based on the new location, but the row reference will remain constant. • Example 2: Similarly, in the formula "=$A1+B$2", the reference "$A1" is a mixed cell reference. The column reference "$A" is absolute, while the row reference "1" is relative. This means that when the formula is copied to another cell, the column reference will remain constant, but the row reference will change based on the new location. Comparing the Three Types of Cell References When working with formulas in Excel, it is crucial to understand the different types of cell references and their respective advantages and disadvantages. 
Let's take a closer look at relative, absolute, and mixed cell references to determine which one preserves the exact cell address in a formula. A. Advantages and disadvantages of using relative, absolute, and mixed cell references 1. Relative Cell References: • Advantages: When copied across multiple cells, relative cell references adjust based on their new location, making it easier to replicate the formula. • Disadvantages: They can be problematic when you want to keep a specific cell address constant in a formula. 2. Absolute Cell References: • Advantages: Absolute cell references preserve the exact cell address in a formula, regardless of where the formula is copied. • Disadvantages: They can be cumbersome to work with when you have to manually input dollar signs ($) to denote absolute references. 3. Mixed Cell References: • Advantages: Mixed cell references allow you to lock either the row or column in a formula, providing flexibility in certain situations. • Disadvantages: They require a clear understanding of when to use the dollar sign ($) to lock the row or column. B. Key considerations when choosing the appropriate type of cell reference for a formula 1. Understanding the formula's purpose: Consider whether the formula requires the cell address to remain constant or adjust based on its new location. 2. Reusability: If the formula needs to be replicated across multiple cells, relative cell references may be more appropriate. However, if the formula should always reference a specific cell, absolute references are essential. 3. Complexity: For more complex scenarios where specific rows or columns need to be locked, mixed cell references provide the necessary flexibility. Practical Applications of Cell References in Excel Formulas When working with Excel formulas, it is essential to understand the different types of cell references and how they can be applied to streamline data analysis and reporting processes. 
By mastering cell references, you can ensure that your formulas are accurate and efficient. A. How to apply different types of cell references in common Excel formulas • Relative Cell References: This type of cell reference will change when copied to other cells. It is denoted by the absence of dollar signs before the column and row reference (e.g. A1). • Absolute Cell References: This type of cell reference remains fixed when copied to other cells. It is denoted by the presence of dollar signs before the column and row reference (e.g. $A$1). • Mixed Cell References: This type of cell reference allows for either the column or row reference to change when copied to other cells. It is denoted by the presence of a dollar sign before either the column or row reference (e.g. $A1 or A$1). B. Tips for effectively using cell references to streamline data analysis and reporting processes • Consistent Use: Ensure that you consistently use the appropriate type of cell reference in your formulas to preserve the exact cell address as needed. • Copy and Paste: When copying and pasting formulas, be mindful of the type of cell reference used and how it will behave in the new location. • Named Ranges: Utilize named ranges to simplify the use of cell references in complex formulas and to make your formulas more readable and easier to maintain. Recap of the importance of mastering cell references in Excel: Mastering cell references in Excel is crucial for ensuring the accuracy and efficiency of your formulas. By understanding the different types of cell references, such as relative, absolute, and mixed, you can preserve the exact cell address when copying or moving formulas to different cells. Final thoughts on the benefits of understanding and utilizing the different types of cell references in formulas: • Utilizing the right cell reference type can save time and prevent errors in your calculations. 
• Understanding cell references allows for more dynamic and flexible formulas that can adapt to changes in your data. • By mastering cell references, you can take full advantage of Excel's powerful functionality and improve your productivity. Overall, investing the time to master cell references in Excel will undoubtedly pay off in more accurate and efficient spreadsheet work.
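The adjustment behavior of all three reference types can be simulated outside Excel. The sketch below is a hypothetical helper (`shift_ref` is our name, not an Excel feature) that shifts a single reference the way Excel does when a formula is copied `d_rows` down and `d_cols` to the right:

```python
import re

def shift_ref(ref, d_rows, d_cols):
    """Simulate how Excel adjusts one cell reference when a formula is
    copied d_rows down and d_cols right. Parts marked '$' (absolute)
    stay fixed; relative parts shift with the copy."""
    m = re.fullmatch(r"(\$?)([A-Z]+)(\$?)(\d+)", ref)
    col_abs, col, row_abs, row = m.groups()
    if not col_abs:                       # relative column: shift by d_cols
        n = 0
        for ch in col:                    # letters -> 1-based column number
            n = n * 26 + (ord(ch) - ord("A") + 1)
        n += d_cols
        col = ""
        while n > 0:                      # column number -> letters
            n, r = divmod(n - 1, 26)
            col = chr(ord("A") + r) + col
    if not row_abs:                       # relative row: shift by d_rows
        row = str(int(row) + d_rows)
    return col_abs + col + row_abs + row

# Copying one row down and one column right:
print(shift_ref("A1", 1, 1))    # relative: both parts shift -> B2
print(shift_ref("$A$1", 1, 1))  # absolute: nothing shifts   -> $A$1
print(shift_ref("A$1", 1, 1))   # mixed: column shifts only  -> B$1
print(shift_ref("$A1", 1, 1))   # mixed: row shifts only     -> $A2
```

Running the sketch on the article's examples reproduces exactly the behavior described above: only the un-anchored parts of a reference move with the copy.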
This format (*.cft-geo) can be used to import rotationally symmetric components only. It is an XML format including the following information:
General information (mandatory)
• Length unit type: LengthMm for millimeters, LengthM for meters, LengthIn for inches
• Unshrouded flag: only required for vaned designs; 1 = unshrouded, 0 = shrouded
• xTipInlet: only required for vaned and unshrouded designs; tip length at inlet
• xTipOutlet: only required for vaned and unshrouded designs; tip length at outlet
Meridional contour information (mandatory)
• Hub contour: array of curves; at least one curve is required. Each curve contains an array of 2D points (r, z coordinates). Stretches on the rotation axis can be specified as part of the hub contour; this is required, for example, for designs without a hub (stators with pipe form).
• Shroud contour: array of curves; at least one curve is required. Each curve contains an array of 2D points (r, z coordinates).
Blades information (only required for vaned designs)
Mean lines and blade thickness data must be provided for both main and splitter blades. Only symmetric blades are supported.
• Span positions: relative position between hub and shroud (0...1); array of at least 2 float numbers.
• Mean line data: array of at least 2 curves. Each curve contains an array of 3D points (r, T, z coordinates).
• Thickness data: array of 2 curves; the first curve defines thickness data on hub, the second one on shroud. Each curve contains an array of points defined by two coordinates: x = relative point position on mean line, y = blade thickness at this position. Thickness data are required at least for both relative positions, 0 (leading edge) and 1 (trailing edge). The thickness distribution along the mean line is interpolated using all specified values.
An example file can be easily generated by exporting any CFturbo component using the "CFturbo Exchange" export interface.
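As a rough illustration, the mandatory constraints above can be checked in a few lines. Note this uses an invented in-memory dict layout for clarity, not the actual *.cft-geo XML schema; the field names are ours:

```python
def validate_geo(component):
    """Check the mandatory constraints described above on a plain-dict
    stand-in for a rotationally symmetric component (hypothetical layout,
    not the real XML schema)."""
    assert component["length_unit"] in ("LengthMm", "LengthM", "LengthIn")
    for name in ("hub_contour", "shroud_contour"):
        curves = component[name]
        assert len(curves) >= 1                          # at least one curve
        for curve in curves:
            assert all(len(pt) == 2 for pt in curve)     # 2D points (r, z)
    if component.get("vaned"):                           # blade data for vaned designs
        assert len(component["span_positions"]) >= 2     # at least 2 span values
        assert len(component["mean_lines"]) >= 2         # at least 2 mean-line curves
        assert len(component["thickness"]) == 2          # one curve on hub, one on shroud
    return True

example = {
    "length_unit": "LengthMm",
    "hub_contour": [[(0.0, 0.0), (10.0, 5.0)]],
    "shroud_contour": [[(2.0, 0.0), (12.0, 5.0)]],
}
print(validate_geo(example))  # True
```

For a real file, the safest starting point remains exporting a component through the "CFturbo Exchange" interface, as noted above.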
1. Adapt Design In the adapt phase, blocks may refine or coarsen to adapt to the evolving resolution requirements of a simulation. The main complication is enforcing the “level-jump” condition, which prohibits adjacent blocks from being in non-consecutive mesh refinement levels. (Blocks are partitioned into “levels” based on how refined they are: more highly-refined blocks are in higher-numbered levels, with the “root-level” of the simulation defined as “level 0”. The difference in resolution between any pair of successive levels L and L+1 (the “refinement factor”) is always 2 in Enzo-E.) Maintaining the level-jump condition may require refining blocks that would not otherwise be refined, or may require not coarsening blocks that would otherwise be coarsened. The process of refining blocks in a mesh hierarchy solely to maintain the level-jump condition across block faces is called balancing the mesh (not to be confused with dynamic load balancing). Figure 1 illustrates the steps used in the adapt phase. Suppose we begin with the mesh hierarchy at the left, which contains seven blocks: three in a coarse level and four in the next finer level. The first step involves applying local refinement criteria to each block; in this particular example, the center-most fine block is tagged for refinement, here indicated by a “+” in the left-most image. If we were to only refine this block, however, level jumps would be introduced across the faces marked by red lines in the center image (here we optionally include corners as “faces”). By refining the coarse blocks, these level jumps are removed. This final mesh after completing the balancing step is shown on the right. We note that blocks marked for refinement solely to maintain the level-jump condition may themselves trigger further refinement in neighboring blocks. While such cascades can repeat multiple steps, each block in the cascade is in a coarser level than its predecessor, so cascades are always guaranteed to terminate. 
However, cascades still complicate parallelizing the algorithm, since any given block may not immediately know whether it needs to refine (or not coarsen) so determining when the balancing step of the adapt phase is actually complete is non-trivial. 1.1. Revised adapt algorithm description In this section we describe a revised algorithm for the adapt phase in Enzo-E/Cello. This algorithm was first developed by Phil Miller, and is presented in his Ph.D. Dissertation, Reducing synchronization in distributed parallel programs (University of Illinois at Urbana-Champaign, 2016). The previous parallel algorithm implemented in Cello relied on Charm++’s support for “quiescence detection”, which is defined as “the state in which no processor is executing an entry point, no messages are awaiting processing, and there are no messages in-flight” (see The Charm++ Parallel Programming System) Getting this algorithm to work correctly required considerable effort and debugging, and even after several years of development on Enzo-E / Cello, users still occasionally ran into issues of level-jumps in the resulting mesh hierarchy. Miller’s algorithm avoids using quiescence detection in favor of a more direct approach. First, each block evaluates its local adapt criteria to determine whether it needs to refine, stay in the same level, or can coarsen. Next, both lower and upper bounds on mesh levels are determined for each block and communicated with neighbors. Bounds for a block may be adjusted as newer updated bounds arrive from neighboring blocks. When a block’s minimum and maximum levels match, the block’s next level is decided. All leaf blocks are guaranteed to reach this state, which can be proven by induction on the mesh level starting with the finest level (See Miller 2016). 
Before presenting the algorithm, we define the following notation:
• \(B_i\) block i
• \(B_j\) a block adjacent to block i
• \(L_i^{k}\) the level of Block i in cycle k
• \(\hat{L}_i^{k+1}\) block i’s desired next level as locally-evaluated from refinement criteria
• \(\underline{L}_{i,s}^{k+1} \leq L_i^{k+1} \leq \bar{L}_{i,s}^{k+1}\): current lower and upper level bounds (for step s), which are dynamically updated
• \(L_i^{k+1}\) the next level, which is decided when \(\underline{L}_{i,s}^{k+1} = \bar{L}_{i,s}^{k+1}\)
We can now write the two main conditions that we use to initialize and update the level bounds:
• \(|L_i^k - L_i^{k+1}| \le 1\) the (temporal) level-jump condition: a block can refine or coarsen at most once per adapt cycle
• \(|L_i^{k} - L_j^{k}| \le 1\) the (spatial) level-jump condition: refinement levels of adjacent blocks can differ by at most one
Level bounds are initialized to be \(\underline{L}_{i,0}^{k+1} \leftarrow \hat{L}_i^{k+1}\) and \(\bar{L}_{i,0}^{k+1} \leftarrow L_i^{k} + 1\). That is, the minimum level is initially the level determined by the local refinement criteria, and the maximum level is initially one level of refinement more than the current level (or the maximum allowed level in the simulation). The balancing step of the algorithm proceeds by alternately sending a block’s level bounds to its neighbors, and, having received updated bounds from its neighbors, updating the block’s own level bounds. Bounds are updated according to the following:
\(\underline{L}_{i,s+1}^{k+1} \leftarrow \max ( \underline{L}_{i,s}^{k+1}, \max_j (\underline{L}_{j,s}^{k+1} - 1))\)
\(\bar{L}_{i,s+1}^{k+1} \leftarrow \max ( \underline{L}_{i,s}^{k+1}, \max_j(\bar{L}_{j,s}^{k+1} - 1))\)
The lower bound is updated if any neighbor’s minimum bound is greater than one plus the block’s current minimum bound. 
The maximum bound, which is used to determine when the balancing algorithm terminates, is defined as the maximum of the minimum bound, and the maximum of all neighboring maximum bounds minus one. Note that in general the maximum bound can only be updated after all neighboring blocks have been heard from. Additional synchronization is required for a block to coarsen, since a block can coarsen only if all of its siblings can as well. 1.2. Revised adapt algorithm implementation To reduce the complexity of the already over-burdened Block classes, we introduce an Adapt class to maintain and update level bounds for a Block and its neighbors. The Adapt class keeps track of the current level bounds of all neighboring blocks, which is redundantly stored as a list of LevelInfo objects for each neighboring Block, and a face_level_ vector of the current level in the direction of each face. (The face_level_ representation is a carry-over from the previous algorithm, but was retained because it simplifies code that needs to access a neighbor’s level given the neighbor’s relative direction rather than absolute Index). Below summarizes the API for the newer LevelInfo section, which is used to collectively determine the next level for all blocks in the mesh. void set_rank (int rank) Set dimensionality of the problem \(1 \leq \mbox{rank} \leq 3\). Only required for initialization in test code, since Cello initializes it using cello::rank(). void set_valid (bool valid) Set whether the Adapt object is “valid” or not. Set to false when the corresponding Block is refined. “valid” is accessed internally when a block is coarsened to identify the first call triggered by child blocks. It’s set to true internally after the first call to coarsen(). void set_periodicity (int period[3]) Set the periodicity of the domain, so that the correct neighbors can be identified on domain boundaries. void set_max_level (int max_level) Set the maximum allowed mesh refinement level for the problem. 
void set_min_level (int min_level) Set the minimum allowed mesh refinement level for the problem. void set_index (Index index) Set the index of the Adapt object’s associated block. void insert_neighbor (Index index) Insert the given Index into the list of neighbors. This is a lower-level routine and should generally not be called–use refine_neighbor() instead. void insert_neighbor (Index index, bool is_sibling) Insert the given Index, and specify that the Block is a sibling. This version is used exclusively in test code in test_Adapt.cpp. void delete_neighbor (Index index) Delete the specified neighbor. This is a lower-level routine and should generally not be called–use coarsen_neighbor() instead. void reset_bounds () Reset level bounds for this block and neighbor blocks in preparation for a new adapt phase. void refine_neighbor (Index index) Update the list of neighboring blocks associated with refining the specified neighbor block. void coarsen_neighbor (Index index) Update the list of neighboring blocks associated with coarsening the specified neighbor block. void refine(Adapt adapt_parent, int ic3[3]) Update the Adapt object for a recently refined block. The block’s parent adapt object is passed in to update the neighbor lists accordingly, and which child this block is in its parent block is specified by ic3[]. void coarsen(Adapt adapt_child) Update the adapt object for a recently coarsened block. Must be called exactly once for each coarsened child (in any order), specified by the child block’s associated Adapt object. This is required to update the neighbor lists correctly. void initialize_self(Index index, int level_min, int level_now) Initialize the adapt object with the given Block index and level bounds. void update_neighbor(Index index, int level_min, int level_max, bool can_coarsen) Update the specified neighbor block’s level bounds and “can_coarsen” attribute. 
void update_bounds() Reevaluate the block’s level bounds given the current level bounds of all neighbors. bool is_converged() Return whether the level bounds of this block have converged to a single value (that is, min_level == max_level). bool neighbors_converged() Return whether all neighboring blocks’ level bounds have converged. void get_level_bounds(int * level_min, int * level_max, bool * can_coarsen) Get the current level bounds and “can_coarsen” attribute for this Block. Must be preceded by a call to “update_bounds()”. bool get_neighbor_level_bounds(Index index, int * level_min, int * level_max, bool * can_coarsen) Return the level bounds and “can_coarsen” attribute for the specified neighbor. int level_min() Return the current lower bound on this block’s refinement level. int level_max() Return the current upper bound on this block’s refinement level. bool can_coarsen() Return the current value of “can_coarsen” for this block. int num_neighbors() Return the number of neighbors for this block. int is_sibling(int i) Return whether the ith neighbor is a sibling of this block (whether the neighbor block and this block share the same parent). Index index() Return the Block index associated with this Adapt object. Index index(int i) Return the Block index for the ith neighbor block.
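To make the bound-initialization and update rules concrete, here is a toy 1-D sketch in Python. This is a hypothetical simplification, not the Cello implementation: blocks form a chain with at most two neighbors each, bounds are iterated in place rather than exchanged by messages, and the desired levels are assumed to already satisfy the temporal level-jump condition.

```python
def balance(levels, desired, max_level):
    """Toy 1-D version of the level-bound iteration. `levels` are the
    current block levels L^k, `desired` the locally-evaluated next levels
    (each within +/-1 of the current level). Returns the decided levels."""
    n = len(levels)
    lo = list(desired)                            # lower bound: local refinement criteria
    hi = [min(l + 1, max_level) for l in levels]  # upper bound: current level + 1
    changed = True
    while changed:
        changed = False
        for i in range(n):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            # lower bound: a neighbor forced to level m forces me to m - 1
            new_lo = max([lo[i]] + [lo[j] - 1 for j in nbrs])
            # upper bound: max of my lower bound and neighbors' upper bounds - 1
            new_hi = max([new_lo] + [hi[j] - 1 for j in nbrs])
            if (new_lo, new_hi) != (lo[i], hi[i]):
                lo[i], hi[i] = new_lo, new_hi
                changed = True
    assert lo == hi                               # converged: next levels are decided
    return lo

# A block refining to level 3 forces its level-1 neighbor up to level 2:
print(balance([2, 1, 1, 1], [3, 1, 1, 0], max_level=3))  # [3, 2, 1, 0]
```

The example shows the balancing cascade from the design discussion: the middle block never asked to refine, but its minimum bound is pushed up by its finer neighbor so that no face ends up with a level jump greater than one.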
An application of data encryption technique using random number generator Verma, Sharad Kumar Mewar University, Rajasthan, India ([email protected]) Ojha, D. B. Mewar Institute of Technology, Ghaziabad, UP, India ([email protected]) Received: 19 January 2012 Revised: 5 February 2012 Accepted: 7 February 2012 Available Online: 11 February 2012 DOI: 10.5861/ijrsc.2012.v1i1.72 ISSN: 2243-772X Online ISSN: 2243-7797 Coding theory is one of the most important and direct applications of information theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. Information theoretic concepts apply to cryptography and cryptanalysis. Cryptography is the study of sending and receiving secret messages. With the widespread use of information technologies and the rise of digital computer networks in many areas of the world, securing the exchange of information has become a crucial task. In the present paper an innovative technique for data encryption is proposed based on random sequence generation. The new algorithm provides data encryption at two levels and hence security against cryptanalysis is achieved at relatively low computational overhead. Verma, S. K. & Ojha, D. B. An application of data encryption technique using random number generator 1. Introduction Cryptography, or cryptology (from Greek words meaning “hidden, secret” and “writing” or “study”, respectively), is the practice and study of techniques for secure communication in the presence of third parties called adversaries (Liddell & Scott, 1984). 
More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries (Bellare & Rogaway, 2005) and which are related to various aspects in information security such as data confidentiality, data integrity, and authentication (Menezes, van Oorschot, & Vanstone, 1997). Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. 1.1 Cryptographic goals Generally, a good cryptography scheme must satisfy a combination of four different goals (Cole, Fossen, Northcutt, & Pomeranz, 2003). 1. Authentication: Allowing the recipient of information to determine its origin, that is, to confirm the sender's identity. This can be done through something you know or you have. Typically provided by digital signature. 2. Non-repudiation: Ensuring that a party to a communication cannot deny the authenticity of their signature on a document or the sending of a message that they originated. Typically provided by digital signature. 3. Data integrity: A condition in which data has not been altered or destroyed in an unauthorized manner. Typically provided by digital signature. 4. Confidentiality: Keeping the data involved in an electronic transaction private. Typically provided by encryption There are two main types of cryptography. Those are public-key and symmetric-key. Public-key is a form of cryptography in which two digital keys are generated, one is private, which must not be known to another user, and one is public, which may be made available in public. These keys are used for either encrypting or signing messages. The public-key is used to encrypt a message and the private-key is used to decrypt the message. However, in another scenario, the private-key is used to sign a message and the public-key is used to verify the signature. 
The two keys are related by a hard one-way (irreversible) function, so it is computationally infeasible to determine the private key from the public key. Since the security of the private key is critical to the security of the cryptosystem, it is very important to keep the private key secret. This public-key system has the problem of being slow. Figure 1. Symmetric-key cryptography, where the same key is used both for encryption and decryption (Chandra Sekhar, Sudha, & Prasad Reddy, 2007) Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. 1. Data compression (source coding): There are two formulations for the compression problem: A. Lossless data compression: the data must be reconstructed exactly; B. Lossy data compression: allocates bits needed to reconstruct the data, within a specified fidelity level measured by a distortion function. This subset of Information theory is called rate–distortion theory. 2. Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. 
In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models. 1.2 Source theory Any process that generates successive messages can be considered a source of information. A memory-less source is one in which each message is an independent identically-distributed random variable, whereas the properties of ergodicity and stationarity impose more general constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory. 1.3 Rate Information rate is the average entropy per symbol. For memory-less sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is r = lim_{n→∞} H(X_n | X_{n−1}, X_{n−2}, …, X_1); that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is r = lim_{n→∞} (1/n) H(X_1, X_2, …, X_n); that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result. It is common in information theory to speak of the “rate” or “entropy” of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding. 1.4 Channel capacity Communications over a channel - such as an Ethernet cable - is the primary motivation of information theory. As anyone who's ever used a telephone (mobile or landline) knows, however, such channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. 
How much information can one hope to communicate over a noisy (or otherwise imperfect) channel? Consider the communications process over a discrete channel. A simple model of the process is shown below: Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y | x) be the conditional probability distribution function of Y given X. We will consider p(y | x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity, C = max_{f} I(X; Y). This capacity has the following property, related to communicating at information rate R (where R is usually bits per symbol): For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error. Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity. 2. Data encryption using random number A cipher is an algorithm for performing encryption (and the reverse, decryption) - a series of well-defined steps that can be followed as a procedure. Classical ciphers are based around the notions of character substitution and transposition. Messages are sequences of characters taken from some plaintext alphabet (e.g. 
the letters A to Z) and are encrypted to form sequences of characters from some cipher text alphabet. The plaintext and cipher text alphabets may be the same. Substitution ciphers replace plaintext characters with cipher text characters. For example, if the letters of the alphabet A . . Z are indexed by 0 . . . 25, then a Caesar cipher might replace a letter with index k by the letter with index (k + 3) mod 26. Thus, the word “JAZZ” would become “MDCC”. Transposition ciphers work by shuffling the plaintext in certain ways. Thus, reversing the order of letters in successive blocks of four would encrypt “CRYPTOGRAPHY” as “PYRCRGOTYHPA”. Modern crypto-systems have now supplanted the classical ciphers but cryptanalysis of classical ciphers is the most popular cryptological application for meta-heuristic search research. The reasons are probably mixed. The basic concepts of substitution and transposition are still widely used today (though typically using blocks of bits rather than characters) and so these ciphers form simple but plausible test beds for exploratory research. Problems of varying difficulty can easily be created (e.g. by altering the key size). One cannot know how correct a decrypted text is without knowing the plaintext. Instead, the degree to which decrypted text has the distributional properties of natural language is taken as a surrogate measure of correctness of the decryption key. In English text the letter “E” will usually occur more than any other. Similarly, the pair (bigram) “TH” will occur frequently, as will the triple (trigram) “THE”. In contrast, the occurrence of the pair “AE” is less common and the occurrence of “ZQT” is either a rare occurrence of an acronym or else indicates a terrible inability to spell. The frequencies with which these various N-grams appear in plaintext are used as the basis for determining the correctness of the key which produced that plaintext. 
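The substitution and transposition examples above can be sketched in a few lines of Python (a toy illustration of the classical ciphers, not the paper's algorithm; it assumes uppercase A-Z input):

```python
def caesar(plaintext, shift=3):
    # Substitution: replace the letter with index k by the letter
    # with index (k + shift) mod 26.
    return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A")) for c in plaintext)

def reverse_blocks(plaintext, block=4):
    # Transposition: reverse the order of letters in successive blocks.
    return "".join(plaintext[i:i + block][::-1] for i in range(0, len(plaintext), block))

print(caesar("JAZZ"))                  # MDCC
print(reverse_blocks("CRYPTOGRAPHY"))  # PYRCRGOTYHPA
```

Both outputs match the worked examples in the text ("JAZZ" → "MDCC", "CRYPTOGRAPHY" → "PYRCRGOTYHPA").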
The more the frequencies resemble expected frequencies, the closer the underlying decryption key is assumed to be to the actual key. With probabilistic encryption algorithms, a cryptanalyst can no longer encrypt random plain texts looking for correct cipher text. Since multiple cipher texts will be developed for one plain text, even if he decrypts the message to plain text, he does not know how far he has guessed the message correctly. Also, the cipher text will always be larger than the plain text. The new encryption algorithm is based on the concept of the polyalphabetic cipher, which is an improvement over the monoalphabetic technique. In this technique each character in the plain text is replaced using a random sequence generator.

Random number generator using quadruple vector: For the generation of the random numbers a quadruple vector is used. The quadruple vector T is generated for 4^4 values, i.e. for the 0-255 ASCII values. A recurrence matrix A is used to generate the random sequence for the 0-255 ASCII characters by multiplying r = [A] * [T] and considering the values mod 4. The random sequence generated using the formula [4^0 4^1 4^2] * r is as follows:

Random = [ 0 16 32 48 5 21 37 53 10 26 42 58 15 31 47 63 4 20 36 52 9 25 41 57 14 30 46 62 3 19 35 51 8 4 40 56 13 29 45 61 2 18 34 50 7 23 39 55 12 28 44 60 1 17 33 49 6 22 38 54 11 27 43 59 0 16 32 63 4 20 36 52 9 5 41 57 14 30 46 62 3 19 35 51 8 24 40 56 13 29 45 61 2 18 34 50 7 23 39 55 12 28 44 60 1 17 33 49 6 22 38 54 11 27 43 59 ]

3. Encryption and decryption process: To avoid the result being guessed by combination and permutation, we can offset the result by some simple rules, as shown in the following (Chandra Sekhar, Sudha, & Prasad Reddy, 2007):
Case 1. Offset by a constant value: HOW ARE U + n (e.g. n=10) = RYa*K[O*CY-
Case 2. Offset by a polynomial function: HOW ARE U + [3^8 3^7 3^6 3^5 3^4 3^3 3^2 3^1 3^0]
1. A recurrence matrix is used as a key. Let it be A.
2.
Generate a "quadruple vector" T for 4^4 values, i.e., from 0 to 255.
3. Multiply r = A * T.
4. Consider the values mod 4.
5. A sequence is generated using the formula [4^0 4^1 4^2] * r.
6. This sequence is used as a key.
7. Convert the plain text to equivalent ASCII values.
8. Add the key to the individual numerical values of the message.
9. Now offset the values using the offset rules.
10. This is the cipher text generated.
11. For decryption, the key is subtracted from the cipher text and the offset rule is applied in reverse to get the original message.

3.2 Example:
3.2.1 Encryption
1. Plain text: HOW ARE U
2. The equivalent ASCII characters are [72 79 87 32 65 82 69 32 89 79 85]
3. From the random sequence the key is chosen as [0 16 32 48 5 21 37 53 10 26 42]
4. Adding the key to the equivalent ASCII string of the plain text we get Ci = [72 95 119 80 70 103 106 85 99 105 127]
5. Using offset rule 2:
6. C'i1 = [59121 19778 6680 2267 799 346 187 112 108 108 128]
7. Adjusting to 255 we get C'i1 = [216 143 50 227 34 91 187 112 108 108 128]
8. Hence the cipher text would be ‡ Ấ 2 π“ [ ╗p l l Ç

3.2.2 Decryption
1. C'i1 = [216 143 50 227 34 91 187 112 108 108 128]
2. By using the offset rule we get Ci = [72 95 119 80 70 103 106 85 99 105 127]
3. Subtracting the key from the cipher text we get [72 79 87 32 65 82 69 32 89 79 85]
4. This is the chosen plain text: HOW ARE U

4. Conclusions
In the new algorithm a quadruple vector is considered. A mod function is used on the product of the matrix key and the quadruple vector, so the computational overhead is very low. It is almost impossible to extract the original information in the proposed method even if the algorithm is known. In block cipher algorithms, the plain text is converted into cipher text after a number of rounds, which makes the computation more complex.

5. References:
Bellare, M. & Rogaway, P. (2005). Introduction to modern cryptography. Retrieved January 5, 2012, from
Certicom Corp. (2004).
The elliptic curve cryptosystem for smart cards. Retrieved January 5, 2012, from
Chandra Sekhar, Sudha, & Prasad Reddy (2007). ... random number generation. 2007 IEEE International Conference on Granular Computing (pp. 569-579), San Jose, CA: USA.
Cole, E., Fossen, J., Northcutt, S., & Pomeranz, H. (2003). SANS security essentials with CISSP CBK Version 2.1. USA: SANS Press.
Liddell, H. G., & Scott, R. (1984). Greek-English lexicon. Oxford University Press.
Menezes, A. J., van Oorschot, P. C. & Vanstone, S. A. (1997). Handbook of applied cryptography (vol. 6). CRC Press.
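The worked example in Section 3.2 can be reproduced numerically. Two details are assumptions inferred from the published numbers rather than stated explicitly in the text: the polynomial offsets are taken as the decreasing powers 3^10 ... 3^0 (the ASCII list actually spells "HOW ARE YOU", eleven characters), and "adjusting to 255" is read as reduction modulo 255. Under those assumptions, the sketch below reproduces every intermediate value:

```python
ascii_vals = [ord(c) for c in "HOW ARE YOU"]      # [72, 79, 87, 32, 65, 82, 69, 32, 89, 79, 85]
key = [0, 16, 32, 48, 5, 21, 37, 53, 10, 26, 42]  # first 11 values of the random sequence

# Step 4: add the key to the ASCII values.
ci = [a + k for a, k in zip(ascii_vals, key)]

# Steps 5-6 (assumed offset rule 2): add decreasing powers of 3, 3^10 down to 3^0.
offsets = [3 ** (len(ci) - 1 - i) for i in range(len(ci))]
ci1 = [c + o for c, o in zip(ci, offsets)]

# Step 7 (assumed): "adjusting to 255" = reduce modulo 255.
cipher = [c % 255 for c in ci1]
print(cipher)   # [216, 143, 50, 227, 34, 91, 187, 112, 108, 108, 128]
```

Note that reduction mod 255 discards information, which is why the paper's decryption step has to recover Ci by other means before subtracting the key.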
AP Calculus BC 101: Knowing the Basics
Furthering mathematical and calculus studies is now made possible via AP Calculus courses. A student who wants to earn course credit that they can use at the university of their choice should consider taking AP Calculus BC during their high school years. This goes especially for students who want to major in a college degree that involves calculus or more advanced mathematics. But first, know the basics of AP Calculus BC: the usual lessons, topics, and syllabus parts involved in AP Calculus BC lessons, and the different textbooks that you can use as source materials when studying and reviewing for the Advanced Placement exam.

What is AP Calculus BC?
Contrary to popular notion, AP Calculus BC is not just an enhancement of the other AP course, AP Calculus AB. Instead, it takes off from some of the topics discussed in that course, recaps past lessons that have been covered in AP Calculus AB, and builds on them with additional topics in the syllabus. AP Calculus BC is given as a full-year course that concentrates on calculus of functions of a single variable. However, before a student can take this AP course, he or she should have already completed four years of college-preparatory secondary mathematics. This is because this type of lesson is tailored for students who are about to enter college or university. As such, some of the lessons that a student should already know would be algebra, trigonometry, analytic geometry, and elementary functions, among others. Knowledge of the properties, algebra, and graphs of functions is a main prerequisite for those who want to study calculus, since it allows them to understand functions and apply them in the equations and lessons of a calculus class.

Why Take Up AP Calculus BC?
Several students who take up AP Calculus BC have specific goals in mind.
Whether it's to earn credited hours for college or to gain an extra edge when applying to different universities, AP Calculus BC looks great on a student's transcript, especially for a math-related college course. However, the College Board also has a specific set of goals which it wants the AP Calculus BC course and exam to achieve to benefit the students. Among these goals is to give students a better understanding of the relationships and representations of functions, whether graphical, analytical, numerical, or verbal. A student should also have a solid understanding of derivatives in relation to solving problems, of definite integrals, and of the relationship between these two topics through the Fundamental Theorem of Calculus. On top of all of this, a student should be able to communicate these concepts through oral or written means.

Sources for AP Calculus BC
When studying for AP Calculus BC, it is best to find and use textbooks that provide college-level instruction and meet the requirements of AP Calculus BC. The following is a list of some of the textbooks currently being used in AP Calculus BC classes, as well as in preparation for the AP Exam:
• Calculus: Early Transcendentals Single and Multivariable, Eighth Edition by Howard A. Anton, Irl Bivens, and Stephen Davis
• Barron's AP Calculus Advanced Placement Examination: Review of Calculus AB and BC, 6th edition by Shirley O. Hockett
• Calculus: Graphical, Numerical, Algebraic, 3rd edition by Ross Finney, Franklin Demana, Bert Waits, and Daniel Kennedy
• Cracking the AP Calculus AB & BC Exams, 2011 edition by David S. Kahn
• Calculus, Concepts, and Calculators by George Best, Stephen Carter, and Douglas Crabtree
• Calculus by Ron Larson, Bruce Edwards, and Robert Hostetler
Additional topics
tf.clip_by_norm | TensorFlow v2.15.0.post1

Clips tensor values to a maximum L2-norm.

View aliases: Compat aliases for migration. See Migration guide for more details.

tf.clip_by_norm(
    t, clip_norm, axes=None, name=None
)

Given a tensor t, and a maximum clip value clip_norm, this operation normalizes t so that its L2-norm is less than or equal to clip_norm, along the dimensions given in axes. Specifically, in the default case where all dimensions are used for calculation, if the L2-norm of t is already less than or equal to clip_norm, then t is not modified. If the L2-norm is greater than clip_norm, then this operation returns a tensor of the same type and shape as t with its values set to:

t * clip_norm / l2norm(t)

In this case, the L2-norm of the output tensor is clip_norm. As another example, if t is a matrix and axes == [1], then each row of the output will have L2-norm less than or equal to clip_norm. If axes == [0] instead, each column of the output will be clipped.

Code example:

some_nums = tf.constant([[1, 2, 3, 4, 5]], dtype=tf.float32)
tf.clip_by_norm(some_nums, 2.0).numpy()
array([[0.26967996, 0.5393599 , 0.80903983, 1.0787199 , 1.3483998 ]],
      dtype=float32)

This operation is typically used to clip gradients before applying them with an optimizer. Most gradient data is a collection of different shaped tensors for different parts of the model. Thus, this is a common usage:

# Get your gradients after training
loss_value, grads = grad(model, features, labels)
# Apply some clipping
grads = [tf.clip_by_norm(g, norm) for g in grads]
# Continue on with training

Args:
t: A Tensor or IndexedSlices. This must be a floating point type.
clip_norm: A 0-D (scalar) Tensor > 0. A maximum clipping value, also floating point.
axes: A 1-D (vector) Tensor of type int32 containing the dimensions to use for computing the L2-norm. If None (the default), uses all dimensions.
name: A name for the operation (optional).

Returns:
A clipped Tensor or IndexedSlices.

Raises:
ValueError: If the clip_norm tensor is not a 0-D scalar tensor.
TypeError If dtype of the input is not a floating point or complex type.
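The documented semantics can be mimicked in plain NumPy (a sketch of the behavior described above, not TensorFlow's implementation; it guards against a zero norm with a small epsilon):

```python
import numpy as np

def clip_by_norm(t, clip_norm, axes=None):
    # L2 norm over the requested axes (all axes by default), keeping
    # dims so the scale factor broadcasts against t.
    axis = None if axes is None else tuple(axes)
    l2norm = np.sqrt(np.sum(np.square(t), axis=axis, keepdims=True))
    # Scale down only where the norm exceeds clip_norm; otherwise leave values as-is.
    scale = np.where(l2norm > clip_norm, clip_norm / np.maximum(l2norm, 1e-12), 1.0)
    return t * scale

some_nums = np.array([[1., 2., 3., 4., 5.]])
clipped = clip_by_norm(some_nums, 2.0)
print(clipped)   # ≈ [[0.26968 0.53936 0.80904 1.07872 1.34840]]
```

Passing `axes=[1]` reproduces the row-wise clipping described above, and a tensor whose norm is already below `clip_norm` is returned unchanged.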
How to Analyze a Company Using Financial Ratios an...

Analyzing a company is essential whether you're an investor, a business owner, or just curious about how a business operates. One of the best ways to understand a company's health is by using financial ratios. In this article, we'll walk you through the basics of company analysis, explain key financial ratios, and show you how to use the Financial Modeling Prep (FMP) API to get the data you need.

Why Analyze a Company?
Before diving into numbers, it's important to understand why analyzing a company matters:
• Investment Decisions. Helps you decide whether to buy, hold, or sell a stock.
• Performance Tracking. Measures how well a company is doing over time.
• Comparing to closest peers. Pick great stocks based on peer analysis.

Understanding Financial Ratios
Financial ratios are simple calculations that help you evaluate different aspects of a company's performance. They need to be compared with those of the company's closest peers and also examined individually to see how strong the company is. They are divided into several categories:

1. Liquidity Ratios. These ratios measure a company's ability to pay its short-term debts.
• Current Ratio = Current Assets ÷ Current Liabilities
This ratio shows how many dollars in current assets are available to cover each dollar of current liabilities.
Example: If a company has $200,000 in current assets and $100,000 in current liabilities, the current ratio is 2.0. This means the company has $2 for every $1 it owes in short-term debt.
• Quick Ratio (Acid-Test Ratio) = (Current Assets - Inventory) ÷ Current Liabilities
This ratio provides a more stringent measure than the current ratio by excluding inventory, which may not be quickly convertible to cash.
Example: If a company has $200,000 in current assets, $50,000 in inventory, and $100,000 in current liabilities, the quick ratio is (200,000 - 50,000) ÷ 100,000 = 1.5.
This means the company has $1.50 in liquid assets for every $1 it owes.
• Cash Ratio = Cash and Cash Equivalents ÷ Current Liabilities
This is the most conservative liquidity ratio, measuring only the most liquid assets.
Example: If a company has $80,000 in cash and cash equivalents and $100,000 in current liabilities, the cash ratio is 0.8. This means the company has $0.80 in cash for every $1 it owes.

2. Profitability Ratios. These ratios show how well a company is generating profit.
• Net Profit Margin = Net Income ÷ Revenue
This ratio indicates how much profit a company makes for every dollar of revenue.
Example: If a company makes $50,000 in profit from $200,000 in sales, the net profit margin is 25%. This means the company keeps $0.25 from every $1 of sales after all expenses.
• Return on Assets (ROA) = Net Income ÷ Total Assets
ROA measures how efficiently a company uses its assets to generate profit.
Example: If a company earns $100,000 in net income and has $500,000 in total assets, the ROA is 20%. This means the company generates $0.20 in profit for every $1 of assets.
• Return on Equity (ROE) = Net Income ÷ Shareholders' Equity
ROE indicates how effectively a company uses shareholders' equity to generate profit.
Example: If a company has $100,000 in net income and $400,000 in shareholders' equity, the ROE is 25%. This means the company generates $0.25 in profit for every $1 of equity.

3. Leverage Ratios. These ratios assess how much debt a company is using to finance its assets. High leverage can indicate higher risk.
• Debt-to-Equity Ratio = Total Debt ÷ Shareholders' Equity
This ratio shows the proportion of debt a company is using relative to its equity.
Example: A debt-to-equity ratio of 1.5 means the company uses $1.50 in debt for every $1 of equity.
• Interest Coverage Ratio = Earnings Before Interest and Taxes (EBIT) ÷ Interest Expenses
This ratio measures a company's ability to pay interest on its debt.
Example: If a company has an EBIT of $150,000 and interest expenses of $50,000, the interest coverage ratio is 3.0. This means the company earns $3 for every $1 of interest expense.
• Debt Ratio = Total Debt ÷ Total Assets
This ratio indicates the percentage of a company's assets that are financed by debt.
Example: If a company has $300,000 in total debt and $600,000 in total assets, the debt ratio is 0.5 or 50%. This means half of the company's assets are financed by debt.

4. Efficiency Ratios. These ratios evaluate how effectively a company uses its assets and manages its operations.
• Asset Turnover Ratio = Revenue ÷ Total Assets
This ratio measures how efficiently a company uses its assets to generate sales.
Example: If a company generates $300,000 in sales with $150,000 in assets, the asset turnover ratio is 2.0. This means the company generates $2 in sales for every $1 of assets.
• Inventory Turnover Ratio = Cost of Goods Sold (COGS) ÷ Average Inventory
This ratio shows how many times a company's inventory is sold and replaced over a period.
Example: If a company has a COGS of $400,000 and an average inventory of $100,000, the inventory turnover ratio is 4.0. This means the inventory turns over four times a year.
• Receivables Turnover Ratio = Net Credit Sales ÷ Average Accounts Receivable
This ratio measures how effectively a company collects its receivables.
Example: If a company has net credit sales of $500,000 and average accounts receivable of $50,000, the receivables turnover ratio is 10.0. This means the company collects its receivables ten times a year.

5. Market Ratios. These ratios relate to the company's stock performance.
• Earnings Per Share (EPS) = Net Income ÷ Number of Outstanding Shares
EPS indicates the portion of a company's profit allocated to each outstanding share of common stock.
Example: If a company earns $100,000 and has 10,000 shares, the EPS is $10. This means each share earns $10.
• Price-Earnings Ratio (P/E) = Market Price per Share ÷ Earnings Per Share
The P/E ratio shows what the market is willing to pay for a company's earnings.
Example: If a company's stock is priced at $50 and its EPS is $10, the P/E ratio is 5. This means investors are willing to pay $5 for every $1 of earnings.
• Dividend Yield = Annual Dividends per Share ÷ Market Price per Share
This ratio measures how much a company pays out in dividends each year relative to its stock price.
Example: If a company pays $2 in annual dividends per share and the stock price is $40, the dividend yield is 5%. This means investors earn a 5% return from dividends alone.

Using the Financial Modeling Prep API for Financial Ratios
Gathering accurate financial data is crucial for calculating these ratios. The Financial Modeling Prep (FMP) API provides easy access to a wealth of financial information, including financial ratios.

I. Here's how you can use it with an API request:
Step 1: Sign Up for the FMP API
First, visit the Financial Modeling Prep website and sign up for a free account. Once registered, you'll receive an API key that you'll use to access the data.
Step 2: Choose the Financial Ratios Endpoint
The FMP API offers various endpoints for different types of data. For financial ratios, you'll use the Ratios API endpoint. This endpoint provides a range of ratios for a specified company.
Step 3: Make an API Request
To get financial ratios for a company, you'll need its stock ticker symbol. For example, let's use Apple, whose ticker symbol is AAPL. Here's a sample API request URL:
https://financialmodelingprep.com/api/v3/ratios/AAPL?period=quarter&apikey=YOUR_API_KEY
Replace `YOUR_API_KEY` with the API key you received during sign-up. Now you can use this link to fetch data in your custom code.
Step 4: Understand the Response
When you make the request, the API will return data in JSON format.
Here's a simplified example of what the response might look like:
[
  {
    "date": "2023-12-31",
    "priceEarningsRatio": 25.5,
    "debtToEquity": 1.2,
    "currentRatio": 1.5,
    "netProfitMargin": 22.3,
    "returnOnAssets": 15.4,
    "returnOnEquity": 18.7
  }
]
Each object in the array represents financial ratios for a specific date. You can use these ratios to analyze the company's performance over time.

II. Here's how you can use FMP data with FMP Playground:
Step 1: Go to Financial Modeling Prep Playground
Extract financial ratios data in CSV, text or JSON formats and export the data into Excel, for example.
Step 2: Open FMP Playground
On the left side of the screen find the FINANCIAL STATEMENTS field, click on it and then click on the RATIOS field.
Step 3: Input the stock ticker
In the Symbol field input the stock ticker, choose your desired Period and extract the data.
Step 4: Export your findings
In the top right corner find the Export button to export your findings in CSV, JSON or Text formats.

Analyze the Data
Once you have the data, you can calculate and compare the financial ratios. Here's how you might interpret some of the data:
- Price-Earnings Ratio (P/E). A P/E of 25.5 suggests that investors are willing to pay $25.50 for every $1 of earnings. Compare this with industry averages to see if the stock is over- or undervalued.
- Debt-to-Equity Ratio. At 1.2, the company has $1.20 in debt for every $1 of equity. This indicates the level of financial leverage the company is using.
- Current Ratio. A current ratio of 1.5 means the company can cover its short-term liabilities 1.5 times with its short-term assets. Generally, a ratio above 1 is good.
- Net Profit Margin. A margin of 22.3% shows that the company keeps $22.30 from every $100 in sales after all expenses.

Putting It All Together
By combining your understanding of financial ratios with the data from the FMP API, you can perform a comprehensive analysis of a company. Here's a simple process to follow:
1. Gather Data.
Use the FMP API to collect the latest financial ratios.
2. Calculate Ratios. If needed, calculate additional ratios based on the data provided.
3. Compare. Look at how the company's ratios compare to industry standards or competitors.
4. Trend Analysis. Examine how the ratios have changed over time to identify trends.
5. Make Decisions. Use your analysis to decide whether to buy or sell shares.

Analyzing a company using financial ratios is a powerful method to understand its financial health and performance. The Financial Modeling Prep API makes it easy to access the data you need without manually digging through financial statements. By following the steps outlined in this article, you can start analyzing companies effectively and make more informed financial decisions.
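As an illustration, the sample response shown earlier can be screened programmatically. This is a sketch using the field names from that sample JSON; the thresholds are illustrative assumptions, not recommendations:

```python
sample_response = [{
    "date": "2023-12-31",
    "priceEarningsRatio": 25.5,
    "debtToEquity": 1.2,
    "currentRatio": 1.5,
    "netProfitMargin": 22.3,
    "returnOnAssets": 15.4,
    "returnOnEquity": 18.7,
}]

def screen(ratios):
    # Apply the rules of thumb discussed above: liquidity above 1,
    # moderate leverage, and positive profitability.
    return {
        "liquidity_ok": ratios["currentRatio"] > 1.0,
        "leverage_ok": ratios["debtToEquity"] < 2.0,
        "profitable": ratios["netProfitMargin"] > 0.0,
    }

latest = sample_response[0]   # most recent period first
print(screen(latest))
```

In a real script you would fetch `sample_response` from the Ratios endpoint URL shown above and loop over the array for trend analysis.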
Question #4dc1e | Socratic
Question #4dc1e
1 Answer
"F ∝ a" says that force and acceleration are proportional; therefore: triple the force and you will triple the acceleration. Newton's Second Law is often written F = m*a, but that is not the way Newton wrote it. He wrote it with the proportionality sign. The mass, m, of the object being accelerated is the constant of proportionality that allows us to use the equal sign. I hope this helps
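Numerically, with an assumed mass as the constant of proportionality (the values here are purely illustrative):

```python
m = 2.0              # kg, illustrative mass (the constant of proportionality)
F = 6.0              # N, illustrative force
a = F / m            # acceleration from F = m*a
a_tripled = (3 * F) / m
print(a, a_tripled)  # 3.0 9.0 -- tripling the force triples the acceleration
```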
This example computes the vibration modes, eigenvalues, and frequencies for a circular drum or membrane. The membrane is modeled by the unit circle and assumed to be attached to a rigid frame. The Poisson PDE equation is used with the Eigenvalue solver to compute the solution. This model is available as an automated tutorial by selecting Model Examples and Tutorials... > Classic PDE > Vibrations of a Circular Membrane from the File menu. Or alternatively, follow the video tutorial or step-by-step instructions below. 1. To start a new model click the New Model toolbar button, or select New Model... from the File menu. 2. Select the Poisson Equation physics mode from the Select Physics drop-down menu. 3. Press OK to finish the physics mode selection. 4. To create a circle or ellipse, first click on the Create circle/ellipse Toolbar button. Then left click in the main plot axes window, and hold down the mouse button. Move the mouse pointer to draw the shape outline, and release the button to finalize the shape. 5. Select E1 in the geometry object Selection list box. 6. To modify and edit the selected ellipse, click on the Inspect/edit selected geometry object Toolbar button to open the Edit Geometry Object dialog box. 7. Enter 0 0 into the center edit field. 8. Enter 1 into the x[radius] edit field. 9. Enter 1 into the y[radius] edit field. 10. Press OK to finish and close the dialog box. 11. Switch to Grid mode by clicking on the corresponding Mode Toolbar button. 12. Enter 0.1 into the Grid Size edit field. 13. Press the Generate button to call the grid generation algorithm. 14. Switch to Equation mode by clicking on the corresponding Mode Toolbar button. 15. Press OK to finish the equation and subdomain settings specification. 16. Switch to Boundary mode by clicking on the corresponding Mode Toolbar button. 17. Select all boundaries (1-4) in the Boundaries list box. 18. Select Dirichlet boundary condition from the Poisson Equation drop-down menu. 19. 
Enter 0 into the Dirichlet coefficient edit field. 20. Press OK to finish the boundary condition specification. 21. Switch to Solve mode by clicking on the corresponding Mode Toolbar button. Open the Solver Settings dialog box, and select the Eigenvalue solver, which by default will try to find the six smallest eigenvalues and corresponding eigenvectors. 1. Press the Settings Toolbar button. 2. Select Eigenvalue from the Solution and solver type drop-down menu. 3. Press the Solve button. Plot the first, second, and fourth modes and confirm that they feature a corresponding number of peaks and troughs. 1. Press the Plot Options Toolbar button. 2. Select the Height Expression check box. 3. Select the fourth solution (25.9617 (0.810936 Hz)) from the Available solutions/eigenvalues (frequencies) drop-down menu. 4. Press OK to plot and visualize the selected postprocessing options. From the analytical solution, the frequencies of the second and third modes should be a factor of 1.59 higher than the first, the fourth and fifth a factor of 2.14, and the sixth 2.30. The vibrations of a circular membrane classic PDE model has now been completed and can be saved as a binary (.fea) model file, or exported as a programmable MATLAB m-script text file, or GUI script (.fes) file.
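Those analytical factors come from the zeros j_{m,n} of the Bessel functions J_m: the eigenfrequencies of a unit circular membrane with a fixed rim are proportional to these zeros, and the j_{1,1} and j_{2,1} modes are each doubly degenerate, which is why modes two/three and four/five share a frequency. A quick check using the standard tabulated zero values:

```python
# First Bessel-function zeros j_{0,1}, j_{1,1}, j_{2,1}, j_{0,2} (tabulated values).
j01, j11, j21, j02 = 2.404826, 3.831706, 5.135622, 5.520078

# Mode frequencies relative to the fundamental mode (proportional to j01).
ratios = [z / j01 for z in (j11, j21, j02)]
print([round(r, 2) for r in ratios])   # [1.59, 2.14, 2.3]
```

These reproduce the 1.59, 2.14, and 2.30 factors quoted above.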
functions and CCR
A recent paper on arXiv.org talks about the CCR Instrument - not John Fogerty's guitar, but rather the Calculus Concept Readiness Instrument, a mathematics placement test. The paper's brief background discussion on what concepts students need to be "calculus ready" is interesting. A large literature is cited that states that the most important concept that students need to be familiar with (for Calculus and for general mathematics education) is the function concept. Unfortunately, the function concept and the concept of function composition have often been cited as weak points for teachers as well as students (see this study by David Meel, for example). A really nice article to read if you would like to expand your understanding of the function concept and get a sense of how it has mutated over the last few hundred years is Israel Kleiner's Evolution of the Function Concept: A Brief Survey. As mathematics has advanced, functions have become strange, generalized, and freed from conceptual limitations that threaten to tie them down. Kleiner offers a quotation that explains why, in contrast, our education proceeds with simple (antiquated, in Meel's assessment) and well-behaved functions:

If logic were the sole guide of the teacher, it would be necessary to begin with the most general functions, that is to say with the most bizarre. It is the beginner that would have to be set grappling with this teratologic museum.

For those of us who are lucky enough to wander through this bizarre museum, the concept of function gets stretched in interesting ways. I seem to remember that learning certain constructions in Linear Algebra expanded my appreciation of functions and their strangeness. Here's an example: consider two sets $A$ and $B$, and suppose that $a$ is an element of $A$. There is a function, call it $\hat{a}$, whose domain is the set of functions from $A$ to $B$, and whose co-domain is the set $B$.
This function $\hat{a}$ is defined by the rule $\hat{a}(f) = f(a)$, where $f$ is any function from $A$ to $B$. The first time you see something like this it seems lovely and weird - functions become elements, elements become functions, and the rule that defines the element-become-function $\hat{a}$ looks like a bit of notational sleight of hand. Things can get even stranger of course, such as when arrows (function-like things) are defined without elements at all. Concerns about "Calculus Concept Readiness" remind us how preparation for Calculus is often seen as the ultimate goal of K-12 education. Should this be the case? Discrete/finite mathematics may provide a better goal (convincingly and quickly argued by Arthur Benjamin in his TED talk). It can also be argued that discrete math provides a better setting for learning the fundamentals of the function concept than pre-calculus courses that focus more on functions that are well behaved for the purposes of elementary Real analysis.
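In code the construction is almost a one-liner; here is a small Python sketch, taking $A = B =$ the integers as an assumed example:

```python
def hat(a):
    # Turn the element a into a function: hat(a) takes any
    # f : A -> B and returns the element f(a) of B.
    return lambda f: f(a)

a_hat = hat(3)
double = lambda x: 2 * x
square = lambda x: x * x
print(a_hat(double), a_hat(square))   # 6 9
```

The sleight of hand is visible in the types: `hat(3)` is no longer an integer but a function that consumes functions.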
Cite as: David Doty, Mahsa Eftekhari, Othon Michail, Paul G. Spirakis, and Michail Theofilatos. Brief Announcement: Exact Size Counting in Uniform Population Protocols in Nearly Logarithmic Time. In 32nd International Symposium on Distributed Computing (DISC 2018). Leibniz International Proceedings in Informatics (LIPIcs), Volume 121, pp. 46:1-46:3, Schloss Dagstuhl – Leibniz-Zentrum für Informatik.

BibTeX:
author = {Doty, David and Eftekhari, Mahsa and Michail, Othon and Spirakis, Paul G. and Theofilatos, Michail},
title = {{Brief Announcement: Exact Size Counting in Uniform Population Protocols in Nearly Logarithmic Time}},
booktitle = {32nd International Symposium on Distributed Computing (DISC 2018)},
pages = {46:1--46:3},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-092-7},
ISSN = {1868-8969},
year = {2018},
volume = {121},
editor = {Schmid, Ulrich and Widder, Josef},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2018.46},
URN = {urn:nbn:de:0030-drops-98359},
doi = {10.4230/LIPIcs.DISC.2018.46},
annote = {Keywords: population protocol, counting, leader election, polylogarithmic time}
{"url":"https://drops.dagstuhl.de/search/documents?author=Eftekhari,%20Mahsa","timestamp":"2024-11-04T07:16:34Z","content_type":"text/html","content_length":"63788","record_id":"<urn:uuid:d1293555-5dda-442d-b298-bb25ed7d8af7>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00734.warc.gz"}
What is Biogas and What is it used for? - Green Orbits

Biogas is a sustainable form of energy produced by the breakdown of organic waste. We can use it for various purposes like electricity production, heat generation, fuel, and so on. Biogas plants can significantly reduce greenhouse gas emissions, because they use these harmful gases to produce clean energy. According to the World Biogas Association, Biogas has the potential to reduce global climate change emissions by 20%. This article will explore what Biogas is, how it's produced, and its benefits and applications!

Biogas – what is it & what is it made of?

What is called Bio-Gas? Biogas is a type of biogenic gas produced when anaerobic bacteria convert organic material into methane.

What is Biogas made of? Biogas is generally made of:
60-75% Methane (CH4)
25-35% Carbon dioxide (CO2)
0-1% Nitrogen (N2)
0-1% Hydrogen (H2)
0-1% Hydrogen sulfide (H2S)
0-3% Carbon monoxide (CO)
0-2% Oxygen (O2)

We can use the Biogas produced for electricity production or to power vehicles. It's highly sustainable as it does not produce any greenhouse gases!

Biogas production process

Typically, Biogas is created through a process known as anaerobic digestion: the degradation of organic materials in the absence of oxygen, which releases gas as a byproduct. To produce Biogas you need biomass or organic material. This can come from food waste, animal manure, or plant matter such as rice husks and corn stover. You need to mix the waste with water in a digester, and anaerobic fermentation then takes place because of the anaerobic bacteria present. It produces gas that can either be captured or combusted into electricity. More organic material inside the digester leads to higher production rates of Biogas, because the bacteria's metabolic rate increases when they are provided with more food particles. Depending on the output needed, biogas production plants come in various types.
They differ in the processes involved, sizes, shapes, and types of organic material used, and are broadly categorized into the following types.

Batch plants – You feed the plant with organic material in one go & it produces Biogas over some time.
Continuous plants – The plant can produce Biogas 24 hours a day, without any interruption.
Semi-batch plants – It operates continuously or as a batch depending on the type of material used for processing.

What sort of waste can be used to produce Biogas?

Biogas can be produced from any waste with high organic content, such as manure and rotting leaves. It can also be produced from industrial food waste or sewage sludge from wastewater treatment plants. The process is not restricted to just one form of material; it simply depends on the available resources & type of digester used. Here are four such wastes that you can use to produce Biogas.

1. Livestock manure

You can use livestock manure for anaerobic digestion to create Biogas. Apart from bio-gas, it yields nutrient-rich digestate, which you can use as organic fertilizer.

2. Agricultural wastes

Farming, agricultural, and food processing plants produce a lot of organic waste that you can use for biogas production. They have a high untapped potential to generate energy & reduce greenhouse gases. This includes rice straw, corn stalks, cotton stalks, or any other type of plant material that has high levels of biomass content.

3. Industrial wastewaters

Wastewater treatment facilities generate huge amounts of sludge which you can convert to Biogas. The wastewater from these facilities contains sewage as well as industrial effluents like detergents and paints in the form of solid particles called sludge solids (SS). We can also use these to produce Biogas in a biogas plant. However, the presence of heavy metals may reduce the efficiency of the system.

4.
Sewage sludges

Sewage treatment plants process large volumes of water with a heavy load of Biological Oxygen Demand (BOD) and Chemical Oxygen Demand (COD). Sewage sludge is a byproduct of the liquid effluent from this sewage treatment process. It has high levels of organic content, and biogas plants can convert it to Biogas via anaerobic digestion.

Uses of Biogas

As one of the most reliable forms of renewable energy, Biogas can be used in various sectors. It has a broad spectrum of uses including heat generation, producing electricity, use as a fuel, waste management, and so on.

Cooking & Heating – Biogas can reduce dependency on cow dung for cooking fires, which tends to pose health hazards. Biogas replaces other fuels such as wood, charcoal, etc., thus reducing deforestation and other environmental hazards.

Electricity generation – You can use Biogas for electricity generation through biogas plants. It is the most efficient form of renewable energy available today, with an output that ranges from 18-24%. It also eliminates dependency on fuels such as diesel, LPG gas, etc., which are costly to purchase. Biotechnology (BT) companies have introduced innovations for better conversion efficiency and less emission of pollutants. They have upgraded the design of the conventional biogas plant into a more stable semi-continuous or continuous system. You can use these upgraded plants depending on the type of material processed. This can reduce operating costs without affecting production capacity. This way, environmental pollution is also reduced significantly.

Waste management – With increasing population density in urban areas due to rural-urban migration, the problem of waste management has become a major issue. We can use the biogas produced from wet wastes (kitchen and garden) as an additional source of fuel to generate electricity or power plants. This in turn will reduce reliance on conventional fuels such as coal.
Sewage treatment systems can also use the biomethanation process, in which biodegradable organic matter like kitchen and garden garbage is converted into methane gas. You can efficiently utilize the generated natural gas for cooking purposes; if it is not needed, it is exported to the grid or to CNG stations. Hence, it reduces the demand for other fossil fuel-based transport modes, lowering levels of pollution.

As fuel – It's an excellent choice for running vehicles like automobiles, ships, buses, etc. It also reduces emissions by up to 90% over petrol combustion engines when used as transport fuel at low compression ratios. We can also use Biogas to fuel fuel-cell vehicles. For example, BMW has been using fuel-cell technology in their Hydrogen-Fuelled Plug-in Hybrid Vehicle (called the "HFBX"). This vehicle gets up to 100 miles per gallon and achieves a range of 435 miles on one tank of gas! This is an eco-friendly alternative that reduces our dependency on conventional fuels such as coal.

As a fertilizer – The residue, or digestate, is the byproduct of biogas production. You can use it as a fertilizer for plants in agricultural fields. This reduces the use of chemical fertilizers, which often leads to toxic run-offs and other environmental hazards.

With all these practical uses, Biogas has a lot to offer. It is not only renewable but also sustainable and environmentally friendly. Biogas is a viable substitute for other sources of energy that have harmful effects on the environment. You can also use it as an alternative to fossil fuels, and it doesn't pollute air or water as many others do. If you're looking for ways to reduce your carbon footprint, Biogas might just be the answer!
{"url":"https://www.greenorbits.com/what-is-biogas-and-what-is-it-used-for/","timestamp":"2024-11-04T20:07:43Z","content_type":"text/html","content_length":"86216","record_id":"<urn:uuid:226dfec5-41ea-4606-a240-709332b7e598>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00011.warc.gz"}
Category:Lower Sorbian terms by suffix - Wiktionary, the free dictionary

Lower Sorbian terms categorized by their suffixes.

This category has the following 40 subcategories, out of 40 total.
{"url":"https://en.m.wiktionary.org/wiki/Category:Lower_Sorbian_terms_by_suffix","timestamp":"2024-11-13T11:49:35Z","content_type":"text/html","content_length":"47896","record_id":"<urn:uuid:294b1acd-0c88-452b-9443-fb3261ada424>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00641.warc.gz"}
Journal of Prime Research in Mathematics

JPRM-Vol. 1 (2016), Issue 1, pp. 45 – 59 Open Access Full-Text PDF

I. Keerthi Asir, S. Athisayanathan

Abstract: Let \(v\) be a vertex and \(C\) a clique in a connected graph \(G\). A vertex-to-clique \(u − C\) path \(P\) is a \(u − v\) path, where \(v\) is a vertex in \(C\) such that \(P\) contains no vertices of \(C\) other than \(v\). The vertex-to-clique distance, \(d(u, C)\), is the length of a smallest \(u − C\) path in \(G\). A \(u − C\) path of length \(d(u, C)\) is called a \(u − C\) geodesic. The vertex-to-clique eccentricity \(e_1(u)\) of a vertex \(u\) in \(G\) is the maximum vertex-to-clique distance from \(u\) to a clique \(C ∈ ζ\), where \(ζ\) is the set of all cliques in \(G\). The vertex-to-clique radius \(r_1\) of \(G\) is the minimum vertex-to-clique eccentricity among the vertices of \(G\), while the vertex-to-clique diameter \(d_1\) of \(G\) is the maximum vertex-to-clique eccentricity among the vertices of \(G\). Also the vertex-to-clique detour distance, \(D(u, C)\), is the length of a longest \(u − C\) path in \(G\). A \(u − C\) path of length \(D(u, C)\) is called a \(u − C\) detour. The vertex-to-clique detour eccentricity \(e_{D1}(u)\) of a vertex \(u\) in \(G\) is the maximum vertex-to-clique detour distance from \(u\) to a clique \(C ∈ ζ\) in \(G\). The vertex-to-clique detour radius \(R_1\) of \(G\) is the minimum vertex-to-clique detour eccentricity among the vertices of \(G\), while the vertex-to-clique detour diameter \(D_1\) of \(G\) is the maximum vertex-to-clique detour eccentricity among the vertices of \(G\). It is shown that \(R_1 ≤ D_1\) for every connected graph \(G\) and that every two positive integers \(a\) and \(b\) with \(2 ≤ a ≤ b\) are realizable as the vertex-to-clique detour radius and the vertex-to-clique detour diameter, respectively, of some connected graph.
Also it is shown that for any three positive integers \(a\), \(b\), \(c\) with \(2 ≤ a ≤ b < c\), there exists a connected graph \(G\) such that \(r_1 = a\), \(R_1 = b\), \(R = c\), and for any three positive integers \(a\), \(b\), \(c\) with \(2 ≤ a ≤ b < c\) and \(a + c ≤ 2b\), there exists a connected graph \(G\) such that \(d_1 = a\), \(D_1 = b\), \(D = c\). Read Full Article

The t-pebbling number of some wheel related graphs

JPRM-Vol. 1 (2016), Issue 1, pp. 35 – 44 Open Access Full-Text PDF

A. Lourdusamy, F. Patrick, T. Mathivanan

Abstract: Let \(G\) be a graph and some pebbles are distributed on its vertices. A pebbling move (step) consists of removing two pebbles from one vertex, throwing one pebble away, and moving the other pebble to an adjacent vertex. The t-pebbling number of a graph \(G\) is the least integer \(m\) such that from any distribution of \(m\) pebbles on the vertices of \(G\), we can move \(t\) pebbles to any specified vertex by a sequence of pebbling moves. In this paper, we determine the t-pebbling number of some wheel related graphs. Read Full Article

Projective configurations and the variant of Cathelineau's complex

JPRM-Vol. 1 (2016), Issue 1, pp. 24 – 34 Open Access Full-Text PDF

Sadaqat Hussain, Raziuddin Siddiqui

Abstract: In this paper we try to connect the Grassmannian subcomplex defined over the projective differential map \(\acute{d}\) and the variant of Cathelineau's complex. To do this we define some morphisms over the configuration space for both weight 2 and 3. We also prove the commutativity of corresponding diagrams. Read Full Article

g-noncommuting graph of some finite groups

JPRM-Vol. 1 (2016), Issue 1, pp. 16 – 23 Open Access Full-Text PDF

M. Nasiri, A. Erfanian, M. Ganjali, A. Jafarzadeh

Abstract: Let \(G\) be a finite non-abelian group and \(g\) a fixed element of \(G\). In 2014, Tolue et al.
introduced the g-noncommuting graph of \(G\), denoted \(Γ^{g}_G\), with vertex set \(G\), where two distinct vertices \(x\) and \(y\) are joined by an edge if \([x, y] \neq g\) and \([x, y] \neq g^{−1}\). In this paper, we consider the induced subgraph of \(Γ^{g}_{G}\) on \(G /Z(G)\) and survey some graph theoretical properties, like connectivity and the chromatic and independence numbers, of this graph associated to symmetric, alternating and dihedral groups. Read Full Article

A note on self-dual AG-groupoids

JPRM-Vol. 1 (2016), Issue 1, pp. 01 – 15 Open Access Full-Text PDF

Aziz-Ul-Hakim, I. Ahmad, M. Shah

Abstract: In this paper, we enumerate self-dual AG-groupoids up to order 6, and classify them on the basis of commutativity and associativity. A self-dual AG-groupoid test is introduced to check an arbitrary AG-groupoid for a self-dual AG-groupoid. We also respond to an open problem regarding cancellativity of an element in an AG-groupoid. Some features of ideals in self-dual AG-groupoids are explored. Some desired algebraic structures are constructed from the known ones subject to certain conditions, and some subclasses of self-dual AG-groupoids are introduced. Read Full Article

On the mixed Hodge structure associated to hypersurface singularities

JPRM-Vol. 1 (2015), Issue 1, pp. 137 – 161 Open Access Full-Text PDF

Mohammad Reza Rahmati

Abstract: Let \(f : \mathbb{C}^{n+1} → \mathbb{C}\) be a germ of hypersurface with isolated singularity. One can associate to \(f\) a polarized variation of mixed Hodge structure \(H\) over the punctured disc, where the Hodge filtration is the limit Hodge filtration of W. Schmid and J. Steenbrink. By the work of M. Saito and P. Deligne, the VMHS associated to cohomologies of the fibers of \(f\) can be extended over the degenerate point \(0\) of the disc. The new fiber obtained in this way is isomorphic to the module of relative differentials of \(f\), denoted \(Ω_f\). A mixed Hodge structure can be defined on \(Ω_f\) in this way.
The polarization on \(\mathcal{H}\) deforms to the Grothendieck residue pairing, modified by a varying sign on the Hodge graded pieces in this process. This also proves the existence of a Riemann-Hodge bilinear relation for the Grothendieck pairing and allows us to calculate the Hodge signature of the Grothendieck pairing. Read Full Article
{"url":"http://jprm.sms.edu.pk/page/20/","timestamp":"2024-11-13T14:39:05Z","content_type":"text/html","content_length":"76372","record_id":"<urn:uuid:ab0a125d-7ef4-41a2-bdf1-a746f9eca3b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00496.warc.gz"}
Resource Estimation Challenge at QRISE 2024: Recap

This spring, we partnered with Quantum Coalition to offer a challenge at QRISE 2024. This six-week-long event aimed to connect students with quantum computing industry research challenges and help them get started doing research projects of their own. The challenge we offered to the participants focused on resource estimation of quantum algorithms. Resource estimation helps us answer the question "How many physical qubits and how much time is necessary to execute a quantum algorithm under specific assumptions about the hardware platform used?" Getting these kinds of estimates serves multiple purposes:

1. It allows us to deduce the conditions that quantum hardware needs to meet to offer practical quantum advantage.
2. It helps us clarify which algorithms truly give quantum advantage over their classical counterparts and which ones do not, and if they do, what problem instances get the advantage.
3. It allows us to compare the efficiency of different algorithms that solve the same problem long before they become viable to run on quantum machines, thus enabling work on improving quantum algorithms.

The goal of the challenge was to implement a quantum algorithm of the participants' choice and obtain and analyze the estimates of resources required for running it on future fault-tolerant quantum computers using the Microsoft Azure Quantum Resource Estimator. This is exactly the kind of question quantum algorithms researchers work on! Let's meet the winning teams and learn about their projects in their own words!

Team Qu-Cats
Katie Harrison
Muhammad Waqar Amin
Nikhil Londhe
Sarah Dweik

The quantum approximate optimization algorithm (QAOA) is a quantum algorithm used to solve optimization problems. However, QAOA can only solve an optimization problem that can be formulated as a quadratic unconstrained binary optimization (QUBO) problem. In this project, we have chosen to solve the Number Partitioning Problem (NPP) using QAOA.
NPP involves partitioning a given set of numbers to determine whether it is possible to split them into two distinct partitions, where the difference between the total sums of the numbers in each partition is minimal. This problem has applications in various fields, including cryptography, task scheduling, and VLSI design. It is also recognized for its computational difficulty, often described as the "Easiest Hard Problem."

In this project, we accomplished two primary objectives. Initially, we determined the optimal QPU configuration to run QAOA. Subsequently, we conducted an analysis of resource estimates as we scaled the input size. To determine the best setup for the quantum processing unit (QPU), we evaluated resources for eight different hardware setups, tracking variables like physical qubits, the fraction of qubits used by T-factories, and runtime, among others. The table below details results for the eight different configurations.

In addition, we conducted an analysis of resource estimates across a range of input variables. The plot below represents a segment of the analysis, primarily illustrating how the number of physical qubits varies with increasing input size. Besides that, we have plotted other variables, such as algorithm qubits, partitions (in NPP), and T-factory qubits. We see that all variables increase as the input size increases. This is expected, because the QUBO cost function requires one bit for every element in the set. We also plotted the number of partitions, which represents the scale of the problem for a particular input size. Interestingly, we notice that up to 12 elements, the number of partitions is higher than the number of physical qubits. This indicates that QAOA is at a severe disadvantage compared to the brute-force approach. However, as the number of elements continues to increase beyond 12, the growth in the number of physical qubits slows down.
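For intuition, one standard QUBO-style cost for NPP (zero exactly when the two partitions balance, using one bit per element as described above) and the brute-force baseline it is compared against can be sketched in a few lines of Python. The function names here are illustrative, not from the team's code:

```python
from itertools import product

def npp_qubo_cost(nums, bits):
    """Cost (total - 2 * partition_sum)**2: zero iff the subset selected by
    bits and its complement have equal sums."""
    total = sum(nums)
    part = sum(n for n, b in zip(nums, bits) if b)
    return (total - 2 * part) ** 2

def best_partition(nums):
    """Brute force over all 2**n bit assignments -- the classical baseline
    that QAOA competes against for small inputs."""
    return min(product([0, 1], repeat=len(nums)),
               key=lambda bits: npp_qubo_cost(nums, bits))

bits = best_partition([4, 5, 6, 7, 8])
print(npp_qubo_cost([4, 5, 6, 7, 8], bits))  # 0: {7, 8} vs {4, 5, 6} both sum to 15
```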
Team Exponential

Integer factorization is a well-studied problem in computer science that is the core hardness assumption for the widely used RSA cryptosystem. It is part of a larger framework called the hidden subgroup problem, which includes the discrete logarithm, graph isomorphism, and the shortest vector problem. State-of-the-art classical algorithms that exist today, such as the number field sieve, can perform factorization in subexponential time. Shor's algorithm is a famous result that kicked off the search for practical quantum advantage. It showed that a sufficiently large, fault-tolerant quantum computer can factor integers in polynomial time. Recently, Regev published an algorithm that provides a polynomial speedup over Shor's, without the need for fault-tolerance. Regev's result leverages an isomorphism between factoring and the shortest vector problem on lattices, which had remained elusive for more than two decades.

This project provides resource estimates for different variants of Regev's quantum circuit, by comparing state preparation routines and evaluating recent optimizations to quantum modular exponentiation. In scope for future work is the classical post-processing of the samples from the quantum circuit (more below).

The initial step of Regev's quantum circuit prepares control qubits in a Gaussian superposition state. For n qubits, this is achieved by discretizing the domain of the Gaussian (normal) probability distribution into 2^n equally spaced regions and encoding those cumulative probabilities as amplitudes of the quantum state. For example, here is a visualization of successive sampling of a Gaussian state over n = 4 qubits, plotted using the Q# Histogram:

As we add more shots, the histogram gradually adopts the shape of a bell curve. Such a visual test can be useful during development, especially when running on actual quantum hardware where the quantum state is not available for introspection.
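The classical preprocessing behind such a state can be sketched as follows: discretize a Gaussian over the 2^n grid points and L2-normalize, so that the squared amplitudes form the sampling distribution a histogram test would check. This is an illustration of the idea only, not the Q# implementation; the default choice of sigma is an assumption:

```python
import math

def gaussian_amplitudes(n, sigma=None):
    """L2-normalized amplitudes for a Gaussian state over 2**n basis states,
    centred mid-register. The exponent uses 4*sigma**2 so that |amp|**2 is
    proportional to a normal density with standard deviation sigma.
    sigma defaults to dim/6 so roughly +/-3 sigma fits the register
    (an assumption, not from the post)."""
    dim = 2 ** n
    sigma = sigma if sigma is not None else dim / 6.0
    mu = (dim - 1) / 2.0
    amps = [math.exp(-((x - mu) ** 2) / (4.0 * sigma ** 2)) for x in range(dim)]
    norm = math.sqrt(sum(a * a for a in amps))
    return [a / norm for a in amps]

amps = gaussian_amplitudes(4)
probs = [a * a for a in amps]  # the bell-curve shape the histogram converges to
```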
This project explores three different algorithms for Gaussian state preparation:

• Q# library PreparePureStateD
• Arbitrary state preparation by Möttönen et al., similar to above, where the amplitudes for each basis state are specified
• Grover-Rudolph state preparation, which is meant specifically for probability distributions like the Gaussian and does not require amplitudes as input

In the resource estimation of the overall quantum circuit, we use the fastest method from the three listed here, namely PreparePureStateD, to initialize the Gaussian state.

The next step of Regev's quantum circuit is modular exponentiation on small primes. This project implements two different algorithms:

• Binary exponentiation, used in Regev's original paper
• Fibonacci exponentiation with the Zeckendorf representation of integers, using a fast algorithm for Fibonacci number calculation

Regev's algorithm uses the quantum computer to sample a multidimensional lattice. In terms of complexity analysis, Gaussian states have properties that work well on such lattices. However, it is unclear whether a Gaussian state is actually required in practice. For this reason, our test matrix (modular exponentiation algorithm crossed with control register state preparation algorithm) covers four profiles:

• Fibonacci exponentiation with uniform superposition
• Binary exponentiation with uniform superposition
• Fibonacci exponentiation with Gaussian superposition
• Binary exponentiation with Gaussian superposition

Here are the resource estimation results for different variants of the factoring circuit for N = 143:

The overall winner is Fibonacci exponentiation with a uniform distribution over the control qubits. In this analysis, the size of the control register is fixed to 20 logical qubits for all four profiles being tested. Preparing a uniform superposition is just a layer of Hadamard gates, which is the same for all problem sizes N.
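The Zeckendorf idea can be illustrated classically: every exponent decomposes into a sum of non-consecutive Fibonacci numbers, and the corresponding modular powers can be built with one multiplication per Fibonacci step. This is a sketch of the arithmetic only, with made-up function names; the quantum circuit versions referenced above are another matter:

```python
def zeckendorf(n):
    """Greedy Zeckendorf decomposition: n as a sum of non-consecutive
    Fibonacci numbers (largest first)."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    rep = []
    for f in reversed(fibs):
        if f <= n:
            rep.append(f)
            n -= f
    return rep

def fib_pow_mod(a, e, N):
    """Compute a**e (mod N) by walking the Fibonacci recurrence:
    a**F(k+1) = a**F(k) * a**F(k-1), multiplying in the powers whose
    Fibonacci index appears in the Zeckendorf decomposition of e."""
    want = set(zeckendorf(e))
    f_prev, f_cur = 1, 2
    p_prev, p_cur = a % N, (a * a) % N   # a**1, a**2
    result = p_prev if 1 in want else 1
    while f_cur <= e:
        if f_cur in want:
            result = (result * p_cur) % N
        f_prev, f_cur = f_cur, f_prev + f_cur
        p_prev, p_cur = p_cur, (p_cur * p_prev) % N
    return result

print(zeckendorf(20))                          # [13, 5, 2]
print(fib_pow_mod(7, 20, 143) == pow(7, 20, 143))  # True
```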
This is clearly advantageous over Gaussian state preparation, where the radius of the Gaussian state required increases exponentially with N.

This project is focused on quantum resource estimation, and for these purposes the classical post-processing of the samples from the quantum circuit is not required. However, this is required for a complete implementation of Regev's algorithm. Current work includes investigation of lattice reduction techniques, followed by filtering of corrupted samples and fast classical multiplication in order to compute a prime factor. Other state preparation algorithms in the literature – including ones specific to Gaussian states – may also prove beneficial by reducing the gate complexity and number of samples required from the quantum circuit.
{"url":"https://devblogs.microsoft.com/qsharp/resource-estimation-challenge-at-qrise-2024-recap/","timestamp":"2024-11-11T07:09:21Z","content_type":"text/html","content_length":"193246","record_id":"<urn:uuid:bcd489ed-61c5-4110-ba2b-ece4fd7363c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00039.warc.gz"}
Evan - Tutor for Math

Certified Tutor

Ever since the 8th grade, I knew that I wanted to be a math tutor. I had thought that I was just an average student with an average level of intelligence. Then, my teacher asked me to stay after school for tutoring in math. Not so I could be tutored, but so I could help tutor other students. Apparently, I was doing the best in the class, some of my classmates were struggling, and she needed a hand. I was happy to help, being a bit of a teacher's pet, and discovered a passion for math that I hadn't had before and hasn't left me since. At that point, I assumed it was just math. I didn't figure out that it was teaching math I truly enjoyed until college. Once I figured that out, I decided to become a math tutor for a living.

Connect with a tutor like Evan

Education & Certification

Undergraduate Degree: St Petersburg College - Bachelors, Middle Grades Mathematics Education

Comedy, Hiking, Gaming

Tutoring Subjects

Algebra 3/4
Elementary School Math

What is your teaching philosophy?
Whatever you're teaching has to be fun and relevant, but mostly fun. If it isn't fun, it isn't engaging. If it isn't engaging, it isn't going to stick.

What might you do in a typical first session with a student?
Check their understanding of the basics, get to know them, and figure out what they like so I know what to apply to future examples.

How can you help a student become an independent learner?
Don't just show them the answers to problems; show them how I got to that answer, including how I figure out what to do. Being smart isn't always about knowing all the answers, but rather knowing how to find them.

How would you help a student stay motivated?
Positive reinforcement and examples related to what they are passionate about (money, sports, etc.).

If a student has difficulty learning a skill or concept, what would you do?
Address that concept in terms of money.
I've discovered that talking about something in terms of money sometimes helps them understand.

How do you help students who are struggling with reading comprehension?
Break down the problem piece by piece, then turn those pieces into numbers or steps in a problem.

What strategies have you found to be most successful when you start to work with a student?
1) Make it fun. 2) Talk in terms of money. Everyone likes money. 3) Avoid lecturing or PowerPoints; make it a game.

How would you help a student get excited/engaged with a subject that they are struggling in?
Make it relevant to what they are already passionate about. Passion inspires passion.

What techniques would you use to be sure that a student understands the material?
Repetition. If a student can get 5 or more problems in a row correct, they should understand it.

How do you build a student's confidence in a subject?
If students are feeling low confidence in a certain area, I like to backtrack and practice any prerequisite skills to get them more comfortable and more confident.

How do you evaluate a student's needs?
3 ways (in order of importance): 1) What do they think they need (Where do they feel uncomfortable, even if they are doing well?)? 2) What does their work say they need (Low test scores? Where?)? 3) What does the state say they need (common core)?

How do you adapt your tutoring to the student's needs?
More problems can be done if the student doesn't understand, and fewer can be done if they get it. I can also find examples and tailor them to student preferences.

What types of materials do you typically use during a tutoring session?
Schoolbook (a good source of problems), scrap paper (and lots of it), pencils, and the internet (to quickly find definitions and provide examples).
{"url":"https://www.varsitytutors.com/gb/tutors/878176380","timestamp":"2024-11-14T10:45:28Z","content_type":"text/html","content_length":"445463","record_id":"<urn:uuid:fd6b9aaf-404e-48d1-b097-5a509698f440>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00694.warc.gz"}
Adaptive LASSO Estimator

adaptiveLassoEst {cvCovEst}    R Documentation

Description:
adaptiveLassoEst() applies the adaptive LASSO to the entries of the sample covariance matrix. The thresholding function is inspired by the penalized regression introduced by Zou (2006). The thresholding function assigns a weight to each entry of the sample covariance matrix based on its initial value. This weight then determines the relative size of the penalty, resulting in larger values being penalized less, reducing bias (Rothman et al. 2009).

Usage:
adaptiveLassoEst(dat, lambda, n)

Arguments:
dat  A numeric data.frame, matrix, or similar object.
lambda  A non-negative numeric defining the amount of thresholding applied to each element of dat's sample covariance matrix.
n  A non-negative numeric defining the exponent of the adaptive weight applied to each element of dat's sample covariance matrix.

Value:
A matrix corresponding to the estimate of the covariance matrix.

References:
Rothman AJ, Levina E, Zhu J (2009). "Generalized Thresholding of Large Covariance Matrices." Journal of the American Statistical Association, 104(485), 177-186. doi:10.1198/jasa.2009.0101.
Zou H (2006). "The Adaptive Lasso and Its Oracle Properties." Journal of the American Statistical Association, 101(476), 1418-1429. doi:10.1198/016214506000000735.

Examples:
adaptiveLassoEst(dat = mtcars, lambda = 0.9, n = 0.9)

version 1.2.2
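In Python terms, the entrywise rule this estimator applies looks roughly like the following sketch, based on one common form of the adaptive-lasso thresholding rule from Rothman et al. (2009); the exact rule implemented in cvCovEst may differ in details:

```python
import math

def adaptive_lasso_threshold(S, lam, n):
    """Entrywise adaptive-lasso thresholding of a sample covariance matrix S
    (a list of lists). Rule: sign(s) * max(|s| - lam**(n+1) * |s|**(-n), 0).
    Small entries are set exactly to zero; large ones are barely shrunk,
    which is the bias-reduction property described above."""
    out = []
    for row in S:
        new_row = []
        for s in row:
            if s == 0.0:
                new_row.append(0.0)
                continue
            penalty = lam ** (n + 1) * abs(s) ** (-n)
            shrunk = max(abs(s) - penalty, 0.0)
            new_row.append(math.copysign(shrunk, s))
        out.append(new_row)
    return out

S = [[2.0, 0.05, -0.9],
     [0.05, 1.5, 0.02],
     [-0.9, 0.02, 1.0]]
S_hat = adaptive_lasso_threshold(S, lam=0.2, n=0.9)
# the tiny off-diagonals 0.05 and 0.02 are thresholded to zero, while the
# large entries survive with only a small amount of shrinkage
```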
{"url":"https://search.r-project.org/CRAN/refmans/cvCovEst/html/adaptiveLassoEst.html","timestamp":"2024-11-05T17:02:23Z","content_type":"text/html","content_length":"3581","record_id":"<urn:uuid:02b0aca6-6fdb-4fed-b38d-bc574a84e597>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00870.warc.gz"}
Test Coin2

Suppose there are two coins and the probability that each coin flips a Head is \(p\) and \(q\), respectively. \(p, q \in [0,1]\), \(p \neq q\), and the values are given and known. If you are free to flip one of the coins any number of times, how many times \(n\) do you have to flip the coin to decide with some significance level \( \left( \textrm{say } \alpha = 0.05 \right) \) that it's the \(p\) coin or the \(q\) coin that you've been flipping?

The distribution of heads after \(n\) flips for a coin will be a binomial distribution with means at \(pn\) and \(qn\).

Two binomial distributions, n = 20. The means are pn = 10 and qn = 14.

Setting Up Our Hypothesis Test

Let's say we want to test if our coin is the \(p\) coin, and let's say we arbitrarily decide to call the smaller probability \(p\), i.e. \(p < q\). We know that coin flips give us a binomial distribution, and we know the standard deviation of a binomial random variable \(X_p\) (let \(X_p\) or \(X_{p,n}\) be a binomial random variable for the number of flips that are heads, where the probability of a head on a flip is \(p\) and we do \(n\) number of flips), which is: $$ \textrm{Standard Deviation of }{X_p} = \sqrt{ Var\left( {X_p} \right) } = \sqrt{ np(1-p) } $$ Digression: we can also split our \(n\) Bernoulli trial coin flips that make up our binomial random variable \(X_p\) into \(m\) number of binomial random variables \(X_{p,k}\), each with \(k\) trials, such that \(k \times m = n\).
Then the standard error of the mean number of heads from \(m\) binomial random variables (each with \(k\) trials) is: $$ \textrm{Standard error of the mean} = \sqrt{ Var\left( \overline{X_{p,k}} \right) } = \sqrt{ Var \left( {1 \over m} \sum_{i=1}^{m} {X_{p,k}} \right) } $$ $$= \sqrt{ Var(\sum_{i=1}^{m} X_{p,k}) \over m^2 } = \sqrt{ m \cdot Var(X_{p,k}) \over m^2 } = \sqrt{ {m \cdot kp(1-p) \over m^2 } } = \sqrt{ { kp(1-p) \over m} } $$ This standard error is for the random variable \(X_{p,k}\), each of which has \(k\) Bernoulli trials. In other words, the standard deviation of \( {1 \over m} \sum_{i=1}^{m} X_{p,k} \) is \( \sqrt{ kp(1-p) \over m }\). But if you simply change \(k\) to \(km = n\) and reduce \(m\) to \(1\), you get the same result as if you took all \(km = n\) trials as the number of trials for one binomial random variable, our original \(X_p\): we now say that the standard deviation of \( {1 \over 1} \sum_{i=1}^{1} X_{p,n} = X_{p,n} = X_p \) is \( \sqrt{ np(1-p) \over 1 } = \sqrt{ np(1-p) } \). By going from \(m\) repetitions of \(X_{p,k}\) to \(1\) repetition of \(X_{p,n}\), both the mean and the standard deviation are multiplied by \(m\). The mean of \(X_{p,k}\) is \(kp\) and the mean of \(X_{p,n}\) is \(mkp = np\); the standard deviation of \(X_{p,k}\) is \( \sqrt{ kp(1-p) } \) and the standard deviation of \(X_{p,n}\) is \( \sqrt{ mkp(1-p) } = \sqrt{ np(1-p) } \). The standard error of the mean of \(m\) repetitions of \(X_{p,k}\) is \( \sqrt{ { kp(1-p) \over m} } \), while the mean of \(m\) repetitions of \(X_{p,k}\) is of course just \( {1 \over m} \sum_{i=1}^{m} \mathbb{E} \left[ X_{p,k} \right] = {1 \over m} m (kp) = kp \).
So when going from \(1\) repetition of \(X_{p,k}\) to \(m\) repetitions of \(X_{p,k}\), the mean goes from \(kp\) to \(mkp = np\), and the standard error of the mean of \(X_{p,k}\) goes from \( \sqrt{ { kp(1-p) \over m} } \) to the standard deviation of \( X_{p,n} \) by multiplying it by \(m\): \( m \cdot \sqrt{ { kp(1-p) \over m} } = \sqrt{ { m^2 \cdot kp(1-p) \over m} } = \sqrt{ { mkp(1-p)} } = \sqrt{ { np(1-p)} } \). Knowing the standard deviation of our random variable \(X_p\), a 0.05 significance level for a result that “rejects” the null would mean some cutoff value \(c\) where \(c > pn\). If \(x_p\) (the sample number of heads from \(n\) coin tosses) is “too far away” from \(pn\), i.e. we have \(x_p > c\), then we reject the null hypothesis that we have been flipping the \(p\) coin. But note that if we choose a \(c\) that far exceeds \(qn\) as well, we are in a weird situation. If \(x_p > c\), then \(x_p \) is “too large” for \(pn\) but also considerably larger than \(qn\) (i.e. \( x_p > qn > pn \) ). This puts us in an awkward situation because while \(x_p \) is much larger than \(pn\), making us want to reject the hypothesis that we had been flipping the \(p\) coin, it is also considerably larger than \(qn\), so perhaps we obtained a result that was pretty extreme “no matter which coin we had.” If we assume the null hypothesis that we have the \(p\) coin, our result \(x_p \) is very unlikely, but it is also quite unlikely even if we had the \(q\) coin, our alternative hypothesis. But still, it is more unlikely under the \(p\) coin than under the \(q\) coin, so perhaps it’s not that awkward. But what if \(x_p\) does not exceed \(c\)? Then we can’t reject the null hypothesis that we have the \(p\) coin.
But our sample result of \(x_p\) might in fact be closer to \(qn\) than \(pn\) – \(x_p\) might even be right on the dot of \(qn\) – and yet we aren’t allowing ourselves to use that to form a better conclusion, which is a truly awkward situation. If \(c\) is, instead, somewhere in between \(pn\) and \(qn\), and \(x_p > c\), we may reject the null hypothesis that our coin is the \(p\) coin, while \(x_p\) is in a region close to \(qn\), i.e. a region that is a more likely result if we actually had been flipping the \(q\) coin, bringing us closer to the conclusion that this is the \(q\) coin. However, if we reverse the experiment – if we use the same critical value \(c\) and say that if \(x_p < c\) then we reject our null hypothesis that \(q\) is our coin – then the power and significance of the test for each coin are different, which is also awkward. Above, the pink region is the probability that \(X_p\) ends in the critical region, where \(x_p > c\), assuming the null hypothesis that we have the \(p\) coin. This is also the Type I Error rate (a.k.a. false positive) – the probability that we end up falsely rejecting the null hypothesis, assuming that the null hypothesis is true. Above, the green region is the power \(1-\beta\), the probability that we get a result in the critical region \(x_p > c\) assuming that the alternative hypothesis is true, that we have the \(q\) coin. The blue-gray region is \(\beta\), or the Type II Error rate (a.k.a. false negative) – the probability that we fail to reject the null hypothesis (that we have the \(p\) coin) when what’s actually true is the alternative hypothesis (that we have the \(q\) coin). Now let us “reverse” the experiment with the same critical value – we want to test our null hypothesis that we have the \(q\) coin: We have \(x_p < c\). We fail to reject the null hypothesis that we have the \(p\) coin, and on the flip side we would reject the null hypothesis that we have the \(q\) coin.
but we have failed a tougher test (the first one, with a small \(\alpha_p\)) and succeeded in rejecting an easier test (the second one, with a larger \(\alpha_q\)). In hypothesis testing, we would like to be conservative, so it is awkward to have failed a tougher test but "be ok with it" since we succeeded with an easier test. Common sense also, obviously, says that something is strange when \(x_p\) is closer to \(q\) than \(p\) and yet we make the conclusion that since \(x_p\) is on the "\(p\)-side of \(c\)," we have the \(p\) coin. In reality, we wouldn't take one result and apply two hypothesis tests on that one result. But we would like the one test procedure to make sense with whichever null hypothesis we start with, \(p\) coin or \(q\) coin (since it is arbitrary which null hypothesis we choose in the beginning, for we have no knowledge of which coin we have before we start the experiment). What we can do, then, is to select \(c\) so that the probability, under the hypothesis that we have the \(p\) coin, that \(X_p > c\) is equal to the probability that, under the hypothesis that we have the \(q\) coin, that \(X_q < c\). In our set up, we have two binomial distributions, which are discrete, as opposed to the normal distributions above. In addition, binomial distributions, unless the mean is at \(n/2\), are generally not symmetric, as can be seen in the very first figure, copied below as well, where the blue distribution is symmetric but the green one is not. We can pretend that the blue distribution is the binomial distribution for the \(p\) coin and the green distribution for the \(q\) coin. The pmf of a binomial random variable, say \(X_p\) (that generates Heads or Tails with probability of Heads \(p\)) is: $$ {n \choose h} p^h (1-p)^{n-h} $$ where \(n\) is the total number of flips and \(h\) is the number of Heads among those flips. 
We let \(c\) be the critical number of Heads that would cause us to reject the null hypothesis that the coin we have is the \(p\) coin in favor of the alternative hypothesis that we have the \(q\) coin. The area of the critical region, i.e. the probability that we get \(c\) heads or more assuming the hypothesis that we have the \(p\) coin, is: $$ Pr(X_p \geq c) = \sum_{i=c}^{n} \left[ {n \choose i} p^i (1-p)^{n-i} \right] $$ And the reverse, the probability that we get \(c\) heads or less assuming the hypothesis that we have the \(q\) coin, is: $$ Pr(X_q \leq c) = \sum_{i=0}^{c} \left[ {n \choose i} q^i (1-q)^{n-i} \right] $$ So we want to set these two equal to each other and solve for \(c\): $$ \sum_{i=c}^{n} \left[ {n \choose i} p^i (1-p)^{n-i} \right] = \sum_{i=0}^{c} \left[ {n \choose i} q^i (1-q)^{n-i} \right] $$ But since the binomial distribution is discrete, there may not be a \(c\) that actually works. For large \(n\), a normal distribution can approximate the binomial distribution. In that case, we can draw the figure below, which is two normal distributions, each centered on \(pn\) and \(qn\) (the means of the true binomial distributions), and since normal distributions are symmetric, the point at which the distributions cross will be our critical value. The critical regions for \(X_p\) (to the right of \(c\)) and for \(X_q\) (to the left of \(c\)) will have the same area. If we pretend that these normal distributions are binomial distributions, i.e. if we pretend that our binomial distributions are symmetric (i.e. we pretend that \(n\) is going to be large enough that both our binomial distributions of \(X_p\) and \(X_q\) are symmetric enough), then to find \(c\) we can find the value on the horizontal axis, i.e. the number of Heads, at which the two binomial probability distributions are equal to each other.
$$ {n \choose c} p^c (1-p)^{n-c} = {n \choose c} q^c (1-q)^{n-c} $$ $$ p^c (1-p)^{n-c} = q^c (1-q)^{n-c} $$ $$ \left({p \over q}\right)^c \left({1-p \over 1-q}\right)^{n-c} = 1 $$ $$ \left({p \over q}\right)^c \left({1-p \over 1-q}\right)^{n} \left({1-q \over 1-p}\right)^c = 1 $$ $$ \left({p(1-q) \over q(1-p)}\right)^c = \left({1-q \over 1-p}\right)^{n} $$ $$ c \cdot log \left({p(1-q) \over q(1-p)}\right) = n \cdot log \left({1-q \over 1-p}\right) $$ $$ c = n \cdot log \left({1-q \over 1-p}\right) / log \left({p(1-q) \over q(1-p)}\right) $$ A binomial random variable \(X_p\) has mean \(pn\) and standard deviation \(\sqrt{np(1-p)}\). For a normal distribution \(X_{\textrm{norm}}\) with mean \(\mu_{\textrm{norm}}\) and standard deviation \(\sigma_{\textrm{norm}}\), the value \( c_{\alpha} = \mu_{\textrm{norm}} + 1.645\sigma_{\textrm{norm}}\) is the value where the area from \(c_{\alpha}\) to infinity is \(0.05 = \alpha\). Thus, \( c_{\alpha} \) is the critical value for a normal random variable where \( Pr(X_{\textrm{norm}} > c_{\alpha}) = 0.05 \). So for a binomial random variable \(X_p\), we would have \(c_{\textrm{binomial, }\alpha} = pn + 1.645\sqrt{np(1-p)}\). Thus, we have that this critical value for a binomial random variable \(X_p\): $$ c = n \cdot log \left({1-q \over 1-p}\right) / log \left({p(1-q) \over q(1-p)}\right) $$ must also satisfy $$ c_{\textrm{binomial, }\alpha} \geq pn + 1.645\sqrt{np(1-p)} $$ for the area to the right of \(c\) to be \(\leq 0.05\). To actually find the critical value \(c_{\textrm{binomial, }\alpha}\), we can just use $$ c_{\textrm{binomial, }\alpha} \geq pn + 1.645\sqrt{np(1-p)} $$ Since we are given the values of \(p\) and \(q\), we would plug in those values to find the required \(n\) needed to reach this condition for the critical value.
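The crossing-point formula can be checked numerically. The values p = 0.2, q = 0.5, n = 20 below are hypothetical (not from the text); at the computed c, the two likelihood kernels p^c(1-p)^(n-c) and q^c(1-q)^(n-c) coincide, and c lands between the two means pn and qn:

```python
import math

# Hypothetical example values, chosen only for illustration.
p, q, n = 0.2, 0.5, 20

# Closed-form crossing point derived above:
c = n * math.log((1 - q) / (1 - p)) / math.log(p * (1 - q) / (q * (1 - p)))

# The two binomial "kernels" coincide at c (here c is non-integer,
# so we evaluate the continuous relaxation).
lhs = p ** c * (1 - p) ** (n - c)
rhs = q ** c * (1 - q) ** (n - c)
print(round(c, 3))  # → 6.781, which lies between pn = 4 and qn = 10
```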
So we have $$ n \cdot log \left({1-q \over 1-p}\right) / log \left({p(1-q) \over q(1-p)}\right) = pn + 1.645\sqrt{np(1-p)} $$ $$ \sqrt{n} = 1.645\sqrt{p(1-p)} / \left[ log \left({1-q \over 1-p}\right) / log \left({p(1-q) \over q(1-p)}\right) - p \right] $$ $$ n = 1.645^2 p(1-p) / \left[ log \left({1-q \over 1-p}\right) / log \left({p(1-q) \over q(1-p)}\right) - p \right]^2 $$ For example, if \(p = 0.3\) and \(q = 0.7\), we have \(n = 14.2066 \), or rather, \(n \geq 14.2066 \). Wolfram Alpha calculation of the above; enter the following into Wolfram Alpha: 1.645^2 * p * (1-p) / (ln((1-q)/(1-p))/ln(p*(1-q)/(q*(1-p))) - p )^2; p = 0.3, q = 0.7 Note that if we switch the values so that \(p = 0.7\) and \(q = 0.3\), or switch the \(p\)’s and \(q\)’s in the above equation for \(n\), we obtain the same \(n_{\textrm{min}}\). This makes sense since our value for \(n_{\textrm{min}}\) depends on \(c\), and \(c\) is the value on the horizontal axis at which the two normal distributions from above (approximations of binomial distributions) with means at \(pn\) and \(qn\) cross each other. Thus, we set up the distributions so that the whole problem is symmetric. So if we generate a sample such that the number of samples is \(n \geq 14.2066\), we can use our resulting \(x_p\) and make a hypothesis test regarding whether we have the \(p\) or \(q\) coin at the \(\alpha = 0.05\) significance level. If \(p\) and \(q\) are closer, say \(p = 0.4\) and \(q = 0.5\), then we have \(n \geq 263.345\). This makes intuitive sense: the closer the probabilities of the two coins are, the more times we have to flip our coin to be more sure that we have one of the coins rather than the other. To be more precise, the smaller the effect size is, the larger the sample size we need in order to get the same certainty about a result.
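The closed-form bound for n can be evaluated directly; this small sketch reproduces the two Wolfram Alpha values quoted above and the p/q symmetry:

```python
import math

def n_min(p, q, z_alpha=1.645):
    """Closed-form minimum n from the normal approximation derived above:
    n = z^2 p(1-p) / (K - p)^2, with K the log-ratio defining c = K*n."""
    k = math.log((1 - q) / (1 - p)) / math.log(p * (1 - q) / (q * (1 - p)))
    return z_alpha ** 2 * p * (1 - p) / (k - p) ** 2

print(round(n_min(0.3, 0.7), 4))  # → 14.2066
print(round(n_min(0.4, 0.5), 1))  # → 263.3
```

For p = 0.3, q = 0.7 the log-ratio K equals exactly 1/2 (since ln(9/49) = 2·ln(3/7)), so the crossing point c sits at n/2, midway between the two means, which is the symmetry the text points out.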
An example of an effect-size measure is Cohen’s d, where: $$\textrm{Cohen's d} = {\mu_2 - \mu_1 \over \textrm{(pooled) StDev}} $$ Wolfram Alpha calculation of the above for \(n\) with \(p = 0.4\) and \(q = 0.5\); enter the following into Wolfram Alpha: 1.645^2 * p * (1-p) / (ln((1-q)/(1-p))/ln(p*(1-q)/(q*(1-p))) - p )^2; p = 0.4, q = 0.5 From here, where the question was originally asked, comes an answer that finds the exact values for the two \(n_{\textrm{min}}\) using R with the actual binomial distributions (not using normal distributions as approximations): due to the discreteness of the distributions, the \(n_{\textrm{min}}\)’s found are slightly different: \(n_{\textrm{min}} = 17\) for the first case and \(n_{\textrm{min}} = 268\) for the second case. I.e., the difference comes from using the normal distribution as an approximation of the binomial distribution.
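The exact search over the true binomial distributions can be sketched as follows. The stopping criterion used here (some integer cutoff c exists with both one-sided error probabilities at or below α) is an assumption about what the linked R answer computes; under it, p = 0.3, q = 0.7 yields 17, matching the value quoted above:

```python
from math import comb

def upper_tail(n, pr, c):
    """P(Bin(n, pr) >= c)."""
    return sum(comb(n, i) * pr**i * (1 - pr)**(n - i) for i in range(c, n + 1))

def lower_tail(n, pr, c):
    """P(Bin(n, pr) <= c)."""
    return sum(comb(n, i) * pr**i * (1 - pr)**(n - i) for i in range(c + 1))

def exact_n_min(p, q, alpha=0.05):
    """Smallest n admitting an integer cutoff c with both errors <= alpha.
    Decision rule: reject the p-hypothesis when heads x >= c, so the two
    error probabilities are P(X_p >= c) and P(X_q <= c - 1)."""
    n = 1
    while True:
        for c in range(n + 2):
            if upper_tail(n, p, c) <= alpha and lower_tail(n, q, c - 1) <= alpha:
                return n
        n += 1

print(exact_n_min(0.3, 0.7))  # → 17
```

Running exact_n_min(0.4, 0.5) should likewise reproduce the 268 quoted above if the linked answer used the same criterion, though that quadratic-per-n search takes noticeably longer.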
Task: Aesthetic Text (est)

Memory limit: 128 MB

Let us consider a text consisting of words numbered from to . We represent any of its decompositions into lines by a sequence of numbers , such that the words with numbers from to are in the first line, the words with numbers from to are in the second line, and so on, and finally, the words with numbers from to are in the last, -th line. Each word has a certain length (measured in the number of characters). Let denote the length of the word no. . Furthermore, every two subsequent words in a line are separated by a space of the width of a single character. By length of the line we denote the sum of lengths of the words in this line, increased by the number of spaces between them. Let denote the length of the line no. . I.e., if the line no. contains the words with numbers from to inclusive, its length is:

As an example, let us consider a text consisting of words of lengths 4, 3, 2 and 5, respectively, and its decomposition into 3 lines. Then the length of the first line is 4, of the second 6, and of the third 5:

XXXX (1st line)
XXX XX (2nd line)
XXXXX (3rd line)

We shall refer to the number as the coefficient of aestheticism of a decomposition of the given text into lines. Particularly, if the decomposition has only one line, its coefficient of aestheticism is . Needless to say, the smaller the coefficient, the more aesthetical the decomposition. We shall consider only those decompositions that have no line whose length exceeds some constant . Of all such decompositions of a given text into any number of lines we seek the most aesthetical one, i.e. the one with the smallest coefficient of aestheticism. The aforementioned exemplary decomposition's coefficient is , and that is exactly the minimum coefficient of aestheticism for and .
Write a programme that: • reads from the standard input the numbers and and the lengths of the words, • determines the minimum coefficient of aestheticism for those decompositions, whose every line is of length not exceeding , • writes the result to the standard output. The first line of the standard input contains the numbers and , , , separated by a single space. The second, last line of the standard input contains integers, denoting the lengths of subsequent words, for , separated by single spaces. The first and only line of the standard output should contain exactly one integer: the minimum coefficient of aestheticism for those decompositions, whose every line's length does not exceed . For the input data: the correct result is: while for the following input data: the correct result is: Task author: Bartosz Walczak.
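Tasks of this shape reduce to a classic line-breaking dynamic program over the position of the last line break. Since the exact formula of the aesthetic coefficient did not survive in this copy, the sketch below uses a placeholder additive per-line cost (squared slack against an assumed limit B = 7); the DP structure, not the particular cost, is the point, and the real coefficient may not even be additive over lines.

```python
def line_len(lengths, i, j):
    """Length of a line holding words i..j (0-based, inclusive):
    sum of word lengths plus one space between consecutive words."""
    return sum(lengths[i:j + 1]) + (j - i)

def best_cost(lengths, B, cost):
    """Classic line-breaking DP: dp[j] = optimal total cost of laying out
    the first j words, trying every start i of the last line. O(n^2)."""
    n = len(lengths)
    INF = float("inf")
    dp = [INF] * (n + 1)
    dp[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):                     # last line = words i..j-1
            L = line_len(lengths, i, j - 1)
            if L <= B:
                dp[j] = min(dp[j], dp[i] + cost(L))
    return dp[n]

# The word lengths 4, 3, 2, 5 come from the task's example; B = 7 and the
# quadratic-slack cost are assumptions, since the real formula is missing.
print(best_cost([4, 3, 2, 5], 7, cost=lambda L: (7 - L) ** 2))  # → 14.0
```

With B = 7 this placeholder objective forces the same three-line layout as the task's example (lines of length 4, 6 and 5), since placing the first two words together would exceed the limit.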
Precision, Recall & Confusion Matrices in Machine Learning So, you’ve built a machine learning model. Great. You give it your inputs and it gives you an output. Sometimes the output is right and sometimes it is wrong. You know the model is predicting at about an 86% accuracy because the predictions on your training test said so. But, 86% is not a good enough accuracy metric. With it, you only uncover half the story. Sometimes, it may give you the wrong impression altogether. Precision, recall, and a confusion matrix…now that’s safer. Let’s take a look. Confusion matrix Both precision and recall can be interpreted from the confusion matrix, so we start there. The confusion matrix is used to display how well a model made its predictions. Binary classification Let’s look at an example: A model is used to predict whether a driver will turn left or right at a light. This is a binary classification. It can work on any prediction task that makes a yes or no, or true or false, distinction. The purpose of the confusion matrix is to show how…well, how confused the model is. To do so, we introduce two concepts: false positives and false negatives. • If the model is to predict the positive (left) and the negative (right), then the false positive is predicting left when the actual direction is right. • A false negative works the opposite way; the model predicts right, but the actual result is left. Using a confusion matrix, these numbers can be shown on the chart as such: In this confusion matrix, there are 19 total predictions made. 14 are correct and 5 are wrong. • The False Negative cell, number 3, means that the model predicted a negative, and the actual was a positive. • The False Positive cell, number 2, means that the model predicted a positive, but the actual was a negative. The false positive means little to the direction a person chooses at this point. 
But, if you added some stakes to the choice, like choosing right led to a huge reward, and falsely choosing it meant certain death, then now there are stakes on the decision, and a false negative could be very costly. We would only want the model to make the decision if it were 100% certain that was the choice to make.

Cost/benefit of confusion

Weighing the costs and benefits of choices gives meaning to the confusion matrix. The Instagram algorithm needs to put a nudity filter on all the pictures people post, so a nude photo classifier is created to detect any nudity. If a nude picture gets posted and makes it past the filter, that could be very costly to Instagram. So, they are going to try to classify more things than necessary to filter every nude photo, because the cost of failure is so high.

Non-binary classification

Finally, confusion matrices do not apply only to a binary classifier. They can be used on any number of categories a model needs, and the same rules of analysis apply. For instance, a matrix can be made to classify people’s assessments of the Democratic National Debate:

• Very poor
• Poor
• Neutral
• Good
• Very good

All the predictions the model makes can get placed in a confusion matrix.

Precision

Precision is the ratio of true positives to the total of the true positives and false positives. Precision looks to see how much junk positives got thrown in the mix. If there are no bad positives (those FPs), then the model had 100% precision. The more FPs that get into the mix, the uglier that precision is going to look. To calculate a model’s precision, we need the positive and negative numbers from the confusion matrix: Precision = TP/(TP + FP)

Recall

Recall goes another route. Instead of looking at the number of false positives the model predicted, recall looks at the number of false negatives that were thrown into the prediction mix: Recall = TP/(TP + FN) The recall rate is penalized whenever a false negative is predicted.
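Both metrics are one-liners once the four counts are known. In the sketch below, the 14 correct predictions from the left/right example are assumed to split as TP = 10 and TN = 4; the text gives only FN = 3, FP = 2, and the totals, so that split is an assumption:

```python
def precision(tp, fp):
    """Fraction of positive predictions that were right: TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of actual positives that were found: TP / (TP + FN)."""
    return tp / (tp + fn)

# FN = 3 and FP = 2 come from the example above; the TP/TN split of the
# 14 correct predictions is an assumed 10/4.
tp, tn, fp, fn = 10, 4, 2, 3

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(round(accuracy, 3))            # → 0.737  (14 of 19 correct)
print(round(precision(tp, fp), 3))   # → 0.833  (10 of 12 positive calls)
print(round(recall(tp, fn), 3))      # → 0.769  (10 of 13 actual positives)
```

This also shows why plain accuracy "tells half the story": the same 0.737 accuracy is compatible with many different precision/recall trade-offs, depending on how the errors split between FP and FN.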
Because the penalties in precision and recall are opposites, so too are the equations themselves. Precision and recall are the yin and yang of assessing the confusion matrix.

Recall vs precision: one or the other?

As seen when interpreting the confusion matrix, sometimes a model might want to allow more false negatives to slip by. That would result in higher precision, because false negatives don’t penalize the precision equation. Sometimes a model might want to allow more false positives to slip by, resulting in higher recall, because false positives don’t appear in the recall equation. Generally, a model cannot have both high recall and high precision; there is a cost associated with gaining points in either. A model may have an equilibrium point where the two are equal, but when the model gets tweaked to squeeze a few more percentage points of precision, that will likely lower the recall rate.

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.
Algorithm Engineering A.A. 2022-2023

Teacher: Paolo Ferragina. CFU: 9 (first semester). Course ID: 531AA. Language: English.

Question time: Monday 15-17, or by appointment (also in video-conferencing through the virtual room of the course).

News about this course will be distributed via a Telegram channel.

Official lectures schedule: the schedule and content of the lectures are available below and in the official register.

In this course we will study, design and analyze advanced algorithms and data structures for the efficient solution of combinatorial problems involving all basic data types, such as integers, strings, (geometric) points, trees and graphs. The design and analysis will involve several models of computation (such as RAM, 2-level memory, cache-oblivious, streaming) in order to take into account the architectural features and the memory hierarchy of modern PCs and the availability of Big Data upon which those algorithms could work. We will add to this theoretical analysis several other engineering considerations spurring from the implementation of the proposed algorithms and from experiments published in the literature. Every lecture will follow a problem-driven approach that starts from a real software-design problem, abstracts it in a combinatorial way (suitable for an algorithmic investigation), and then introduces algorithms aimed at minimizing the use of some computational resources like time, space, communication, I/O, energy, etc. Some of these solutions will be discussed also at an experimental level, in order to introduce proper engineering and tuning tools for algorithmic development.

Week Schedule of the Lectures:
* Monday, 9:00-11:00, Room Fib C
* Tuesday, 11:00-13:00, Room Fib C
* Wednesday, 11:00-13:00, Room Fib C

The exam will consist of a written test including two parts: exercises and “oral” questions. The exam is passed if in both parts the student gets a sufficient score (expressed in thirtieths); the two scores are then combined into the final grade.
The exam is passed if in both parts the student gets a sufficient score (expressed in 30), which are then The first (exercises) and the second (theory questions) parts of the exam can be split into different exam dates, even of different exam sessions. The exam dates are the ones indicated in the calendar on ESAMI. In the case that the second part is not passed or the student abandons the exam, (s)he can keep the rank of the first exam, but this may occur just once. The second time this happens, the rank of the first part is dropped, and the student has to do both parts again. Dates Room Text Notes text, Correction will occur Wednesday 18 January, at 15:00 (Ferragina's office). For registration only, also online same date-hour, please show yourself on Teams' room of 16/01/2023, room solution, the course. start at 09:00 E results Students that have passed only the “exercises” part can repeat only the “theory” part on any of the following exam dates, they have to register on the portal “ESAMI” writing in the notes “only theory”. Moreover, they can come +45mins after the start of the exam, to join the class that did in the first hour the “exercises” part. 10/02/2023, room text, results Correction will occur Monday 13 February, at 11:00 (Ferragina's office). For registration only, you can just send me an email stating that you accept the rank. start at 09:00 E , solution Students that have passed only the “exercises” part can repeat only the “theory” part on any of the following exam dates, they have to register on the portal “ESAMI” writing in the notes “only theory”. Moreover, they can come +45mins after the start of the exam, to join the class that did in the first hour the “exercises” part. 05/06/2023, room text, results The “result” file reports the grades of the two-parts exam and a proposal for the final grade. You can accept it, writing me, or you can repeat one or both the start at 09:00 C parts. 
05/07/2023, room start at 16:00 C 24/07/2023, room text start at 09:00 C 07/09/2023, room text start at 16:00 A1 Background and Notes of the Course I strongly suggest refreshing your knowledge about basic Algorithms and Data Structures by looking at the well-known book Introduction to Algorithms, Cormen-Leiserson-Rivest-Stein (third edition). Specifically, I suggest you look at the chapters 2, 3, 4, 6, 7, 8, 10, 11 (no perfect hash), 12 (no randomly built), 15 (no optimal BST), 18, 22 (no strongly connected components). Also, you could look at the Video Lectures by Erik Demaine and Charles Leiserson, specifically Lectures 1-7, 9-10, and 15-17. Most of the content of the course will be covered by some notes I wrote in these years; for some topics, parts of papers/books will be used. You can download the latest version of these notes from this link. I state that this material will be published by Cambridge University Press as Pearls of Algorithm Engineering by me. This prepublication version is free to view and download for personal use only. Not for redistribution, resale or use in derivative works. © Paolo Ferragina 2022. Video-lectures of last year are available at the link and they are linked just for reference, if you wish to re-check something you listened in class. This year, lectures are in presence and the program of the course could be different. The lectures below include also some EXTRA material, which is suggested to be read for students aiming to high rankings. Date Lecture Biblio 19/09 Introduction to the course. Models of computation: RAM, 2-level memory. An example of algorithm analysis (time, space and I/ Chap. 1 of the notes. /2022 Os): binary search. The B-tree (or B+-tree) data structure: searching and updating a big-set of sorted keys. Read note 1 and note 2 on B-trees. 20/09 Another example of I/Os-analysis: the sum of n numbers. The role of the Virtual Memory system. Algorithm for Permuting. Chap. 5 of the notes. 
/2022 Sorting atomic items: sorting vs permuting, comments on the time and I/O bounds. 21/09 Binary merge-sort, and its I/O-bounds. Snow Plow, with complexity proof and an example. Chap. 5 of the notes. /2022 Study also from my notes: Finding the maximum-sum subsequence (Chap. 2, no sect 2.5-). Students are warmly invited to refresh their know-how about: Divide-and-conquer technique for algorithm design and Master Lecture 2, 9 and 10 of Demaine-Leiserson's course at MIT Theorem for solving recurrent relations; and Binary Search Trees 27/09 Multi-way mergesort: algorithm and I/O-analysis. Lower bound for sorting: comparisons and I/Os. Chap. 5 of the notes /2022 EXTRA: Lower bound for permuting. 28/09 The case of D>1 disks: non-optimality of multi-way MergeSort, the disk-striping technique. Quicksort: recap on best-case, Chap. 5 of the notes /2022 worst-case. Quicksort: Average-case with analysis. 03/10 Selection of kth ranked item in linear average time (with proof). 3-way partition for better in-memory quicksort. Bounded /2022 Quicksort. Multi-way Quicksort: definition, design and I/O-complexity. Selection of k-1 “good pivot” via Oversampling. Proof Chap. 5 of the notes of the average time complexity of Multi-way Quicksort. 04/10 Random sampling: disk model, known length (algorithms and proofs). Random sampling on the streaming model, known and unknown Chap. 3 of the notes. /2022 length. Reservoir sampling: Algorithm and proofs. Exercises on Random Sampling. 05/10 Randomized data structures: Treaps with their query (search a key, 3-sided range query) and update (insert, delete, split, Notes by others. Study also Theorems and Lemmas. /2022 merge) operations (with time complexity and proofs). 10/10 Randomized data structures: Skip lists (with proofs and comments on time-space complexity and I/Os). String sorting: See Demaine's lecture num. 12 on skip lists. Chap. 7 of the notes. /2022 comments on the difficulty of the problem on disk, lower bound. 
11/10 LSD-radix sort with proof of time complexity and correctness. MSD-radix sort and the trie data structure. Multi-key /2022 Quicksort: algorithm and analysis. 12/10 Ternary search tree. Exercises. Fast set intersection, various solutions: scan, sorted merge, binary search, mutual Chap. 6 of the notes, and Sect 9.2. /2022 partition, binary search with exponential jumps. A lower bound on the number of comparisons. 17/10 Fast set intersection: more on Exponential jumps, and two-level scan. Interpolation Search. See sect. 9.2 for interpolation search. /2022 EXTRA: random shuffling (sect. 6.4). Students are warmly invited to refresh their know-how about: hash functions and their properties; hashing with chaining. Lectures 7 of Demaine-Leiserson's course at MIT 18/10 Hashing and dictionary problem: direct addressing, simple hash functions, hashing with chaining. Uniform hashing and its Chap. 8 of the notes. All theorems with proof, except Theo 8.3 and /2022 computing/storage cost, universal hashing (definition and properties). An example of Universal Hash functions, with 8.5 without proof (only the statement). correctness proof. 19/10 Another example of Universal hashing: just the algorithm (no proof). The case of d-left hashing and the “power of two Don't read Perfect Hash (hence no sect. 8.5). /2022 choices” result. Cuckoo hashing (with all proofs). 24/10 Minimal ordered perfect hashing: definition, properties, construction, space and time complexity. Exercises. 25/10 Bloom Filter: properties, construction, query and insertion operations, error estimation (with proofs). Spectral Bloom /2022 Filter. No compressed BF. EXTRA: Lower bound on BF space 26/10 Prefix search: definition of the problem, solution based on arrays, Front-coding, two-level indexing based on compacted Chap. 9 of the notes: 9.1 and 9.4. No Locality Preserving front /2022 tries. Analysis of space, I/Os and time of all described solutions. coding (9.3). 
31/10/2022: Two-level indexing based on Patricia Trees, with exercises. [Study: Chap. 9 of the notes: 9.5. Video.]
02/11/2022: Substring search: definition, properties, reduction to prefix search. The Suffix Array. Binary searching the Suffix Array: p log n. Suffix Array construction via qsort and its asymptotic analysis. Text mining use of suffix arrays. [Study: Chap. 10 of the notes: 10.1, 10.2.1 (but no pages from 10-4 to 10-8), 10.2.3 (but no "The skew algorithm", no "The Scan-based algorithm"), and sect 10.4.3.]
07/11/2022: Prefix-free codes, the notion of entropy. Integer coding: the problem and some considerations. Shannon Theorem and optimal codes. The codes Gamma and Delta, space/time performance, and considerations on optimal distributions. [Study: Chap. 11 of the notes.]
08/11/2022: The codes Rice, PForDelta, Elias-Fano code. Definition of Rank/Select operations and comments.
09/11/2022: Exercise on Elias-Fano and simplified NextGEQ procedure. Variable-byte code and (s,c)-dense code, with examples.
14/11/2022: Lab activity (optional): A gentle introduction to the Succinct Data Structure Library: integer sequences. (Please prepare in advance the environment for experimenting with the SDSL library, as indicated at the repo.) [Study: Slides SDSL and video.]
15/11/2022: Lab activity (optional): More on the Succinct Data Structure Library. Solution of the exercise. [Study: Video.]
16/11/2022: Interpolative code. Rank and Select: definition and succinct solution for Rank. EXTRA: Solution for Select. [Study: Chapter 15 of the notes.]
21/11/2022: Compressed solution of Rank/Select based on Elias-Fano coding. Succinctly encoding binary trees, with examples. [Study: No LOUDS, which is for generic trees.]
22/11/2022: Data Compression. Static, semistatic and dynamic models for frequency estimation. Huffman, with optimality (proof) and relation to Entropy. [Study: Chap. 12 of the notes. No PPM.]
23/11/2022: Canonical Huffman: construction, properties. Huffman decompression. Arithmetic coding: properties, the converter tool.
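The 07/11 lecture introduces the Gamma code. As a compact reminder of how it works (a sketch of mine, not code from the course), gamma writes a unary prefix of |bin(n)|-1 zeros followed by the binary representation of n:

```python
def gamma_encode(n):
    """Elias gamma code of a positive integer, as a bit string."""
    assert n >= 1
    b = bin(n)[2:]                      # binary digits of n, without the '0b' prefix
    return "0" * (len(b) - 1) + b       # unary length prefix, then the digits

def gamma_decode(bits):
    """Decode a concatenation of gamma codes into the list of integers."""
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":           # count the unary length prefix
            zeros += 1
            i += 1
        out.append(int(bits[i:i + zeros + 1], 2))
        i += zeros + 1
    return out

print(gamma_encode(9))  # the 4-bit value 1001 preceded by three zeros: 0001001
```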
28/11/2022: More on Arithmetic coding: compression, decompression, and entropy bounds.
29/11/2022: Dictionary-based compressors: properties and algorithmic structure. LZ77, LZSS, LZ78. [Study: Chap 13.1 and 13.2. No LZW. Slides.]
30/11/2022: Compressor bzip: MTF, RLE0, Wheeler-code, Burrows-Wheeler Transform. How to construct the BWT via qsort. How to invert the BWT. [Study: Notes: 14.1, 14.2 and 14.3.]
05/12/2022: Exercises.
06/12/2022: Exercises.
07/12/2022: Exercises.
12/12/2022: Exercises.
13/12/2022: Exercises.
magistraleinformaticanetworking/ae/ae2022/start.txt · Last modified: 07/09/2023 at 14:59 (14 months ago) by Paolo Ferragina
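As a pointer for the 30/11 topics, a minimal sketch of constructing the BWT by sorting rotations (the "qsort" approach named in the log) and of a naive inversion; this is illustrative only, and the O(n^2 log n) inversion below is not the LF-mapping method usually taught:

```python
def bwt(s, eos="\0"):
    """Burrows-Wheeler Transform by sorting all rotations of s + sentinel.
    Assumes the sentinel character does not occur in s."""
    s += eos
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t, eos="\0"):
    """Invert the BWT by repeatedly prepending the transformed column and sorting."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(t[i] + table[i] for i in range(len(t)))
    row = next(r for r in table if r.endswith(eos))  # the row ending in the sentinel
    return row[:-1]
```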
1. a geometric shape with four angles and four straight sides
2. a courtyard which is quadrangular

1. having the shape of a quadrangle
2. in the shape of a quadrangle

1. One of the four sections made by dividing an area with two perpendicular lines.

1. (mathematics) The four regions of the Cartesian plane bounded by the x-axis and y-axis.
2. (geometry) One fourth of a circle or disc; a sector with an angle of 90°.
3. (nautical) A measuring device with a graduated arc of 90° used in locating an altitude.

1. (mathematics) of a class of polynomials of the form y = ax^2 + bx + c

noun (Wikipedia: Quadratic equation)
1. In mathematics, a quadratic equation is a polynomial equation of the second degree.

1. the process of making something square; squaring
2. (mathematics) the process of constructing a square having the same area as a given plane figure, or of computing that area
3. (astronomy) a situation in which three celestial bodies form a right-angled triangle
4. (physics) the condition in which the phase angle between two alternating quantities is 90°

1. (mathematics) The problem, proposed by ancient Greek geometers, of using a finite ruler-and-compass construction to make a square with the same area as a given circle.

noun: quadrilateral
1. a polygon having four sides.
1. having four sides.

1. A homogeneous polynomial in two or more variables.

1. (mathematics) an algebraic equation or function of the fourth degree
1. (mathematics) of, or relating to, the fourth degree

1. a polygon with fifteen sides

1. (mathematics) a quintic polynomial: ax^5 + bx^4 + cx^3 + dx^2 + ex + f
1. (mathematics) Of or relating to the fifth degree, such as a quintic polynomial, which has the form ax^5 + bx^4 + cx^3 + dx^2 + ex + f (containing a term with the independent variable raised to the fifth power)

1. (arithmetic) The number resulting from the division of one number by another. □ The quotient of 12 divided by 4 is 3.
1. (mathematics) By analogy, the result of any process that is the inverse of multiplication as defined for any mathematical entities other than numbers.
2. (obsolete, rare) A quotum or quota.
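Since the glossary defines the quadratic as y = ax^2 + bx + c, here is a small illustration of finding its roots with the quadratic formula (a sketch using Python's complex-safe square root, added for concreteness):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula; complex if needed."""
    d = cmath.sqrt(b * b - 4 * a * c)   # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -5, 6))  # x^2 - 5x + 6 factors as (x - 2)(x - 3)
```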
The \({ }^{13} \mathrm{C}-\mathrm{NMR}\) spectrum of 3-methyl-2-butanol shows signals at \(\delta 17.88\left(\mathrm{CH}_{3}\right), 18.16\left(\mathrm{CH}_{3}\right), 20.01\left(\mathrm{CH}_{3}\right), 35.04\) (carbon-3), and \(72.75\) (carbon-2). Account for the fact that each methyl group in this molecule gives a different signal.

Short Answer (Expert verified)

Answer: Each methyl group in 3-methyl-2-butanol gives a different signal in its \({ }^{13} \mathrm{C}\)-NMR spectrum because each one sits in a distinct electronic environment, determined by its position in the chain and its proximity to the electron-withdrawing hydroxy group.

Step by step solution

Draw the structure of 3-methyl-2-butanol
To analyze why each methyl group gives a distinct signal, first draw the structure of 3-methyl-2-butanol. The molecule has a total of 5 carbons:

CH3 - CH(OH) - CH(CH3) - CH3

There are three methyl groups: one on carbon 1, one branching from carbon 3, and one on carbon 4.

Explain the electronic environment of each methyl group
Next, we need to look at the electronic environment around each methyl group:
1. Methyl group on carbon 1 (CH3): This methyl group is directly connected to the hydroxy-bearing carbon (carbon-2), which acts as an electron-withdrawing group (EWG). The proximity to the EWG affects the electronic environment of this CH3 group.
2. Branch methyl group on carbon 3 (CH(CH3)): This methyl group is attached to carbon-3, which is directly connected to the hydroxy-bearing carbon (carbon-2). The EWG effect is not as prominent here as it is for the first methyl group, but it still affects the electronic environment of this CH3 group.
3. Methyl group on carbon 4 (CH3): This methyl group is on the other end of the molecule, far away from the hydroxy-bearing carbon (carbon-2). Hence, it is more shielded from the EWG effect and has a different electronic environment than the other two CH3 groups.

Relate the electronic environments to the \({ }^{13} \mathrm{C}-\mathrm{NMR}\) signals
Now, we can relate the different electronic environments of the three methyl groups to their \({ }^{13} \mathrm{C}-\mathrm{NMR}\) signals:
1. Methyl group on carbon 1 (CH3) - δ 17.88: The signal at δ 17.88 corresponds to the CH3 group directly attached to the hydroxy-bearing carbon (carbon-2). The deshielding by the EWG causes this signal to appear at lower field.
2. Branch methyl group on carbon 3 (CH(CH3)) - δ 18.16: The signal at δ 18.16 corresponds to the CH3 group attached to carbon-3. The EWG effect at this carbon is not as prominent as for the first methyl group, but it still causes some deshielding, making its signal appear at lower field than that of the CH3 on carbon 4.
3. Methyl group on carbon 4 (CH3) - δ 20.01: The signal at δ 20.01 corresponds to the CH3 group at the carbon most distant from the hydroxy-bearing carbon (carbon-2). The shielding due to its position far from the EWG results in a signal at higher field.

In conclusion, each methyl group in 3-methyl-2-butanol has a different \({ }^{13} \mathrm{C}-\mathrm{NMR}\) signal due to the distinct electronic environments caused by their positions and proximity to the electron-withdrawing group (the hydroxy group).
The Thrill of Discovery

Photo by Rob Johnson, graph by Stephen Humphries

While some may view math as a tedious, required class, mathematics professor Stephen Humphries believes that it is much more than that: math is discovery. "It's the thrill of discovery that is part of what makes [math] exciting, fun, and interesting," Humphries said. Humphries has taught mathematics at BYU since 1987, and he shares the joy that comes from unearthing mathematical mysteries with his students. "I was meeting with one student a couple of weeks ago. We discovered something, and he shouted for joy," Humphries said. "He thought it was so neat what we discovered. To experience that is

Humphries says that he always encounters surprising turns during his mathematics research. "There's always the unexpected," said Humphries. "You'll be studying a particular thing, then all of the sudden you'll think to do something [different]. You check a few examples and you notice a pattern, and that pattern becomes a theorem."

One of Humphries' areas of focus is group theory, which consists of looking at the symmetries of various objects and shapes. These shapes may seem just like intertwined circles and shapes to most people, but Humphries knows their ins and outs. "I tend to think of them as very beautiful objects," Humphries said. "I have lots of fun studying them." Group theory is applicable not just to mathematics, but also to physics and other sciences.

Humphries mentors a number of students to teach them how to perform research in mathematics. "I like working with students and seeing how they do mathematics," Humphries said. "I think of what I do as being an enabler of students to have a research experience."

Although it may be difficult, Humphries says that providing proofs of results is integral to his job as a mathematician. "The whole idea of mathematics is to produce proofs of significant and interesting mathematical ideas," Humphries said.
"That's what a mathematician does." The task of being a mathematician also includes uncovering the logic in the problem. "It's not obvious what to do next," Humphries said. "It's only logical once you've seen how the argument works."

Humphries thoroughly enjoys sharing the thrill of discovery with other students. He has seen a student come from Russia with no degree and go on to earn a second PhD. He also saw a former student of his return to BYU after 20 years to receive her master's degree. "I've enjoyed working with students. I've mentored a fairly large number of students in the time I've been here," Humphries said. "That's always been a rewarding and fruitful thing to do."
There are 10 kinds of people in the world - Colin Walls

Colin Walls ● September 27, 2023

Many years ago - I must have been 11 or 12 - I was at school and we were scheduled to have a Mathematics lesson. A different teacher came into the room, explained that the Mathematics teacher was off sick and that he would take the class instead. He said "I have no idea where you are on the curriculum, so we'll go off piste and have some fun." He taught us about number bases, explaining that base 10 was arbitrary - probably to do with our 10 fingers - and a number system could be based on any value. He showed that base 4 would only need the digits 0, 1, 2 and 3 and that base 12 would need two more digits, which could be A and B. He said that you could even have base 2, where the only digits would be 0 and 1, and that this was called binary. Some of my classmates were baffled, but for me a light bulb came on in my head and the concept was crystal clear. I wondered when I might have some use for this information. The teacher had said something about binary being the "language of computers", but it was not until nearly a decade later that this made any sense to me and his teachings could be put to use.

My first programming was using high-level languages - Fortran, BASIC and others - which were designed to hide the inner workings of the computer. But I soon became curious about the details and learned about assembly language and how machine instructions were just binary sequences. One of the first computers that I studied at "low level" was a DEC PDP8 - a minicomputer with 12-bit words. On the front of the machine was a line of switches that could be used to look at memory and deposit values - including machine instructions - in binary. This was not a very efficient way to program, of course, and there were tools to make it easier. In assembly language, the binary values were normally represented by four octal - base 8 - digits.
I quickly became fluent in using octal to visualize the binary value. A more advanced computer was the 16-bit PDP11, for which DEC continued to use octal, representing words with six digits, where the most significant one could only be 0 or 1 - which I always thought was a bit messy. Later, the 32-bit VAX machines were introduced and DEC started to be more conventional and used base-16. Of course, nowadays hex is ubiquitous. However, despite 40+ years of experience, I still cannot make the instant translation to binary that I could do so easily with octal.

When I learned C, I was disappointed to find that the language did not have a means to enter binary constants - only octal and hex. In due course, I decided to do something about that …

My solution was very simple. I created a file - binary.h - that contained 256 lines like this:

#define b00000000 ((unsigned char) 0x00)
#define b00000001 ((unsigned char) 0x01)
#define b00000010 ((unsigned char) 0x02)

This easily enabled me to reach a personal "holy grail" - to write more readable code - as binary values in my C programs would be instantly understandable. Although it is very easy to create, if you would like a copy of the binary.h file, drop me an email.

My next problem was how to deal with 16-bit values. I could have just created an upgraded binary.h file with 65,536 lines providing symbols for every possible 16-bit number in binary. Although quite easy to create, the file would be unwieldy and would not really solve the problem, as an unbroken string of 16 binary digits would be hard to read. The solution was to deal with the upper and lower bytes separately.
I made a file with 512 lines like this:

#define Hb00000000 ((unsigned short) 0x0000)
#define Hb00000001 ((unsigned short) 0x0100)
#define Hb00000010 ((unsigned short) 0x0200)

#define Lb00000000 ((unsigned short) 0x00)
#define Lb00000001 ((unsigned short) 0x01)
#define Lb00000010 ((unsigned short) 0x02)

Then it is just a matter of using the C language OR operator to join the high and low bytes:

Hb00001111 | Lb11110000

This would be equivalent to 0x0ff0. The same idea could be applied to 32-bit values, defining each of the four bytes in a file with 1024 entries.

Having addressed this topic, I cannot resist quoting one of my favorite jokes. It may be old, but I like it: "There are 10 kinds of people in the world: those who understand binary and those who do not." 🙂
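The 256-line binary.h described above is easier to generate than to type; here is one way to do it (my sketch, not the author's original script). Note also that C23, C++14, and long-standing GCC/Clang extensions accept 0b... literals directly, which removes the need for such a header on modern toolchains.

```python
def make_binary_header(path="binary.h"):
    """Write the 256 #define lines mapping b00000000..b11111111 to hex bytes."""
    with open(path, "w") as f:
        for v in range(256):
            f.write(f"#define b{v:08b} ((unsigned char) 0x{v:02x})\n")

# make_binary_header()  # writes binary.h to the current directory
```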
JpGU-AGU Joint Meeting 2017

[MIS15-P03] Equatorial waves modified by the presence of a toroidal magnetic field within the stably stratified layer at the top of the Earth's outer core

Keywords: the uppermost outer core, stable stratification, equatorial waves, MHD shallow water equations

A number of studies have suggested the existence of a stably stratified layer at the top of the Earth's outer core (e.g. Buffett, 2014), including seismological evidence (e.g. Helffrich and Kaneshima, 2010). The stable stratification can make horizontal flow dominant at the top of the core. It is therefore expected that the hydrostatic approximation used in atmospheric and oceanic dynamics can be applied to fluid motion within this stratified layer, provided that we include the influences of the magnetic field. In this study, we investigated waves trapped in the equatorial region at the top of the liquid core. Our research is motivated by prominent geomagnetic fluctuations in the equatorial region. For example, Chulliat et al. (2015) found some standing waves with periods of about 6 years in secular acceleration data in the equatorial region. In addition, Finlay and Jackson (2003) and other scientists showed that the geomagnetic westward drift is most prominent in the low latitude region. The governing equations we adopt are the linearized non-dissipative Boussinesq-MHD equations. In addition, we use the hydrostatic and equatorial beta plane approximations, and assume that the background magnetic field has only a toroidal (east-west) component. With these assumptions, the governing differential equations become separable, and can be divided into horizontal and vertical structure equations. It should be noted that the horizontal structure equations have the same form as the MHD (magnetohydrodynamics) shallow water equations (e.g.
Gilman, 2000; Zaqarashvili et al., 2008). We obtained a dispersion relation and eigenfunctions with both analytical and numerical approaches, and examined the effect of toroidal magnetic fields on equatorial waves. Firstly, we considered the situation in which a uniform toroidal field is imposed. The frequencies of waves such as inertial gravity waves and Rossby waves are higher, and these waves decay more rapidly away from the equator, than in the non-magnetic situation. Moreover, MC Rossby waves, which can exist in the mid latitudes, cannot be trapped in the equatorial region. Next, we let the strength of the imposed background field depend linearly on latitude. The spectra of Alfven waves become continuous, and a resonance appears at the latitude where the east-west phase speed of an eigenmode coincides with the Alfven wave velocity.
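For orientation, the MHD shallow-water system the abstract refers to (Gilman, 2000) takes roughly the following standard form; this is reproduced from the general literature as background, not from the abstract itself, and sign and normalization conventions vary by author:

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} + f\,\hat{\mathbf{z}}\times\mathbf{u} = -g\,\nabla h + (\mathbf{B}\cdot\nabla)\mathbf{B},
\]
\[
\frac{\partial \mathbf{B}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{B} = (\mathbf{B}\cdot\nabla)\mathbf{u},
\]
\[
\frac{\partial h}{\partial t} + \nabla\cdot(h\mathbf{u}) = 0, \qquad \nabla\cdot(h\mathbf{B}) = 0,
\]

where \(\mathbf{u}\) is the horizontal velocity, \(\mathbf{B}\) the horizontal magnetic field in Alfven-speed units, \(h\) the layer thickness, and \(f\) the Coriolis parameter (\(\beta y\) on the equatorial beta plane, as used in the study).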
Lumens Distance Calculator

Author: Neo Huang. Review by: Nancy Deng. Last updated: 2024-10-03.

The Lumens Distance Calculator is an invaluable tool for professionals and enthusiasts in lighting design, photography, and stage production. It simplifies the process of determining the optimal distance from a light source to the subject or area to be illuminated based on the quantity of light emitted and the desired intensity.

Historical Background
The concept of measuring light intensity and its distribution has been a fundamental aspect of optical physics for centuries. The development of the lumens distance calculation is a direct application of these principles, enabling precise control over lighting conditions in various settings.

Calculation Formula
The optical distance (\(D\)) is calculated using the formula:
\[ D = \sqrt{\frac{Q}{E}} \]
• \(D\) is the Optical Distance in meters (m),
• \(Q\) is the quantity of light emitted in lumens (lm),
• \(E\) is the light intensity in lumens per square meter (\(\text{lm/m}^2\)).

Example Calculation
To illustrate, let's calculate the optical distance for a light source emitting 70 lumens with a desired light intensity of 50 lumen/m²:
\[ D = \sqrt{\frac{70}{50}} \approx 1.183 \]
Thus, the optical distance is approximately 1.183 meters.

Importance and Usage Scenarios
Calculating the optical distance is crucial in ensuring optimal illumination for photography, theatrical productions, architectural lighting, and in the agricultural sector to determine the correct placement of grow lights for plants.

Common FAQs
1. What is lumens?
□ Lumens is a unit of luminous flux, a measure of the total quantity of visible light emitted by a source.
2. Why is light intensity measured in lumen/m² important?
□ Light intensity determines how much light is received per unit area, affecting how well a space or subject is illuminated. 3. How does optical distance affect lighting design? □ Optical distance helps in planning the placement of light sources to achieve desired illumination levels, ensuring efficiency and effectiveness in lighting design. This calculator bridges the gap between complex optical physics and practical application, making it a must-have tool for anyone involved in lighting design and planning.
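The formula and worked example above translate directly into code; a minimal sketch:

```python
import math

def optical_distance(lumens, intensity):
    """Optical distance D = sqrt(Q / E): the distance in meters at which a source
    emitting `lumens` lm yields `intensity` lm/m^2, per the calculator's model."""
    if intensity <= 0:
        raise ValueError("intensity must be positive")
    return math.sqrt(lumens / intensity)

print(round(optical_distance(70, 50), 3))  # the worked example: 1.183
```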
Electronic math online calculation

Related topics: four roots algebra, graph linear equation worksheet, factoring polynomials and power series, algebra equations solved, i need a program that will help me solve algebra problems, Yr 6 Algebra Worksheet Free, combining like terms worksheet, free online help with graphing parabolas, summation notation solvers, polynomial factoring calculator, free algebra 2 worksheets and reviews of chapters, Prolog Simplify Expand Expression, prentice hall code for power algebra, polynomial systems

majjx99 (Thursday 29th of Mar, 08:26): Can anyone please help me? I simply need a fast way out of my problem with my math. I have this test coming up fast. I have a problem with electronic math online calculation. Getting a good tutor these days quickly is difficult. Would appreciate any directions.

Vofj Timidrov (Saturday 31st of Mar, 08:40): Hi, I think that I can help you out. Have you ever used a program to help you with your algebra assignments? A while ago I was also stuck on similar problems like you, but then I found Algebrator. It helped me so much with electronic math online calculation and other math problems, so since then I always count on its help! My algebra grades got better thanks to the help of Algebrator.

MichMoxon (Monday 02nd of Apr, 08:27): Hello there, thanks for the instant response. But could you give me the details of genuine sites from where I can make the purchase? Can I get the Algebrator CD from a local book mart available in my area?

pcaDFX (Tuesday 03rd of Apr, 09:04): An extraordinary piece of math software is Algebrator. Even I faced similar difficulties while solving adding numerators, side-angle-side similarity and function composition. Just by typing in the problem from homework and clicking on Solve, a step by step solution to my algebra homework would be ready. I have used it through several math classes - Remedial Algebra, College Algebra and Algebra 1. I highly recommend the program.
Re: st: -predict , reffects- after -xtmelogit-

Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.

From: Jeph Herrin <[email protected]>
To: [email protected]
Subject: Re: st: -predict , reffects- after -xtmelogit-
Date: Mon, 20 Dec 2010 23:20:59 -0500

Thanks, the typo was that I had -yrwe- and -ywre- and mixed them up. But this still doesn't tell me how to get the mle random effects, if that is possible.

On 12/20/2010 4:47 PM, Tim Wade wrote:

Jeph, I think your example does produce the result you expected, and the calculated random effects agree with results from -predict, reffects-. Maybe there was just a typo. This seems to work:

use http://www.stata-press.com/data/r11/bangladesh
xtmelogit c_use || district:
predict re_cons, reffects
predict ymu, mu
predict yxb, xb
gen ywre=logit(ymu)
gen re_cons2=ywre-yxb
assert round(re_cons2, 0.00001)==round(re_cons, 0.00001)

On Mon, Dec 20, 2010 at 2:18 PM, Jeph Herrin <[email protected]> wrote:

This is very helpful, thanks. I understand shrinkage, but it didn't click when I read the documentation that the distinction was made here. So is there a way to get the mle random effects? I tried

predict ymu, mu
predict yxb, xb
gen yrwe=logit(ymu)
gen re_cons=ywre-yfix

but this doesn't agree with either sd(_cons) nor with the result of -predict, reffects-.

On 12/20/2010 1:09 PM, Roberto G. Gutierrez, StataCorp wrote:

Jeph Herrin <[email protected]> asks:

> I am using -xtmelogit- to estimate a random effects model, and am unsure
> about what is being predicted by -predict, reffects-.
>
> use http://www.stata-press.com/data/r11/bangladesh
> xtmelogit c_use || district:
> predict re_cons, reffects

When you use -predict, reffects- after -xtmelogit-, you obtain estimates of the modes of the posterior distribution of the random effects given the data and estimated parameters; see pg. 277 of [XT] xtmelogit postestimation for a complete discussion.

> Now, I would expect the standard deviation of the random effect reported in
> the model:
>
> Random-effects Parameters | Estimate   Std. Err.
> district: Identity |
>   sd(_cons)        | .4995265   .0798953
>
> to be approximately the standard error of the predicted random effects at
> the district level:
>
> bys district : gen tolist = _n==1
> sum re_cons if tolist
>
> Variable | Obs   Mean       Std. Dev.   Min         Max
> re_cons  | 60    .0069783   .3787135    -.9584643   .9257698
>
> But it seems very different, 0.4995 vs .37871. I must be missing something
> obvious, but what?

The phenomenon you are seeing is known as "shrinkage". Predictions based on the random-effects posterior distribution tend to be closer in magnitude to zero because they are incorporating the prior information that the random effects have mean zero. That is, if you have a relatively small cluster, the prior information that the random effect should be zero tends to dominate.

The estimate of sd(_cons) is, in contrast, based on maximum likelihood, where all the clusters are considered jointly. Thus, prior information does not tend to dominate as much, because all clusters are pooling what they have to say about the random-effects standard deviation.

Shrinkage diminishes as cluster size gets larger. To see this, try

. clear
. set seed 1234
. set obs 100 // 100 clusters
. gen u = sqrt(2)*invnorm(uniform()) // random effects
. gen id = _n
. expand 1000 // cluster size is 1000
. gen e = log(1/runiform() - 1) // logistic errors
. gen y = (e + u) > 0 // binary response
. xtmelogit y || id:
. predict r, reffects
. bysort id: gen tolist = _n==1
. sum r if tolist

The standard deviations match much more closely -- having a cluster size of 1,000 helps!
[email protected]
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
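Gutierrez's shrinkage point can also be illustrated outside Stata with a toy Gaussian analogue (my sketch, not part of the thread): in a normal-normal model, the posterior mode of a cluster effect is the cluster mean shrunk toward zero by tau^2 / (tau^2 + sigma^2/n), so the spread of the predictions sits below tau for small clusters and approaches tau as cluster size grows.

```python
import random, statistics

def predicted_re_sd(n_clusters=200, cluster_size=5, tau=1.0, sigma=2.0, seed=1):
    """SD of posterior-mode predictions in the toy model u_j ~ N(0, tau^2),
    y_ij = u_j + e_ij with e_ij ~ N(0, sigma^2); prediction = shrink * cluster mean."""
    rng = random.Random(seed)
    shrink = tau**2 / (tau**2 + sigma**2 / cluster_size)
    preds = []
    for _ in range(n_clusters):
        u = rng.gauss(0, tau)
        ybar = statistics.fmean(u + rng.gauss(0, sigma) for _ in range(cluster_size))
        preds.append(shrink * ybar)
    return statistics.stdev(preds)

print(predicted_re_sd(cluster_size=5))     # noticeably below tau = 1.0
print(predicted_re_sd(cluster_size=1000))  # close to tau = 1.0
```

This mirrors the Stata demonstration above: the mismatch between sd(_cons) and the spread of -predict, reffects- shrinks as clusters get larger.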
Eye Measurement and Reporting

The standard of performance for a high-speed serial link is bit-error-ratio (BER). BER is estimated based on a number of factors, one of which is the inner eye contour of an eye diagram. Simulation results, both statistical and time domain, contain eye width and height measurements, along with calculated margin to a target BER. The methods and reporting of these metrics pertain to Parallel Link Designer only when simulating in STAT mode. The inner contour of the eye diagram at the Target BER is used to estimate the channel BER. Understanding the derivation of eye height, width and voltage margin as reported in statistical and time domain simulations provides insight into the BER estimate. For compliance to standards, inner and outer eye masks can be applied to simulation results and the available margin is reported. Knowing how the margin is calculated and reported is beneficial to debugging potential problems within a design.

Parameters Used in Eye Measurement

Eye metrics such as height, width and margin, along with the BER for the channel, are determined based on three metrics: the eye contours, the clock PDF and the sensitivity of the receiver.

Eye Contours

Eye contours are plots of the amplitude associated with fixed probabilities as a function of sampling time. They indicate the shape of the inner and outer boundaries of the eye diagram for each of a number of different probabilities. The eye contours for a given simulation are based on the Target BER. To set the target BER, open the Simulation Parameters dialog box by selecting Setup > Simulation Parameters from the Serial Link Designer or Parallel Link Designer app. The default value is 1e-12, but you can set this based on the BER requirement of the channel design. The Target BER defines the contours that are generated and reported after rolling up the simulation.
Target BER defines the contours that is generated and reported after rolling up the simulation Four eye contours are generated after the simulation, for the Target BER, Target BER + 1e3, Target BER + 1e6 and Target BER + 1e9. For example, if the Target BER is set at 1e-12, the contours are displayed at: 1e-12, 1e-9, 1e-6 and 1e-3. Regardless of the Target BER setting, a 0 contour is also generated which represents the BER = 0 point. Clock PDF The clock PDF is the probability density function (PDF) of the phase difference between the clock at the receiver decision point and an ideal transmitter symbol clock. It is represented as a Gaussian probability density function. You can determine the net BER from the interaction between the bathtub curves and the clock pdf. The bathtub curve is the probability of error as a function of the time that the data is actually sampled. The net BER is the probability of an error occurring at a given sampling time given the probability of sampling at that time. This curve is the area under the product of the bathtub curve and the clock PDF. Receiver Sensitivity Sensitivity is a keyword that is part of the IBIS-AMI specification for receiver models. It is defined as the minimum latch overdrive voltage at the data decision point of the receiver after equalization. For example if sensitivity is defined as 25mV, the latch would require +/- 25mV for switching. The default sensitivity used in the Serial Link Designer and Parallel Link Designer is 0. A statistical eye diagram with the bathtub curve set and the receiver sensitivity marked +/- 25mV (dashed lines) is shown: Calculating Eye Metrics To calculate the eye metrics, you need to first find the center eye. For even n in a PAMn modulation scheme, the center eye is considered as the eye at the zero voltage. For odd n, the center eye is considered as the first eye below the eye at the zero voltage. The time center of the 1e-3 contour of the center eye is referred to as Tmid time. 
After finding the Tmid time, the Tmid eye height and eye width are reported for the center eye using the target BER contour. For the CEI 56G PAM4 specifications, the target BER is 1e-6. The eye contour probability is measured using the cumulative distribution function (CDF). The tentative Vmid location of each eye is determined by searching up at Tmid for a maximum histogram density, then continuing to search up for a minimum eye density. This is repeated for each eye. The eye height for each eye is determined by the target CDF voltage above and below the tentative Vmid position of the eye. The Vmid position of each eye is halfway between these two voltages. The eye width of each eye is determined by the target CDF contour to the left and right of the Tmid position and the Vmid position of each eye. These eye widths and eye heights are measured in accordance with the CEI 56G PAM4 specifications. They do not necessarily represent the maximum eye width or eye height of each eye, or the time location of the eye that is actually sampled by the hardware.

Eye Reporting

Signal Integrity Viewer reports the results of statistical simulation. These results include statistical eye height, eye width, eye margin, outer eye height, and threshold eye width. These results are all determined from the Target BER contour and the receiver sensitivity:

Stat Eye Height (V): The height of the target bit error rate contour at the average clock time.
Stat Eye Width (ps): The width of the eye measured at the 0 V crossing.
Stat Eye Outer Height (V): The maximum voltage measured on the outer eye, that is, the maximum voltage measured on the zero outer contour.
Stat Threshold Eye Width (ps): The eye width measured at the intersection of the inner eye and the receiver sensitivity.
Stat Eye Margin (V): The voltage measured from the sensitivity threshold to the target BER contour at the average clock time.
Stat Clock PDF Mean: The mean of the recovered clock.
Stat Clock PDF Sigma: The standard deviation of the recovered clock.
Stat BER: The average bit error rate as predicted through statistical analysis.
Stat BER Floor: The minimum bit error rate at any point on the bathtub curve derived from statistical analysis.

Eye height and eye width are reported for all contours generated in the simulation based on the Target BER. For example, if the target BER is 1e-12, statistical eye heights and widths for 1e-3, 1e-6, 1e-9 and 1e-12 BER are reported. When comparing the reported results to measurements made manually in Signal Integrity Viewer, an error is introduced by the samples-per-bit selection. To determine the amount of error when making a manual measurement, divide the UI (unit interval) of a bit by the number of samples per bit (UI/SPB). Using a higher number of samples per bit results in a smaller error.

Calculating Eye Margin from Simulation Results to Eye Mask

When an eye mask is defined and applied to a sheet being simulated, the margin between the mask and the Target BER contour is reported. An eye mask can be defined as an inner mask, an outer mask, or both. A statistical eye diagram with inner and outer masks applied is shown:

If both inner and outer masks are defined, the smallest margin at any point of the two is reported: the worst eye height margin and the worst eye width margin at any point for the given eye mask. Only one result is reported, so it is important to know which masks are being applied to best identify violations.

You can define and use two types of eye masks: static and skew eye masks. A static eye mask is centered at the 0.5 UI point of the bit time. The margin to the mask can then be determined based on its static position. The skew eye mask is positioned by the simulator after simulation to maximize eye margin and place the mask at its optimal point in the eye.

To obtain the mask margin numbers using a skew eye mask:
1. Slice the UI into discrete time slices (for example, 256 slices per UI).
2. Place the mask edge at zero UI and obtain margins for all time slices that cross the mask (margins are positive and negative).
3. Increment the mask to the next time slice and recapture margins for all slices that cross the mask.
4. Continue this process until the right side of the mask hits 1 UI.

Taking all of this data, obtain the position for the mask that maximizes the worst-case margin (looking for the most positive result) across all time slices, then report that worst-case margin. The determination of mask margins is shown as:

The worst-case margin is shown to be between a perturbation of the eye contour and the mask. In this case no outer mask is assumed. If both an outer mask and an inner mask are applied in a simulation and a violation is reported, you need to determine where the violation comes from. A violation to the outer eye mask when both inner and outer eye masks are applied is shown:

After simulation, the results for eye mask margin are reported. In this case both skew and static masks are applied. Eye height and width margins are reported in the individual columns. Comparing static and skew mask margins in the table shows slightly more margin when applying the skew mask. The red entries represent violations to either the upper or lower mask. To identify the violation, the Target BER contour and the mask can be plotted.
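A minimal sketch of that skew-mask search in Python (not the simulator's implementation; the contour arrays, mask dimensions, and 256-slice resolution are illustrative assumptions):

```python
import numpy as np

def best_skew_mask_position(contour_upper, contour_lower, mask_half_height, mask_width):
    """Slide a rectangular inner eye mask across the UI one time slice at a
    time and return the left-edge position that maximizes the worst-case
    vertical margin, plus that margin (negative means a violation).

    contour_upper / contour_lower: per-time-slice voltages of the inner eye
    contour at the target BER (upper and lower boundaries of the opening).
    mask_half_height: half the mask height in volts (mask centered at 0 V).
    mask_width: mask width in time slices.
    """
    n = len(contour_upper)
    best_pos, best_margin = 0, -np.inf
    for left in range(n - mask_width + 1):
        sl = slice(left, left + mask_width)
        # Margin of every slice the mask crosses: distance from the mask
        # edges up to the upper contour and down to the lower contour.
        margins = np.minimum(contour_upper[sl] - mask_half_height,
                             -mask_half_height - contour_lower[sl])
        worst = margins.min()
        if worst > best_margin:
            best_margin, best_pos = worst, left
    return best_pos, best_margin

# Illustrative eye: 256 slices, sinusoidal opening of +/- 0.2 V.
slices = np.arange(256)
upper = 0.2 * np.sin(np.pi * slices / 255)
lower = -upper
pos, margin = best_skew_mask_position(upper, lower, mask_half_height=0.05, mask_width=64)
print(pos, margin)   # mask settles near the center of the eye
```

With a symmetric opening the search centers the mask in the eye, which is exactly why the reported skew-mask margin is never worse than the static (0.5 UI centered) one.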
An object's two dimensional velocity is given by $v(t) = (\sqrt{3t} - t,\ t^2 - 5t)$. What is the object's rate and direction of acceleration at $t = 2$? | HIX Tutor

Answer 1

The rate of acceleration is $\approx 1.07\ \mathrm{m\,s^{-2}}$ in the direction $\approx 248.7^\circ$.

The derivative of the velocity is the acceleration:
$v(t) = (\sqrt{3t} - t,\ t^2 - 5t)$
$a(t) = \left(\frac{\sqrt{3}}{2\sqrt{t}} - 1,\ 2t - 5\right)$
When $t = 2$:
$a(2) = \left(\frac{\sqrt{3}}{2\sqrt{2}} - 1,\ 2 \cdot 2 - 5\right) = (-0.39, -1)$
$\|a(2)\| = \sqrt{0.39^2 + 1^2} \approx 1.07\ \mathrm{m\,s^{-2}}$
Both components are negative, so the direction lies in the third quadrant: $\theta = 180^\circ + \arctan(1/0.39) \approx 248.7^\circ$.

Answer 2

To find the acceleration, differentiate the velocity function with respect to time, then substitute $t = 2$ into the acceleration vector. This gives $a(2) = \left(\frac{\sqrt{3}}{2\sqrt{2}} - 1,\ -1\right) \approx (-0.39, -1)$, so the magnitude is about $1.07\ \mathrm{m\,s^{-2}}$ and the direction is along that vector, in the third quadrant.
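The hand computation in the answers can be sanity-checked numerically; a small Python sketch using a central finite difference (the step size h is an arbitrary choice):

```python
import math

def v(t):
    # velocity components from the problem statement
    return (math.sqrt(3 * t) - t, t ** 2 - 5 * t)

def a(t, h=1e-6):
    # central finite-difference approximation of the acceleration
    (vx1, vy1), (vx0, vy0) = v(t + h), v(t - h)
    return ((vx1 - vx0) / (2 * h), (vy1 - vy0) / (2 * h))

ax, ay = a(2.0)
rate = math.hypot(ax, ay)
direction = math.degrees(math.atan2(ay, ax)) % 360
print(rate, direction)   # about 1.07 m/s^2 at about 249 degrees
```

The numerical derivative reproduces the analytic components $(-0.39, -1)$, confirming both the magnitude and the third-quadrant direction.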
How do you simplify $6\sqrt{2} \div \sqrt{3}$? | Socratic

1 Answer

The answer is $2\sqrt{6}$.

You can do this simplification by "rationalizing the denominator". That is, multiply $\frac{6\sqrt{2}}{\sqrt{3}}$ by $\frac{\sqrt{3}}{\sqrt{3}}$. Doing this gives $\frac{6\sqrt{6}}{3} = 2\sqrt{6}$.

I used the fact that $\sqrt{2}\sqrt{3} = \sqrt{6}$ in this simplification.
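As a quick numerical check of the algebra (just evaluating both forms in floating point):

```python
import math

lhs = 6 * math.sqrt(2) / math.sqrt(3)   # original expression
rhs = 2 * math.sqrt(6)                  # simplified form
print(lhs, rhs)   # both approximately 4.898979
```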
gapmetric: Gap metric and Vinnicombe (nu-gap) metric for distance between two systems

[gap,nugap] = gapmetric(P1,P2) computes the gap and Vinnicombe (ν-gap) metrics for the distance between dynamic systems P1 and P2. The gap metric values satisfy 0 ≤ nugap ≤ gap ≤ 1. Values close to zero imply that any controller that stabilizes P1 also stabilizes P2 with similar closed-loop gains.

[gap,nugap] = gapmetric(P1,P2,tol) specifies a relative accuracy for calculating the gaps.

Compute Gap Metrics for Stable and Unstable Plant Models

Create two plant models. One plant, P1, is an unstable first-order system with transfer function 1/(s-0.001). The other plant, P2, is stable, with transfer function 1/(s+0.001).

P1 = tf(1,[1 -0.001]);
P2 = tf(1,[1 0.001]);

Despite the fact that one plant is unstable and the other is stable, these plants are close as measured by the gap and nugap metrics.

[gap,nugap] = gapmetric(P1,P2)

The gap is very small compared to 1. Thus a controller that yields a stable closed-loop system with P2 also tends to stabilize P1. For instance, the feedback controller C = 1 stabilizes both plants and renders nearly identical closed-loop gains. To see this, examine the sensitivity functions of the two closed-loop systems.

C = 1;
H1 = loopsens(P1,C);
H2 = loopsens(P2,C);
subplot(2,2,1); bode(H1.Si,'-',H2.Si,'r--');
subplot(2,2,2); bode(H1.Ti,'-',H2.Ti,'r--');
subplot(2,2,3); bode(H1.PSi,'-',H2.PSi,'r--');
subplot(2,2,4); bode(H1.CSo,'-',H2.CSo,'r--');

Next, consider two stable plant models that differ by a first-order system. One plant, P3, is the transfer function 50/(s+50), and the other plant, P4, is the transfer function [50/(s+50)]*8/(s+8).

P3 = tf(50,[1 50]);
P4 = tf(8,[1 8])*P3;

Although the two systems have similar high-frequency dynamics and the same unity gain at low frequency, by the gap and nugap metrics, the plants are fairly far apart.
[gap,nugap] = gapmetric(P3,P4)

Compute Gap Metric and Stability Margin

Consider a plant and a stabilizing controller.

P1 = tf([1 2],[1 5 10]);
C = tf(4.4,[1 0]);

Compute the stability margin for this plant and controller.

b1 = ncfmargin(P1,C);

Next, compute the gap between P1 and the perturbed plant, P2.

P2 = tf([1 1],[1 3 10]);
[gap,nugap] = gapmetric(P1,P2)

Because the stability margin b1 = b(P1,C) is greater than the gap between the two plants, C also stabilizes P2. As discussed in Gap Metrics and Stability Margins, the stability margin b2 = b(P2,C) satisfies the inequality asin(b(P2,C)) ≥ asin(b1)-asin(gap). Confirm this result.

b2 = ncfmargin(P2,C);
[asin(b2) asin(b1)-asin(gap)]

Input Arguments

P1,P2 — Input systems, specified as dynamic system models. P1 and P2 must have the same input and output dimensions. If P1 or P2 is a generalized state-space model (genss or uss), then gapmetric uses the current or nominal value of all control design blocks.

tol — Relative accuracy for computing the gap metrics, specified as a positive scalar (default 0.001). If gap[actual] is the true value of the gap (or the Vinnicombe gap), the returned value gap (or nugap) is guaranteed to satisfy |1 – gap/gap[actual]| < tol.

Output Arguments

gap — Gap between P1 and P2, returned as a scalar in the range [0,1]. A value close to zero implies that any controller that stabilizes P1 also stabilizes P2 with similar closed-loop gains. A value close to 1 means that P1 and P2 are far apart. A value of 0 means that the two systems are identical.

nugap — Vinnicombe gap (ν-gap) between P1 and P2, returned as a scalar in the range [0,1]. As with gap, a value close to zero implies that any controller that stabilizes P1 also stabilizes P2 with similar closed-loop gains. A value close to 1 means that P1 and P2 are far apart.
A value of 0 means that the two systems are identical. Because 0 ≤ nugap ≤ gap ≤ 1, the ν-gap can provide a more stringent test for robustness as described in Gap Metrics and Stability Margins.

More About

Gap Metric

For plants P[1] and P[2], let $P_1 = N_1 M_1^{-1}$ and $P_2 = N_2 M_2^{-1}$ be right normalized coprime factorizations (see rncf). Then the gap metric δ[g] is given by:

$$\delta_g(P_1,P_2) = \max\left\{ \vec{\delta}_g(P_1,P_2),\ \vec{\delta}_g(P_2,P_1) \right\}.$$

Here, $\vec{\delta}_g(P_1,P_2)$ is the directed gap, given by

$$\vec{\delta}_g(P_1,P_2) = \min_{\text{stable } Q(s)} \left\| \begin{bmatrix} M_1 \\ N_1 \end{bmatrix} - \begin{bmatrix} M_2 \\ N_2 \end{bmatrix} Q \right\|_{\infty}.$$

For more information, see [1] and Chapter 17 of [2].

Vinnicombe Gap Metric

For P[1] and P[2], the Vinnicombe gap metric is given by

$$\delta_{\nu}(P_1,P_2) = \max_{\omega} \left\| \left(I + P_2 P_2^{*}\right)^{-1/2} \left(P_1 - P_2\right) \left(I + P_1 P_1^{*}\right)^{-1/2} \right\|_{\infty},$$

provided that $\det\left(I + P_2^{*} P_1\right)$ has the right winding number. Here, * denotes the conjugate (see ctranspose). This expression is a weighted difference between the two frequency responses P[1](jω) and P[2](jω). For more information, see Chapter 17 of [2].

Gap Metrics and Stability Margins

The gap and ν-gap metrics give a numerical value δ(P[1],P[2]) for the distance between two LTI systems.
For both metrics, the following robust performance result holds:

arcsin b(P[2],C[2]) ≥ arcsin b(P[1],C[1]) – arcsin δ(P[1],P[2]) – arcsin δ(C[1],C[2]),

where the stability margin b (see ncfmargin), assuming negative-feedback architecture, is given by

$$b(P,C) = \left\| \begin{bmatrix} I \\ C \end{bmatrix} \left(I + PC\right)^{-1} \begin{bmatrix} I & P \end{bmatrix} \right\|_{\infty}^{-1} = \left\| \begin{bmatrix} I \\ P \end{bmatrix} \left(I + CP\right)^{-1} \begin{bmatrix} I & C \end{bmatrix} \right\|_{\infty}^{-1}.$$

To interpret this result, suppose that a nominal plant P[1] is stabilized by controller C[1] with stability margin b(P[1],C[1]). Then, if P[1] is perturbed to P[2] and C[1] is perturbed to C[2], the stability margin is degraded by no more than the above formula. For an example, see Compute Gap Metric and Stability Margin. The ν-gap is always less than or equal to the gap, so its predictions using the above robustness result are tighter. The quantity b(P,C)^–1 is the signal gain from disturbances on the plant input and output to the input and output of the controller.

Gap Metrics in Robust Design

To make use of the gap metrics in robust design, you must introduce weighting functions. In the robust performance formula, replace P by W[2]PW[1], and replace C by $W_1^{-1} C W_2^{-1}$. You can make similar substitutions for P[1], P[2], C[1] and C[2]. This form makes the weighting functions compatible with the weighting structure in the H[∞] loop shaping control design procedure used by functions such as loopsyn and ncfsyn.

[2] Zhou, K., Doyle, J.C., Essentials of Robust Control. London, UK: Pearson, 1997.

Version History

Introduced before R2006a
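For SISO plants, the ν-gap expression given under More About reduces to a pointwise chordal distance between the two frequency responses, and its peak over frequency gives the ν-gap when the winding-number condition holds. A rough Python/NumPy sketch (not the toolbox implementation; it skips the winding-number check and uses a finite frequency grid) for the two first-order plants from the earlier example:

```python
import numpy as np

# Pointwise chordal distance kappa(P1(jw), P2(jw)) for SISO plants; its
# peak over frequency estimates the nu-gap when the winding-number
# condition holds (not checked in this sketch).
w = np.logspace(-4, 4, 2000)
s = 1j * w
P1 = 1.0 / (s - 0.001)   # unstable plant from the example
P2 = 1.0 / (s + 0.001)   # stable plant from the example

kappa = np.abs(P1 - P2) / (np.sqrt(1 + np.abs(P1) ** 2) * np.sqrt(1 + np.abs(P2) ** 2))
nugap_estimate = kappa.max()
print(nugap_estimate)   # about 0.002: tiny, consistent with the small gap the doc reports
```

For this pair the chordal distance works out analytically to 0.002/(ω² + 1e-6 + 1), which peaks near ω = 0, matching the doc's point that these two plants are very close despite one being unstable.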
Magnetohydrodynamic stagnation point on a Casson nanofluid flow over a radially stretching sheet

This article proposes a numerical model to investigate the impact of the radiation effects in the presence of heat generation/absorption and magnetic field on the magnetohydrodynamic (MHD) stagnation point flow over a radially stretching sheet using a Casson nanofluid. The nonlinear partial differential equations (PDEs) describing the proposed flow problem are reduced to a set of ordinary differential equations (ODEs) via suitable similarity transformations. The shooting technique and the Adams–Moulton method of fourth order are used to obtain the numerical results via the computational program language FORTRAN. Nanoparticles have unique thermal and electrical properties which can improve heat transfer in nanofluids. The effects of pertinent flow parameters on the nondimensional velocity, temperature and concentration profiles are presented. Overall, the results show that the heat transfer rate increases for higher values of the radiation parameter in a Casson nanofluid.

The heat transfer mechanism has been of significant importance in many fields of engineering and medical science over the last decades. Since heat energy provides society with several benefits, the field of thermodynamics is applicable to and effectively connected with other fields. Heat transport processes play a fundamental role in building design [1], fuel-filling systems [2], air compressor manufacturing [3], the food industry [4], and in many other fields. In this regard, fluid dynamics is essential for regulating thermal energy through the usage of different fluids with good thermophysical properties. Research efforts have been focused on developing strategies to enhance thermal processes. For example, the fabrication of porous media, open and closed cavities and the implementation of magnetic effects, nanofluids and micrometer-sized channels have been employed to enhance thermal convection processes.
Choi and collaborators [5] used the term “nanofluid” for the first time to refer to a colloidal mixture of nanoparticles and a base fluid. Evidence has shown that metallic particles transfer more heat energy as compared to nonmetallic particles. A Casson fluid is non-Newtonian in nature and therefore behaves similarly to an elastic solid. When the stress rate is zero, the Casson fluid can be considered as a shear-thinning liquid with infinite viscosity. On the other hand, when the stress rate approaches an infinite value, the viscosity of the Casson fluid drops to zero [6]. Jam, tomato ketchup, honey, and concentrated fruit syrups are some quotidian examples of Casson fluids. In addition, Casson fluids have been implemented in the preparation of printing ink, silicon suspensions and polymers [7]. Over the past few years, a vast range of experiments and investigations have been carried out using Casson fluids due to their broad applicability in the scientific and engineering fields. Dash et al. [6] used a homogeneous porous medium inside a pipe to examine its flow behavior by using the Casson fluid model. The stagnation point flow for mixed convection and convective boundary conditions was analyzed by Hayat et al., also using the Casson fluid model [8]. In addition, Mukhopadhyay et al. [9] investigated the flow behavior over an unsteady stretching surface using the same approach. Moreover, different aspects of these flows were explored in other recent studies that applied the Casson fluid model to their systems [10-14]. The field of research in which the magnetic properties of electrically conducting fluids are studied is called magnetohydrodynamics (MHD). Magnetic fluids, liquids, metals and mixtures containing water, salt and other electrolytes are examples of materials that can be investigated via MHD. Hannes Alfvén was the first to introduce the term MHD.
MHD couples the Navier–Stokes equations with Maxwell's equations to understand the flow behavior of a fluid with electromagnetic properties, as discussed by Chakraborty and Mazumdar [15]. Shah et al. [16] explored the MHD and heat transfer effects on the upper-convected Maxwell (UCM) fluid in the presence of Joule heating and thermal radiation, using the Cattaneo–Christov heat flux model. Hayat et al. [17] investigated the mass exchange and MHD flow of a UCM fluid passing over an extended sheet. Ibrahim and Suneetha [18] studied the effects of Joule heating and viscous dissipation on steady Marangoni convective MHD flow over a surface in the presence of radiation. The point in the flow field where the velocity of the fluid is zero is called the stagnation point. The study of viscous and incompressible fluids passing over a permeable plate or sheet is of great importance for the field of fluid dynamics. Over the past few decades, these studies have become even more important due to their applicability in manufacturing industries. The refrigeration of electronic instruments with a fan, cooling of nuclear reactors during an emergency power outage, and solar receivers for storage of thermal energy are a few examples in which viscous and incompressible fluids are directly applied. The two-dimensional stagnation point flow was first investigated by Hiemenz [19]. Later on, Eckert [20] extended this problem by adding the energy equation in order to get a more accurate solution. In view of that, Mahapatra and Gupta [21], Ishak et al. [22], and Hayat et al. [23] have studied the effects of heat transfer at the stagnation point over a permeable plate. The MHD Casson fluid, including the effects of heat source/sink and convective boundary conditions, was analyzed by Prabhakar et al. [24]. Besthapu et al. [25] examined the MHD stagnation flow of non-Newtonian fluids over a convective stretching surface. Ibrahim et al.
[26] investigated the MHD stagnation point flow over a nonlinear stretching sheet by using a Casson nanofluid with velocity and convective boundary conditions. Ibrahim and Makinde [27] investigated the effect of slip and convective boundary conditions on a MHD stagnation point flow, considering heat transfer due to a Casson nanofluid passing over a stretching sheet. Moreover, the flow analysis of nanofluids passing over radially stretched surfaces have many applications in several industry sectors, such as drawing of plastic films, manufacturing of glass, production of paper, and refining crude oil. Recently, many researchers have been focusing their attention on nanoparticles, since they exhibit remarkable electrical, optical, and chemical properties in addition to having Brownian motion and thermophoretic properties. Due to these features, nanoparticles are widely used in catalysis, imaging, energy-based research, microelectronics, and in other applications in the medical and environmental fields. These nanoparticles are composed of metals and nonmetals and are frequently infused into heat transfer fluids (e.g., water, diethylene glycol and propylene glycol) to increase their efficiency. Rafique et al. [28] studied the impact of Casson nanofluid boundary layer flow over an inclined extending surface, considering Soret and Dufour effects. In addition, Rafique et al. [29] studied the impact of Brownian motion and thermophoresis diffusion on Casson nanofluid boundary layer flow over a nonlinear inclined stretching sheet. An unsteady flow of a Casson fluid along a nonlinear stretching surface was studied by Ullah et al. [30]. A Casson fluid over a non-isothermal cylinder, subjected to suction/blowing was analyzed by Ullah et al. [31]. Moreover, various researchers have been investigating the Casson fluid model for different flow problems [32-36]. 
Motivated by the previous findings on non-Newtonian and Newtonian fluids, the study of the stagnation point MHD flow using Casson nanofluids is presented here. The governing partial differential equations (PDEs) have been converted to a set of ordinary differential equations (ODEs) through suitable similarity transformations and the numerical solution has been derived by the shooting method.

Mathematical Modelling

The present model aims to investigate the laminar, incompressible and steady flow of a Casson nanofluid passing a radially stretched surface in the proximity of a stagnation point. Considering the thermal radiation and heat generation/absorption effects, the characteristics of the flow and heat transfer are examined. The coordinate system is chosen such that the r-axis is along the direction of the flow whereas the z-axis is perpendicular to the flow direction (Figure 1).

Figure 1: Schematic of the physical model in which a Casson nanofluid is passing a radially stretched surface in the proximity of a stagnation point.

The velocity of the outer flow is designated as U[e] and the direction of the uniform magnetic field is chosen to be normal to the surface of the fluid flow. The Brownian motion and thermophoretic effects have been considered as well as the convective surface conditions. A convective heating process is applied to regulate the sheet temperature T[w]. The nanoparticle concentration, C[w], is assumed to be constant. As z tends to infinity, the concentration and temperature of the nanofluid are represented by C[∞] and T[∞], respectively. The constitutive equations of the Casson nanofluid model are described as follows [10-14].
Firstly, the continuity equation expresses the physical principle of mass conservation. In addition, the momentum equation follows from Newton's second law. Following the principle of conservation of energy, the energy equation and the mass transfer equation complete the governing system, with the corresponding boundary conditions imposed at the boundary surface. In these equations, ν[f] is the kinematic viscosity, ρ[f] is the fluid density, α represents the thermal diffusivity, C[p] represents the specific heat at constant pressure, k[0] denotes a chemical reaction coefficient, (ρc[p])[f] represents the heat capacity, D[B] represents the Brownian diffusion coefficient, Q[0] represents the volumetric heat generation, D[T] is the thermophoresis diffusion coefficient, σ is the electrical conductivity, β represents the Casson fluid parameter, and T represents the nanofluid temperature. Suitable similarity variables are then introduced. Applying them reduces the governing PDEs to the ODEs describing the proposed flow problem (Equation 8–Equation 10), with transformed boundary conditions (Equation 11) and a set of dimensionless parameters. The skin-friction coefficient C[f], the Nusselt number Nu, and the Sherwood number Sh are first written in dimensional form in terms of τ[w], q[w], and q[m], and then transformed into dimensionless form, where Re = rU[w]/ν[f] is the local Reynolds number and ν[f] = µ/ρ is the kinematic viscosity.

Solution Methodology

In order to solve the system of ODEs (Equation 8–Equation 10) subject to the boundary conditions (Equation 11), the shooting technique has been used. First, Equation 8 is solved numerically, and then the computed results of f, f' and f'' are used in Equation 9 and Equation 10.
For the numerical treatment of Equation 8, the missing initial condition, f''(0), is denoted as s. With this notation, Equation 8 can be converted into a system of three first-order ODEs; three further ODEs are obtained by differentiating those equations with respect to s, which yields the sensitivity system needed for Newton's method. The Adams–Bashforth–Moulton method has been used to solve the resulting initial value problem. In order to obtain approximate numerical results, the domain of the problem is truncated to [0, η[∞]], where η[∞] is chosen to be an appropriate finite positive real number such that the variation in the solution for η > η[∞] can be ignored. The missing condition s is chosen so that the computed solution satisfies the far-field condition h[2](η[∞]) = A; this algebraic equation in s is solved by Newton's iterative formula, and the iteration stops once the residual falls below ε, a very small positive number. To solve Equation 9 and Equation 10 numerically, the missing initial conditions, θ(0) and ϕ(0), are denoted by l and m, respectively. Incorporating these notations again yields a system of first-order ODEs, which is likewise solved with the Adams–Bashforth–Moulton method. The missing conditions l and m are chosen so that the far-field boundary conditions are satisfied, the resulting algebraic equations are solved by Newton's method, and the iteration again stops once the residual falls below ε. In this work, ε is set as 10^−5 whereas η[∞] is set as 7.

Results and Discussion

In this section, the numerical results for the skin-friction coefficient, Nusselt and Sherwood numbers are listed in tables and shown in graphs.
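The article's FORTRAN code itself is not given; the shooting idea it describes (Newton iteration on a guessed initial slope, with the sensitivity equations obtained by differentiating the ODE with respect to the guess) can be sketched in Python on a toy boundary value problem y'' = 1.5 y², y(0) = 4, y(1) = 1, whose exact solution y = 4/(1+x)² has the missing slope y'(0) = −8. A classical RK4 integrator stands in for the Adams–Bashforth–Moulton scheme:

```python
import numpy as np

def rhs(t, u):
    # u = [y, y', Y, Y'] where Y = dy/ds is the sensitivity with respect to
    # the guessed slope s; toy BVP y'' = 1.5*y**2 gives Y'' = 3*y*Y.
    y, yp, Y, Yp = u
    return np.array([yp, 1.5 * y ** 2, Yp, 3.0 * y * Y])

def rk4(f, u0, t0, t1, n=200):
    # fixed-step classical Runge-Kutta integration from t0 to t1
    h = (t1 - t0) / n
    t, u = t0, np.array(u0, dtype=float)
    for _ in range(n):
        k1 = f(t, u)
        k2 = f(t + h / 2, u + h / 2 * k1)
        k3 = f(t + h / 2, u + h / 2 * k2)
        k4 = f(t + h, u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return u

s = -7.0                      # initial guess for the missing slope y'(0)
for _ in range(50):           # Newton iteration on F(s) = y(1; s) - 1
    y1, _, Y1, _ = rk4(rhs, [4.0, s, 0.0, 1.0], 0.0, 1.0)
    if abs(y1 - 1.0) < 1e-10:
        break
    s -= (y1 - 1.0) / Y1      # Newton step using the sensitivity Y(1)
print(s)   # converges to -8, the exact slope of y = 4/(1+x)**2
```

The augmented state carries the sensitivity Y alongside y, exactly as the paper augments its three-equation system with three derivative equations in s; the far-field condition at η[∞] plays the role of the y(1) = 1 target here.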
The different values obtained depend on the flow parameters chosen. The physical parameters have the following admissible ranges: 0 ≤ Ha ≤ 2, 0.3 ≤ A ≤ 2.5, 0.1 ≤ β ≤ 1.5, 0.3 ≤ Pr ≤ 2.0, 0.5 ≤ Ec ≤ 2.5, 0.1 ≤ R ≤ 0.5, 0.1 ≤ Q ≤ 0.5, 0.3 ≤ Sc ≤ 0.6, 0.1 ≤ γ ≤ 2.0, 0.1 ≤ Nt ≤ 2.0, 0.1 ≤ Nb ≤ 2.0, γ = 1.0, 1 ≤ Bi1 ≤ 2.0, and 1 ≤ Bi2 ≤ 2.0.

Skin-friction coefficient, Nusselt and Sherwood numbers

Prabhakar et al. [24] used a fourth-order Runge–Kutta method to obtain the numerical solution of the discussed model, whereas Attia [37] used the shooting technique and the computational software MATLAB. In the present study, the shooting technique along with the fourth-order Adams–Moulton method was used to reproduce the previously published solutions [24,37]. To validate the code written in the computational programming language Fortran, the results of –f''(0) and –θ'(0) were reproduced for the problems discussed by Attia [37] and Prabhakar et al. [24]. Tables 1–3 show that there is excellent agreement between the results yielded by the present code and the previously published results.

Table 1: Comparison between the computed values of f''(0) and the values given by Attia [37], when Nt = Nb = R = Ec = Sc = 0.
Ha     A     f''(0) (Attia)   f''(0) (Present)
0      0.1   −1.1246          −1.1246260
0      0.2   −1.0556          −1.0555810
0      0.5   −0.7534          −0.7534078
0      1.0    0.0000           0.0000000
0      1.1    0.1821           0.1820637
0      1.2    0.3735           0.3735214
0      1.5    1.0009           1.0008780
1      0.1   −1.4334          −1.4334070
1      0.2   −1.3179          −1.3178900
1      0.5   −0.9002          −0.9001369
1      1.0    0.0000           0.0000000
1      1.1    0.2070           0.2070196
1      1.2    0.4004           0.4223360
1      1.5    1.1157           1.1156770
2      0.1   −2.1138          −2.1137140
2      0.2   −1.9080          −1.9079860
2      0.5   −1.2456          −1.2455380
2      1.0    0.0000           0.0000000
2      1.1    0.2691           0.2690781
2      1.2    0.5445           0.5445290
2      1.5    1.4080           1.4080270
3      0.1   −2.9174          −2.9173560
3      0.2   −2.6141          −2.6140730
3      0.5   −1.6724          −1.6723740
3      1.0    0.0000           0.0000000
3      1.1    0.3494           0.3494373
3      1.2    0.7037           0.7037439
3      1.5    1.7954           1.7954280

Table 2: Comparison between the computed results of the Nusselt number –θ'(0) and the results given by Attia [37], when Nt = Nb = R = Ec = Sc = 0.

Pr     A     –θ'(0) (Attia)   –θ'(0) (Present)
0.05   0.1   0.1273           0.166529400
0.05   0.2   0.1421           0.175023100
0.05   0.5   0.1845           0.201851100
0.05   1.0   0.2439           0.247389100
0.05   1.1   0.2545           0.256288100
0.05   1.2   0.2632           0.265061900
0.05   1.5   0.2919           0.290530900
0.1    0.1   0.1618           0.194615100
0.1    0.2   0.1911           0.212448800
0.1    0.5   0.2615           0.265139300
0.1    1.0   0.3343           0.342184300
0.1    1.1   0.3581           0.355768200
0.1    1.2   0.3700           0.368815700
0.1    1.5   0.4080           0.405144700
0.5    0.1   0.4691           0.476318600
0.5    0.2   0.5223           0.526475900
0.5    0.5   0.6345           0.633877500
0.5    1.0   0.7699           0.764000400
0.5    1.1   0.7933           0.786525000
0.5    1.2   0.8136           0.808239000
0.5    1.5   0.8793           0.849610600
1      0.1   0.7657           0.772774200
1      0.2   0.8152           0.818562500
1      0.5   0.9332           0.929409300
1      1.0   1.0888           1.077056000
1      1.1   1.1166           1.103455000
1      1.2   1.1408           1.129085000
1      1.5   1.2200           1.202041000

Table 3: Comparison between the computed values of f''(0) and the results given by Prabhakar et al. [24].

λ      Ha    f''(0) Prabhakar et al.
[24] Present 0 1.0 1.64532 1.645239000 0.2 1.0 1.38321 1.383139000 0.5 1.0 0.92353 0.923487700 0.5 0.0 0.78032 0.780284500 1.0 0.92353 0.923487700 5.0 1.35767 1.357532100 10.0 1.75768 1.757437000 Table 4 and Table 5 show the numerical results of the skin-friction coefficient along with the Nusselt and Sherwood numbers for the present model, taking into account changes in the values of various parameters, such as β, Ha, R, A, Pr, Q, Nb, Nt, Ec and Sc. Table 4: The computed results for the skin-friction coefficient, Nusselt and Sherwood numbers, for γ = 1, Bi1 = 0.1 = Bi2, where a[1] = (1 + 1/β) and a[2] = (1 + 4/3·R). β Ha A R Pr Q Nb Nt Ec Sc −a[1]f´´(0) −a[2]θ´(0) −ϕ´(0) 0.5 1.0 0.1 0.1 0.7 0.1 0.5 0.1 0.1 1.2 2.485303 0.0859357 0.0939868 5.0 – – – – – – – – – 1.570312 0.0860365 0.0937332 10 – – – – – – – – – 1.503451 0.0859404 0.0937079 – 1.2 – – – – – – – – 2.688387 0.083406 0.0940580 – 1.4 – – – – – – – – 2.911371 0.0805399 0.0941407 – – 0.3 – – – – – – – 2.0619300 0.0913473 0.0938944 – – 0.5 – – – – – – – 1.5593120 0.0942490 0.0938664 – – – 0.2 – – – – – – 2.4853030 0.0954124 0.0939689 – – – 0.3 – – – – – – 2.4853030 0.1047357 0.0939546 – – – – 1.0 – – – – – 2.4853030 0.0871634 0.0940612 – – – – 2.0 – – – – – 2.4853030 0.0874003 0.0942927 – – – – – 0.5 – – – – 2.4853030 0.0688995 0.0944750 – – – – – 0.7 – – – – 2.4853030 0.1130105 0.0935984 – – – – – – 0.7 – – – 2.4853030 0.0858337 0.0939400 – – – – – – 0.8 – – – 2.4853030 0.0857826 0.0939254 – – – – – – – 0.2 – – 2.4853030 0.0858089 0.0941674 – – – – – – – 0.3 – – 2.4853030 0.0856807 0.0943550 – – – – – – – – 0.5 – 2.4853030 0.0366119 0.0959349 – – – – – – – – 1.0 – 2.4853030 −0.025837 0.0983850 – – – – – – – – – 1.4 2.4853030 0.0859497 0.0944484 – – – – – – – – – 1.6 2.4853030 0.0859616 0.0948187 Table 5: The computed results of the skin-friction coefficient, Nusselt and Sherwood numbers for β = 0.5, Ha = 1, A = 0.1, R = 0.1, Pr = 0.7, Q = 0.1, Nt = 0.1, Nb = 0.5, Ec = 0.1, Sc = 1.2, where a [1] = (1 + 1/β) 
and a[2] = (1 + 4/3·R). γ Bi1 Bi2 −a[1]f''(0) −a[2]θ'(0) −ϕ'(0) 1.0 0.1 0.1 2.485303 0.0859357 0.0939868 1.5 – – 2.485303 0.0859589 0.0946673 2.0 – – 2.485303 0.0859759 0.0951565 – 0.2 – 2.485303 0.1514275 0.0937815 – 0.3 – 2.485303 0.2029085 0.0936212 – – 0.2 2.485303 0.0857094 0.1770382 – – 0.3 2.485303 0.0855062 0.2509579 Velocity, temperature and concentration Figures 2–4 present the influence of the Hartmann number on the velocity, temperature and concentration distributions. For high values for Ha, the fluid velocity decreases while the temperature and concentration of the fluid increase. This stems from the fact that an opposing force generated by the magnetic field, generally referred to as the Lorentz force, reduces the fluid motion, resulting in a reduction in the momentum boundary layer thickness and an increase in the thermal and concentration boundary layer thickness values. Figures 5–7 show the effect of A on the velocity, temperature and concentration distributions. An increase in the flow velocity is observed for A > 1, whereas a reduction in the flow velocity is observed for A < 1. Also, both the temperature and concentration profiles decrease when A assumes higher values. As the value of A increases, the heat transfer from the sheet to the fluid reduces and, as a result, the temperature significantly decreases. Furthermore, the thermal boundary layer thickness is reduced as well as the concentration boundary layer thickness. Figures 8–10 show the effect of the Casson parameter on the velocity, temperature and concentration fields. The velocity profile shows an increasing trend when β increases. On the other hand, the velocity boundary layer thickness decreases for higher values of β. This stems from the fact that the plasticity of the Casson fluid increases when β decreases, leading to an increase in the momentum boundary layer thickness. 
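As an aside on the numerics discussed earlier: the paper's solver combines a shooting technique with a fourth-order Adams–Moulton integrator in Fortran. The sketch below is only an illustrative Python stand-in for that idea, applied to the much simpler classical Hiemenz stagnation-point problem f''' + f f'' + 1 − f'² = 0 with f(0) = f'(0) = 0 and f'(∞) = 1, using SciPy's Runge–Kutta integrator and bisection in place of the paper's scheme; all names and the truncation of the far field at η = 6 are my own choices, not the authors'.

```python
# Illustrative shooting method for the classical Hiemenz stagnation-point
# boundary-value problem  f''' + f f'' + 1 - f'^2 = 0,
# with f(0) = 0, f'(0) = 0 and f'(eta -> infinity) = 1.
# This is a simplified stand-in for the coupled momentum/energy/concentration
# system of the paper; the missing condition f''(0) is guessed and corrected
# until the far-field boundary condition is met.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_MAX = 6.0  # finite stand-in for eta -> infinity

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -f * fpp - 1.0 + fp**2]

def fprime_at_infinity(s):
    """Integrate with guessed f''(0) = s; return the far-field residual f'(ETA_MAX) - 1."""
    sol = solve_ivp(rhs, (0.0, ETA_MAX), [0.0, 0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1] - 1.0

# Root-find the initial curvature that satisfies f'(infinity) = 1.
s_star = brentq(fprime_at_infinity, 1.1, 1.35, xtol=1e-10)
print(f"f''(0) = {s_star:.6f}")  # classical Hiemenz value, about 1.2326
```

A guess that undershoots f''(0) leaves f' below 1 at the far boundary, an overshoot sends it above, so the residual changes sign and bisection converges to the classical value f''(0) ≈ 1.2326.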
In addition, both the temperature distribution and the thermal boundary layer thickness increase when β increases, and a rise in the nanoparticle volume fraction and in the concentration boundary layer thickness is observed for higher values of β. Figure 11 and Figure 12 show the effect of Pr on the temperature and concentration distributions. Since Pr is directly proportional to the viscous diffusion rate and inversely proportional to the thermal diffusivity, the thermal diffusion rate is reduced for higher values of Pr. As a consequence, the temperature of the fluid is significantly reduced, as is the thermal boundary layer thickness. Conversely, the nanoparticle volume fraction of the fluid and the concentration boundary layer thickness increase for higher values of Pr. The effect of Ec on the temperature profile is shown in Figure 13. Physically, the Eckert number expresses the ratio between the kinetic energy of the fluid particles and the boundary layer enthalpy. The kinetic energy of the fluid particles increases for higher values of Ec; hence, the temperature of the fluid rises marginally and the associated momentum and thermal boundary layer thicknesses are enhanced. Figure 14 and Figure 15 elucidate the effects of the radiation parameter, R, and the heat generation/absorption parameter, Q, respectively, on the temperature distributions. Since the heat transfer increases marginally for higher values of R, an increase in the temperature of the fluid and in the thermal boundary layer thickness is observed. As the value of Q rises, more heat is generated, causing a rise in both the temperature and the thermal boundary layer thickness; conversely, as Q decreases, the absorbed heat results in a decrease of both the temperature and the associated thermal boundary layer thickness. Figure 16 and Figure 17 show the effects of Sc and γ on the concentration fields.
The concentration of the fluid decreases for higher values of Sc. This behavior stems from the fact that the Schmidt number is inversely related to the mass diffusion rate: for higher Sc values, the mass diffusion process slows down, decreasing the concentration profile and the concentration boundary layer thickness. The chemical reaction parameter has a similar effect on the concentration profile: for higher values of γ the chemical molecular diffusion rate decreases and, consequently, both the concentration of the fluid and the associated concentration boundary layer thickness decrease. Figure 18 and Figure 19 show the influence of the thermophoresis parameter on the temperature and concentration distributions. Both the temperature and concentration profiles increase for higher values of Nt, and an increase in the associated thermal and concentration boundary layer thicknesses is noticed. Figure 20 and Figure 21 display the influence of the Brownian motion parameter on the temperature and concentration distributions. The temperature profile increases marginally for higher values of Nb. This happens because, as Nb rises, the movement of the nanoparticles increases significantly, increasing their kinetic energy; consequently, the temperature rises and the thermal boundary layer thickness increases. On the other hand, the concentration of the fluid and the concentration boundary layer thickness decrease as Nb assumes higher values. The impact of the Biot numbers on the temperature and concentration distributions is shown in Figure 22 and Figure 23. The temperature is an increasing function of Bi1, and the concentration of the fluid likewise increases as Bi2 increases.
In addition, the associated thermal and concentration boundary layer thicknesses are enhanced.

Conclusion

This article presents a numerical investigation of the MHD flow of a Casson nanofluid near a stagnation point over a radially stretching sheet. The radiation effects and the magnetic field are examined, and the effects of heat generation/absorption are also explored. It is important to mention that the thermophysical properties vary with the flow rate, temperature and volume concentration. The nonlinear partial differential equations describing the proposed flow problem were converted to a set of ordinary differential equations by employing appropriate similarity transformations, and the shooting method combined with the fourth-order Adams–Moulton method was employed for the numerical treatment. The numerical results show that when the Hartmann number, Ha, increases, the velocity decreases, whereas the opposite trend is observed for the temperature and concentration fields. For high values of the Casson parameter, the velocity, temperature and concentration profiles increase. When the Prandtl number increases, the temperature decreases while the concentration of the fluid increases. An increase in the Eckert number increases the velocity and temperature profiles. When the thermophoresis parameter increases, the heat and mass transfer rates also increase. Finally, the heat transfer rate also increases with the radiation parameter in Casson fluids.

References

1. Zhai, Z. Indoor Built Environ. 2006, 15, 305–313. doi:10.1177/1420326x06067336
2. Banerjee, R.; Bai, X.; Pugh, D.; Isaac, K. M.; Klein, D.; Edson, J.; Breig, W.; Oliver, L. CFD Simulations of Critical Components in Fuel Filling Systems. SAE 2002 World Congress & Exhibition; SAE International: Warrendale, PA, USA, 2002; pp 1–19. doi:10.4271/2002-01-0573
3.
Zhang, C.; Saadat, M.; Li, P. Y.; Simon, T. W. Heat Transfer in a Long, Thin Tube Section of an Air Compressor: An Empirical Correlation From CFD and a Thermodynamic Modeling. In Proceedings of the ASME 2012 International Mechanical Engineering Congress and Exposition. Volume 7: Fluids and Heat Transfer, Parts A, B, C, and D, Nov 9–15, 2012; The American Society of Mechanical Engineers: Houston, Texas, USA, 2012; pp 1601–1607. doi:10.1115/imece2012-86673
4. Xia, B.; Sun, D.-W. Comput. Electron. Agric. 2002, 34, 5–24. doi:10.1016/s0168-1699(01)00177-6
5. Choi, S. U.-S. Nanofluid Technology: Current Status and Future Research. 1998; https://www.osti.gov/biblio/11048.
6. Dash, R. K.; Mehta, K. N.; Jayaraman, G. Int. J. Eng. Sci. 1996, 34, 1145–1156. doi:10.1016/0020-7225(96)00012-2
7. Venkatesan, J.; Sankar, D. S.; Hemalatha, K.; Yatim, Y. J. Appl. Math. 2013, 1–11. doi:10.1155/2013/583809
8. Hayat, T.; Shehzad, S. A.; Alsaedi, A.; Alhothuali, M. S. Chin. Phys. Lett. 2012, 29, 114704. doi:10.1088/0256-307x/29/11/114704
9. Mukhopadhyay, S.; De, P. R.; Bhattacharyya, K.; Layek, G. C. Ain Shams Eng. J. 2013, 4, 933–938. doi:10.1016/j.asej.2013.04.004
10. Mukhopadhyay, S. Chin. Phys. B 2013, 22, 074701. doi:10.1088/1674-1056/22/7/074701
11. Nadeem, S.; Haq, R. U.; Akbar, N. S.; Khan, Z. H. Alexandria Eng. J. 2013, 52, 577–582. doi:10.1016/j.aej.2013.08.005
12. Khalid, A.; Khan, I.; Khan, A.; Shafie, S. Eng., Sci. Technol., Int. J. 2015, 18, 309–317. doi:10.1016/j.jestch.2014.12.006
13. Khan, M. I.; Waqas, M.; Hayat, T.; Alsaedi, A. J. Colloid Interface Sci. 2017, 498, 85–90. doi:10.1016/j.jcis.2017.03.024
14. Shah, Z.; Islam, S.; Ayaz, H.; Khan, S. J. Heat Transfer 2019, 141, 022401. doi:10.1115/1.4040415
15. Chakraborty, B. K.; Mazumdar, H. P. Approx. Theory Appl. 2000, 16, 32–41.
16. Shah, S.; Hussain, S.; Sagheer, M. AIP Adv. 2016, 6, 085103. doi:10.1063/1.4960830
17. Hayat, T.; Abbas, Z.; Ali, N. Phys. Lett. A 2008, 372, 4698–4704. doi:10.1016/j.physleta.2008.05.006
18. Ibrahim, S. M.; Suneetha, K. Ain Shams Eng. J. 2016, 7, 811–818. doi:10.1016/j.asej.2015.12.008
19. Hiemenz, K. Die Grenzschicht an einem in den gleichförmigen Flüssigkeitsstrom eingetauchten geraden Kreiszylinder. Ph.D. Thesis, 1911.
20. Eckert, E. Die Berechnung des Wärmeübergangs in der laminaren Grenzschicht umströmter Körper; VDI-Forschungsheft; 1942.
21. Mahapatra, T. R.; Gupta, A. S. Heat Mass Transfer 2002, 38, 517–521. doi:10.1007/s002310100215
22. Ishak, A.; Nazar, R.; Pop, I. Meccanica 2006, 41, 509–518. doi:10.1007/s11012-006-0009-4
23. Hayat, T.; Mustafa, M.; Shehzad, S. A.; Obaidat, S. Int. J. Numer. Methods Fluids 2012, 68, 233–243. doi:10.1002/fld.2503
24. Prabhakar, B.; Bandari, S.; Kumar, K. J. Nanofluids 2016, 5, 679–686. doi:10.1166/jon.2016.1264
25. Besthapu, P.; Haq, R. U.; Bandari, S.; Al-Mdallal, Q. M. Neural Comput. Appl. 2019, 31, 207–217. doi:10.1007/s00521-017-2992-x
26. Ibrahim, S. M.; Kumar, P. V.; Lorenzini, G.; Lorenzini, E.; Mabood, F. J. Eng. Thermophys. (Moscow, Russ. Fed.) 2017, 26, 256–271. doi:10.1134/s1810232817020096
27. Ibrahim, W.; Makinde, O. D. J. Aerosp.
Eng. 2016, 29, 04015037. doi:10.1061/(asce)as.1943-5525.0000529
28. Rafique, K.; Anwar, M. I.; Misiran, M.; Khan, I.; Alharbi, S. O.; Thounthong, P.; Nisar, K. S. Front. Phys. 2019, 7, 139. doi:10.3389/fphy.2019.00139
29. Rafique, K.; Imran Anwar, M.; Misiran, M.; Khan, I.; Alharbi, S. O.; Thounthong, P.; Nisar, K. S. Symmetry 2019, 11, 1370. doi:10.3390/sym11111370
30. Ullah, I.; Nisar, K. S.; Shafie, S.; Khan, I.; Qasim, M.; Khan, A. IEEE Access 2019, 7, 93076–93087. doi:10.1109/access.2019.2920243
31. Ullah, I.; Alkanhal, T. A.; Shafie, S.; Nisar, K. S.; Khan, I.; Makinde, O. D. Symmetry 2019, 11, 531. doi:10.3390/sym11040531
32. Rasool, G.; Shafiq, A.; Khan, I.; Baleanu, D.; Sooppy Nisar, K.; Shahzadi, G. Symmetry 2020, 12, 652. doi:10.3390/sym12040652
33. Khan, U.; Zaib, A.; Khan, I.; Nisar, K. S. J. Mater. Res. Technol. 2020, 9, 188–199. doi:10.1016/j.jmrt.2019.10.044
34. Ali, F.; Ali, F.; Sheikh, N. A.; Khan, I.; Nisar, K. S. Chaos, Solitons Fractals 2020, 131, 109489. doi:10.1016/j.chaos.2019.109489
35. Lund, L. A.; Omar, Z.; Khan, I.; Kadry, S.; Rho, S.; Mari, I. A.; Nisar, K. S. Energies (Basel, Switz.) 2019, 12, 4617. doi:10.3390/en12244617
36. Ali Lund, L.; Ching, D. L. C.; Omar, Z.; Khan, I.; Nisar, K. S. Coatings 2019, 9, 527. doi:10.3390/coatings9080527
37. Attia, H. A. Tamkang J. Sci. Eng. 2007, 10, 11–16.

© 2020 Narender et al.; licensee Beilstein-Institut. This is an Open Access article under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0). Please note that the reuse, redistribution and reproduction in particular requires that the authors and source are credited. The license is subject to the Beilstein Journal of Nanotechnology terms and conditions: (https://www.beilstein-journals.org/bjnano)
dimScale {Matrix}    R Documentation

Scale the Rows and Columns of a Matrix

Description:

dimScale, rowScale, and colScale implement D1 %*% x %*% D2, D %*% x, and x %*% D for diagonal matrices D1, D2, and D with diagonal entries d1, d2, and d, respectively. Unlike the explicit products, these functions preserve dimnames(x) and symmetry where appropriate.

Usage:

dimScale(x, d1 = sqrt(1/diag(x, names = FALSE)), d2 = d1)
rowScale(x, d)
colScale(x, d)

Arguments:

x: a matrix, possibly inheriting from virtual class Matrix.
d1, d2, d: numeric vectors giving factors by which to scale the rows or columns of x; they are recycled as necessary.

Details:

dimScale(x) (with d1 and d2 unset) is only roughly equivalent to cov2cor(x). cov2cor sets the diagonal entries of the result to 1 (exactly); dimScale does not.

Value:

The result of scaling x, currently always inheriting from virtual class dMatrix. It inherits from triangularMatrix if and only if x does. In the special case of dimScale(x, d1, d2) with identical d1 and d2, it inherits from symmetricMatrix if and only if x does.

Author(s):

Mikael Jagan

See Also

Examples:

n <- 6L
(x <- forceSymmetric(matrix(1, n, n)))
dimnames(x) <- rep.int(list(letters[seq_len(n)]), 2L)
d <- seq_len(n)
(D <- Diagonal(x = d))
(scx <- dimScale(x, d))  # symmetry and 'dimnames' kept
(mmx <- D %*% x %*% D)   # symmetry and 'dimnames' lost
stopifnot(identical(unname(as(scx, "generalMatrix")), mmx))
rowScale(x, d)
colScale(x, d)

Matrix version 1.7-1
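The algebra this help page describes is language-independent: left-multiplication by a diagonal matrix scales rows, right-multiplication scales columns. The NumPy sketch below (my own naming, not part of the Matrix package) mirrors rowScale, colScale, and dimScale via broadcasting against a 1-D vector, and checks that this agrees with the explicit diagonal-matrix products:

```python
# NumPy analogue of the Matrix package's rowScale / colScale / dimScale.
# Broadcasting against a 1-D vector avoids forming the diagonal matrices.
import numpy as np

def row_scale(x, d):
    return d[:, None] * x                 # D %*% x: scales rows

def col_scale(x, d):
    return x * d[None, :]                 # x %*% D: scales columns

def dim_scale(x, d1, d2=None):
    if d2 is None:                        # default mirrors dimScale's d2 = d1
        d2 = d1
    return d1[:, None] * x * d2[None, :]  # D1 %*% x %*% D2

x = np.ones((6, 6))                       # a symmetric matrix
d = np.arange(1, 7, dtype=float)
D = np.diag(d)

# Broadcasting agrees with the explicit diagonal-matrix products.
assert np.allclose(dim_scale(x, d), D @ x @ D)
assert np.allclose(row_scale(x, d), D @ x)
assert np.allclose(col_scale(x, d), x @ D)

# With d1 == d2 and symmetric x, the result stays symmetric.
s = dim_scale(x, d)
assert np.allclose(s, s.T)
```

This is also why dimScale(x) with the default d = sqrt(1/diag(x)) roughly normalizes a covariance matrix: entry (i, j) becomes x[i, j] / sqrt(x[i, i] * x[j, j]), which is the correlation up to the floating-point diagonal caveat noted under Details.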
Supersolidity in a dipolar quantum gas

Ultracold quantum gases have become an invaluable tool in the study of quantum many-body problems. The high level of experimental control available on these systems and well-established theoretical tools make ultracold quantum gases ideal platforms for quantum simulations of systems currently inaccessible in experiments, as well as for studies of fundamental properties of matter in the quantum degenerate regime. A key manifestation of quantum degeneracy in samples of ultracold bosonic neutral atoms is the formation of a Bose-Einstein condensate (BEC), a peculiar state of matter in which a macroscopic number of atoms occupy the same single-particle state. Bose-Einstein condensation occurs in extremely rarefied gases of bosonic atoms at temperatures of the order of a nanokelvin. At such temperatures, the equilibrium state of all known elements (except helium) at ordinary densities and pressures would be the solid phase. To obtain a BEC it is thus necessary to consider very dilute samples, with densities of the order of 10^14-10^15 atoms/cm^3, around eight orders of magnitude smaller than the density of ordinary matter. At such densities, the three-body recombination mechanisms responsible for the formation of molecules, which cluster to form solids, are suppressed. However, despite the extreme diluteness, two-body inter-atomic interactions play a prominent role in determining the physical properties of these systems. In the temperature and density regimes typical of BECs, the theoretical description of the system can be greatly simplified by noticing that the low-energy scattering properties of the real, generally involved, inter-atomic potential can be perfectly reproduced by a simpler pseudo-potential, usually taken as an isotropic contact repulsion and described by a single parameter, the s-wave scattering length.
This parameter can even be tuned, in experiments, via so-called Feshbach resonances. Despite its simplicity, this zero-range, isotropic interaction is responsible for an enormous variety of physical effects characterizing atomic BECs. This fact has stimulated, over the last twenty years, the search for different possible types of interactions that could eventually lead to the formation of new and exotic phases of matter. In this quest, the dipole-dipole interaction has attracted great attention for several reasons. First, there are several experimental techniques to efficiently trap and cool atoms (or molecules) possessing a strong dipole moment. This led, for example, to the experimental realization of BECs of chromium, dysprosium and erbium, which have, in the hyperfine state trapped for condensation, a magnetic dipole moment around ten times larger than that typical of the particles in a BEC of alkali atoms. Moreover, since the dipole-dipole interaction is anisotropic and long-ranged, its low-energy scattering properties cannot be described by a simple short-range isotropic pseudo-potential. As a consequence, dipolar BECs show unique observable properties. The partially attractive nature of the dipole-dipole interaction can make a dipolar BEC unstable against collapse, similarly to the case of an ordinary (non-dipolar) BEC with negative scattering length. This happens, in particular, if a sample of magnetic atoms, polarized along a certain direction by a magnetic field, is not confined tightly enough along that direction (for example, via a harmonic potential). However, differently from ordinary BECs, where the collapse of the system is followed by a rapid loss of atoms and the destruction of the condensed phase, in the dipolar case this instability is followed by the formation of self-bound, relatively high-density, liquid-like droplets.
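For context, the anisotropy and long range invoked above are explicit in the standard textbook form of the interaction between two magnetic dipoles of moment μ polarized along the same axis (a general expression, not specific to this thesis):

```latex
U_{\mathrm{dd}}(\mathbf{r}) \;=\; \frac{\mu_0 \mu^2}{4\pi}\,\frac{1 - 3\cos^2\theta}{r^3},
```

where r is the distance between the dipoles and θ the angle between the polarization axis and the vector joining them: the interaction is attractive for dipoles head-to-tail (θ = 0), repulsive side-by-side (θ = π/2), and its 1/r³ tail makes it long-ranged in three dimensions.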
If the geometry of the confinement potential allows it, the droplets spontaneously arrange into a regular, periodic configuration, a sort of "droplet crystal". Moreover, by fine-tuning the interaction parameters, it is possible to achieve global phase coherence between these droplets. The spatially modulated, phase-coherent system that forms is known as a supersolid, a very peculiar state of matter showing simultaneously the properties of a crystal and of a superfluid. Ordinary mean-field theory, so successful in describing the vast phenomenology of ordinary BECs, fails to predict the existence of the exotic supersolid, quantum droplet and droplet crystal phases in a dipolar quantum gas. The state-of-the-art description of dipolar BECs in such conditions instead includes quantum fluctuations, through the local density approximation of the first-order beyond-mean-field correction to the ground-state energy of the system. This correction, known as the Lee-Huang-Yang correction, results in a repulsive energy term that balances the mean-field attraction at the relatively high densities that characterize the collapsing state. Using state-of-the-art simulation techniques, in this thesis I study the behavior of a dipolar Bose gas confined in a variety of trapping configurations, examining ground-state properties, elementary excitations, and the dynamical behavior under several kinds of external perturbations, with particular attention to the supersolid phase. After reviewing the basic theory of dipolar Bose gases, setting the theoretical background, and describing the numerical techniques used, I first study the behavior of the dipolar Bose gas in an ideal situation, namely when the gas is harmonically confined along the polarization direction of the dipoles as well as along one of the orthogonal directions. Along the unconfined direction, instead, I set periodic boundary conditions, in order to simulate the geometry of a ring.
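As a rough guide to how the Lee-Huang-Yang stabilization mentioned above enters the theory: in the extended Gross-Pitaevskii framework for dipolar gases, the correction is commonly written (in a frequently used approximation; exact prefactors and conventions vary between papers, so take this as a sketch) as a local repulsive term in the chemical potential growing as n^{3/2},

```latex
\Delta\mu_{\mathrm{LHY}}(n) \;\approx\; \frac{32}{3\sqrt{\pi}}\, g\, a^{3/2}\left(1 + \tfrac{3}{2}\,\varepsilon_{dd}^{2}\right) n^{3/2},
```

with g = 4πħ²a/m the contact coupling and ε_dd the ratio of dipolar to contact interaction strengths. Because this term grows faster with density (as n^{3/2}) than the attractive mean-field term (as n), it halts the collapse once the density is high enough, which is what stabilizes the droplets.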
I study in particular the phase diagram of the system, focusing on how the ground state evolves from a superfluid, homogeneous along the ring, to the supersolid regime, and eventually to an array of independent droplets, as a single interaction parameter, the s-wave scattering length, is tuned. The superfluid phase is characterized by the occurrence of a roton minimum in the energy-momentum dispersion relation. The energy of the roton, called the roton gap, decreases when the s-wave scattering length is decreased and the dipole-dipole interaction becomes the dominant interaction mechanism. When the roton minimum touches the zero-energy axis, the superfluid is no longer stable against mechanical collapse. The system thus tends to form denser clusters of atoms, regularly arranged in an equally spaced array of droplets whose relative distance is fixed by the inverse of the roton momentum. Such droplets are stabilized by quantum fluctuations, which enter the energy functional of the system via the Lee-Huang-Yang correction. The density profiles of these droplets maintain a finite overlap if the scattering length is not too small. The phase characterized by overlapping, dense droplets of dipolar atoms is called supersolid. The main signatures of supersolid behavior, which in the thesis are shown to occur in this system, are:

1. The occurrence of two Goldstone modes, associated with the two symmetries spontaneously broken in the supersolid, namely the continuous translational symmetry, which is broken in favor of a discrete one, and the U(1) symmetry associated with Bose-Einstein condensation.

2. The manifestation of non-classical rotational inertia, due to the partially superfluid character of the system. Simply speaking, since the system behaves only partially as a superfluid, any rotational perturbation drags only the non-superfluid part of the system.
Hence, any measurement of the moment of inertia would give a value which is smaller then the one of a classical system with the same density distribution. Having studied the behavior of the dipolar Bose gas in a ring trap, I move on to explore possible manifestations of supersolid behavior in a fully trapped configuration, namely when the system is confined in an elongated (cigar-shaped) harmonic trap, with the long axis orthogonal to the polarization direction. Part of the results obtained in the three-dimensional harmonic trap have been compared with the first available experiments. The two key signatures of supersolid behavior, namely the occurrence of two Goldstone modes and Non-Classical Rotational Inertia, can be detected, in this case, by studying the low-energy collective oscillations of the system. First, a behavior equivalent to the one of the two Goldstone modes predicted in the ring trap, can be found in the axial compressional oscillations of the harmonically trapped system, which bifurcate at the superfluid-supersolid phase transition. When the system is driven through the supersolid-independent droplet transition, the lower-energy mode, associated with phase coherence, tends to disappear, while the higher energy mode, associated with lattice excitations, tends to assume a constant frequency. This behavior is specular to the one of the two Goldstone modes in the ideal system, and thus signal the presence of supersolidity in the trapped system. Important experimental confirmation of the predictions reported in the thesis have already been found. Instead, as shown in the thesis, a key manifestation non-classical inertia in a trapped dipolar supersolid can be found by studying the rotational oscillation mode known as scissors mode, whose frequency is directly related to the value of the moment of inertia (similar to the frequency of oscillation of a torsional pendulum for a classical system). 
Studying the behavior of the frequency of the scissors mode across the superfluid-supersolid-independent droplets phase transitions, I demonstrate the actual occurrence of non-classical inertia in a harmonically trapped dipolar supersolid. Another key manifestation of superfluidity in general many-body systems is given by the occurrence of quantized vortices, which I study in the case of the trapped dipolar Bose gas in a harmonic trap which is isotropic in the plan orthogonal to the polarization direction. I study in particular the size of the core of the vortex as function of the interaction parameters, showing that, in the superfluid phase, it increases as the superfluid-supersolid phase transition is approached. Then, in the supersolid phase, I show that quantized vortices settle in the interstices between the density peaks, and their size and even their shape are fixed respectively by the droplet distance and the shape of the lattice cell. I also study the critical frequency for the vortex nucleation under a rotating quadrupolar deformation of the trap, showing that it is related to the frequency of the lower-energy quadrupole mode, associated with the partial superfluid character of the system. In fact, in this configuration, the quadrupole mode splits into three modes, two of which can be associated to lattice excitations, and one to superfluid excitations. I find that the critical rotational frequency for vortex nucleation is related to the lower frequency quadrupole mode only, i.e. the one related to the superfluid character of the system. In ordinary BECs, when many vortices nucleates, they typically tend to arrange in a trinagular lattice. In a supersolid, however, vortices do not form on top of a uniform superfluid background, but rather on the background of the supersolid lattice, which is itself typically triangular. 
I thus show that the lattice formed by the vortices in the supersolid lattice is not triangular, but rather hexagonal, since the vortices settle in the interstices between the density peaks. Finally, I show that all these features can be observed in an expansion experiment. In the last part of the thesis, I study the behavior of the dipolar Bose gas confined by hard walls. In particular, I investigate the novel density distributions, with special focus on the effects of supersolidity. Differently from the case of harmonic trapping, in this case, the ground state density shows a strong depletion in the bulk region and an accumulation of atoms near the walls, well separated from the bulk, as a consequence of the competition between the attractive and the repulsive nature of the dipolar force. In a quasi two-dimensional geometry characterized by cylindrical box trapping, the consequence is that the superfluid accumulating along the walls forms spontaneously a ring shape, showing eventually also supersolidity. For sufficiently large values of the atom density, also the bulk region can exhibit supersolidity, the resulting geometry reflecting the symmetry of the confining potential even for large systems. Supersolidity in a dipolar quantum gas / Roccuzzo, Santo Maria. - (2021 Nov 18), pp. 1-114. [10.15168/11572_321480] Supersolidity in a dipolar quantum gas Ultracold quantum gases have nowadays become an invaluable tool in the study of quantum many-body problems. The high level of experimental control available on these systems and well established theoretical tools make ultracold quantum gases ideal platforms for quantum simulations of other systems currently inaccessible in experiments as well as for studies of fundamental properties of matter in the quantum degenerate regime. 
A key manifestation of quantum degeneracy in samples of ultracold bosonic neutral atoms is the formation of a Bose-Einstein condensate (BEC), a peculiar state of matter in which a macroscopic number of atoms occupy the same single-particle state. Bose-Einstein condensation occurs in extremely rarefied gases of bosonic atoms at temperatures around one nanokelvin. At such temperatures, the equilibrium state of all known elements (except for helium) in ordinary conditions of density and pressure would be the solid phase. To obtain a BEC it is thus necessary to consider very dilute samples with a density of the order of 10¹⁴-10¹⁵ atoms/cm³, around eight orders of magnitude smaller than the density of ordinary matter. At such densities, the three-body recombination mechanisms responsible for the formation of molecules, which cluster to form solids, are suppressed. However, despite the extreme diluteness, two-body inter-atomic interactions play a prominent role in determining the physical properties of these systems. In the temperature and density regimes typical of BECs, the theoretical description of the system can be greatly simplified by noticing that the low-energy scattering properties of the real, generally involved, inter-atomic potential can be perfectly reproduced by a simpler pseudo-potential, usually in the form of an isotropic contact repulsion, described by a single parameter: the s-wave scattering length. This parameter can even be tuned in experiments via so-called Feshbach resonances. Despite its simplicity, this zero-range, isotropic interaction is responsible for an enormous variety of physical effects characterizing atomic BECs. This fact has stimulated, over the last twenty years, the search for different possible types of interactions that can lead to the formation of new and exotic phases of matter. In this quest, the dipole-dipole interaction has attracted great attention for several reasons.
First, there are several experimental techniques to efficiently trap and cool atoms (or molecules) possessing a strong dipole moment. This has led, for example, to the experimental realization of BECs of chromium, dysprosium and erbium, which have, in the hyperfine state trapped for condensation, a magnetic dipole moment around ten times larger than that typical of the particles in a BEC of alkali atoms. Moreover, since the dipole-dipole interaction is anisotropic and long-ranged, its low-energy scattering properties cannot be described by a simple short-range isotropic pseudo-potential. As a consequence, dipolar BECs show unique observable properties. The partially attractive nature of the dipole-dipole interaction can make a dipolar BEC unstable against collapse, similarly to the case of an ordinary (non-dipolar) BEC with negative scattering length. This happens, in particular, if a sample of magnetic atoms, polarized along a certain direction by a magnetic field, is not confined strongly enough along that direction (for example via a harmonic potential). However, differently from ordinary BECs, where the collapse of the system is followed by a rapid loss of atoms and the destruction of the condensed phase, in the dipolar case the instability is followed by the formation of self-bound, (relatively) high-density liquid-like droplets. If the geometry of the confinement potential allows it, the droplets spontaneously arrange into a regular, periodic configuration, in a sort of "droplet crystal". Moreover, by fine-tuning the interaction parameters, it is possible to achieve global phase coherence between these droplets. The spatially modulated, phase-coherent system that forms is known as a supersolid, and is a very peculiar system showing simultaneously the properties of a crystal and a superfluid.
Ordinary mean-field theory, so successful in describing the vast phenomenology of ordinary BECs, fails to predict the existence of the exotic phases of supersolids, quantum droplets and droplet crystals in a dipolar quantum gas. The state-of-the-art description of dipolar BECs in such conditions is instead based on quantum fluctuations, taking into account the local density approximation of the first-order beyond-mean-field correction to the ground-state energy of the system. This correction, known as the Lee-Huang-Yang correction, results in a repulsive energy term that balances the mean-field attraction at the relatively high densities that characterize the collapsing state. Using state-of-the-art simulation techniques, in this thesis I study the behavior of a dipolar Bose gas confined in a variety of trapping configurations, focusing on ground-state properties, elementary excitations, and the dynamical behavior under several kinds of external perturbations, with particular attention to the supersolid phase. After reviewing the basic theory of dipolar Bose gases, setting the theoretical background, and describing the numerical techniques used, I first study the behavior of the dipolar Bose gas in an idealized situation, namely when the gas is confined in a harmonic trap along the polarization direction of the dipoles as well as one of the orthogonal directions. Along the unconfined direction, instead, I set periodic boundary conditions, in order to simulate the geometry of a ring. I study in particular the phase diagram of the system, focusing on how the ground state evolves from a superfluid, homogeneous along the ring, to the supersolid regime, and eventually to an array of independent droplets, by tuning a single interaction parameter, namely the s-wave scattering length. The superfluid phase is here characterized by the occurrence of a roton minimum in the energy-momentum dispersion relation.
The energy of the roton, called the roton gap, decreases when the s-wave scattering length of the system is decreased and the dipole-dipole interaction becomes the dominant interaction mechanism. When the roton minimum touches the zero-energy axis, the superfluid system is no longer stable against mechanical collapse. The system thus tends to form denser clusters of atoms, regularly arranged in an equally-spaced array of droplets, whose relative distance is fixed by the inverse of the roton momentum. Such droplets are stabilized by quantum fluctuations, which enter the energy functional of the system via the Lee-Huang-Yang correction. The density profiles of these droplets maintain a finite overlap if the scattering length is not too small. The phase characterized by overlapping, dense droplets of dipolar atoms is called supersolid. The main signatures of supersolid behavior, which in the thesis are shown to occur in this system, are:
1. The occurrence of two Goldstone modes, associated with the two symmetries spontaneously broken in the supersolid, namely the symmetry under continuous translations, which is broken in favor of a discrete one, and the U(1) symmetry associated with Bose-Einstein condensation.
2. The manifestation of Non-Classical Rotational Inertia, due to the partially superfluid character of the system. Simply speaking, since the system behaves only partially as a superfluid, any rotational perturbation drags only the non-superfluid part of the system. Hence, any measurement of the moment of inertia would give a value smaller than that of a classical system with the same density distribution.
Having studied the behavior of the dipolar Bose gas in a ring trap, I move on to explore possible manifestations of supersolid behavior in a fully trapped configuration, namely when the system is confined in an elongated (cigar-shaped) harmonic trap, with the long axis orthogonal to the polarization direction.
Part of the results obtained in the three-dimensional harmonic trap have been compared with the first available experiments. The two key signatures of supersolid behavior, namely the occurrence of two Goldstone modes and Non-Classical Rotational Inertia, can be detected, in this case, by studying the low-energy collective oscillations of the system. First, a behavior equivalent to that of the two Goldstone modes predicted in the ring trap can be found in the axial compressional oscillations of the harmonically trapped system, which bifurcate at the superfluid-supersolid phase transition. When the system is driven through the supersolid-independent droplet transition, the lower-energy mode, associated with phase coherence, tends to disappear, while the higher-energy mode, associated with lattice excitations, tends to assume a constant frequency. This behavior mirrors that of the two Goldstone modes in the ideal system, and thus signals the presence of supersolidity in the trapped system. Important experimental confirmations of the predictions reported in the thesis have already been found. A key manifestation of non-classical inertia in a trapped dipolar supersolid, as shown in the thesis, can instead be found by studying the rotational oscillation mode known as the scissors mode, whose frequency is directly related to the value of the moment of inertia (similarly to the frequency of oscillation of a torsional pendulum for a classical system). Studying the behavior of the frequency of the scissors mode across the superfluid-supersolid-independent droplets phase transitions, I demonstrate the actual occurrence of non-classical inertia in a harmonically trapped dipolar supersolid. Another key manifestation of superfluidity in general many-body systems is given by the occurrence of quantized vortices, which I study in the case of the trapped dipolar Bose gas in a harmonic trap which is isotropic in the plane orthogonal to the polarization direction.
I study in particular the size of the vortex core as a function of the interaction parameters, showing that, in the superfluid phase, it increases as the superfluid-supersolid phase transition is approached. Then, in the supersolid phase, I show that quantized vortices settle in the interstices between the density peaks, and that their size and even their shape are fixed respectively by the droplet distance and by the shape of the lattice cell. I also study the critical frequency for vortex nucleation under a rotating quadrupolar deformation of the trap, showing that it is related to the frequency of the lower-energy quadrupole mode, associated with the partial superfluid character of the system. In fact, in this configuration, the quadrupole mode splits into three modes, two of which can be associated with lattice excitations, and one with superfluid excitations. I find that the critical rotational frequency for vortex nucleation is related to the lower-frequency quadrupole mode only, i.e. the one related to the superfluid character of the system. In ordinary BECs, when many vortices nucleate, they typically tend to arrange in a triangular lattice. In a supersolid, however, vortices do not form on top of a uniform superfluid background, but rather on the background of the supersolid lattice, which is itself typically triangular. I thus show that the lattice formed by the vortices in the supersolid is not triangular, but rather hexagonal, since the vortices settle in the interstices between the density peaks. Finally, I show that all these features can be observed in an expansion experiment. In the last part of the thesis, I study the behavior of the dipolar Bose gas confined by hard walls. In particular, I investigate the novel density distributions, with special focus on the effects of supersolidity.
Differently from the case of harmonic trapping, here the ground-state density shows a strong depletion in the bulk region and an accumulation of atoms near the walls, well separated from the bulk, as a consequence of the competition between the attractive and repulsive components of the dipolar force. In a quasi two-dimensional geometry characterized by cylindrical box trapping, the consequence is that the superfluid accumulating along the walls spontaneously forms a ring, which eventually also shows supersolidity. For sufficiently large values of the atom density, the bulk region can also exhibit supersolidity, with the resulting geometry reflecting the symmetry of the confining potential even for large systems.
How To Draw An Integral Sign

The integral symbol is used to represent the integral operator in calculus. It was invented by Leibniz, who chose a stylized script "S" to stand for summation. Typically, the integral symbol appears in an expression like ∫ f(x) dx, where the dx represents an infinitesimal (a quantity that approaches zero but never equals zero).

Ways to type the symbol:
- Unicode and LaTeX: the integral symbol is U+222B ∫ INTEGRAL in Unicode and \int in LaTeX. For lower and upper limits, you can use the generic _ and ^ markers, as in \int_0^1.
- Mac: press the Option + B shortcut.
- Windows: click the place in your document where you would like to insert the integral symbol, then hold the Alt key and, using your numeric keypad, type the symbol's code.
- Legacy character sets: the original IBM PC code page 437 included the characters ⌠ and ⌡ (codes 244 and 245 respectively) for building a tall integral symbol out of two rows.

In plotting tools that render LaTeX-style math, the symbol can be placed in labels, for example in matplotlib: plt.ylabel(r'$\int_0^y du/(1+u^{2})$'). Depending on your setup, you may not always like the default formatting.

Fun fact: when you're trying to write { or } by hand, try drawing two integral signs on top of each other.
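The Unicode and LaTeX facts above are easy to check from code. A minimal sketch using only the Python standard library (the variable names are mine):

```python
import unicodedata

# The integral sign by its Unicode code point, U+222B.
integral = "\u222b"
print(integral, unicodedata.name(integral))  # prints: ∫ INTEGRAL

# A LaTeX integral with limits attached via the generic _ and ^ markers,
# e.g. for use as a matplotlib mathtext label.
label = r"$\int_0^y du/(1+u^{2})$"
print(label)
```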
Softmax with Temperature Explained

The softmax function is commonly used in classification tasks. Suppose that we have an input vector \([z_1, z_2, \ldots, z_N]\); after softmax, each element becomes:

\[p_i = \frac{\exp(z_i)}{\sum_{j=1}^{N}\exp(z_j)}\]

The denominator term normalizes each element so that their sum is 1. The original vector is transformed into a probability distribution, and the index that corresponds to the highest probability is the chosen class.

In practice, we often see softmax with temperature, which is a slight modification of softmax:

\[p_i = \frac{\exp(z_i/\tau)}{\sum_{j=1}^{N}\exp(z_j/\tau)}\]

The parameter \(\tau\) is called the temperature parameter^1, and it is used to control the softness of the probability distribution. When \(\tau\) gets lower, the biggest value in \(z\) gets more probability; when \(\tau\) gets larger, the probability is split more evenly among the elements. Consider the extreme cases: as \(\tau\) approaches zero, the probability for the largest element approaches 1, while as \(\tau\) approaches infinity, the probabilities of all elements become the same.
```python
import math


def softmax(vec, temperature):
    """Turn vec into a normalized probability distribution."""
    sum_exp = sum(math.exp(x / temperature) for x in vec)
    return [math.exp(x / temperature) / sum_exp for x in vec]


def main():
    vec = [1, 5, 7, 10]
    ts = [0.1, 1, 10, 100, 10000]
    for t in ts:
        print(t, softmax(vec, t))


if __name__ == "__main__":
    main()
```

With different values of t, the output probability is (also check the title image):

```
0.1 [8.194012623989748e-40, 1.928749847963737e-22, 9.357622968839298e-14, 0.9999999999999064]
1 [0.00011679362893736733, 0.006376716075637758, 0.0471179128098403, 0.9463885774855847]
10 [0.14763314666550595, 0.2202427743860977, 0.26900513210002774, 0.3631189468483686]
100 [0.23827555570657363, 0.24799976560608047, 0.25300969319764466, 0.2607149854897012]
10000 [0.2498812648459304, 0.2499812373450356, 0.2500312385924627, 0.2501062592165714]
```

According to this post, the name softmax is kind of misleading; it should be softargmax, especially when you have a very small \(\tau\) value. For example, for vec = [1, 5, 7, 10], the argmax result should be 3. If we express it as a one-hot encoding, the result is [0, 0, 0, 1], which is pretty close to the result of softmax when \(\tau = 0.1\).

In Distilling the Knowledge in a Neural Network, they also used a temperature parameter in softmax:

> Using a higher value for T produces a softer probability distribution over classes.

Supervised contrastive learning

In the MoCo paper, a softmax loss with temperature is used (it is a slightly modified version of the InfoNCE loss):

\[Loss = -\log\frac{\exp(q\cdot k_+/\tau)}{\sum_{i=0}^{K} \exp(q\cdot k_i/ \tau)}\]

In that paper, \(\tau\) is set to a very small value, 0.07.
If we do not use the temperature parameter, suppose that the dot product of each negative pair is -1 and the dot product of the positive pair is 1, and we have K = 1024. In this case, the model has separated the positive and negative pairs perfectly, but the softmax loss is still too large:

\[-\log\frac{e}{e + 1023e^{-1}} \approx 4.94\]

If we use a temperature of \(\tau = 0.07\), however, the loss will now become practically 0.0. So using a small \(\tau\) helps collapse the probability distribution onto the positive pair and reduces the loss.

MoCo borrows this value from Unsupervised Feature Learning via Non-Parametric Instance Discrimination, in which the authors say:

> τ is important for supervised feature learning [43], and also necessary for tuning the concentration of v on our unit sphere.

Ref 43 refers to the paper NormFace: L2 Hypersphere Embedding for Face Verification. In NormFace Sec. 3.3, the authors show theoretically why it is necessary to use a scaling factor^2 in the softmax loss. Basically, if we do not use a scaling factor, the lower bound for the loss is high, and we cannot learn a good representation of image features.

• https://stats.stackexchange.com/questions/527080/what-is-the-role-of-temperature-in-softmax
• Understanding the Behaviour of Contrastive Loss: https://arxiv.org/abs/2012.09740
• https://ogunlao.github.io/2020/04/26/you_dont_really_know_softmax.html
• https://www.reddit.com/r/MachineLearning/comments/n1qk8w/d_temperature_term_in_simclr_or_moco_papers/

1. The name temperature may come from the Boltzmann distribution, which has a similar formulation and a temperature parameter.↩︎
2. In NormFace, they use \(s=1/\tau\) as the scaling factor and multiply by it, instead of dividing by \(\tau\) directly.↩︎
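The numbers above are easy to reproduce. Here is a minimal sketch of the loss for one positive and 1023 negative pairs (plain Python; the function name and the log-sum-exp stabilization are mine, not MoCo's code):

```python
import math


def info_nce(pos, negs, tau):
    """-log( exp(pos/tau) / (exp(pos/tau) + sum_i exp(neg_i/tau)) ),
    computed with the log-sum-exp trick for numerical stability."""
    logits = [pos / tau] + [n / tau for n in negs]
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_denom - pos / tau


negs = [-1.0] * 1023               # perfectly separated negative pairs
print(info_nce(1.0, negs, 1.0))    # ~4.94: large loss despite perfect separation
print(info_nce(1.0, negs, 0.07))   # ~0.0: a small tau collapses the loss
```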
Classification

Figure 1: The Grail of Classification: to be able to capture conceptual structure in a visually striking way.

When trying to visualize the proximity structure of a high-dimensional pattern, one frequently has to choose between clustering into a hierarchical tree, or projecting the pattern in two or three dimensions. Conventional wisdom considers that multi-dimensional scaling (projection) is good at representing the big picture, whereas hierarchical clustering handles local details more faithfully. Ideally, one would want a low-dimensional configuration rendering both the global and local properties; such that, for example, if one drew between the final points a hierarchical tree obtained from the original data, this tree would appear simple and related branches would stay next to each other. Our research adapts existing scaling algorithms to create such tree-friendly configurations.

Why Existing Scaling Strategies Deal Poorly with Local Structure

Figure 2: Classic MDS will project B and C onto the same point at the intersection of the dotted axes, but the minimal least-squares error for the distances within a one-dimensional configuration is for B” and C”, whose center is at d_AB from A, and whose distance d_B”C” = d_BC/3.

Figure 2 illustrates two problems:
1. Because it lies parallel to an eigenvector pruned by the CMDS, the distance between B and C has been ignored. In contrast, the distance between A and the other points is almost conserved, because it is almost collinear with the eigenvector preserved by the CMDS.
2. The projection ignores the arc, so that even though B and C are both at the same distance from A, their projection gets closer than that distance.

Instead of relying exclusively on the one-shot CMDS procedure, one can optimize the placement of the points by gradient descent, but this approach is plagued by local minima.

Figure 3: Local minima in a metric energy landscape.
When scaling the five points (4, 4, 0), (4, -4, 0), (-4, 4, 0), (-4, -4, 0) and (1, 1, 7), CMDS projects the latter into the plane Z=0, onto the point (1, 1, 0) shown in grey. This graph shows a 2-D section of a 10-variable function: the sum-of-residuals landscape, obtained by fixing the coordinates of the 4 points that were already in the plane (“the base”) and showing the sum of residuals for different locations of the 5th point. For moving the grey point only, there are at least five local minima: four located outside the base and one at about (-1, -1, 0). In fact, this latter point is where gradient descent takes the grey point, but it is not the absolute minimum.

The Tree-Expansion Strategy

In order to jointly minimize stress and render the cluster structure, we propose to apply gradient descent (or Kruskal-Shepard) repeatedly to a growing configuration, obtained by expanding the hierarchical tree. This is illustrated by Figure 4, in which the hierarchical tree and successive configurations are shown side by side. Prior to using this algorithm, one needs the dissimilarity matrix at all stages of expansion. If the tree has been constructed from a SAHN algorithm, these dissimilarities are already available; otherwise they have to be computed in a one-sweep forward pass. Given these dissimilarities, the algorithm is to:

1. Position a starting configuration:
2. Expand the tree:
3. Make a final adjustment down to the desired accuracy.

Figure 4: MDS by Tree expansion

In the left column, the SAHN tree clustering 9 color samples used by Roger Shepard in misidentification judgments. The 9 circles at the bottom are the colors used experimentally; higher-level tree nodes received interpolated RGB values. The top of the tree (top node and its immediate offspring) is truncated. In the right column, successive 2-D configurations obtained by tree expansion. The 3-point configuration is an exact CMDS scaling.
Subsequent configurations (4 to 9 points) are obtained by Kruskal-Shepard, using as initial condition the previous configuration in which one point is replaced by its two contributors. The point chosen is the current highest tree node; it will be replaced by the two tree nodes that it consists of, and those two nodes start at their parent’s location – it is the Kruskal-Shepard algorithm that pulls them apart. On both sides, a blue arrow points at the node about to be split. In the right panel, as the pattern is expanded from 3 points to the full 9 points, we see that some expansions require more reorganization (e.g., from 6 to 7 points), but there is no reversal of relative positions.

Example: Clustering the American Statistical Association Sample Data on Cereals

Figure 5: The American Statistical Association published in 1993 a test bed for MDS algorithms, consisting of 13-dimensional data on 77 types of breakfast cereals. This cereal data is rendered here by different algorithms: a. classical scaling (Young-Torgerson reduction); b. classical scaling followed by a gradient descent to minimize residual square error; c. expansion of the SAHN tree, applying gradient descent to adjust the configuration every time a point is split.

Notice how Classic cereals (rendered by green triangles) are dispersed when gradient descent takes classical scaling as an initial configuration, because that cluster overlaps another one which is more compact. This is a “mountain pass” effect, as is probably the isolation of two “dark diamond” points inside the cluster of “purple squares”. In contrast, the tree expansion is relatively free of such effects.
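The classical scaling step and the squared-residual objective discussed above can be sketched as follows. This is our own minimal Python sketch, not the project's code; the function names are ours, and the `stress` here is the plain sum of squared distance residuals the text describes (not Kruskal's normalized stress):

```python
import numpy as np

def classical_mds(D, k=2):
    """Torgerson classical scaling: embed a distance matrix D into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered (pseudo-)Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the k largest eigenvalues
    L = np.sqrt(np.maximum(w[idx], 0.0))     # clip tiny negatives from round-off
    return V[:, idx] * L                     # n x k configuration

def stress(D, X):
    """Sum of squared residuals between target and configuration distances."""
    Dx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.sum((D - Dx) ** 2) / 2.0       # halve to count each pair once
```

For a pattern that is genuinely two-dimensional, the 2-D CMDS configuration reproduces all pairwise distances and the residual stress is essentially zero; the tree-expansion strategy in the text then minimizes this same residual each time a node is split.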
{"url":"https://interactivity.ucsd.edu/projects/infoArch/classification.html","timestamp":"2024-11-06T20:02:13Z","content_type":"text/html","content_length":"25065","record_id":"<urn:uuid:23227211-2d91-4e69-b0da-50bf9d332016>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00804.warc.gz"}
Arrays Hands-On: Warming Up

Sliding Window - 1

To sum up the previous articles of this section, we looked at a couple of different patterns: the first, simply iterating over the elements and doing something or the other with them along the way; the second, using more than one iterating pointer. In the next couple of articles, let's warm up with another standard technique: the sliding window.

Max Sum Subarray of size K

The problem says that we'll be given an array, let's say, arr, and a number K ( 1 <= K <= len(arr) ); we need to find the maximum sum that a subarray of size K has.

What is a f***ing subarray?

In layman's language, a subarray is a contiguous sub-part of a given array. It'll be easier to understand using an example. If a given array is [5, 2, 8, 8], then:

1. [5], [2], [8] and [8] are its subarrays of size/length equal to 1.
2. [5, 2], [2, 8] and [8, 8] are the subarrays of size 2.
3. [5, 2, 8] and [2, 8, 8] are the subarrays of size 3.
4. [5, 2, 8, 8] is the one and only subarray of size 4.
5. [5, 8] and [5, 8, 8] are not subarrays, because they are not contiguous (the 2 is missing in between).

A small exercise for you: how many subarrays do you think an array of size N will have?

Hint
Try to think about each different possibility of size and how many subarrays there will be for that particular size. Add up the number of subarrays for each subarray_size, and you'll have the total number. Simple math, or not?

Answer
If you observe carefully:

1. There will be exactly N subarrays of size = 1.
2. There will be exactly N - 1 subarrays of size = 2.
3. There will be exactly N - 2 subarrays of size = 3.
4. the trend carries on...
5. There will be exactly 3 subarrays of size = N - 2.
6. There will be exactly 2 subarrays of size = N - 1.
7. There will be exactly 1 subarray of size = N.

Therefore, the total number of subarrays = 1 + 2 + 3 + ... + (N - 2) + (N - 1) + N = N*(N+1)/2

How to solve the problem?
As always, let's begin with a beginner-friendly brute-force approach. We'll go through all subarrays of size K, find their sums, and keep track of the maximum sum encountered so far.

    long maximumSumSubarray(int K, vector<int> &arr, int N) {
        long max_sum = 0;
        for (int st = 0, en = K - 1; st < N && en < N; st++, en++) {
            // Find the sum: arr[st] + arr[st+1] + ... + arr[en-1] + arr[en]
            long cur_sum = 0;
            for (int i = st; i <= en; ++i) {
                cur_sum += arr[i];
            }
            max_sum = max(max_sum, cur_sum);
        }
        return max_sum;
    }

I hope you folks have learned enough by now that you can understand the above code without needing an explanation. (or not?)

Time & Space Complexity

Space Complexity: O(1) [simply because no extra space is used to get the answer]

Time Complexity: O(N*K). If you use some math, you'll see that the outer loop will run (N-K+1) times, and the inner loop will run K times every time. Therefore, operations = N*K - K*K + K. Now, according to what we learned in the time complexity section, if we only take the most significant term, time complexity = O(N*K).

How to solve the problem efficiently?

Here comes the technique to slide our way to efficiency.

Hint 1
When we look at the 1st subarray of size K, i.e. arr[0...(K-1)], surely, calculate the sum; but for the subsequent windows/subarrays of size K, do we need to calculate the sum from scratch every time?

Hint 2

Complete Explanation
A suggestion: keep the above image open while reading; it'll probably help in better visualisation. So, the idea, as visible in the above illustration, is that when we move from one particular subarray of size K (let's say i-1 to j-1) to the next subarray (i to j), the elements arr[i], arr[i+1] ... arr[j-1] are common between the two. The only differentiating elements are:

1. arr[i-1]: present in window 1 but not in window 2.
2. arr[j]: present in window 2 but not in window 1.

So, based on the above intuition, we can do the following:

1.
Calculate the subarray sum (let's say cur_sum) for the 1st subarray of size K.
2. Then keep on sliding the window.
3. If the current window is from st to en, but the cur_sum variable represents the subarray sum for st-1 to en-1, just subtract arr[st-1] from cur_sum and add arr[en] to cur_sum.
4. Keep track of the maximum subarray sum seen so far. Return the max_sum after having gone through all the subarrays of size K.

Implementation

    long maximumSumSubarray(int K, vector<int> &arr, int N) {
        long cur_sum = 0;

        // find the sum for the 1st window of size K
        for (int i = 0; i < K; ++i)
            cur_sum += arr[i];

        long max_sum = cur_sum;

        // keep updating cur_sum as we slide
        // through the other windows,
        // and track max_sum
        for (int st = 1, en = K; en < N; st++, en++) {
            cur_sum -= arr[st - 1];
            cur_sum += arr[en];
            max_sum = max(max_sum, cur_sum);
        }
        return max_sum;
    }

This is another problem very similar to the above problem that you can try to solve.
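As a quick sanity check of the two approaches, here is the same idea in Python (the article's code is C++, but the technique is language-independent; the function names below are ours). The sliding version runs in O(N) instead of O(N*K):

```python
def max_sum_brute(arr, k):
    # O(N*K): recompute each window sum from scratch
    return max(sum(arr[i:i + k]) for i in range(len(arr) - k + 1))

def max_sum_sliding(arr, k):
    # O(N): maintain a single running window sum
    cur = sum(arr[:k])          # sum of the first window
    best = cur
    for en in range(k, len(arr)):
        cur += arr[en] - arr[en - k]  # add entering element, drop leaving one
        best = max(best, cur)
    return best
```

For the example array [5, 2, 8, 8] with K = 2, the window sums are 7, 10 and 16, so both functions return 16.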
{"url":"https://read.learnyard.com/dsa/sliding-window-1/","timestamp":"2024-11-04T01:48:00Z","content_type":"text/html","content_length":"219538","record_id":"<urn:uuid:8a093d3f-16b5-4bc2-8261-ff28f89d5c02>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00089.warc.gz"}
Comments on The Geomblog: "Algorithms and Technologies, and ESA 2006"

Anonymous (2005-10-25):

> "...it is not often that one can invoke a sophisticated algorithm that has known worst-case bounds, and find an implementation for it."

I find it intriguing that the sophisticated implementations (at least for linear programming) don't actually achieve the worst-case bounds. For efficiency in practice they make parameter choices which result in non-polynomial worst-case bounds (or at least, in versions for which a polynomial bound isn't known). Of course, you can always combine the two by running for a long time with the "efficient" parameter choices, and then if you haven't terminated yet, switching to the worst-case parameter choices.
{"url":"http://blog.geomblog.org/feeds/112965657717147057/comments/default","timestamp":"2024-11-03T10:12:32Z","content_type":"application/atom+xml","content_length":"4158","record_id":"<urn:uuid:80b2bdf1-0880-4e79-9b44-cfc5a0b33256>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00622.warc.gz"}
Meaning of tanh activation function? - Oduku - The Newspaper

Tanh activation function

The activation function determines the range of possible activation levels for an artificial neuron; it is applied to the total weighted input of the neuron. Useful activation functions are characterised by their non-linearity. Without an activation function, multilayer perceptrons simply multiply the weights by the input values to calculate the outputs, and any two linear operations performed in succession are equivalent to a single linear operation. With a non-linear activation function, the artificial neural network and the function it approximates are non-linear. The universal approximation theorem states that any multilayer perceptron with a single hidden layer and a nonlinear activation function is a universal function approximator.

Activation Functions seem pointless, so why use them?

Activation functions in neural networks produce non-linear outputs. Without activation functions, the neural network can only compute linear mappings between x and y. Why is that? Forward propagation would simply involve the multiplication of weight matrices by input vectors if activation functions weren't used. To do useful calculations, neural networks need to be able to infer non-linear correlations between input vectors x and outputs y; non-linearity in the x-to-y mapping occurs whenever the underlying data is intricate. If our neural network didn't have an activation function in its hidden layer, it wouldn't be able to represent these complex relationships mathematically.

The Big Four Activation Functions in Deep Learning

It is time to discuss the most popular activation functions used in Deep Learning, along with the benefits and drawbacks of each.

The Sigmoid Function

There was a time when the sigmoid activation function was the most widely used. The Sigmoid function maps inputs onto the interval (0, 1).
Taking x as input, the function returns a value in the open interval (0, 1). The sigmoid nonlinearity is rarely used in practice nowadays. In particular, it has these two problems:

As a practical matter, sigmoid functions "kill" gradients. The first problem is that gradients can vanish for sigmoid functions. The function saturates for large negative or positive input values (the blue regions), where the derivative of the sigmoid approaches 0. With a derivative that close to 0, weight updates become vanishingly small and learning stalls.

The tanh activation function

In Deep Learning, the tanh activation function is also commonly utilised. Below is a graphic of the hyperbolic tangent function. As with the sigmoid, its derivative tends toward zero as the magnitude of the input becomes very large or very small (blue region in Fig. 3). Its outputs, however, are zero-centered, unlike the sigmoid function's. Compared to sigmoid, tanh is more commonly used in practice.

This article will show you how to implement the tanh activation function in TensorFlow with the help of the following code:

    import tensorflow as tf

    z = tf.constant([-1.5, -0.2, 0, 0.5], dtype=tf.float32)
    output = tf.keras.activations.tanh(z)
    print(output.numpy())
    # approximately [-0.90514827, -0.19737533, 0., 0.46211714]

Where can I get the Python code for the tanh activation function and its derivative?

Both the tanh activation function and its derivative can be expressed straightforwardly. To use the formula, we define a function:

    def tanh_function(z):
        return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))
One way to characterise the derivative (tanh prime) is:

    def tanh_prime(z):
        return 1 - np.power(tanh_function(z), 2)

When to use the tanh activation function: the values of the tanh activation function range from -1 to 1, which centres the data around a mean closer to 0 and thereby facilitates learning in the subsequent layer. This is why the tanh activation function can be put to good use in practice.

Here is some basic Python code that plots the tanh activation function and its derivative:

    # import libraries
    import matplotlib.pyplot as plt
    import numpy as np

    # define a tanh activation function returning the value and its derivative
    def tanh(x):
        t = (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
        dt = 1 - t ** 2
        return t, dt

    b = np.linspace(-4, 4, 200)  # input range (reconstructed)

    # prepare axes centred at the origin
    fig, ax = plt.subplots(figsize=(9, 5))
    ax.spines['left'].set_position('center')
    ax.spines['bottom'].set_position('center')
    ax.spines['right'].set_color('none')
    ax.spines['top'].set_color('none')
    ax.xaxis.set_ticks_position('bottom')
    ax.yaxis.set_ticks_position('left')

    # construct the plot and display it
    ax.plot(b, tanh(b)[0], color="#307EC7", linewidth=3, label="tanh")
    ax.plot(b, tanh(b)[1], color="#9621E2", linewidth=3, label="derivative")
    ax.legend(loc="upper right", frameon=False)
    plt.show()

The output of the above code plots the tanh and its derivative.

The Softmax Activation Function

One final activation function I'd like to cover is the softmax. This activation function is unique in comparison to the others. The softmax activation function limits the values of the output neurons to lie between 0 and 1 and sum to one, so they can be read as probabilities over the classes. To put it another way, each feature vector x is assigned to a specific category: an image of a dog should yield a high probability for the class dog, not equal probabilities for dog and cat. It is crucial that the output adequately represents the class the input belongs to.
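The softmax description above can be made concrete with a short sketch. This is a standard, numerically stable formulation (subtracting the maximum before exponentiating), not code from the article:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; the result is mathematically unchanged
    e = np.exp(z - np.max(z))
    return e / e.sum()
```

For example, `softmax(np.array([1.0, 2.0, 3.0]))` yields probabilities that sum to 1, with the largest input receiving the largest probability (about 0.665 for the input 3.0).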
{"url":"https://oduku.com/2023/02/02/meaning-of-tanh-activation-function/","timestamp":"2024-11-02T08:49:32Z","content_type":"text/html","content_length":"122963","record_id":"<urn:uuid:bfef78ed-5152-4950-b4e3-2dbde979096c>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00430.warc.gz"}
Financial Functions - Excel Tutorial for Excel 2013

How to use Financial functions in Excel

In this tutorial I will show you how to use some very useful financial functions in Excel by walking you through an example about a mortgage.

PMT Function

The PMT function calculates the monthly payments for a loan based on constant payments and a constant interest rate. As an example, let's say we have a loan with monthly payments, an annual interest rate of 8%, a 30-year duration, a present value of $200,000, and a future value of 0 (the amount of debt remaining after making all repayments). Since we make monthly payments, we put in a rate of 0.66% (8% divided by 12 months, rounded). Subsequently, we set 360 for nper (30 years * 12 months), which is the total number of payment periods.

Using the PMT function in Excel
Example of a completed PMT function in Excel

In this case you would have to pay $1,456.39 per month to repay your loan in 30 years. Note that for loans FV can be omitted (the FV of a loan equals 0). If Type is left out, Excel assumes that payments are due at the end of each period.

RATE Function

Using the same example, it might be the case that you know the monthly payment but not the interest rate. In this case, you can use the RATE function to calculate the interest rate.

Using the RATE function in Excel

NPER Function

In a similar fashion, when you don't know the number of periods but you do have all the other information, you can use the NPER function to calculate the number of periods.

Using the NPER function in Excel

Another thing you can do is play around with the number of months in your calculations to see how it impacts the monthly payments.

Find the number of repayment periods necessary with different monthly payments

In this case, if you increase the monthly repayments to $3,000 per month, you will only need 88 periods to pay back your loan.
PV Function

You might have spotted the pattern already: if you know all the other data but not the loan amount, you can use the PV function to find out how much money was borrowed, given the monthly payments, the interest rate, and the number of periods.

Using the PV function in Excel

FV Function

Use the FV function to determine whether you will pay off your debt in full or be left with outstanding debt, given certain monthly payments on a loan with a given interest rate. The first example shows that by paying $1,456.39 you will pay off your loan in 30 years, but if you only make monthly payments of $1,400, as in the second example, you'll be left with a debt of $82,689.88 at the end of the period.

Using the FV function in Excel
Find the amount of debt remaining after finishing all repayment periods with different monthly payments

As this can be a quite complex topic, I'm sure you might have some questions. If you do, just let me know below and I'll answer you personally.
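The figures in this tutorial can be reproduced with the standard annuity formulas. A caveat: Excel's PMT/FV return negative numbers for outgoing payments by convention, while the sketch below uses positive magnitudes; it also uses the rounded 0.66% monthly rate from the screenshots, which is what yields the $1,456.39 figure:

```python
from math import log

def pmt(rate, nper, pv):
    # fixed periodic payment of an annuity-immediate (positive-magnitude convention)
    return pv * rate / (1 - (1 + rate) ** -nper)

def nper(rate, payment, pv):
    # number of periods needed to repay pv with a fixed payment
    return -log(1 - rate * pv / payment) / log(1 + rate)

def fv(rate, n, payment, pv):
    # debt remaining after n periods of a fixed payment
    return pv * (1 + rate) ** n - payment * ((1 + rate) ** n - 1) / rate
```

With rate = 0.0066, `pmt(0.0066, 360, 200000)` gives about 1456.39; `nper(0.0066, 3000, 200000)` gives about 88.1 periods; and `fv(0.0066, 360, 1400, 200000)` gives roughly the $82,690 of remaining debt quoted above.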
{"url":"http://www.spreadsheetpro.net/excel-financial-functions/","timestamp":"2024-11-02T19:02:39Z","content_type":"application/xhtml+xml","content_length":"49098","record_id":"<urn:uuid:8d17fc6a-50db-45dc-a2be-836becac0f79>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00829.warc.gz"}
Simulation of piles subjected to excavation or embankment loading - New Zealand Geotechnical Society

This paper provides an investigation into the response of (forward-rotated or back-rotated) piles subjected to embankment loading, in light of 2-/3-layer models. First, recent advances are briefly reviewed concerning the range of parameters used to model piles subjected to excavation and embankment loading. Second, simulations of six centrifuge tests on embankment piles are exemplified. Third, 1g model tests are provided to reveal the back-rotation behaviour of piles, and its simulation. The study aims to unify the design of piles under various types of passive loading (e.g. lateral spreading, embankment, excavation, sliding slope, etc.).

Numerical analysis indicates the challenge in gaining parameters to achieve reliable prediction of the pile response affected by excavation and embankment loading. For instance, the on-pile pressure from centrifuge tests on piles subjected to lateral spreading is reportedly quite close to the residual strength for liquefied sand (Armstrong et al. 2014), but is sometimes inconsistent with JRA (2002) methods. The prevalent p–y curve based methods are not suitable for modelling piles subjected to lateral spreading, judging against 3D numerical analysis, although modifying the residual strength and stiffness of the p–y curve improves the prediction.

Figure 1: Tests on piles under an inverse triangular, uniform or arc profile of soil movement

Recently, Guo (2015, 2016) developed 2- and 3-layer theoretical models and the associated closed-form solutions. The solutions well capture the nonlinear response of piles in sliding soil [e.g. lateral spreading (Dobry et al. 2003), sliding sand (Guo et al. 2017; see Fig. 1), and slope (Frank & Pouget 2008)].
In particular, the same set of input parameters well captures the 3−5 times higher bending moment induced under translational movement than under rotational movement, which is not seen in other simulations. This prompts the use of the models to capture the impact of embankment or excavation loading on piles.

2-/3-LAYER MODELS AND SOLUTIONS

Rigid passive piles are simulated for a stepwise, uniform soil movement w[s] (thus an external loading p[s] of w[s]k[s], where k[s] is the modulus of subgrade reaction) to a sliding depth l[m] [see Fig. 2]. A fictitious layer is introduced to account for dragging above the underlying stable layer, and forms a 3-layer model [see Fig. 2, Guo (2016)]: a sliding layer with the modulus k[s] (depth 0 − l[m]), a transition (fictitious) layer with a linearly increasing modulus from k[s] at depth l[m] to mk[s] at the depth z[m] of maximum bending moment M[m], and a stable layer with a modulus mk[s] over depth z[m]−l (l = pile embedment), respectively. Without the transition layer, the model is referred to as the 2-layer model (Guo 2015).

Under a rotational restraining stiffness k[θ] and a lateral force H, the pile rotates rigidly about a depth z[r] to an angle ω[r] and a mudline deflection w[g] under the sliding movement w[s], or on-pile force per unit length (FPUL) p[s] [= αp[ub]l[m]/l, and p[b] = αp[ub]], to a sliding depth of l[m] on the pile. As shown elsewhere, the l[m] is equal to (0.6−3)l[exc] for an excavation depth l[exc], for a distance between the excavation face and the piles of about 0.25l. The model is underpinned by five input parameters: k[s], m, p[b] (FPUL at pile-base level), k[θ], and a factor α of non-uniform soil movement. Explicit expressions were developed to estimate the displacement w(z), shear force T(z) and bending moment M(z) at depth z, the maximum bending moment M[m], its depth z[m], and the maximum shear force T[m] (sliding or stable layer). The solutions are repeated for a series of l[m]/l (e.g.
raising by a step of 0.1) and p[s] = p[b]l[m]/l, until a final l[m]/l = 0.5−0.9, to gain the nonlinear response (Guo 2015). It should be stressed that the solutions are intended for rigid piles with rotational stiffness k[θ], k[θG], and k[θT]; otherwise other solutions (e.g. Poulos and Davis 1980; Guo 2012) should be consulted.

Figure 2: Models for rigid, passive piles: model, p[b] and p[ub], k[θ], and p[s] for nonlinear response

The 2-layer and 3-layer models are adopted to simulate the response of piles in ten model tests, subjected to excavation (Fig. 3, e.g. Leung et al. 2000; Ong et al. 2006) or embankment loading (e.g. Stewart et al. 1994; Armstrong et al. 2014). Typical parameters [of FPUL p[ub], modulus of subgrade reaction k[s], modulus ratio m of the stable over the sliding layer, and rotating stiffness k[θ]] are deduced against the measured pile response. The study reveals that: (1) The k[s] reduces from 225s[u] (stable piles, s[u] = undrained shear strength) to (55−130)s[u] (collapsed piles) due to increasing soil deformation from adjacent excavation. It is about 2.8N or 6z (MPa, N = SPT blow counts, and z = depth, m). Only (0.1−1.0)% of k[s] (of excavation loading) is noted for consolidating and laterally spreading embankments. (2) Normalised by overburden stress (σ[v]′), pile width (d) and the coefficient of passive earth pressure (K[p]), the ratio p[ub]/(K[p]σ[v]′d) is deduced as 0.26−0.41 or 0.8−1.1 for an embankment underlain by a thick clay layer or laterally spreading, and by clay-sand layers, respectively. (3) Excavation induces deformation of soil to a depth of 1.08αl[exc], and an on-pile FPUL p[s] (= αp[ub]l[exc]/l). The evolution of wall collapse from a stable wall is associated with an increasing deformation zone (with α raising from 1.2−2.3 to 2.5−2.9), friction angle of soil sliding [from (0.61−0.65)φ′ to (0.65−0.83)φ′, φ′ = angle of internal friction] and ratio of rotational resistance (modulus) m (from 2.7 to 4) at constant loading depths for clay-sand layers.
The stress reduces by 50% (with α = 0.63−1.1 and m = 1.4) for piles in a single sand layer, but for a high stiffness (k[s] = 4.3 MPa). (4) A consolidating embankment involves a residual p[ub], smaller dimension and stiffness (with α = 0.5−0.72, k[s] = 15−20 kPa) but larger rotational resistance (m = 17.7) than that induced by excavation loading (of α = 1.2−2.9, k[s] = 0.55−2.25 MPa, and m = 2.9−4.0); and finally (5) The α (= 0.9−1.3), m (= 5.2−9.0), and k[s] (= 26−80 kPa) of a laterally spreading sand embankment sit in between those of the consolidating embankment and of excavation loading.

The response of flexible piles is well captured using the 2-layer model (for rigid piles) and modified k[s], p[ub] and m values of (1.2−3)k[s], (−0.8)p[ub] and (−0.8)m. The impact of pile flexibility is also assessed using new models incorporating hinges of rigid piles. The new findings are useful to the design of piles, regardless of the method used. Typical examples are provided next for modelling the pile response subjected to embankment loading.

Figure 3: Centrifuge tests on piles (behind wall) subjected to excavation in sand or clay-sand layer
Four piles (22.5 m in length, 0.43 m in diameter) were instrumented in the pile group. Two typical Tests 9 and 11 are simulated herein. In Test 9, the soft clay layer was 18-m thick, and had an average s[u] of 17 kPa. The pile-cap displacement w[g] (front/back row) and maximum moment M[m] (both rows) were measured and are shown in Fig. 4(a) and (b), respectively for the increasing embankment height (or the average vertical stress q). The measured moment M[m] is plotted in Fig. 4(c) against the pile-cap displacement w[g] for both rows. The measured bending moment profiles with depth at ‘ultimate’ state are depicted in Fig. 4(d) for front- and back-row piles. The piles (l = 22.0 m, d = 0.43 m) are simulated using following parameters: p[ub] = 51 kN/m [= 4s[u]dl/(0.82l[m]), with s[u] = 17 kPa, d = 0.43 m, and l[m] = 18 m], m =17.7 (= ratio of coefficient of passive earth pressure K[p] [= tan^2(45+0.5φ)] over the active one K[a ][= tan^2(45-0.5φ)]) at ultimate state], k[θ] = k[G] = 1.12 MN⋅m/radian (with = 0.007, G = ground level), and α = 0.5 (deep sliding) (see Fig. 4(a)). Given φ ′ = 23^o, γ′ = 16.5 kN /m^3, and l[m] = 0.5l = 11 m, and q = 100 kPa, it follows an average p[s] of 34.2 kN/m (= 51α), and p[ub]/(σ[v]′d) = 0.72. Figure 4: Predicted versus centrifuge Test 9 (Stewart et al. 1994): (a) embankment load versus maximum bending moment, (b) embankment load versus pile-head deflection, (c) pile-head deflection versus maximum bending moment, (d) bending moment profiles Figure 5 Predicted versus centrifuge Test 9 (Stewart et al. 1994): (a) embankment load versus maximum bending moment, (b) embankment load versus pile-head deflection, (c) pile-head deflection versus maximum bending moment, (d) bending moment profiles Figure 5 Predicted versus centrifuge Test 9 (Stewart et al. 
1994): (a) embankment load versus maximum bending moment, (b) embankment load versus pile-head deflection, (c) pile-head deflection versus maximum bending moment, (d) bending moment profiles The corresponding p[s]/(dq) = 0.72 (for over-consolidated clay) is slightly smaller than 0.75−0.792 deduced from measured response of piles subjected to short-term embankment loading, but is much higher than p[s]/(dq) = 0.277−0.35 for long-term embankment loading (Jeong et al. 1995). The p[s]/(σ[v]′dK[p]) is equal to 0.286. The previous study (Guo 2016) indicates k[s] (= 15~20 kPa) and m = 12.3 (2-layer) or 17.7 (3-layer). The q–w[g], q–M[m], and w[g]–M[m] curves were thus predicted, and are shown in Fig. 4, along with the measured data. The ‘limiting’ bending moment profile was predicted for a loading depth c of 14.7 m (= 0.81l[m], l[m] = 18 m), as plotted in Fig. 4(d). The predictions are insensitive to the modulus k[s], and indicate 50% (α = 0.5) surcharge loading q being transferred onto the on-pile pressure (= p[s]/d) for the deep sliding. The k[s ]may also be obtained using the scant modulus G[sec]= 60s[u][1-0.985(c/l[m])^0.2] (Stewart et al. 1994), assuming a maximum shear stress mobilization ratio of c/l[m]. At c/l[m]= 0.82 with s[u]= 17 kPa, it follows G[sec] = 54.4 kPa, and k[s ]= 18.1 kPa {= G[sec]/[2(1+0.5)], Poisson’s ratio = 0.5}. In Test 11, the clay layer was 8-m-thick with s[u] = 11 kPa. The piles (with l = 22.0 m, d = 0.43 m) are modelled using p[ub] = 121 kN/m (= 3.1s[u]dl[m], with s[u] = 11 kPa, d = 0.43 m, and l[m] = 8 m), k[s] = 20 kPa, m =17.7, k[θ] = 26.6 MN⋅m/radian (== 0.15 for the pile-cap), and α = 0.72 (a profile of inverse triangular movement). The p[ub ]offers an average p[s] of 86 kN/m (= 121α), p[ub]/(d σ[v]′) = 1.5−2.1 and p[ub]/(σ[v]′dK[p]) = 0.68−0.93 (with φ ′ = 23^o, γ′ = 16.5 kN/m^3, and l[m] = 8−11 m). The normalised rotational stiffness was raised to 0.15, due to a higher impact of underlying sand layer on the soft clay layer. 
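Two of the figures quoted above can be reproduced numerically. This is our own sketch (variable names are ours; symbols as defined in the text), checking the Test 9 secant-modulus estimate of k[s] and the Test 11 normalised FPUL ratios:

```python
from math import tan, radians

# Test 9: secant modulus G_sec = 60*s_u*[1 - 0.985*(c/l_m)^0.2] at c/l_m = 0.82
s_u = 17.0                                          # undrained shear strength, kPa
G_sec = 60.0 * s_u * (1.0 - 0.985 * 0.82 ** 0.2)    # kPa, expect ~54.4
k_s = G_sec / (2.0 * (1.0 + 0.5))                   # Poisson's ratio 0.5, expect ~18.1 kPa

# Test 11: p_ub/(d*sigma_v') and p_ub/(sigma_v'*d*K_p) for l_m = 8 to 11 m
p_ub, d, gamma = 121.0, 0.43, 16.5                  # kN/m, m, kN/m^3
K_p = tan(radians(45.0 + 23.0 / 2.0)) ** 2          # passive coefficient, phi' = 23 deg
ratios = [p_ub / (d * gamma * l_m) for l_m in (8.0, 11.0)]   # p_ub/(sigma_v' d)
ratios_Kp = [r / K_p for r in ratios]               # p_ub/(sigma_v' d K_p)
```

The results reproduce G[sec] = 54.4 kPa, k[s] = 18.1 kPa, p[ub]/(dσ[v]′) = 1.5−2.1, and p[ub]/(σ[v]′dK[p]) = 0.68−0.93, matching the ranges stated in the text.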
These parameters allow a good prediction of the q–w[g], q–M[m], and w[g]–M[m] curves (not shown herein). As a comparison, the 2-layer predictions were made using the same parameters (but for a reduced m of 12.3), and they also agree well with the measured data.

4.2 Piles embedded in lateral spreading embankment

Three centrifuge tests (Armstrong et al. 2014) were conducted to examine the response of embankment piles subjected to lateral spreading. Each test comprised two identical approach embankments (of dry, dense Monterey sand, D[r] = 100%), separated by a 12-m wide channel. The embankments were 8 m high at the crest and 11 m high at the model container wall (or an average of 10 m in the current modelling), with a crest width of 12 m and side slopes of 2:1 (horizontal to vertical) in all directions. Below the embankment was a compacted, non-plastic silt layer 1.3 m thick, a 5-m-thick loose sand layer (D[r] = 30%), a second silt layer 0.7 m thick, and a 17-m-thick, dense sand layer (D[r] = 75%). One embankment from each centrifuge test (referred to as Kobe-1×6, Kobe-2×4, and Sine-2×4) included a pile group that extended into the dense sand layer.

The Kobe-1×6 test was conducted on a single row of six closed-ended, aluminium piles with an outer diameter of 0.72 m and a flexural stiffness (EI) of 174 MN⋅m^2. The Kobe-2×4 and Sine-2×4 group tests both had two rows of four closed-ended, aluminium piles, with d[o] (outside diameter) = 1.22 m, a center-to-center spacing of 3d[o], and EI = 1,876 MN⋅m^2. An aluminium-epoxy pile cap connected the piles together in each test. Input motions were applied to the model base in the longitudinal direction. The Kobe-1×6 and Kobe-2×4 models were subjected to a modified version of the ground motion recorded at a depth of 83 m at Port Island in the 1995 Kobe Earthquake, with peak base accelerations of 0.8g and 0.7g, respectively.
Model Sine-2×4 was first subjected to a shaking event containing a total of 20 sine-wave cycles, with packets at 0.2g, 0.3g, and 0.5g.

Figure 5: Predicted versus numerical solutions (Armstrong et al. 2014) of (a) maximum shear force versus embankment displacement, and (b) maximum bending moment versus pile-head deflection. Legend: Kobe 1×6: m = 5.2, k[s] = 26 kPa, p[ub] = 471.1 kN/m, k[θ]/(k[s]l^3) = 0.0045, α = 0.9; Kobe 2×4: m = 9, k[s] = 79.5 kPa, p[ub] = 730.2 kN/m, k[θ]/(k[s]l^3) = 0.019, α = 1.3 (p[b] = 949.3 kN/m); Sine 2×4: m = 9, k[s] = 71 kPa, p[ub] = 730.2 kN/m, k[θ]/(k[s]l^3) = 0.0016, α = 1.0

An equivalent static analysis (ESA) was conducted on the centrifuge tests to obtain, in sequence, (1) the embankment displacement and displacement profile for a range of pile/bridge restraining forces, (2) the pile/bridge restraining forces for a range of imposed ground displacements, and (3) the point of compatibility in forces and displacements between the two steps. The analysis overestimated the embankment deformations (without piles) (Armstrong et al. 2014) and the bending moments of the piles (Fig. 5) against the centrifuge tests. It is thus important, but difficult, to select input parameters (e.g. undrained shear strength). In simulating the piles of the Kobe 2×4 test, the 3-layer solutions adopt: p[ub] = 730.2 kN/m, m = 9 (φ′ = 30°), k[s] = 70.5 kPa [= 51d1.5m(l[m]/l)^21.5, d = 1.22 m], and k[θ] = 1.3 MN⋅m/rad (see Fig. 5). The surcharge q was calculated for an embankment height h[e] of 8 to 11 m (above the loose sand). The lateral spreading FPUL p[s] was estimated as 162.26 kN/m (= 0.78σ[v]′d, with σ[v]′ = γ′h[e], γ′ = 17 kN/m^3, and h[e] = 10 m) in light of p[s]/(σ[v]′dK[p]) = 0.26 (with φ′ = 30°). The normalised on-pile pressure p[s]/(dq) is estimated as 0.78, as with short-term embankment loading (Jeong et al. 1995). The p[ub] is obtained using l = 22.5 m, l[m] = 5 m, and α = 1.0 with p[ub]/(σ[v]′d) = 0.6.
The current predictions agree well with (1) the numerical solutions for the restraining force−embankment displacement relationship and the pile-head displacement−maximum bending moment curve in Fig. 5, and (2) the profiles of bending moment in Fig. 6. As with Kobe 2×4, the groups Kobe 1×6 and Sine 2×4 were simulated and are shown in Fig. 5. A design level of m = 5.2 (about 60% of the ultimate value of 9) was adopted for Kobe 1×6 to compensate for the impact of flexibility associated with the small diameter. A similar ratio p[s]/(σ[v]′dK[p]) of 0.28 (based on σ[v]′ = 170 kPa and φ′ = 30°) [or p[ub]/(dσ[v]′) = 0.66] was adopted, which offers p[s] = 105.3 kN/m (Kobe 1×6) and 176.3 kN/m (Sine 2×4), respectively. The current predictions agree well with the numerical solutions. The pile-head displacement (at c = l[m] = 5 m) is predicted as 1.05 m (Kobe 1×6), 0.75 m (Kobe 2×4) and 0.56 m (Sine 2×4), which agree well with the measured values of 1.03 m, 0.75 m and 0.57 m, respectively. The agreement is observed (Fig. 6) for the profiles of bending moment as well. The current solutions are much simpler and more efficient to apply than other approaches, and have good accuracy. Note that the stipulated α value can only be verified against the evolution of the pile response with the soil movement, which is not available.

Figure 6: Predicted versus measured (Armstrong et al. 2014) bending moment profiles at a pile-head displacement of (a) 1.03 m (Kobe 1×6), (b) 0.75 m (Kobe 2×4) and (c) 0.57 m (Sine 2×4)

Piles Subjected to Backrotation

The existing tests and analytical and numerical simulations have been confined to forward rotation (e.g. Bransby & Springman 1997; Juirnarongrit & Ashford 2003; Armstrong et al. 2014). However, it was back-rotation that incurred the failure of most piles during natural disasters (Knappett & Madabhushi 2009; Fraser 2013; Haskell et al. 2013; Guo 2020).
The tests were conducted using the apparatus shown previously (Guo et al. 2017) on 2 piles in line, without an axial load P (= 0) or with P = 294 N/pile, respectively, at a sliding depth (SD) l[m] of 0.57l (i.e. 0.4 m). The model piles were subjected to a uniform translation of the sand (at the loading location, Fig. 1) to a total movement w[f] of 140 mm. The piles were made of aluminum tube, 1.2 m in length, 32 mm in diameter (d[32]), and 1.5 mm in wall thickness (t). The bending stiffness E[p]I[p] is 1.28×10^6 kN⋅mm^2. The model sand has a dry unit weight γ′ of 16.27 kN/m^3 and an internal (residual) frictional angle φ of 38°. The d[32] piles were socketed into an aluminum cap. The bending capacity M[o]^y of the pile-cap connections is 5−10% of that of the pile body (M[y]). The tests provide profiles of bending moment, shear force, soil reaction, and deflection; the maximum bending moment M[m], maximum shear force Q[m], pile rotation angle ω[r], and pile-head deflection w[g]; and k[s] of 20−35 kPa. The model piles display features of sway, sliding and back-rotation, as with in-situ piles. The pile deflection w[g], rotation angle ω[r] and bending moment need to be curbed to prevent the formation of hinges. The parameters are estimated as (Guo 2016): (i) k[s] = 25 kPa (= 2.1G[s], with G[s] = 12 kPa); (ii) m = 17.7 (= K[p]/K[a], ultimate state); (iii) p[ub] = 10 kN/m (= s[g]γ′K[p]^2dl), using γ′ = 16.5 kN/m^3, φ = 38°, d = 32 mm, l = 0.7 m, and s[g] = 1.53 (average). The p[ub] and k[s] values for the rotating piles are reduced to 5.0−5.6 kN/m and 14−16 kPa, respectively, to simulate the translating piles. The parameters are 'identical' to those of forward-rotating piles but with a negative k[θ] (normalised value of −0.04). They are presented together as m / k[s] (kPa) / p[ub] (kN/m) / normalised k[θ], i.e. 17.7/25/5.6/−0.04 and 17.7/16/5/−0.04 for the cases without and with vertical loading, respectively.
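The ratio m = K[p]/K[a] and the limiting-resistance estimate in item (iii) can be checked numerically (a sketch with my own helper name; Rankine earth-pressure coefficients are assumed, so K[a] = 1/K[p]):

```python
import math

def rankine_kp(phi_deg):
    """Rankine passive earth-pressure coefficient, K_p = tan^2(45 + phi/2)."""
    return math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2

kp = rankine_kp(38.0)          # phi = 38 deg
m = kp / (1.0 / kp)            # K_p/K_a, with K_a = 1/K_p for Rankine
p_ub = 1.53 * 16.5 * kp**2 * 0.032 * 0.7   # s_g * gamma' * K_p^2 * d * l (kN/m)
print(round(m, 1), round(p_ub, 1))  # 17.7 and 10.0
```

Both quoted values, m = 17.7 and p[ub] = 10 kN/m, are recovered.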
Figure 7: Yielding and sliding piles − predicted versus measured development of 2-pile in-line groups: (a) M[mi]−w[f], (b) Q[mi]−w[f], (c) w[g]−M[mi], and (d) ω[r]−M[mi]

The test piles are subjected to soil movement only, with no shear force at head level (H = 0). The M[y] is estimated approximately, using a yield stress of 350 MPa (of the aluminum), as 0.4 kN⋅m [= 350×10^3 × 1.28/(70×10^6)/(0.5d), with d = 0.032 m and Young's modulus of 70 GPa]. The connection capacity M[o]^y is estimated as 20−60 N⋅m using a ratio M[o]^y/M[y] of 0.05−0.2. This M[o]^y/M[y] ratio is much lower than the 0.25−0.6 gained from steel pile-to-pile-cap connections, owing to stress concentration at the pile–cap connection.

This paper provides an investigation into the response of piles subjected to embankment loading, and of back-rotated piles, in light of the 2-/3-layer models. Recent advances are briefly reviewed concerning the range of parameters used to model piles subjected to excavation and embankment loading. Simulations of six centrifuge tests on embankment piles are elaborated. 1g model tests are presented to reveal the back-rotation behaviour of piles and its capture using a negative stiffness in the 2-/3-layer models. The study is aimed at unifying the design of piles under passive loading (e.g. lateral spreading, embankment, excavation, sliding slope, etc.).

References

Armstrong, R. J., R. W. Boulanger & M. H. Beaty 2014. Equivalent static analysis of piled bridge abutments affected by earthquake-induced liquefaction. Journal of Geotechnical and Geoenvironmental Engineering, ASCE 140(8): 04014046.
Bransby, M. F. & S. M. Springman 1997. Centrifuge modelling of pile groups adjacent to surcharge loads. Soils and Foundations 37(2): 39-49.
Dobry, R., T. Abdoun, T. D. O'Rourke & S. H. Goh 2003. Single piles in lateral spreads: field bending moment evaluation. Journal of Geotechnical and Geoenvironmental Engineering, ASCE 129.
Frank, R. & P. Pouget 2008. Experimental pile subjected to long duration thrusts owing to a moving slope. Geotechnique 58(8): 645-658.
Guo, W. D. 2012. Theory and practice of pile foundations. Boca Raton, London, New York: CRC Press.
Guo, W. D. 2015. Nonlinear response of laterally loaded rigid piles in sliding soil. Canadian Geotechnical Journal 52(7): 903-925.
Guo, W. D. 2016. Response of rigid piles during passive dragging. International Journal for Numerical and Analytical Methods in Geomechanics 40(14): 1936-1967.
Guo, W. D., H. Y. Qin & E. H. Ghee 2017. Modeling single piles subjected to evolving soil movement. International Journal of Geomechanics 17(4): 180-196.
Haskell, J. J. M., S. P. G. Madabhushi, M. Cubrinovski & A. Winkley 2013. Lateral spreading-induced abutment rotation in the 2011 Christchurch earthquake: observation and analysis. Geotechnique 63.
Jeong, S., J. L. D. Seo & J. Park 1995. Time-dependent behavior of pile groups by staged construction of an adjacent embankment on soft clay. Canadian Geotechnical Journal 41: 644-656.
JRA 2002. Specifications for highway bridges. Japan Road Association, prepared by the Public Works Research Institute (PWRI) and Civil Engineering Research Laboratory (CRL), Japan.
Knappett, J. A. & S. P. G. Madabhushi 2009. Influence of axial load on lateral pile response in liquefiable soils. Part II: numerical modelling. Geotechnique 59(7): 583-592.
Leung, C. F., Y. K. Chow & R. F. Shen 2000. Behaviour of pile subject to excavation-induced soil movement. Journal of Geotechnical and Geoenvironmental Engineering, ASCE 126(11): 947-954.
Ong, D. E. L., C. F. Leung, Y. K. Chow & T. G. Ng 2006. Severe damage of a pile group due to slope failure. Journal of Geotechnical and Geoenvironmental Engineering, ASCE 141(12): 04015014.
Stewart, D. P., R. J. Jewell & M. F. Randolph 1994. Design of piled bridge abutment on soft clay for loading from lateral soil movements. Geotechnique 44(2): 277-296.
Xiao, Y., H. Wu, T. T. Yaprak, G. R. Martin & J. B. Mander 2006. Experimental studies on seismic behavior of steel pile-to-pile-cap connections. Journal of Bridge Engineering, ASCE 11(2): 151-159.
Denjoy integral

From Encyclopedia of Mathematics

The narrow (special) Denjoy integral is a generalization of the Lebesgue integral. A function $f$ is said to be integrable in the sense of the narrow (special, $D^*$) Denjoy integral on $[a,b]$ if there exists a continuous function $F$ on $[a,b]$ such that $F'=f$ almost everywhere, and if for any perfect set $P$ there exists a portion of $P$ on which $F$ is absolutely continuous and

$$\sum_n\omega(F;(\alpha_n,\beta_n))<\infty,$$

where $\{(\alpha_n,\beta_n)\}$ is the totality of intervals contiguous to that portion of $P$ and $\omega(F;(\alpha,\beta))$ is the oscillation of $F$ on $(\alpha,\beta)$; the integral is then $(D^*)\int_a^b f\,dx=F(b)-F(a)$. This generalization of the Lebesgue integral was introduced by A. Denjoy, who showed that his integral reconstructs a function from its pointwise finite derivative. The $D^*$ integral is equivalent to the Perron integral.

The wide (general) Denjoy integral is a generalization of the narrow Denjoy integral. A function $f$ is said to be integrable in the sense of the wide (general, $D$) Denjoy integral on $[a,b]$ if there exists a continuous function $F$ on $[a,b]$ such that its approximate derivative is almost everywhere equal to $f$ and if, for any perfect set $P$, there exists a portion of $P$ on which $F$ is absolutely continuous; here, too, the integral is $(D)\int_a^b f\,dx=F(b)-F(a)$. It was introduced independently, and almost at the same time, by Denjoy and A.Ya. Khinchin. The $D$ integral reconstructs a continuous function from its pointwise finite approximate derivative.

A totalization $(T_{2s})_0$ is a constructively defined integral, devised for solving the problem of constructing a generalized Lebesgue integral which would permit one to treat any convergent trigonometric series as a Fourier series (with respect to this integral). It was introduced by Denjoy. A totalization $(T_{2s})$ differs from a totalization $(T_{2s})_0$ by the fact that the definition of the latter involves an approximate rather than an ordinary limit.
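A classical example (not part of the original article) illustrating why such generalizations are needed — an everywhere-finite derivative that is not Lebesgue-integrable, yet is integrated by the narrow Denjoy (equivalently, Perron) integral:

```latex
F(x)=\begin{cases} x^{2}\sin\dfrac{1}{x^{2}}, & 0<x\le 1,\\ 0, & x=0, \end{cases}
\qquad
F'(x)=2x\sin\frac{1}{x^{2}}-\frac{2}{x}\cos\frac{1}{x^{2}}\quad(x\ne 0),\qquad F'(0)=0.
```

Here $F'$ is finite at every point of $[0,1]$, but $|F'|$ fails to be Lebesgue-integrable near $0$; nevertheless $(D^*)\int_0^1 F'(x)\,dx=F(1)-F(0)=\sin 1$.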
Denjoy [5] also gave a descriptive definition of a totalization $(T_{2s})$. For relations between $(T_{2s})_0$ and $(T_{2s})$ and other integrals, see [6].

[1a] A. Denjoy, "Une extension de l'intégrale de M. Lebesgue", C.R. Acad. Sci., 154 (1912) pp. 859–862
[1b] A. Denjoy, "Calcul de la primitive de la fonction dérivée la plus générale", C.R. Acad. Sci., 154 (1912) pp. 1075–1078
[2] A. Denjoy, "Sur la dérivation et son calcul inverse", C.R. Acad. Sci., 162 (1916) pp. 377–380
[3] A.Ya. [A.Ya. Khinchin] Khintchine, "Sur une extension de l'integrale de M. Denjoy", C.R. Acad. Sci., 162 (1916) pp. 287–291
[4] A.Ya. Khinchin, "On the process of Denjoy integration", Mat. Sb., 30 (1918) pp. 543–557 (In Russian)
[5] A. Denjoy, "Leçons sur le calcul des coefficients d'une série trigonométrique", 1–4, Gauthier-Villars (1941–1949)
[6] I.A. Vinogradova, V.A. Skvortsov, "Generalized Fourier series and integrals", J. Soviet Math., 1 (1973) pp. 677–703; Itogi Nauk. Mat. Anal. 1970 (1971) pp. 65–107
[7] S. Saks, "Theory of the integral", Hafner (1952) (Translated from French)

Just as the Lebesgue integral allows one to compute the mass corresponding to some density function, the Denjoy integral (called totalization by Denjoy also in cases 1) and 2), i.e. for the narrow and wide integrals above) allows one to compute the primitive (defined up to a constant) of some function. And, whereas for smooth functions calculating primitives is the usual way of calculating masses, in the general case the calculus of primitives (in the sense of 1) or 2)) depends on, and is more involved than, the calculus of masses.
Denjoy gave a constructive scheme (one for $(D^*)$ and a similar one for $(D)$) to calculate, when possible, the totalization $F$ of a function $f$ by induction over the countable ordinal numbers, something which does not exist for similar integrals like Perron's integral: if $f$ has a totalization (for example, if $f$ is the derivative in case 1), or the approximate derivative in case 2), of some function), the construction stops at some countable ordinal number and gives $F$; if $f$ does not have a totalization, the construction never stops before $\aleph_1$. This constructive scheme uses the Lebesgue integral and two ways of defining "improper" integrals coming from the theory of the Riemann integral for unbounded functions, due, respectively, to A.L. Cauchy and A. Harnack. For details see [7] or [a1].

[a1] G. Choquet, "Outils topologiques et métriques de l'analyse mathématique", Centre Docum. Univ. Paris (1969) (Rédigé par C. Mayer)

How to Cite This Entry: Denjoy integral. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Denjoy_integral&oldid=54122

This article was adapted from an original article by T.P. Lukashenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
Euclid's Elements

Euclid's Elements (Greek: Στοιχεῖα) is a mathematical and geometric treatise consisting of 13 books written by the Greek mathematician Euclid in Alexandria circa 300 BC. It comprises a collection of definitions, postulates (axioms), propositions (theorems and constructions), and mathematical proofs of the propositions. The thirteen books cover Euclidean geometry and the ancient Greek version of elementary number theory. With the exception of Autolycus' On the Moving Sphere, the Elements is one of the oldest extant Greek mathematical treatises, and it is the oldest extant axiomatic deductive treatment of mathematics. It has proven instrumental in the development of logic and modern science.

Euclid's Elements is the most successful and influential textbook ever written. First set in type in Venice in 1482, it is one of the very earliest mathematical works to be printed after the invention of the printing press and is second only to the Bible in the number of editions published, with the number reaching well over one thousand. It was used as the basic text on geometry throughout the Western world for about 2,000 years. For centuries, when the quadrivium was included in the curriculum of all university students, knowledge of at least part of Euclid's Elements was required of all students. Not until the 20th century, by which time its content was universally taught through school books, did it cease to be considered something all educated people had read.

Euclid was a Greek mathematician who wrote the Elements in Alexandria during the Hellenistic period (around 300 BC). Scholars believe that the Elements is largely a collection of theorems proved by other mathematicians as well as containing some original work.
Proclus, a Greek mathematician who lived several centuries after Euclid, writes in his commentary on the Elements: "Euclid, who put together the Elements, collecting many of Eudoxus' theorems, perfecting many of Theaetetus', and also bringing to irrefragable demonstration the things which were only somewhat loosely proved by his predecessors."

Although known to, for instance, Cicero, there is no extant record of the text having been translated into Latin prior to Boethius in the fifth or sixth century. The Arabs received the Elements from the Byzantines in approximately 760; this version, by a pupil of Euclid called Proclo, was translated into Arabic under Harun al Rashid circa 800 AD. The first printed edition appeared in 1482 (based on Giovanni Campano's 1260 edition), and since then it has been translated into many languages and published in about a thousand different editions. In 1570, John Dee provided a widely respected "Mathematical Preface", along with copious notes and supplementary material, to the first English edition by Henry Billingsley.

Copies of the Greek text still exist, some of which can be found in the Vatican Library and the Bodleian Library in Oxford. The manuscripts available are of variable quality, and invariably incomplete. By careful analysis of the translations and originals, hypotheses have been drawn about the contents of the original text (copies of which are no longer available). Ancient texts which refer to the Elements itself and to other mathematical theories that were current at the time it was written are also important in this process. Such analyses are conducted by J. L. Heiberg and Sir Thomas Little Heath in their editions of the text. Also of importance are the scholia, or annotations to the text. These additions, which often distinguished themselves from the main text (depending on the manuscript), gradually accumulated over time as opinions varied upon what was worthy of explanation or elucidation.
Some of these are useful and add to the text, but many are not.

A difficult text

Although we now consider the Elements to be an elementary text on geometry, that was not always the case. It is said that King Ptolemy asked for a way in geometry that was shorter than the Elements. Euclid answered that "there is no royal road to geometry." More recently, Sir Thomas Little Heath wrote in the introduction to the 1932 Everyman's Library edition of Euclid: "The simple truth is that it was not written for schoolboys or schoolgirls, but for the grown man who would have the necessary knowledge and judgment to appreciate the highly contentious matters which have to be grappled with in any attempt to set out the essentials of Euclidean geometry as a strictly logical system..." The first difficult passage of Book I is referred to as the pons asinorum, which is Latin for "Bridge of Asses" (traditionally, it is hard to get asses to cross a bridge).

Outline of the Elements

The Elements is still considered a masterpiece in the application of logic to mathematics. In historical context, it has proven enormously influential in many areas of science. Scientists Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Sir Isaac Newton were all influenced by the Elements, and applied their knowledge of it to their work. Mathematicians and philosophers, such as Bertrand Russell, Alfred North Whitehead, and Baruch Spinoza, have attempted to create their own foundational "Elements" for their respective disciplines, by adopting the axiomatized deductive structures that Euclid's work introduced.

The success of the Elements is due primarily to its logical presentation of most of the mathematical knowledge available to Euclid. Much of the material is not original to him, although many of the proofs are his.
However, Euclid's systematic development of his subject, from a small set of axioms to deep results, and the consistency of his approach throughout the Elements, encouraged its use as a textbook for about 2,000 years. The Elements still influences modern geometry books. Further, its logical axiomatic approach and rigorous proofs remain the cornerstone of mathematics.

Although the Elements is primarily a geometric work, it also includes results that today would be classified as number theory. Euclid probably chose to describe results in number theory in terms of geometry because he could not develop a constructible approach to arithmetic. A construction used in any of Euclid's proofs required a proof that it is actually possible. This avoids the problems the Pythagoreans encountered with irrationals, since their fallacious proofs usually required a statement such as "Find the greatest common measure of ...".

First principles

Euclid's Book 1 begins with 23 definitions — such as point, line, and surface — followed by five postulates and five "common notions" (both of which are today called axioms). These are the foundation of all that follows.

1. A straight line segment can be drawn by joining any two points.
2. A straight line segment can be extended indefinitely in a straight line.
3. Given a straight line segment, a circle can be drawn using the segment as radius and one endpoint as centre.
4. All right angles are equal.
5. If two lines are drawn which intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines inevitably must intersect each other on that side if extended far enough.

Common notions:

1. Things which equal the same thing are equal to one another. (Euclidean property of equality)
2. If equals are added to equals, then the sums are equal. (Addition property of equality)
3. If equals are subtracted from equals, then the remainders are equal. (Subtraction property of equality)
4. Things which coincide with one another are equal to one another. (Reflexive property of equality)
5. The whole is greater than the part.

These basic principles reflect the interest of Euclid, along with his contemporary Greek and Hellenistic mathematicians, in constructive geometry. The first three postulates basically describe the constructions one can carry out with a compass and an unmarked straightedge. A marked ruler, used in neusis construction, is forbidden in Euclidean construction, probably because Euclid could not prove that verging lines meet.

Parallel postulate

The last of Euclid's five postulates warrants special mention. The so-called parallel postulate always seemed less obvious than the others. Euclid himself used it only sparingly throughout the rest of the Elements. Many geometers suspected that it might be provable from the other postulates, but all attempts to do this failed. By the mid-19th century, it was shown that no such proof exists, because one can construct non-Euclidean geometries where the parallel postulate is false, while the other postulates remain true. For this reason, mathematicians say that the parallel postulate is independent of the other postulates.

Two alternatives to the parallel postulate are possible in non-Euclidean geometries: either an infinite number of parallel lines can be drawn through a point not on a straight line in a hyperbolic geometry (also called Lobachevskian geometry), or none can in an elliptic geometry (also called Riemannian geometry). That other geometries could be logically consistent was one of the most important discoveries in mathematics, with vast implications for science and philosophy. Indeed, Albert Einstein's theory of general relativity shows that the real space in which we live is non-Euclidean.
Contents of the books

Books 1 through 4 deal with plane geometry:

• Book 1 contains the basic propositions of geometry: the Pythagorean theorem (Proposition 47), equality of angles and areas, parallelism, the sum of the angles in a triangle, and the three cases in which triangles are "equal" (have the same area).
• Book 2 is commonly called the "book of geometrical algebra," because the material it contains may easily be interpreted in terms of algebra.
• Book 3 deals with circles and their properties: inscribed angles, tangents, the power of a point.
• Book 4 is concerned with inscribing and circumscribing triangles and regular polygons.

Books 5 through 10 introduce ratios and proportions:

• Book 5 is a treatise on proportions of magnitudes.
• Book 6 applies proportions to geometry: Thales' theorem, similar figures.
• Book 7 deals strictly with elementary number theory: divisibility, prime numbers, greatest common divisor, least common multiple.
• Book 8 deals with proportions in number theory and geometric sequences.
• Book 9 applies the results of the preceding two books: the infinitude of prime numbers, the sum of a geometric series, perfect numbers.
• Book 10 attempts to classify incommensurable (in modern language, irrational) magnitudes by using the method of exhaustion, a precursor to integration.

Books 11 through 13 deal with spatial geometry:

• Book 11 generalizes the results of Books 1–6 to space: perpendicularity, parallelism, volumes of parallelepipeds.
• Book 12 calculates areas and volumes by using the method of exhaustion: cones, pyramids, cylinders, and the sphere.
• Book 13 generalizes Book 4 to space: golden section, the five regular Platonic solids inscribed in a sphere.

Despite its universal acceptance and success, the Elements has been criticised as having insufficient proofs and definitions.
For example, in the first construction of Book 1, Euclid used a premise that was neither postulated nor proved: that two circles with centers at the distance of their radius will intersect in two points. Later, in the fourth construction, he used the movement of triangles to prove that if two sides and their angles are equal, then they are congruent; however, he did not postulate or even define movement. In the 19th century, non-Euclidean geometries attracted the attention of contemporary mathematicians. Leading mathematicians, including Richard Dedekind and David Hilbert, attempted to reformulate the axioms of the Elements, such as by adding an axiom of continuity and an axiom of congruence, to make Euclidean geometry more complete. Mathematician and historian W. W. Rouse Ball put the criticisms in perspective, remarking that "the fact that for two thousand years [the Elements] was the usual text-book on the subject raises a strong presumption that it is not unsuitable for that purpose."

It was not uncommon in ancient times to attribute to celebrated authors works that were not written by them. It is by these means that the apocryphal books XIV and XV of the Elements were sometimes included in the collection. The spurious Book XIV was likely written by Hypsicles on the basis of a treatise by Apollonius. The book continues Euclid's comparison of regular solids inscribed in spheres, with the chief result being that the ratio of the surfaces of the dodecahedron and icosahedron inscribed in the same sphere is the same as the ratio of their volumes, the ratio being $\sqrt{\tfrac{10}{3(5-\sqrt{5})}}$.

The spurious Book XV was likely written, at least in part, by Isidore of Miletus. This inferior book covers topics such as counting the number of edges and solid angles in the regular solids, and finding the measure of dihedral angles of faces that meet at an edge.
• 1460s, Regiomontanus (incomplete)
• 1533, editio princeps by Simon Grynäus
• 1572, Commandinus
• 1574, Christoph Clavius
• 1505, Zamberti (Latin)
• 1543, Venturino Ruffinelli (Italian)
• 1555, Johann Scheubel (German)
• 1562, Jacob Kündig (German)
• 1564, Pierre Forcadel de Beziers (French)
• 1570, John Day (English)
• 1576, Rodrigo de Zamorano (Spanish)
• 1594, Typografia Medicea (edition of the Arabic translation of Nasir al-Din al-Tusi)
• 1607, Matteo Ricci, Xu Guangqi (Chinese)
• 1660, Isaac Barrow (English)
• Present, Irineu Bicudo (Portuguese) (work in progress)

Currently in print

"Euclid's Elements - All thirteen books in one volume", Green Lion Press. ISBN 1-888009-18-7. Based on Heath's translation.
AIMA Exercises

This exercise explores the differences between agent functions and agent programs.

1. Can there be more than one agent program that implements a given agent function? Give an example, or show why one is not possible.
2. Are there agent functions that cannot be implemented by any agent program?
3. Given a fixed machine architecture, does each agent program implement exactly one agent function?
4. Given an architecture with $n$ bits of storage, how many different possible agent programs are there?
5. Suppose we keep the agent program fixed but speed up the machine by a factor of two. Does that change the agent function?
P7) Quiz 9 – Questions – Edexcel Physics

1) What is the calculation for finding total resistance in a circuit?

2) a) For series circuits, does adding more identical resistors increase or decrease the total resistance of the circuit?
b) For parallel circuits, does adding more identical resistors increase or decrease the total resistance of the circuit?

3) This question is about investigating how the total resistance of a series circuit changes with the number of identical resistors.
a) Draw a diagram of the circuit that we would use to measure the total resistance in a series circuit for 2 identical resistors. Assume that we know the potential difference of the power source (which can be a cell or battery).
b) Explain how we can investigate how the total resistance of a series circuit is affected by the number of identical resistors.
c) On the graph below, draw how the total resistance of a series circuit changes with the number of identical resistors.

4) This question is about investigating how the total resistance of a parallel circuit changes with the number of identical resistors.
a) Draw a diagram of the circuit that we would use to measure the total resistance in a parallel circuit for 2 identical resistors. Assume that we know the potential difference of the power source (which can be a cell or battery).
b) Explain how we can investigate how the total resistance of a parallel circuit is affected by the number of identical resistors.
c) On the graph below, draw how the total resistance of a parallel circuit changes with the number of identical resistors.

5) Let's suppose that we did not know the potential difference of our cell or battery. Describe how we could modify our circuits to find the potential difference of the cell or battery. Feel free to draw a diagram.
{"url":"https://www.elevise.co.uk/gepp7q9.html","timestamp":"2024-11-03T20:08:15Z","content_type":"text/html","content_length":"91336","record_id":"<urn:uuid:2486df44-a258-43aa-acfd-9f3673f13750>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00573.warc.gz"}
Remapping of Greenland ice sheet surface mass balance anomalies for large ensemble sea-level change projections

Articles | Volume 14, issue 6

© Author(s) 2020. This work is distributed under the Creative Commons Attribution 4.0 License.

Abstract. Future sea-level change projections with process-based stand-alone ice sheet models are typically driven with surface mass balance (SMB) forcing derived from climate models. In this work we address the problems arising from a mismatch of the modelled ice sheet geometry with the geometry used by the climate model. We present a method for applying SMB forcing from climate models to a wide range of Greenland ice sheet models with varying and temporally evolving geometries. In order to achieve that, we translate a given SMB anomaly field as a function of absolute location to a function of surface elevation for 25 regional drainage basins, which can then be applied to different modelled ice sheet geometries. The key feature of the approach is the non-locality of this remapping process. The method reproduces the original forcing data closely when remapped to the original geometry. When remapped to different modelled geometries it produces a physically meaningful forcing with smooth and continuous SMB anomalies across basin divides. The method considerably reduces non-physical biases that would arise by applying the SMB anomaly derived for the climate model geometry directly to a large range of modelled ice sheet model geometries.

Received: 11 Aug 2019 – Discussion started: 04 Sep 2019 – Revised: 12 Mar 2020 – Accepted: 17 Apr 2020 – Published: 02 Jun 2020

1 Introduction

Process-based ice sheet model projections are an important tool for estimating future sea-level change in the context of the Intergovernmental Panel on Climate Change assessment cycle (IPCC, 2013).
For the first time, in the upcoming IPCC assessment report (AR6), ice sheet model (ISM) projections are formally embedded in the Coupled Model Intercomparison Project (CMIP; Eyring et al., 2016) in the form of the CMIP-endorsed Ice Sheet Model Intercomparison Project ISMIP6 (Nowicki et al., 2016, 2020). ISMIP6 aims to provide estimates of the future sea-level contribution from the Greenland and Antarctic ice sheets based on stand-alone ice sheet model simulations, forced by output from CMIP atmosphere–ocean global climate models (GCMs) and fully coupled ISM–GCMs. This paper focuses on stand-alone simulations of the Greenland ice sheet (GrIS). The first ISMIP6 activities focused mainly on the problem of ice sheet model initialization (Goelzer et al., 2018; Seroussi et al., 2019) but also identified issues that may be encountered when a large range of ice sheet models is forced with climate model output. The most important forcing derived from climate models in the context of future sea-level change projections for the GrIS is the surface mass balance (SMB), which describes the rate at which mass is added or removed at the ice sheet surface. For the ISMIP6 projections it was decided to apply the SMB forcing as an anomaly, i.e. as the change in SMB relative to a given reference period. This approach has the important advantage that it allows participating ice sheet modellers to use their own SMB product during initialization and simply add provided SMB anomalies in a projection experiment. However, problems were identified when a given surface mass balance anomaly (aSMB) was applied to the wide range of Greenland ice sheet models used in the community (Goelzer et al., 2018). The key issue is a mismatch between modelled initial and observed ice sheet geometries, the latter of which underlies the SMB field. These differences are related to uncertainties in forcing, physical parameters, and the underlying ice sheet model physics. 
For instance, a geometrical mismatch generally means that the modelled ablation zone and the prescribed anomalous ablation are not co-located, leading to an incorrect mass balance forcing. With the original intention to apply identical forcing to all participating models, a forcing data set was prepared for initMIP-Greenland (Goelzer et al., 2018) that consisted of an SMB anomaly based on the present-day observed geometry. The SMB anomaly was extended outside the observed ice sheet mask following a simple parameterization to accommodate larger-than-observed ice sheet model extents. In practice, however, ice sheet models with larger-than-observed initial areas exhibit larger melting under such forcing simply because their ablation areas are extended outwards. To address this problem, we present here a method for remapping the SMB anomaly as a function of surface elevation and thereby produce physically consistent forcing for different ice sheet model geometries. The proposed method was developed for future sea-level change projections made with a large ensemble of ice sheet models (with possibly widely differing initial geometries) forced by output of different climate models and scenarios. However, other applications can be envisioned, for example any other case where the climate model forcing is generated for an ice sheet geometry differing from that of the ice sheet model itself. Asynchronously coupled climate–ice sheet simulations and experiments with accelerated climatic boundary conditions may also be improved with the presented method. In the following we describe our approach and method (Sect. 2), the resulting forcing (Sect. 3), and time-dependent applications (Sect. 4) and finally discuss the results (Sect. 5).

2 Approach and method

Our approach aims to generate an SMB forcing (at a yearly timescale) applicable to an ensemble of Greenland ice sheet models that exhibit a wide range of initial present-day ice sheet geometries.
The forcing is based on an existing aSMB product that is generated at a fixed present-day surface elevation. This aSMB product will typically be the output of a regional climate model (RCM) but could come from any SMB model or GCM. While the forcing will have to be adapted for the individual model geometries, it should remain as close as possible to the original product when applied to the observed present-day geometry. The proposed method is based on the strong elevation dependence of the SMB and aSMB and is illustrated for a schematic flow line of a land-terminating ice sheet margin (Fig. 1). For a larger ice sheet geometry (dashed red line), the horizontal equilibrium line position lies farther from the ice divide than for a smaller ice sheet (black line). It is this effect that we are trying to capture with our method: a different ice sheet geometry requires a different forcing to honour physical consistency. Remapping the SMB anomaly as a function of surface elevation, as we propose, allows for a “stretching” of the SMB product to match the larger ice sheet extent while maintaining its overall shape. For initMIP-Greenland, the SMB anomaly was parameterized as a fixed function of observed surface elevation and latitude sampled across the entire ice sheet (Goelzer et al., 2018), which was subsequently used to define a forcing product everywhere on the grid. In principle, we could use the same global approach to generate SMB forcing for a range of different initial ice sheet geometries. However, regional differences in the height–aSMB relationship can be large and justify a spatially better-resolved approach. To capture regional differences, we therefore apply the remapping separately for a set of drainage basins (Shepherd et al., 2012; Zwally et al., 2012; Mouginot et al., 2019). In practice, the following steps are executed to (1) derive and (2) apply the height–aSMB relationship to different geometries. 1. 
Defining an elevation–aSMB lookup table:
   - Divide the ice sheet into drainage basins.
   - For each individual drainage basin, complete the following steps:
     - For each elevation band with central height h_c and range R of heights:
       - find aSMB values for all heights in R;
       - calculate the median aSMB of these results;
       - save the result to the lookup table aSMB = f(h_c).

2. Remap aSMB to a new geometry:
   - Use the drainage basin separation from (1).
   - For each individual drainage basin take the following step:
     - For each ISM grid point:
       - interpolate aSMB linearly as a function of height, using a combination of the lookup tables from (1) for this and neighbouring basins (see Sect. 2.2).

2.1 Defining an elevation–aSMB lookup table

The first step (defining an elevation–aSMB lookup table) is independent of the ice sheet model characteristics and relies only on the initial aSMB product, the reference field's elevation, and a meaningful basin selection. Ideally, the basin division should separate regions with largely different SMB characteristics, e.g. wet and dry regions. At the same time, our method requires each basin to contain a wide elevation range so that the lookup tables can be completely filled. For this study we created 25 basins by combining several smaller basins from a recent drainage delineation (Mouginot et al., 2019). The basins could consist of individual outlet glaciers or even flow lines, as long as they cover a sufficiently large elevation range. The basin delineation is extended outside the observed ice sheet mask to accommodate different (i.e. larger) ice sheet geometries than observed (Fig. 2). This was done once manually, using observed topography of ice-free regions and bathymetry as guidance. In order to test the robustness of the method to the number of basins, we have constructed an alternative basin set that can be subdivided semi-automatically, albeit not following observed drainage divides (Fig. S1 in the Supplement).
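Step (1) of the outline above can be sketched in a few lines of Python; a minimal illustration assuming the aSMB, surface elevation, and basin membership are available as flat NumPy arrays (the function name, argument layout, and extrapolation details are hypothetical):

```python
import numpy as np

def build_lookup(asmb, elev, basin_id, basin, dh=100.0, h_max=3500.0):
    """Median aSMB per elevation band for one basin (step 1 of the outline)."""
    mask = basin_id == basin
    centres = np.arange(0.0, h_max + dh, dh)
    table = np.full(centres.size, np.nan)
    for i, hc in enumerate(centres):
        # all grid points of this basin falling in the band [hc - dh/2, hc + dh/2]
        in_band = mask & (np.abs(elev - hc) <= dh / 2.0)
        if np.any(in_band):
            table[i] = np.median(asmb[in_band])  # median for robustness to outliers
    # fill empty bands at the top and bottom by repeating the nearest available
    # value, mimicking the extrapolation described in the text
    # (assumes at least one band is populated)
    valid = np.flatnonzero(~np.isnan(table))
    table[: valid[0]] = table[valid[0]]
    table[valid[-1]:] = table[valid[-1]]
    return centres, table
```

With dh = 100 m this reproduces the band spacing used in the paper; other interval sizes can be passed in for smoother or noisier aSMB products.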
While the method can be applied to any aSMB product, here we use model output from the regional climate model MAR (Fettweis et al., 2013) forced by MIROC5 (Watanabe et al., 2010), as it has been run for the RCP8.5 scenario and was chosen for ISMIP6. We use output of MAR version 3.9 run at a horizontal resolution of 15 km, which has been downscaled to 1 km (Delhasse et al., 2020) and subsequently interpolated to 5 km resolution for our analysis. If needed for a coarser-resolution climate model output, the aSMB could be interpolated to a high enough target resolution to guarantee that sufficient samples are present in each basin and elevation band. We demonstrate the method here with the aSMB at the end of the century relative to the 1960–1989 reference period, calculated as the time-mean change:

$$\mathrm{aSMB}=\overline{\mathrm{SMB}}^{\,2091\text{--}2100}-\overline{\mathrm{SMB}}^{\,1960\text{--}1989}. \tag{1}$$

For each drainage basin we define an elevation–aSMB lookup table based on the MAR SMB data in that basin. We define elevation bands with centre h_c and range R, find all grid points with matching elevation, and register the associated aSMB values. We calculate the median aSMB value of all available points for each elevation band (Fig. 3), resulting in a lookup table aSMB = f(h_c). The median is chosen rather than the mean for its robustness to outliers. The step size dh = 100 m between subsequent elevations h_c and the value for the range of R = 100 m were chosen after some initial testing but were not formally optimized. The main factors influencing this parameter choice are the spatial variability and smoothness of the original aSMB product, which also depends on the original resolution of the SMB model (in this case 15 km). Given the relatively smooth aSMB field, the chosen parameters were judged sufficient to describe the variation in the elevation–aSMB relationships for each basin (Fig. 3).
Other interval sizes may be more appropriate for other climate forcing products. For all table entries at 0 m elevation, we have copied the more robust table entry at 100 m rather than using the 0–50 m height interval with sparser data. For basins with missing values at high elevations, we repeated the highest-elevation aSMB value up to 3500 m (circles in Fig. 3).

2.2 Remap aSMB to a new geometry

For the reconstruction of the SMB on an ice sheet model geometry, we define the aSMB for each grid point using a combination of lookup tables from the local and neighbouring basins. We weight the aSMB values of the surrounding neighbour basins by proximity, which results in a gradual decrease in influence of the next neighbouring basin away from the divides (Fig. 4). The aSMB for each point in a specific basin b_0 is calculated as

$$\mathrm{aSMB}_{b_0}(x,y)=\mathrm{aSMB}_{b_0}(h)\,w_0(x,y)+\mathrm{aSMB}_{b_1}(h)\,w_1(x,y)+\dots+\mathrm{aSMB}_{b_n}(h)\,w_n(x,y), \tag{2}$$

where aSMB_{b_i}(h) is the aSMB value found by interpolating the lookup table for basin b_i at the elevation h(x,y).
The weight for the current basin b_0 is calculated as

$$w_0=1-\frac{p_1+p_2+\dots+p_n}{p_0+p_1+p_2+\dots+p_n}, \tag{3}$$

which is the residual of the sum of the weights for the neighbouring basins b_1 through b_n, defined as

$$w_1=\frac{p_1}{p_0+p_1+p_2+\dots+p_n},\quad\dots,\quad w_n=\frac{p_n}{p_0+p_1+p_2+\dots+p_n}. \tag{4}$$

Here p_0 = 1, and p_1, p_2, …, p_n are the proximities of a given point to the neighbouring basins b_1–b_n, which are limited to the interval [0, 1]:

$$p_i=1-\min\!\left(\frac{\mathrm{ds}_i}{\mathrm{ds}_{\mathrm{norm}}},\,1\right), \tag{5}$$

where ds_i is the distance from a given point in b_0 to the nearest point in neighbouring basin b_i, normalized by a prescribed distance ds_norm = 50 km. This value of ds_norm was chosen to minimize the mismatch between the original and reconstructed aSMB (other tested values were 75, 100, and 125 km), though variations in ds_norm have limited influence on the results. As an example, near divides with only one neighbouring basin in the proximity, the local weighting factor w_0 increases from 0.5 at the divide to 1.0 at the centre of the basin (Fig. 4).

Figure 5 shows results for the aSMB at the end of the MAR RCP8.5 simulation (Eq. 1). The original MAR aSMB (Fig. 5a) has been used to remap the aSMB at the same surface elevation (Fig. 5b). The reconstructed aSMB is very similar to the original, reproducing the overall pattern. Some smaller-scale features are lost, however, by averaging laterally across the basin and over elevation bands. The difference map (Fig.
5c) reveals some along-flow features at the margins (e.g. in basins 2, 3, 9, 15, 16, and 17), suggesting that the local median value is not a good representation there and that refinement of those basins could further improve the remapping. The absolute error in the spatially integrated aSMB per region in this case is on average 2.3 %, with extremes of 4 %, 6 %, and 16 % in basins 5, 8, and 9, respectively (Fig. 6). These three basins all exhibit detailed and varied topography at the margins, which may contribute to the errors. The largest signed errors are found in basin 7, with compensating biases of opposite signs. We consider these errors acceptable given typical uncertainties in climate model forcing (e.g. van den Broeke et al., 2017) and our specific interest in large-scale, ice-sheet-wide results to be used in ISMIP6. Specifically, the aSMB error integrated over all basins is 18 km³ yr⁻¹ (Fig. 6), compared to an ensemble range (650 km³ yr⁻¹) and ensemble standard deviation (240 km³ yr⁻¹) for the six CMIP5 models used in ISMIP6 (Goelzer et al., 2020). The robustness of the method to changes in the number of basins has been evaluated with a schematic basin set that can be subdivided semi-automatically (Supplement). Within the range of tested basin numbers (20–100), the remapping error is lowest for the largest number of basins (100) but varies non-steadily and by only up to 15 % across the tested range (Fig. S2). The remapped aSMB for an example modelled geometry with large differences relative to the observed geometry is shown in Fig. 7c for one member of the initMIP ensemble (VUB_GISM). The remapped aSMB shows a pattern similar to the original (Fig. 7a), with a smooth and continuous aSMB across basin divides. Where the ice sheet extends well beyond the observed ice mask (grey contour lines), the aSMB is naturally extended following the modelled surface elevation, as is best visible in sector 3.
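Since p_0 = 1, the weighting in Eqs. (2)–(5) reduces to normalizing the proximities. It can be sketched for a single grid point as follows; a schematic with hypothetical lookup tables and precomputed distances to the neighbouring basins (function and argument names are not from the paper):

```python
import numpy as np

def remap_point(h, tables, ds_neighbours, ds_norm=50e3):
    """aSMB at elevation h for a point in basin b0, following Eqs. (2)-(5).

    tables:        [(centres, values), ...] lookup tables, local basin first
    ds_neighbours: distances (m) from the point to each neighbouring basin
    """
    # Eq. (5): proximities clipped to [0, 1]; p0 = 1 for the local basin
    p = np.concatenate(
        ([1.0], 1.0 - np.minimum(np.asarray(ds_neighbours, float) / ds_norm, 1.0))
    )
    w = p / p.sum()  # Eqs. (3)-(4): w_i = p_i / sum(p), so the weights sum to 1
    # interpolate each basin's lookup table at the local surface elevation
    vals = [np.interp(h, centres, values) for centres, values in tables]
    return float(np.dot(w, vals))  # Eq. (2): proximity-weighted combination
```

At a divide (distance 0 to one neighbour) this gives equal weights of 0.5, and beyond ds_norm from any neighbour the local table alone determines the result, matching the behaviour described in the text.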
Results from a standard method of extending the SMB outside the observed ice sheet mask at the observed surface elevation (Franco et al., 2012) are shown in Fig. 7b for the footprint of the modelled ice sheet. This method interpolates the four closest, distance-weighted SMB values inside the MAR ice mask and corrects for the difference between the interpolated and the local elevation using the local vertical SMB gradient computed in this area. Due to the low elevation of the tundra surrounding the ice sheet, the extension provides a generally low aSMB for regions outside the observed ice sheet mask, which is illustrated in Fig. 7d, showing the difference between the original (Fig. 7a) and extended (Fig. 7b) aSMB. By definition, the original and extended aSMB are identical over the common ice mask, but positive differences can be seen in regions where the modelled ice sheet is smaller (e.g. basin 16, Fig. 7d). The remapping method notably prevents the occurrence of a large-amplitude negative aSMB outside of the observed ice sheet mask, illustrated by the difference between the two approaches (Fig. 7e). We quantify the differences between the three aSMB products again by integrating them over the drainage basins (Fig. 8a). The largest differences between the original and extended aSMB are found in basins where the modelled ice sheet extends far beyond the observed ice sheet mask (basins 3, 4, 6, and 7) or where the aSMB has a large negative amplitude (basins 12, 14, and 15). In all these cases, the remapping reduces the bias (in most cases considerably), which is visualized by showing basin integrals of differences between the original and extended (blue) and between the remapped and extended aSMB (yellow) in Fig. 8b. In most cases, biases in the extended aSMB (blue) are reduced by the remapping, illustrated by bars of the same sign (yellow).
The biases are reduced but are not expected, nor supposed, to be entirely removed by the remapping, because a physically larger ice sheet should have larger accumulation and/or larger ablation areas. This also illustrates why the method is not designed to conserve mass when remapping to a different geometry: a different geometry demands a different SMB forcing. The improvement of the aSMB forcing by the remapping is mainly found in regions where the modelled ice sheet extends beyond the observed mask and where the remapped aSMB is predominantly higher than the extended aSMB (Fig. 7e). Differences between the original and remapped aSMB in the interior of the ice sheet (Fig. 7e) indicate averaging in the remapping process, as discussed before, but more importantly are due to differences between the modelled and the observed surface elevation. This illustrates a feature of the remapping method that can be interpreted either as an asset or as a shortcoming, namely that biases in surface elevation (Fig. 7f) are propagated to the aSMB forcing. For ice sheet models with initial states close to observations, the reconstructed aSMB looks very similar to the original, while for models with largely different geometry, the overall structure of the decreasing aSMB towards lower elevation is well captured. A similar comparison to that in Figs. 7c and 8a for three other modelled geometries from the initMIP-Greenland ensemble is given in the Supplement (Figs. S3 and S4).

4 Time-dependent applications

The same method can be used to define elevation–aSMB lookup tables and calculate the remapped aSMB for climate change scenarios, generating a time-dependent forcing. We have done this as a pilot application for MARv3.9 forced by MIROC5 (Watanabe et al., 2010) under scenario RCP8.5 (Fig. 9), with available SMB data from 1950 to 2100 (Fettweis et al., 2013; Delhasse et al., 2020) computed for ISMIP6. We have calculated the aSMB for the period 2015–2100 against a reference SMB averaged over the period 1960–1989.
The resulting lookup tables (Fig. 9) show the decrease in the aSMB for the lower parts of each basin, as expected.

4.1 Future sea-level change projections

The initial goal of the proposed method was to apply it to future sea-level change projections with a large ensemble of ice sheet models (with possibly widely different initial geometries) forced by output of different climate models and scenarios, e.g. in the framework of the Ice Sheet Model Intercomparison Project ISMIP6 (Nowicki et al., 2016, 2020; Goelzer et al., 2020). For such applications, the basin separation can be defined, and the lookup tables can be calculated, for specific climate models and scenarios ahead of time. Basin separation and weighting functions can be calculated for each specific ice sheet grid in advance. To apply a specific forcing scenario, the information transmitted to an individual ice sheet modeller consists of aSMB values for L elevation bands for M basins at N time steps. When the initial ice sheet geometries are known in advance, the remapping can also be done offline, and the aSMB(x, y, t) can be distributed directly, avoiding the need to implement the remapping in each individual ice sheet model (see Sect. 2.2). To test the feasibility of our method, we have applied it to a projection using only a modelled and remapped aSMB to infer changes in ice sheet geometry. By ignoring any ice dynamic adjustment (i.e. no ice sheet model is used) and assuming the ice sheet to be in a steady state with an unknown reference SMB, the time evolution of the ice sheet is fully determined by the initial geometry (surface elevation and mask) and the given aSMB. This set-up does not consider any ice dynamic effects, such as the adjustment of ice flow to the SMB change itself and variations in marine-terminating outlet glaciers.
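The SMB-anomaly-only propagation used in this feasibility test can be sketched as a simple time loop; a schematic with hypothetical array shapes, in which thickness loss at each grid point is limited by the ice available there:

```python
import numpy as np

def propagate(thk0, asmb_series, dt=1.0):
    """Evolve ice thickness from the SMB anomaly alone (no ice dynamics).

    thk0:        initial ice thickness (m ice)
    asmb_series: aSMB per time step (m ice eq. yr^-1), shape (nt, ...) matching thk0
    """
    thk = np.asarray(thk0, dtype=float).copy()
    for asmb in asmb_series:
        # a negative aSMB can remove at most the ice that is present
        thk = np.maximum(thk + asmb * dt, 0.0)
    return thk
```

The clipping at zero is the only nonlinearity; everything else is a straight time integration of the anomaly, consistent with a steady-state reference SMB.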
We emphasize that this experimental set-up serves to illustrate the use of the remapping method and should not be interpreted as a full ice sheet projection including the dynamic response. We first compare two different representations of the cumulative (time-integrated) SMB anomaly as a measure of the spatially resolved ice thickness change at the end of the scenario:

1. The first representation is the original time-integrated aSMB of the climate model, which is by definition at a fixed surface elevation (MOD).
2. The second representation is the time-integrated aSMB calculated by remapping to the same fixed surface elevation (MAP).

In both cases, the resulting thickness change for an aSMB less than 0 is limited by the available ice thickness at each grid point. The two cases MOD and MAP show similar results (Fig. 10a, b), indicating that the remapping effectively captures the general pattern of SMB change in this time-dependent application as well. Direct comparison between MOD and MAP (Fig. 10c) reveals limitations in the remapping, mainly arising from localized melt and precipitation anomalies that are not resolved with 25 basins, or where the relationship between surface elevation and aSMB breaks down (see also Fig. 5c). The difference map (Fig. 10c) shows some along-flow features on a larger spatial scale, suggesting that further refinement of the regions could improve the representation.

4.2 SMB–height feedback

In general, the SMB anomaly that should be applied at any point on the evolving ice sheet surface h depends both explicitly on time t, because the climate is changing, and implicitly on time, because the ice sheet surface h(t) is changing. The aim of this sub-section is to derive a method including both effects for estimating the SMB anomaly from regional climate model output and to determine how this method can be applied in an ensemble of ice sheet models.
In all other parts of this paper we have used "aSMB" for the SMB anomaly both in the RCM and as applied to the ice sheet model. In this section (and Appendix A) alone, where the distinction is crucial, we reserve "SMB" and "aSMB" for quantities on the RCM grid, while by "ASMB" we mean the SMB anomaly to be applied to the ice sheet on its own surface h(t). We denote the height by three symbols for different circumstances: $\bar{h}$ for the SMB anomaly and other quantities calculated from the RCM output at a fixed surface elevation, h_0 = h(0) when remapping to the initial surface elevation that the ice sheet has at t = 0, and h = h(t) when remapping to a time-evolving geometry. The SMB anomaly in the RCM (at the fixed surface elevation $\bar{h}$) can then be expressed as aSMB(t) = SMB(t) − SMB(0). In order to perform the remapping, we first need to estimate a 3D field (including height dependence) from the 2D field (at $\bar{h}$) given by the RCM. To do this, we need to estimate the local variation of the SMB and aSMB with surface elevation, i.e. d(SMB(t))/dz and d(aSMB(t))/dz, respectively. The latter can be written as

$$\frac{\mathrm{d}(\mathrm{aSMB}(t))}{\mathrm{d}z}=\frac{\mathrm{d}(\mathrm{SMB}(t))}{\mathrm{d}z}-\frac{\mathrm{d}(\mathrm{SMB}(0))}{\mathrm{d}z}, \tag{6}$$

where the term d(SMB)/dz(t) can be approximated from the RCM output, typically by analysing spatial SMB gradients in close proximity of the point of interest (Franco et al., 2012; Noël et al., 2016; Le clec'h et al., 2019) or by parameterizing the effect (e.g. Edwards et al., 2014a, b; Goelzer et al., 2013). Here, we derive d(SMB)/dz(t) using MAR output (Franco et al., 2012).
The remapping of a time-dependent quantity X from the fixed RCM grid and fixed surface elevation $\bar{h}$ to some other ice sheet surface Z may be formally written as an operator R(X(t, $\bar{h}$), Z). Since the RCM surface $\bar{h}$ is fixed, we will write the operator more simply as R(X(t), Z) in the following. With this notation, the quantity used in the test procedure of Sect. 4.1 is R(aSMB(t), h_0), the time-evolving aSMB(t) remapped from the fixed RCM topography to the initial ice sheet topography. This is not the SMB anomaly which should be applied to the time-evolving ice sheet, because it includes only the climate dependence of the aSMB (its explicit dependence on time) and omits the effect of changing surface elevation (the implicit dependence on time via h(t)). At first sight it may be surprising that the elevation effect is still not properly taken into account by the time-evolving aSMB(t) remapped to the evolving h(t), R(aSMB(t), h(t)). This quantity involves a dependence on the modelled elevation change dh(t) = h(t) − h_0 and can be approximated as

$$R(\mathrm{aSMB}(t),h)\approx R(\mathrm{aSMB}(t),h_0)+R\!\left(\frac{\mathrm{d}(\mathrm{aSMB}(t))}{\mathrm{d}z},\,h_0\right)\times\mathrm{d}h(t). \tag{7}$$

By using Eq. (6), we get

$$R(\mathrm{aSMB}(t),h)\approx R(\mathrm{aSMB}(t),h_0)+\left[R\!\left(\frac{\mathrm{d}(\mathrm{SMB}(t))}{\mathrm{d}z},\,h_0\right)-R\!\left(\frac{\mathrm{d}(\mathrm{SMB}(0))}{\mathrm{d}z},\,h_0\right)\right]\times\mathrm{d}h(t) \tag{8}$$

(shown in Fig. 11c).
This quantity, however, includes only the elevation dependence of the time dependence of the aSMB, which is a second-order effect, and it omits the first-order effect of the height feedback on the SMB. To preserve the full effect of elevation change on the SMB, the quantity ASMB(h, t) that we need is the anomaly in the remapped SMB rather than the remapped SMB anomaly R(aSMB(t), h(t)). The desired quantity is

$$\mathrm{ASMB}(t,h)\equiv R(\mathrm{SMB}(t),h)-R(\mathrm{SMB}(0),h_0)\approx R(\mathrm{SMB}(t),h_0)-R(\mathrm{SMB}(0),h_0)+R(\mathrm{SMB}(t),h)-R(\mathrm{SMB}(t),h_0), \tag{9}$$

$$\mathrm{ASMB}(t,h)\approx R(\mathrm{aSMB}(t),h_0)+R\!\left(\frac{\mathrm{d}(\mathrm{SMB}(t))}{\mathrm{d}z},\,h_0\right)\times\mathrm{d}h(t). \tag{10}$$

Comparing Eqs. (8) and (10), we can appreciate that Eq. (8) is incomplete because the first term in square brackets, which also appears in Eq. (10), is mostly cancelled by the second term in square brackets; indeed, if the vertical gradient of the SMB is the same in the two climates, there is no effect of elevation change in Eq. (8). To enable the calculation of Eq. (10) in ISMIP6, we remap the time-dependent aSMB(t, $\bar{h}$) and d(SMB(t, $\bar{h}$))/dz to the initial ice sheet topography h_0. We have chosen this approach because the remapping can be done offline for a given initial ice sheet geometry.
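Once the two remapped fields are available on the initial geometry, Eq. (10) is cheap to apply at every time step; a minimal sketch with hypothetical array names:

```python
import numpy as np

def asmb_applied(asmb_h0, dsmb_dz_h0, dh):
    """SMB anomaly on the evolving surface, following Eq. (10).

    asmb_h0:    R(aSMB(t), h0), the anomaly remapped to the initial topography
    dsmb_dz_h0: R(d(SMB(t))/dz, h0), the vertical SMB gradient on the same grid
    dh:         h(t) - h0, the modelled elevation change since t = 0
    """
    # first-order height feedback: the gradient term grows with elevation change
    return asmb_h0 + dsmb_dz_h0 * dh
```

The modeller only needs the two precomputed fields and the model's own dh(t); no online remapping is required, which is the point of the offline formulation.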
The format of data to be exchanged for an ensemble projection is then the same with and without remapping: the modeller receives the time-dependent R(aSMB(x, y, t), h_0) and R(d(SMB)/dz(x, y, t), h_0) and has to implement a mechanism to calculate the additional term due to elevation change from the latter. An alternative online formulation, where the remapping would have to be implemented in each ice sheet model, is given in Appendix A.

4.3 Application to a large ice sheet model ensemble

To illustrate the use of the proposed method (Eq. 10) for a larger group of models, we have applied the transient aSMB calculation to the modelled initial states of the initMIP-Greenland ensemble (Goelzer et al., 2018). We use the publicly available output of the initial model states, which are provided on a common diagnostic grid (Goelzer, 2018). The time-dependent aSMB of MIROC5-forced MAR (RCP8.5) is remapped to the surface elevation of the initial state of each model. The geometry is then propagated (similarly to Sect. 4.1) over the period 2015–2100 as a function of the applied SMB anomaly (no ice sheet model is used), taking the height–SMB feedback into account as described in the last section. The resulting sea-level contribution (Fig. 12a) is calculated by time integration of the aSMB, assuming an ocean surface area of 361.8 × 10⁶ km² (Charette and Smith, 2010) and an ice density of 917 kg m⁻³. Differences between models are due to differences in (initial) ice sheet extent and surface elevation. We compare this result to a control experiment, with surface elevation changes considered as above, but here the original MAR aSMB is applied without remapping (Fig. 12b). Comparison between the two cases shows that (unphysical) biases in the estimated sea-level contribution are considerably reduced, especially for the models that show an initial ice sheet extent – and consequently a sea-level contribution – that is too large.
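The conversion from an integrated ice-volume anomaly to a sea-level contribution follows from the constants quoted above; a minimal sketch (the fresh-water density of 1000 kg m⁻³ is our assumption, not stated in the text):

```python
OCEAN_AREA = 361.8e6 * 1.0e6  # ocean surface area, m^2 (Charette and Smith, 2010)
RHO_ICE = 917.0               # ice density, kg m^-3
RHO_WATER = 1000.0            # fresh-water density, kg m^-3 (assumed)

def sea_level_mm(ice_volume_anomaly_m3):
    """Sea-level contribution (mm) from an ice-volume anomaly (m^3 of ice).

    A negative volume anomaly (ice loss) gives a positive contribution.
    """
    water_volume = ice_volume_anomaly_m3 * RHO_ICE / RHO_WATER
    return -water_volume / OCEAN_AREA * 1000.0
```

As a sanity check, losing 1000 km³ of ice corresponds to roughly 2.5 mm of sea-level rise under these constants.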
However, some (physical) biases remain as expected, e.g. because a larger ice sheet has a larger ablation area.

5 Discussion and conclusions

The described method allows the application of SMB anomaly forcing for a large range of different ice sheet models and addresses problems arising from differences in initial ice sheet geometry. Remapping to the same geometry closely reproduces the original aSMB, while remapping to other modelled geometries shows patterns similar to the original, with a smooth and continuous aSMB across basin divides. This shows that the method is indeed suited to record and remap the aSMB for a wide range of ice sheet geometries while retaining the physical patterns originally represented by the regional climate model. Because the method produces a physically motivated aSMB forcing for a given ice sheet geometry, it also propagates biases in surface elevation to the SMB. This implies that for a given ice sheet geometry, biases due to a different ice sheet mask or due to elevation differences have to be accepted. In cases where the ice sheet mask is quite well matched, it may be preferred to apply the aSMB without remapping to prevent propagation of small biases in surface elevation to the SMB forcing. In the initMIP-Greenland ensemble as a whole, biases due to differences in the ice sheet mask were dominant, but this is not necessarily the case for each individual model. Therefore, we propose to evaluate the magnitude of the implied aSMB biases in offline calculations to decide whether remapping should be applied or not. This “diagnostic mode” of the method can also be envisioned for other applications, such as quantifying unphysical model biases for coupled and stand-alone ice sheet simulations. The main difference between our method and existing approaches of transforming the SMB to a different geometry (Franco et al., 2012; Helsen et al., 2013) is the non-locality of the remapping process, which may be described as its key feature. Like Helsen et al.
(2013) and Franco et al. (2012), we assume a linear relationship between elevation and SMB for a given time and location, but that relationship is not geographically uniform or constant in time. This means, however, that the original aSMB field is not exactly reproduced when the remapping is applied to an ice sheet with identical surface elevation, at least not for the basin delineation currently used. However, in the limit of reducing the width of the basins to individual flow lines, the reproduction of the aSMB at the original geometry should converge to the original field. Using a basin separation based on flow lines is preferable because they mostly follow the surface elevation gradient so the aSMB can be sampled in a continuous manner that largely maintains the spatial structure. While this would increase the number of parameters that have to be fitted for each individual model geometry, it would also allow further improvement of the aSMB representation. We have based our delineation on an existing basin separation, but considerable handwork is required as long as automatic methods for generating meaningful basin separations of chosen detail for a complex geometry and flow like the GrIS are unavailable. We have tested the performance of the method for a schematic set of basins that can be more easily extended, albeit not following observed basin divides. The ice sheet integrated mass anomaly is not conserved when remapping to a different geometry given that a different geometry demands a different SMB forcing. It would in principle be possible to impose mass conservation on the ice sheet or even on the basin scale by comparing spatial averages of the original and remapped forcing and subtracting the difference. This would lead, however, to a spatial shift of regions where positive and negative anomalies are applied and, in the latter case, to discontinuities between neighbouring basins.
Similar problems would arise for rescaling of the aSMB. We have shown how to apply the method for different ice sheet geometries but so far have circumvented the problem of different model grids. While for ISMIP6 we have chosen to interpolate the already remapped aSMB to the native ice sheet model grids, the method could also be applied directly after interpolating the basin division and weighting to the individual ice sheet model grid. If the remapping were to be implemented in the ice sheet model itself, it could even be applied for adaptive grids that change over time. On the input side, the aSMB is provided in the present application at 5 km resolution, which was statistically downscaled from the regional climate model MAR run at 15 km. A similar grid resolution of the input data set should be envisioned when the aSMB comes instead from a coarse-resolution GCM because sufficient grid resolution is required to derive the lookup table for a chosen number of elevation bands. However, since remapping with a lookup table acts locally as a spatial linear interpolator over the observed ice sheet, it propagates shortcomings of the input data set. The limiting factor for applying remapping to an aSMB derived from GCMs or other coarse-resolution models lies therefore in the quality of the original aSMB itself rather than in technical aspects of the remapping.

The remapping is illustrated here with MAR v3.9 forced by MIROC5 as one of the data sets used in ISMIP6 projections (Goelzer et al., 2020). We have also successfully applied the remapping to output of the same MAR model forced by five other CMIP5 GCMs and four CMIP6 GCMs and to output from an older MAR model version forced by four different GCMs. We therefore consider the remapping to be robust for a number of different forcing products.

Appendix A: Alternative formulation for the SMB–height feedback

An alternative method of calculating the dependence of the ASMB on surface elevation (Sect. 4.2) is described in the following.
We can replace Eqs. (9) and (10) by writing

$$\begin{aligned}
\mathrm{ASMB}(t,h) &\equiv R(\mathrm{SMB}(t),h) - R(\mathrm{SMB}(0),h_0) \\
&= R(\mathrm{SMB}(t),h) - R(\mathrm{SMB}(0),h) + R(\mathrm{SMB}(0),h) - R(\mathrm{SMB}(0),h_0) \quad (\mathrm{A1})
\end{aligned}$$

$$\mathrm{ASMB}(t,h) \approx R(\mathrm{aSMB}(t),h) + R(\mathrm{d}(\mathrm{SMB}(0))/\mathrm{d}z,\,h) \times \mathrm{d}h(t). \quad (\mathrm{A2})$$

To calculate Eq. (A2), we would have to remap the time-dependent $\mathrm{aSMB}(t,\bar{h})$ and the initial d(SMB(0))/dz to the time-evolving ice sheet topography h. This implies that the remapping has to be implemented in the ice sheet model so that the lookup tables for both quantities can be applied online as a function of the changing geometry. From a practical point of view, the option described in the main text (remap to a fixed initial elevation and apply d(SMB)/dz(t); Eq. 10) is much easier to achieve and has been chosen for the ISMIP6 projections (Nowicki et al., 2016, 2020; Goelzer et al., 2020). We have implemented and compared both methods in one ice sheet model and find nearly identical results for both of them.

HG conceived the study and developed the remapping method in discussion with the other authors. HG wrote the manuscript with the assistance of the other authors. Xavier Fettweis is a member of the editorial board of the journal. This article is part of the special issue “The Ice Sheet Model Intercomparison Project for CMIP6 (ISMIP6)”. It is not associated with a conference. We would like to thank Matthew Beckley for help with the extended basin delineation and Florian Ziemen for helpful discussion of early ideas for the proposed method.
We acknowledge CMIP6 and the modelling groups participating in the initMIP-Greenland experiments of ISMIP6 for sharing their data and all members of the ISMIP6 team for discussions and feedback, with particular thanks to Sophie Nowicki and Tony Payne for their leadership. This is ISMIP6 contribution no. 7. Heiko Goelzer and Brice Noël have received funding from the programme of the Netherlands Earth System Science Centre (NESSC), financially supported by the Dutch Ministry of Education, Culture and Science (OCW) (grant no. 024.002.001). Brice Noël acknowledges additional funding from the Netherlands Organisation for Polar Research (NWO). Computational resources for the MAR simulations performed for ISMIP6 have been provided by the Consortium des Équipements de Calcul Intensif (CÉCI), funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.–FNRS) (grant no. 2.5020.11), and the Tier-1 supercomputer (Zenobe) of the Fédération Wallonie Bruxelles infrastructure, funded by the Walloon Region (grant agreement no. 1117545). This material is based in part on work supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation (cooperative agreement no. 1852977). This paper was edited by Joel Savarino and reviewed by Mario Krapp and two anonymous referees. Charette, M. A. and Smith, W. H. F.: The Volume of Earth's Ocean, Oceanography, 23, 112–114, https://doi.org/10.5670/oceanog.2010.51, 2010. Delhasse, A., Kittel, C., Amory, C., Hofer, S., van As, D., Fausto, R. S., and Fettweis, X.: Brief communication: Evaluation of the near-surface climate in ERA5 over the Greenland Ice Sheet, The Cryosphere, 14, 957–965, https://doi.org/10.5194/tc-14-957-2020, 2020. Edwards, T. L., Fettweis, X., Gagliardini, O., Gillet-Chaulet, F., Goelzer, H., Gregory, J. M., Hoffman, M., Huybrechts, P., Payne, A.
J., Perego, M., Price, S., Quiquet, A., and Ritz, C.: Effect of uncertainty in surface mass balance–elevation feedback on projections of the future sea level contribution of the Greenland ice sheet, The Cryosphere, 8, 195–208, https://doi.org/10.5194/tc-8-195-2014, 2014a. Edwards, T. L., Fettweis, X., Gagliardini, O., Gillet-Chaulet, F., Goelzer, H., Gregory, J. M., Hoffman, M., Huybrechts, P., Payne, A. J., Perego, M., Price, S., Quiquet, A., and Ritz, C.: Probabilistic parameterisation of the surface mass balance–elevation feedback in regional climate model simulations of the Greenland ice sheet, The Cryosphere, 8, 181–194, https://doi.org/10.5194/tc-8-181-2014, 2014b. Eyring, V., Bony, S., Meehl, G. A., Senior, C. A., Stevens, B., Stouffer, R. J., and Taylor, K. E.: Overview of the Coupled Model Intercomparison Project Phase 6 (CMIP6) experimental design and organization, Geosci. Model Dev., 9, 1937–1958, https://doi.org/10.5194/gmd-9-1937-2016, 2016. Fettweis, X.: Atmospheric forcing for ISMIP6, dynamically downscaled with the regional climate model MAR, available at: ftp://climato.be/fettweis/MARv3.9/ISMIP6, last access: 19 May 2020. Fettweis, X., Franco, B., Tedesco, M., van Angelen, J. H., Lenaerts, J. T. M., van den Broeke, M. R., and Gallée, H.: Estimating the Greenland ice sheet surface mass balance contribution to future sea level rise using the regional atmospheric climate model MAR, The Cryosphere, 7, 469–489, https://doi.org/10.5194/tc-7-469-2013, 2013. Franco, B., Fettweis, X., Lang, C., and Erpicum, M.: Impact of spatial resolution on the modelling of the Greenland ice sheet surface mass balance between 1990–2010, using the regional climate model MAR, The Cryosphere, 6, 695–711, https://doi.org/10.5194/tc-6-695-2012, 2012. Goelzer, H.: Results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison (Version v1) [Data set], Zenodo, https://doi.org/10.5281/zenodo.1173088, 2018.
Goelzer, H.: hgoelzer/aSMB-remapping v1.0.0 (Version v1.0.0), Zenodo, https://doi.org/10.5281/zenodo.3762384, 2020a. Goelzer, H.: Dataset for “Remapping of Greenland ice sheet surface mass balance anomalies for large ensemble sea-level change projections” (Version v1) [Data set], Zenodo, https://doi.org/10.5281/zenodo.3760526, 2020b. Goelzer, H., Huybrechts, P., Fürst, J. J., Andersen, M. L., Edwards, T. L., Fettweis, X., Nick, F. M., Payne, A. J., and Shannon, S. R.: Sensitivity of Greenland ice sheet projections to model formulations, J. Glaciol., 59, 733–749, https://doi.org/10.3189/2013JoG12J182, 2013. Goelzer, H., Nowicki, S., Edwards, T., Beckley, M., Abe-Ouchi, A., Aschwanden, A., Calov, R., Gagliardini, O., Gillet-Chaulet, F., Golledge, N. R., Gregory, J., Greve, R., Humbert, A., Huybrechts, P., Kennedy, J. H., Larour, E., Lipscomb, W. H., Le clec'h, S., Lee, V., Morlighem, M., Pattyn, F., Payne, A. J., Rodehacke, C., Rückamp, M., Saito, F., Schlegel, N., Seroussi, H., Shepherd, A., Sun, S., van de Wal, R., and Ziemen, F. A.: Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison, The Cryosphere, 12, 1433–1460, https://doi.org/10.5194/tc-12-1433-2018, 2018. Goelzer, H., Nowicki, S., Payne, A., Larour, E., Seroussi, H., Lipscomb, W. H., Gregory, J., Abe-Ouchi, A., Shepherd, A., Simon, E., Agosta, C., Alexander, P., Aschwanden, A., Barthel, A., Calov, R., Chambers, C., Choi, Y., Cuzzone, J., Dumas, C., Edwards, T., Felikson, D., Fettweis, X., Golledge, N. R., Greve, R., Humbert, A., Huybrechts, P., Le clec'h, S., Lee, V., Leguy, G., Little, C., Lowry, D. P., Morlighem, M., Nias, I., Quiquet, A., Rückamp, M., Schlegel, N.-J., Slater, D., Smith, R., Straneo, F., Tarasov, L., van de Wal, R., and van den Broeke, M.: The future sea-level contribution of the Greenland ice sheet: a multi-model ensemble study of ISMIP6, The Cryosphere Discuss., https://doi.org/10.5194/tc-2019-319, in review, 2020.
Helsen, M. M., van de Berg, W. J., van de Wal, R. S. W., van den Broeke, M. R., and Oerlemans, J.: Coupled regional climate–ice-sheet simulation shows limited Greenland ice loss during the Eemian, Clim. Past, 9, 1773–1788, https://doi.org/10.5194/cp-9-1773-2013, 2013. IPCC: Climate Change 2013: The Physical Science Basis, Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Stocker, T. F., Qin, D., Plattner, G.-K., Tignor, M., Allen, S. K., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P. M., Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2013. Le clec'h, S., Charbit, S., Quiquet, A., Fettweis, X., Dumas, C., Kageyama, M., Wyard, C., and Ritz, C.: Assessment of the Greenland ice sheet–atmosphere feedbacks for the next century with a regional atmospheric model coupled to an ice sheet model, The Cryosphere, 13, 373–395, https://doi.org/10.5194/tc-13-373-2019, 2019. Mouginot, J., Rignot, E., Bjørk, A. A., van den Broeke, M., Millan, R., Morlighem, M., Noël, B., Scheuchl, B., and Wood, M.: Forty-six years of Greenland Ice Sheet mass balance from 1972 to 2018, P. Natl. Acad. Sci. USA, 116, 9239, https://doi.org/10.1073/pnas.1904242116, 2019. Noël, B., van de Berg, W. J., Machguth, H., Lhermitte, S., Howat, I., Fettweis, X., and van den Broeke, M. R.: A daily, 1km resolution data set of downscaled Greenland ice sheet surface mass balance (1958–2015), The Cryosphere, 10, 2361–2377, https://doi.org/10.5194/tc-10-2361-2016, 2016. Nowicki, S. M. J., Payne, A., Larour, E., Seroussi, H., Goelzer, H., Lipscomb, W., Gregory, J., Abe-Ouchi, A., and Shepherd, A.: Ice Sheet Model Intercomparison Project (ISMIP6) contribution to CMIP6, Geosci. Model Dev., 9, 4521–4545, https://doi.org/10.5194/gmd-9-4521-2016, 2016. Nowicki, S., Payne, A. J., Goelzer, H., Seroussi, H., Lipscomb, W. H., Abe-Ouchi, A., Agosta, C., Alexander, P., Asay-Davis, X. S., Barthel, A., Bracegirdle, T. 
J., Cullather, R., Felikson, D., Fettweis, X., Gregory, J., Hattermann, T., Jourdain, N. C., Kuipers Munneke, P., Larour, E., Little, C. M., Morlighem, M., Nias, I., Shepherd, A., Simon, E., Slater, D., Smith, R., Straneo, F., Trusel, L. D., van den Broeke, M. R., and van de Wal, R.: Experimental protocol for sea level projections from ISMIP6 standalone ice sheet models, The Cryosphere Discuss., https://doi.org/10.5194/tc-2019-322, in review, 2020. Seroussi, H., Nowicki, S., Simon, E., Abe-Ouchi, A., Albrecht, T., Brondex, J., Cornford, S., Dumas, C., Gillet-Chaulet, F., Goelzer, H., Golledge, N. R., Gregory, J. M., Greve, R., Hoffman, M. J., Humbert, A., Huybrechts, P., Kleiner, T., Larour, E., Leguy, G., Lipscomb, W. H., Lowry, D., Mengel, M., Morlighem, M., Pattyn, F., Payne, A. J., Pollard, D., Price, S. F., Quiquet, A., Reerink, T. J., Reese, R., Rodehacke, C. B., Schlegel, N.-J., Shepherd, A., Sun, S., Sutter, J., Van Breedam, J., van de Wal, R. S. W., Winkelmann, R., and Zhang, T.: initMIP-Antarctica: an ice sheet model initialization experiment of ISMIP6, The Cryosphere, 13, 1441–1471, https://doi.org/10.5194/tc-13-1441-2019, 2019. Shepherd, A., Ivins, E. R., A, G., Barletta, V. R., Bentley, M. J., Bettadpur, S., Briggs, K. H., Bromwich, D. H., Forsberg, R., Galin, N., Horwath, M., Jacobs, S., Joughin, I., King, M. A., Lenaerts, J. T. M., Li, J., Ligtenberg, S. R. M., Luckman, A., Luthcke, S. B., McMillan, M., Meister, R., Milne, G., Mouginot, J., Muir, A., Nicolas, J. P., Paden, J., Payne, A. J., Pritchard, H., Rignot, E., Rott, H., Sørensen, L. S., Scambos, T. A., Scheuchl, B., Schrama, E. J. O., Smith, B., Sundal, A. V., van Angelen, J. H., Van De Berg, W. J., Van Den Broeke, M. R., Vaughan, D. G., Velicogna, I., Wahr, J., Whitehouse, P. L., Wingham, D. J., Yi, D., Young, D., and Zwally, H. J.: A Reconciled Estimate of Ice-Sheet Mass Balance, Science, 338, 1183–1189, https://doi.org/10.1126/science.1228102, 2012.
van den Broeke, M., Box, J., Fettweis, X., Hanna, E., Noël, B., Tedesco, M., van As, D., van de Berg, W. J., and van Kampenhout, L.: Greenland Ice Sheet Surface Mass Loss: Recent Developments in Observation and Modeling, Current Climate Change Reports, 3, 345–356, https://doi.org/10.1007/s40641-017-0084-8, 2017. Watanabe, M., Suzuki, T., O'ishi, R., Komuro, Y., Watanabe, S., Emori, S., Takemura, T., Chikira, M., Ogura, T., Sekiguchi, M., Takata, K., Yamazaki, D., Yokohata, T., Nozawa, T., Hasumi, H., Tatebe, H., and Kimoto, M.: Improved Climate Simulation by MIROC5: Mean States, Variability, and Climate Sensitivity, J. Climate, 23, 6312–6335, https://doi.org/10.1175/2010JCLI3679.1, 2010. Zwally, H. J., Giovinetto, M. B., Beckley, M. A., and Saba, J. L.: Antarctic and Greenland Drainage Systems, GSFC Cryospheric Sciences Laboratory, available at: http://icesat4.gsfc.nasa.gov/cryo_data/ant_grn_drainage_systems.php (last access: 2 May 2020), 2012.
{"url":"https://tc.copernicus.org/articles/14/1747/2020/","timestamp":"2024-11-15T04:05:48Z","content_type":"text/html","content_length":"283942","record_id":"<urn:uuid:c6cc0786-4837-4f2f-8b5f-39b6a55e95dd>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00756.warc.gz"}
Wikileaks offers tech giants access to sourcecode for CIA Vault 7 exploits Responsible disclosure. Critical irony failure. 8 Likes I have worked on several bug bounty programs. Companies that don’t understand how they work likely don’t understand how QA, devops, or continuous testing work either. Well maintained bounty programs are a godsend. They are cheaper to run than hiring a bunch of fte’s; usually more accurate; and they build trust and goodwill. 5 Likes Incontrovertible proof that wikileaks is a shill! Russian propaganda! Gratuitous DRM reference? Check. 2 Likes Ugh. The tool that purportedly fakes Russian malware signatures right wing nutters (and general tinfoil hat types) have been making way too much hay out of. I have little doubt it is nothing terribly fancy, just automating otherwise manual steps. Wish I could peek at it, though. Indeed. Much better for America to slip ever closer to the panopticon without any of the filthy plebs knowing about it. After all, saying the wrong thing here and there might inconvenience the hyper rich and the preposterously powerful and that, ah, that would be a tragedy. Well fact of the matter is that we don’t know how much hay is the correct amount of hay. And the only thing connecting Russia to those DNC hacks is forensics. Whose credibility just took a serious hit. Of course the CIA could be as pure as the driven snow here but much in the same way it is possible to be carrying lockpicks in a bank after midnight for perfectly innocent reasons and still people will be, ah, profoundly suspicious of your motives. 1 Like Well fact of the matter is that we don’t know how much hay is the correct amount of hay. A point the nutters don’t want to hear–that the evidence behind the DNC hack hasn’t been released. Whose credibility just took a serious hit. Because laymen looked at something technical and relayed it in layman’s terms to other laymen instead of releasing the materials. 
Alas, we have learned from this past election that insinuation goes farther than nuanced facts backed up with data. I am going to be pretty annoyed if the source gets released and it’s just a find-and-replace of strings with Russian words. 1 Like Hey, one of the bits of forensics that was seriously floated about was “Well, there was a Word document with Russian settings on it and registered to Felix Dzerzhinsky[1]” so a few regexps with Russian words might be enough. But no, you’ve the right of it, we don’t have the evidence. We do have the word of the intelligence agencies but I personally trust those as far as I can throw New Jersey. [1] I’d have held out for Viktor Kagebeovitch, personally, but fine. Well, forensics, plus all the people who are furiously lying about Russia. I think the matter deserves an independent investigation, don’t you? 2 Likes I actually downloaded the word docs from the DNC and took a spin through. These ones were posted by “Guccifer 2.0” and floated as belonging to the Clinton campaign (but were obviously curated DNC docs). The “MSM” didn’t report on it (rightfully), but it popped up on YC news. I’d be interested in which doc was mentioned? I can’t give details. This isn’t a false flag operation. The question of how far up this goes in the RU chain of command is still open. And the true intent is still open for debate. And whether the real mission was successful is open for debate. But the code speaks for itself. Perhaps, in a similar vein like our tangerine colored POTUS implied, it could have been a lone wolf in a basement in Russia, but the provenance of the code isn’t up for debate. 4 Likes It does. It won’t get it, but it does. But I must confess, there’s an inner skepticism that’s unshakable in me because, well, isn’t it convenient that we can blame the utter, utter, farcical failure of the DNC at doing politics on a nation that most people in America seem to utterly loathe.
Goodness knows bigger coincidences have happened, but it still niggles at me. And, of course, nobody will show me any significant evidence, which doesn’t help. It is one of those, apparently. It’s not a particularly good piece of evidence. Save for the name it could have come from my computer and I’m not Russian, nor have I ever set foot in Russia. (I also have German keyboard[1] settings without being German, and, come to think of it, US English without being American). And Феликс Эдмундович is such an on-the-nose name that it strikes of either false-flag or, far more likely, someone deliberately making fun of people looking at the doc. [1] Also, while it’s damned hard to do so, you can totally type Russian text from a US keyboard and if you choose the ‘mnemonic’ layout with chording, it isn’t even particularly difficult. I’m sorry, and you are? I mean, I’ve been here for a bit, and your name looks familiar but I can’t recall any details? Are you a computer forensics specialist? CIA agent? I do beg your pardon but “I can’t give you any details, trust me” might fly from my nearest and dearest but you aren’t either. 1 Like I am easy to find, ironically. Pm me if you want my number. Also, I can grant access to some of my OSINT resources if you’d like. As a last resort, I can also name drop. Canary::wharf is the main OSINT project I run, along with Maze and DLP. Early openvpn integrator. 6 Likes I will readily acknowledge your superior experience in the field. I won’t pry for classified details because it’d be rather insulting to suggest you’d part with them, but could you perhaps explain roughly how, in principle, one may establish the provenance of an attack with such fidelity, especially to be sure that it is Russian, but not be sure if merely by nationality or by governmental affiliation? This is not my field—clearly!—but what I’ve seen so far has not inspired great confidence.
1 Like

- Semi-anonymized network circuits that were used
- Pdb and debug information
- Method calls
- Specific compilers

7 Likes So a couple things, I appreciate your patience and politeness. There is doubt. It isn’t clear exactly what is going on. However, given the same sets of analysis tools that led us to Equation (US), Flame (US), Stuxnet (US, Germany, Israel), it becomes increasingly easy to identify at least general sources. As I like to tell more junior members, it only takes one OPSEC failure to unmask who you are. And it only takes one OPSEC failure for a nerd like me to figure out a bunch about the adversary. There are bigger, more fundamental questions that haven’t–and likely won’t ever be–answered. But provenance is not one of them. 7 Likes I am sort of shocked that somebody would use code that hasn’t had its debug information stripped but, damn it all, you got me with ‘specific compilers.’ That leaves a huge signature: same code compiled with very nearly the same compiler/settings can produce binaries that are shockingly different. Fingerprinting those is probably a worthwhile endeavor. I’ve revised my opinion. There can be such evidence and I’m provisionally willing to accept that such evidence exists. So, to recap: there is credible evidence that someone from the Russian Federation made the attacks and that’s the extent of it? Much obliged for the reply. I think I understand how the attribution was made and why it has the error bars that it does. This is useful information. Thank you. 5 Likes Giggle you know that’s the default setting for visual studio, right? 5 Likes Certainly. So’s not showing line numbers. Doesn’t mean it’s a good idea. 2 Likes
{"url":"https://bbs.boingboing.net/t/wikileaks-offers-tech-giants-access-to-sourcecode-for-cia-vault-7-exploits/96737","timestamp":"2024-11-05T06:47:56Z","content_type":"text/html","content_length":"64118","record_id":"<urn:uuid:f7ae76bf-06e4-4301-b7b0-1a6c59475294>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00485.warc.gz"}
Ohm's Law

Probably the most important bit of maths you need to know as a vaper! There’s nothing mystical or magical about Ohm’s Law. It’s a few formulas, usually depicted inside of a triangle, and anyone can easily learn and use the formulas with any regular calculator. The goal here is to show you the formulas behind Ohm’s Law and hopefully give you an understanding of the relationships between the different elements in a basic electronic circuit as related to vaping.

The Triangle

Ohm's Law triangle

Inside the triangle you can see the three main elements in any electrical circuit, represented by the letters V, I, and R. I would vocalize the triangle as “V over I times R” with “times” being multiplication. The hardest part of this will be remembering what the letters represent, and that’s easy:

V = Voltage (your battery voltage)
I = Current (the amperage drawn by your coil)
R = Resistance (the resistance, in ohms, of your coil)

So, how do we use the Ohm’s Law triangle? Again, simple – the triangle visually depicts the relationship between voltage, current, and resistance. In the following examples we’ll explore how to use the triangle and formulas to help you build coils targeting the current and wattage you desire.

Calculating current

Ohm's law for current

If you want to determine the current draw through a resistance (your coil) the formula is:

I = V ÷ R (or I = V/R)

How did we arrive at that? Look at the triangle and you will see that to solve for current (I) you must divide voltage (V) by resistance (R). Let’s put the formula to work in a real-life example. If you are using a mechanical mod, with a freshly charged battery you theoretically have 4.2 V available to power your coil. If your coil is 0.5Ω, you now have everything you need to determine current, in amps:

I = 4.2 V ÷ 0.5Ω (or 4.2/0.5)
I = 8.4 A

As you can see, with your 0.5-ohm coil and a freshly charged battery at 4.2 volts, the resulting max current draw will be 8.4 amps.
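The worked example can be checked with a couple of lines of Python (an illustrative sketch, not part of the original article):

```python
def coil_current(voltage_v, resistance_ohm):
    """I = V / R: current drawn by a coil of the given resistance."""
    return voltage_v / resistance_ohm

# The article's example: fresh battery (4.2 V) on a 0.5-ohm coil.
print(coil_current(4.2, 0.5))  # 8.4 A

# The same coil on a partially depleted battery at 3.7 V.
print(coil_current(3.7, 0.5))  # 7.4 A
```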
If your battery has a 10-amp limit, you are well below the cap. Don’t forget that using a dual mechanical mod in series configuration will double your amp draw per battery, and you will have to build coils with half the resistance to be safe. Also note that as the battery depletes, the current will also tail off. For example, when the battery reaches 3.7 volts with the same load, current will drop to 7.4 amps (3.7 volts / 0.5 ohms).

Calculating power (wattage)

The next thing you will probably want to know is the power generated at the coil, or wattage. It’s not shown in the triangle, but the formula is simple. Just multiply the current in your circuit by the voltage applied:

P = V x I

In our original example, the formula would look like this:

P = 4.2 V x 8.4 A
P = 35.3 W

So that 0.5-ohm coil with a fully charged battery at 4.2 volts will pull a maximum of 8.4 amps and deliver 35.3 watts. You can see that as the resistance of your coil increases, current will drop and wattage will drop.

Calculating resistance

Ohm's law resistance

The second Ohm’s Law formula that can be of use to us is calculating resistance. Let’s say that you have a battery with a 10-amp current limit and you want to determine the lowest coil resistance that you can safely run without exceeding the CDR of the battery. To calculate, you would use the following formula:

R = V ÷ I

Since you know that the battery CDR is 10 amps, you might want to target 9 amps in your calculation, to give yourself 1 amp of headroom. You also know that your max voltage will be 4.2 volts on a single battery mod. So the calculation goes like this:

R = 4.2 V ÷ 9 A
R = 0.47Ω

The result tells you that your safe lower limit with the 10-amp battery is 0.47 ohms – anything lower and you risk exceeding the current limit of the battery.
Of course, if you have a 25-amp battery, your low resistance drops to 0.17 ohms:

R = 4.2 V ÷ 25 A
R = 0.17Ω

Calculating voltage

Ohm's law voltage

Finally, and probably not as useful to us, using the triangle you can solve for voltage in a circuit, as long as you know the values of the other two variables. To solve for voltage when current and resistance are known, the formula looks like this:

V = I x R

Important safety tips

- Always assume the full battery voltage (4.2 V) of a fully charged Li-ion battery, even though resistance in the mod will mean the coil will see a lower voltage!
- Never use a battery at above its rated current draw! This can cause the battery to explode!
- If the battery or the mod gets hot, something is wrong! Stop and find out what.

See also: Battery safety

External links

Steam Engine Ohm's law calculator
YouTube tutorial on Ohm's law in vaping
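A minimal sketch tying together the power and resistance formulas above, reproducing the article's worked numbers (not part of the original wiki page):

```python
def power_w(voltage_v, current_a):
    """P = V x I: power delivered at the coil."""
    return voltage_v * current_a

def min_safe_resistance(voltage_v, max_current_a):
    """R = V / I: lowest coil resistance that stays under a current limit."""
    return voltage_v / max_current_a

# Power at the coil: 4.2 V and 8.4 A gives roughly 35.3 W.
print(round(power_w(4.2, 8.4), 1))           # 35.3

# Lower resistance limits from the article: targeting 9 A gives ~0.47 ohm,
# while a 25 A battery allows down to ~0.17 ohm.
print(round(min_safe_resistance(4.2, 9), 2))   # 0.47
print(round(min_safe_resistance(4.2, 25), 2))  # 0.17
```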
{"url":"https://safernicotine.wiki/mediawiki/index.php/Ohms_law","timestamp":"2024-11-10T00:10:25Z","content_type":"text/html","content_length":"36265","record_id":"<urn:uuid:ade88bad-d61a-41c4-b88f-5fd3eb72cf3b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00505.warc.gz"}
Finite element, least squares and domains decomposition methods for the numerical solution of nonlinear problems in fluid dynamics

Attention is given to fluid dynamics-related methods developed in recent years, some of which are of industrial interest; typically, these methods employ finite element approximations in order to handle complex geometries and nonlinear least squares formulations to treat nonlinearities. Conjugate gradient methods with scaling then solve the least squares problems, and subdomain decomposition is applied to reduce the solution of very large problems to that of problems of the same type but smaller domains. This decomposition approach allows the use of vector processors. Application of these methods to transonic flow calculations, to the numerical solution of the time-dependent Navier-Stokes equations for incompressible viscous fluids, and in the numerical solution of partial differential equation problems by domain decomposition, are presently noted.

NASA STI/Recon Technical Report A
Pub Date: December 1985

Keywords: Computational Fluid Dynamics; Finite Element Method; Least Squares Method; Nonlinear Equations; Computerized Simulation; Conjugate Gradient Method; Incompressible Fluids; Navier-Stokes Equation; Numerical Flow Visualization; Partial Differential Equations; Transonic Flow; Viscous Fluids; Fluid Mechanics and Heat Transfer
Two Balls Rolling Down Slopes and Colliding, a Computer Model for the Pedagogy of Modeling (JavaScript HTML5 Applet Simulation Model)

Francisco Esquembre; Felix J. Garcia Clemente

Sample Learning Goals

2. Kinematics
• Speed, velocity, and acceleration
• Graphical analysis of motion
• Free-fall

Learning Outcomes

Candidates should be able to:

1. Understand Basic Concepts:
□ (a) State what is meant by speed and velocity.
□ (b) Calculate average speed using the formula:
\[ \text{Average Speed} = \frac{\text{Distance Travelled}}{\text{Time Taken}} \]
□ (c) Define uniform acceleration and calculate acceleration using:
\[ \text{Acceleration} = \frac{\text{Change in Velocity}}{\text{Time Taken}} \]

2. Interpret Motion Graphs:
□ (d) Interpret examples of non-uniform acceleration.
□ (e) Plot and interpret:
☆ Displacement-time graphs for motion in one dimension.
☆ Velocity-time graphs for motion in one dimension.

3. Analyze Displacement-Time Graphs:
□ (f) Deduce from the shape of a displacement-time graph when a body is:
1. At rest.
2. Moving with uniform velocity.
3. Moving with non-uniform velocity.

4. Analyze Velocity-Time Graphs:
□ (g) Deduce from the shape of a velocity-time graph when a body is:
1. At rest.
2. Moving with uniform velocity.
3. Moving with uniform acceleration.
4. Moving with non-uniform acceleration.

5. Calculate Displacement from Graphs:
□ (h) Calculate the area under a velocity-time graph to determine the displacement for motion with uniform velocity or uniform acceleration.

6. Understand Free-Fall:
□ (i) State that the acceleration of free fall for a body near to the Earth is constant and approximately 10 m/s².

For Teachers

1.
Understanding Gravitational Potential Energy Store
Question: Observe the Gravitational Potential Energy Store (G) of both spheres before they start moving down the slopes. How is the Gravitational Potential Energy Store related to the height of each sphere on the slope?
Follow-up: Predict how the Gravitational Potential Energy Store will change as each sphere rolls down the slope and explain your reasoning.

2. Comparing Kinetic Energy Store
Question: As the spheres roll down their respective slopes, how does their Kinetic Energy Store (K) change? Compare the Kinetic Energy Stores of the two spheres when they reach the horizontal track.
Follow-up: Based on your observations, what factors influence the amount of Kinetic Energy Store each sphere has at the bottom of the slope?

3. Effect of Slope Angle on Speed
Question: Adjust the inclined angle of one slope to be steeper than the other. How does changing the slope angle affect the speed of the sphere at the bottom of the slope? Explain your observations using the concepts of Gravitational Potential Energy Store and Kinetic Energy Store.
Follow-up: What do you predict will happen to the position of collision if one slope is steeper than the other? Justify your answer.

4. Investigating the Impact of Mass
Question: Set both slopes to the same angle, but give the two spheres different masses. How does the mass of each sphere affect the speed and Kinetic Energy Store at the bottom of the slope? Does mass influence the position of the collision?
Follow-up: Why does the mass of the spheres not affect their acceleration down the slope, even though it does influence their Kinetic Energy Store?

5. Investigating the Radius of the Balls
Question: Adjust the radius of each sphere. How does changing the radius affect the speed and Kinetic Energy Store of the spheres as they reach the bottom of the slope?
Follow-up: How does the radius of the spheres influence the collision position?
Consider the moment of inertia in your explanation.

6. Analyzing the Collision Position
Question: Using the physics equations of motion and the energy principles, predict the position where the two spheres will collide. How accurate is your prediction compared to the simulation result?
Follow-up: What adjustments would you make to your calculations or the simulation parameters to improve the accuracy of your prediction?

EJS Simulation: Two Balls Rolling Down Slopes and Colliding

This EJS (Easy Java Simulations) model simulates the motion of two spherical balls rolling down inclined planes and eventually colliding on a horizontal track. The simulation is designed with careful attention to both physics and user interface, allowing for a rich exploration of the dynamics involved.

Initialization and Setup

The initialization code sets up the starting conditions for the simulation:
• Variables: The positions (x, y, x2, y2) and angles (angle, angle2) for the two balls are initialized. These variables are key in determining how the balls will move down their respective slopes.
• Precision Handling: A small tolerance (tol) is added to ensure that the event-driven nature of the ODE solver doesn’t cause the simulation to hang due to floating-point precision issues.

var tol = 1e-5;
x = L * Math.sin(angle) + tol;
y = -L * Math.cos(angle);
acuteAngle = 3 * pi / 2 - angle;
acuteAngledeg = acuteAngle * 180 / pi;
x2 = lengthHorizontalTrack + L * Math.sin(-angle2);
y2 = L * Math.cos(angle2);
acuteAngle2 = pi / 2 + angle2;
acuteAngle2deg = acuteAngle2 * 180 / pi;
getA();

This setup ensures that both balls are positioned correctly on their slopes at the start of the simulation.

Control Panel and UI Design

The control panel is a crucial part of this simulation, providing the user with the ability to manipulate various parameters:
• Sliders: The control panel includes sliders for adjusting the angles of the slopes (Angle of Slope, Angle of Slope2).
These sliders are tied to the angles that determine the steepness of the slopes.
• Labels and Text Fields: The horizontal track length is displayed and can be modified, giving users control over the length of the flat section where the balls eventually collide.
• Play, Step, and Reset Buttons: These buttons control the simulation’s playback, allowing users to run, step through, or reset the simulation.

The control panel and the main simulation area are defined using HTMLView elements in the EJSS environment, enabling a clean and interactive interface.

<panel>
  <slider name="angleSlider" value="angle" min="0" max="90" label="Angle of Slope"/>
  <slider name="angleSlider2" value="angle2" min="0" max="90" label="Angle of Slope2"/>
  <textfield name="lengthHorizontalTrackField" value="lengthHorizontalTrack" label="Horizontal Track Length"/>
  <button name="playButton" text="Play" action="play()"/>
  <button name="stepButton" text="Step" action="step()"/>
  <button name="resetButton" text="Reset" action="reset()"/>
</panel>

HTMLView Construction

The HTMLView components are constructed carefully to allow users to interact with the simulation intuitively. The use of sliders and text fields allows for real-time adjustments, which directly influence the behavior of the simulation. The main simulation view is designed to update dynamically as the user interacts with the control elements, ensuring a seamless experience. The graphical representations of the balls and slopes update immediately to reflect changes in parameters like slope angles.

Ordinary Differential Equations (ODEs) and Event Handling

The ODEs are defined to simulate the motion of the balls along the slopes:

dx/dt = vx; dvx/dt = ax;
dy/dt = vy; dvy/dt = ay;
dx2/dt = vx2; dvx2/dt = ax2;
dy2/dt = vy2; dvy2/dt = ay2;

These equations govern the velocity and acceleration in both x and y directions for the two balls.
Event Handling is meticulously designed to manage transitions in the simulation:
• Hit Horizontal Track: This event sets the vertical velocity (vy) to zero when the balls hit the horizontal track, effectively simulating the transition from slope to flat ground.
• Collision Detection: The simulation includes an event to detect when the two balls collide, at which point the simulation pauses. This collision detection is crucial for accurately modeling the interaction between the two balls.

// Event for ball 1 hitting horizontal
return y - 0;
action ax = 0; ay = 0; vy = 0;

// Event for ball 2 hitting horizontal
return y2 - 0;
action ax2 = 0; ay2 = 0; vy2 = 0;

// Collision event
return (x + radius) - (x2 - radius);
action _pause();

Custom Functions and Sphere Dynamics

The getA function calculates the accelerations based on the ball’s position on the slope. The factor 5/7 is used to account for the rolling motion of the sphere, considering its moment of inertia:

function getA() {
  var factorOfSphere = 5 / 7;
  var zeroPositive = 0.01;
  // Ball 1
  if (x < zeroPositive) {
    ax = factorOfSphere * Math.cos(acuteAngle) * (-g * Math.cos(angle));
  } else {
    ax = 0;
  }
  if (x < zeroPositive) {
    ay = factorOfSphere * Math.sin(acuteAngle) * (g * Math.cos(angle));
  } else {
    ay = 0;
  }
  if (x >= zeroPositive) {
    vy = 0;
  }
  // Ball 2
  if (x2 > lengthHorizontalTrack - zeroPositive) {
    ax2 = factorOfSphere * Math.cos(acuteAngle2) * (-g * Math.cos(angle2));
  } else {
    ax2 = 0;
  }
  if (x2 > lengthHorizontalTrack - zeroPositive) {
    ay2 = factorOfSphere * Math.sin(acuteAngle2) * (-g * Math.cos(angle2));
  } else {
    ay2 = 0;
  }
  if (x2 <= lengthHorizontalTrack - zeroPositive) {
    vy2 = 0;
  }
}

This EJS simulation provides a rich, interactive platform for exploring the dynamics of rolling spheres. By carefully integrating the physics with a user-friendly interface, it allows users to visualize and manipulate key aspects of the simulation, such as slope angles and track length.
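The 5/7 factor in getA comes from the moment of inertia of a solid sphere. A quick illustrative calculation, not part of the model's source, shows the resulting bottom-of-slope speed and why it is independent of mass and radius:

```javascript
// Illustrative calculation (not from the EJS model's source): for a solid
// sphere rolling without slipping, I = (2/5) m r^2, so energy conservation
//   m g h = (1/2) m v^2 + (1/2) I (v/r)^2
// gives v = sqrt((10/7) g h). The same moment of inertia is why getA uses
// factorOfSphere = 5/7: the acceleration along the slope is (5/7) g sin(angle).
function rollingSphereSpeed(g, h) {
  return Math.sqrt((10 / 7) * g * h); // mass and radius cancel out
}
```

Releasing a sphere from a height of 0.7 m with g = 10 m/s² gives v = √10 ≈ 3.16 m/s at the bottom for any mass or radius, which connects to the simulation questions above about mass and radius.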
The detailed control panel, coupled with precise ODEs and event handling, ensures that the simulation is both accurate and engaging. This model serves as an excellent educational tool, demonstrating the complexities of rotational motion and collision dynamics in a clear and accessible manner. The source code’s attention to detail, particularly in constructing the HTMLView elements and managing the control panel design, underscores the power and flexibility of the EJS platform in creating sophisticated simulations.

Written by Loo Kang Wee
How to convert fuel from liters to tons

When you want to measure gasoline, diesel fuel, or liquefied petroleum gas not by volume but in weight units, you must take the fuel's actual density (specific gravity) as the basis of the calculation. The calculation is quite simple: multiply the available number of liters by the actual density, then divide the result by 1000 to obtain the required figure in tons. The only thing that can affect the conversion of fuel from liters to tonnes is temperature: the hotter the fuel, the greater its volume and the lower its density. Therefore, to facilitate these calculations and avoid errors, the Ministry of Industry and Energy of the Russian Federation decided to set averaged density values for gasoline. For example, for fuel grade A-76 (AI-80) the average density is 0.715 grams per cubic centimeter. For AI-92 gasoline the average actual density is set at 0.735 grams per cubic centimeter, for AI-95 the figure is 0.750 grams per cubic centimeter, and for AI-98 the average specific weight is 0.765 grams per cubic centimeter. When you need to convert diesel fuel from liters to tonnes (this is usually necessary in the retail trade), use the following formula: M = V x 0.769 / 1000, where M is the mass of diesel fuel in tons, V is the volume of diesel fuel in liters, and 0.769 is the density of diesel fuel in kilograms per liter. You can also carry out the conversion using the average values adopted by Rostekhnadzor, which has its own rules for fuel density calculations: for liquefied natural gas the average value is 0.6 tonnes per cubic meter, for gasoline 0.75 tonnes per cubic meter, and for diesel fuel 0.84 t/m3.
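The conversion can be sketched in a few lines of code. This is an illustrative sketch, not from the article; the density table simply restates the averaged figures quoted in the text, which in reality drift with temperature and fuel batch.

```javascript
// Illustrative liter-to-ton conversion using the averaged densities
// quoted in the article (kg per liter); real densities vary with
// temperature, so treat these as approximations.
const DENSITY_KG_PER_LITER = {
  "A-76 (AI-80)": 0.715,
  "AI-92": 0.735,
  "AI-95": 0.750,
  "AI-98": 0.765,
  "diesel": 0.769,
};

// M = V x density / 1000 (mass in tons from volume in liters)
function litersToTons(liters, fuel) {
  return liters * DENSITY_KG_PER_LITER[fuel] / 1000;
}
```

For example, 10,000 liters of diesel comes to 10,000 x 0.769 / 1000 = 7.69 tons.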
Bayesian Optimization Process - Metron

1. Use a surrogate model to form a prior distribution of surrogate functions that approximate the given black-box objective function. Unlike the objective function, surrogate models are cheap to evaluate and allow for the incorporation of prior qualitative information about the objective function (e.g., smoothness). One commonly used surrogate model is Gaussian processes.

2. Collect an initial set of data by evaluating the objective function at some number of initial data points.

3. Update the posterior distribution using all available data, fitting the surrogate model to the data.

4. Sample the objective function at a value determined by the acquisition function to obtain a new data point. Acquisition functions use the posterior distribution to strategically guide the next sample selection to areas most likely to improve the current estimate. The functions balance two competing goals: 1) exploiting the previously computed black-box evaluations to optimize the current black-box function representation in the surrogate function space, and 2) exploring the function domain for further evaluations in unmapped regions that have reasonable probability of containing optimal arguments.

5. Repeat from (3) until the maximum number of iterations is reached. The maximum number of iterations is defined by how many evaluations of the expensive black-box objective function are budgeted.

This procedure is designed to represent the black-box objective function well and efficiently (i.e., with minimal costly black-box evaluations) in the regions in which optimal solutions are most likely to exist.
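The loop structure can be sketched with deliberately simplified stand-ins. This is hypothetical toy code, not Metron's implementation: the Gaussian-process surrogate is replaced by nearest-neighbour lookup, and the acquisition is a crude upper-confidence-bound analogue (predicted value plus a bonus proportional to distance from existing data), which still exhibits the exploit/explore trade-off described above.

```javascript
// Toy sketch of the Bayesian-optimization loop (NOT a real GP).
// Surrogate: nearest-neighbour lookup. Acquisition: mean + kappa * distance
// to the nearest evaluated point, trading off exploitation and exploration.
function bayesOptSketch(objective, lo, hi, nIter, kappa) {
  const grid = [];
  for (let i = 0; i <= 200; i++) grid.push(lo + (hi - lo) * i / 200);

  // Step 2: initial data -- evaluate the expensive objective at the endpoints.
  const xs = [lo, hi];
  const ys = xs.map(objective);

  for (let it = 0; it < nIter; it++) {
    // Step 3: "fit" the surrogate to all data (here: nearest neighbour).
    const predict = x => {
      let j = 0;
      for (let k = 1; k < xs.length; k++)
        if (Math.abs(xs[k] - x) < Math.abs(xs[j] - x)) j = k;
      return { mean: ys[j], dist: Math.abs(xs[j] - x) };
    };
    // Step 4: maximise the acquisition over the grid, then sample there.
    let bestX = grid[0], bestAcq = -Infinity;
    for (const x of grid) {
      const p = predict(x);
      const acq = p.mean + kappa * p.dist; // exploit + explore
      if (acq > bestAcq) { bestAcq = acq; bestX = x; }
    }
    xs.push(bestX);
    ys.push(objective(bestX)); // the one expensive evaluation per iteration
  } // Step 5: repeat until the evaluation budget is spent.

  // Report the best point seen so far.
  let b = 0;
  for (let k = 1; k < ys.length; k++) if (ys[k] > ys[b]) b = k;
  return { x: xs[b], y: ys[b] };
}
```

Even with these crude stand-ins, the loop homes in on the maximum of a smooth 1-D objective in a few dozen evaluations; a real implementation would use a Gaussian process posterior and an acquisition such as expected improvement.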
Wind Tunnel V-tail vices and virtues. Barnaby Wainfan A recent discussion about tail design (Wind Tunnel, March 2011) prompted several readers to ask about V-tails. Accordingly, the virtues and vices of V-tails are our subject this month. The tail of an airplane provides stabilization and control power on two axes. In a conventional arrangement, the vertical fin provides directional (weathercock) stability, and the rudder provides yaw control. The horizontal tail stabilizes the airplane longitudinally, while the elevator (or moving the entire horizontal tail) controls the airplane in pitch. A V-tail would simplify the build with one less surface to construct, but the requirement for a mixer in the control linkage offsets that to some extent. It is possible to replace the three surfaces of the conventional tail with a pair of surfaces mounted at a large dihedral angle, forming a V in front view. The large dihedral angle allows them to produce both lift and side force on the aft end of the fuselage, generating stability and control in both pitch and yaw. The conventional elevator and rudder control surfaces are replaced by a pair of "ruddervators," one on each tail surface, or by making both V-tail surfaces all-moving. These controls move symmetrically (both up or both down) to produce pitching moment and anti-symmetrically (one up and one down) to produce yawing moment. At first glance, the V-tail seems to offer both reduced drag and lower weight than a conventional tail because there are two surfaces instead of three. It would appear that the total wetted area of the empennage is smaller. Also, eliminating the vertical fin reduces the number of intersections on the airplane, thus decreasing interference drag. Upon closer examination, though, things are not as they initially appear. There are indeed fewer intersections on a V-tail airplane than on an airplane with a conventional tail, and interference drag may be slightly lower as a result. 
But looking more closely, we find that the drag reduction due to fewer intersections is really quite small. The total interference drag for a well-designed complete airplane amounts to only 5% to 7% of the airplane’s total drag. The elimination of the vertical fin eliminates one of five junctions, and the junction between the vertical fin and the fuselage is the lowest-interference drag junction of the five. This is because the fin is not producing much lift at any time during normal cruising flight, and the airflow over the fin is not highly accelerated like the flow over the upper surface of the wing. The drag reduction from eliminating the vertical-fin-to-fuselage junction will be quite small (considerably less than 1% of the airplane’s total drag). The idea that a V-tail can have less total wetted area than a conventional tail because the surfaces are doing double duty also merits closer examination. The proper way to compare the two concepts is to require that the V-tail provide the same stability as the conventional tail. To do this we need to determine the "effective area" of a V-tail surface in both the horizontal and vertical directions. This will get a little mathematical, so the impatient reader can skip ahead to the conclusion, and then come back and read the math for confirmation of what is being presented. We first consider the horizontal direction. The projected area of the V-tail surface in the top view is equal to the area of the surface times the cosine of the dihedral angle:

S[ph] = S cos(G)

where S = surface area and G = surface dihedral angle. Many people leave it at that and consider the effective area to be the projected area. In fact, this is not the case. The dihedral angle, in addition to reducing the projected area by a factor of cosine(G), also reduces the lift curve slope (rate of change of lift with angle of attack) by a similar factor.
The effective area in the horizontal direction of a V-tail surface is thus the area of the surface times the cosine squared of the dihedral angle:

S[eff(h)] = S cos^2(G)

Thus, the effective area of the V-tail surface in the horizontal plane is less than its projected area by a factor of the cosine of the dihedral angle. In the lateral direction, things are very similar. The only thing that changes is that the area of the surface must be multiplied by the cosine squared of the quantity (90 – G) because the vertical direction is rotated 90° relative to the horizontal. It is a basic identity of trigonometry that the cosine of 90° minus an angle is equal to the sine of that angle. Thus, the effective vertical area of a V-tail surface is equal to the area of the surface times the sine squared of the dihedral angle:

S[eff(v)] = S sin^2(G)

The total effective area of a V-tail surface is the sum of the effective vertical area and the effective horizontal area. This total effective area is the area of tail surface on a conventional tail that is required to give the same directional and longitudinal stabilization as the V-tail surface:

S[eff] = S[eff(v)] + S[eff(h)]
S[eff] = S sin^2(G) + S cos^2(G)
S[eff] = S [sin^2(G) + cos^2(G)]

It is a fundamental trigonometric identity that the sum of the sine squared of an angle and the cosine squared of that angle is always equal to one. Thus:

S[eff] = S [sin^2(G) + cos^2(G)] = S

Simply put, what the equation above is telling us is that to get the same stability from a V-tail as from a conventional tail, the V-tail must have exactly the same area as the conventional tail. There is actually no savings in wetted area and thus no drag reduction due to reduction in skin friction. If a V-tail is sized to give the airplane the same stability as the conventional tail it replaces, there is little or no reduction in drag. Accordingly, using a V-tail will not measurably improve the performance of an airplane.
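The effective-area result can be checked numerically. This small function is an illustration, not from the article; it computes both effective areas for any surface area and dihedral angle and confirms that they always sum to the full area.

```javascript
// Illustrative check of the effective-area math (not from the article).
// S_eff(h) = S*cos^2(G) stabilizes pitch, S_eff(v) = S*sin^2(G) stabilizes
// yaw, and sin^2 + cos^2 = 1 means the total always equals S itself.
function effectiveAreas(S, dihedralDeg) {
  const G = dihedralDeg * Math.PI / 180;
  const horizontal = S * Math.cos(G) ** 2;
  const vertical = S * Math.sin(G) ** 2;
  return { horizontal, vertical, total: horizontal + vertical };
}
```

For example, a 10-square-foot surface at 37° of dihedral gives about 6.38 square feet of effective horizontal area and 3.62 square feet of effective vertical area, and exactly 10 square feet in total, confirming that there is no wetted-area savings.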
Let us now turn our attention to some of the disadvantages of the V-tail. Although none are truly "fatal flaws," they must be considered. Stability and Control One downside of V-tails is that they are not well understood by many designers. The simplistic idea of sizing the V-tail based on projected areas has led to the construction of several airplanes with undersized tails. Typically, the problem shows up as a lack of directional stability. One example from history is the "Sweet Pea" Goodyear racer in which Art Chester was killed. The airplane suffered from lack of rudder power and poor directional stability for its entire career, and this may have contributed to the final fatal accident. Dihedral Effect and Dutch Roll Another concern is not so much a disadvantage as it is simply something that demands attention. A V-tail produces a large amount of dihedral effect. An airplane with a V-tail will have more effective dihedral than an airplane with a conventional tail and the same wing. This can lead to the airplane having a lightly damped Dutch roll mode. This is somewhat exacerbated by the fact that, in addition to producing rolling moment due to yaw (dihedral effect), V-tails also produce rolling moment due to yaw rate. This happens because the tail is far from the CG and is pushed sideways through the air by the yaw rotation. Although it is not as big a player as pure dihedral effect, roll due to yaw rate also destabilizes the Dutch roll mode. To the occupants of the airplane, the Dutch roll will feel like a wallowing or "fishtailing" motion. It is uncomfortable when flying in turbulence and can make instrument flying a real chore. The conventional cure for poor Dutch roll damping is to reduce dihedral effect and/or increase directional stability. The designer of a V-tail airplane should compensate for the dihedral effect of the V-tail by reducing the dihedral of the wing to get acceptable Dutch roll characteristics. 
Enlarging a V-tail to increase directional stability is likely to also increase dihedral effect so much that the effect on Dutch roll damping will be negligible or unfavorable. This highlights another difficulty with V-tail design. Because the tail surfaces affect all three axes, it is difficult to make a change in the characteristics about one axis without adversely affecting another. Adverse Roll The cross-coupling between axes causes another aerodynamic drawback of the V-tail. When the ruddervators are deflected differentially to act as a rudder and yaw the airplane, they also produce a rolling moment. Unfortunately, the rolling moment is opposite to the yawing moment. In other words, if you step on the left rudder pedal, the tail will try to yaw the airplane left and roll it right. A conventional top-mounted vertical fin and rudder also produce some adverse roll, but not as much as a V-tail. This adverse roll is undesirable, but in general it is small enough that the pilot can easily overcome it with the ailerons, and the airplane will feel relatively normal. On airplanes with very large V-tails, or when the tails are mounted outboard on a wide fuselage, this could prove troublesome. Pitch-Up and Hung Stall The final aerodynamic ill of the V-tail is similar to one encountered with T-tails. The surfaces of the V-tail are above the plane of the wing, so the tail will tend to enter the wing wake more and more as the airplane’s angle of attack increases. This can lead to a tendency to pitch up at the stall. It also increases the possibility that the airplane will have a stable deep-stall mode. A stable deep stall, or hung stall, is extremely dangerous, as it may not be possible to recover, and the airplane may stay stalled all the way to the ground. The problem is not quite as severe with V-tails as it is with T-tails because the V-tail surfaces enter the wing wake progressively rather than all at once.
Nonetheless, the designer of a V-tail airplane should be concerned with the possibility of poor behavior at higher angles of attack. The large dihedral effect of a V-tail also has structural implications. The tail may generate large rolling moments when the airplane is yawed or sideslipped. These rolling moments put large torsional loads on the fuselage. The extra torsional strength required of the fuselage will tend to increase its weight. Unlike a conventional horizontal tail, the two surfaces of the V-tail cannot have a straight carry-through structure between them to carry tail bending moments. The structure inside the fuselage that picks up the tails will accordingly be heavier than the carry-through structure of a straight horizontal tail. V-tails can also be more flutter-prone than conventional tails. Because they must move independently to control both pitch and yaw, the two ruddervators cannot be interconnected like the elevators of a conventional tail. The combination of independent ruddervators and a torsionally soft fuselage can drive a flutter mode in which the entire tail section rolls side to side, driven by the independent flapping of the control surfaces. Accordingly, as some Bonanza owners are all too aware, V-tails are much more sensitive to proper mass balancing of control surfaces than conventional tails. Fewer intersections on a V-tail airplane lower the amount of interference drag, but the difference is minimal. Why Use a V-tail? V-tails can sometimes be useful as solutions to particular configuration integration problems, but I would use them only if some consideration other than aerodynamics were driving the design. Some sailplanes use V-tails to provide sufficient ground clearance during takeoff and landing. Sailplanes have single-wheel landing gear and sit on the ground with one wingtip resting on the ground.
The resultant bank angle can cause a conventional horizontal tail to hit the ground or be so close to the ground that it will hit small bumps or rocks. Use of the V-tail eliminates this problem. Another attractive feature for the homebuilder is that with a V-tail there is one less tail surface to build, and fewer hinges and control cables to rig. This simplicity is offset somewhat by the fact that there must be a mixer in the control linkage so that both the stick and rudder pedals can affect the deflection of the ruddervators. V-tails and inverted V-tails have shown up on a number of unmanned aerial vehicles (UAVs). If the vehicle must fold for storage, or unfold when launched, the use of a V-tail reduces the number of surfaces that must fold/deploy. Surface deployment is always a major design issue, so this is an advantage for the V-tail configuration. Recently, V-tails have appeared on several military airplanes, notably the Northrop Tacit Blue demonstrator and the YF-23 fighter prototype. In both cases, the V-tail was used as a way to reduce radar return rather than for any aerodynamic or performance reasons. Barnaby Wainfan is a principal aerodynamics engineer for Northrop Grumman's Advanced Design organization. A private pilot with single engine and glider ratings, Barnaby has been involved in the design of unconventional airplanes including canards, joined wings, flying wings and some too strange to fall into any known category.
GMAT Critical Reasoning - Know the gimmicks to get right answers. | Marty Murray Coaching

There are gimmicks that show up consistently in GMAT critical reasoning questions. If you familiarize yourself with the gimmicks, then you become more like an insider, someone who knows how the game is created, and so you put yourself in a better position to win at the GMAT CR game. Here’s a CR question that incorporates five commonly used gimmicks.

Researchers are using a new method to show that genes cause certain diseases. A group of volunteer subjects is recruited from the group of people determined by physicians to be experiencing a particular health issue. The DNA of the volunteer subjects is then analyzed, and if a certain gene shows up in their DNA with greater frequency than in the DNA of people in the general population, and the difference in frequencies can be shown to be statistically significant, then the researchers conclude that the health issue must be caused by that gene.

Which of the following is an assumption upon which the researchers’ conclusions depend?

(A) Unlike some who are paid subjects, volunteers will not falsely claim that they are experiencing a particular health issue in order to be involved in research.
(B) It is not common for many members of an extended family to volunteer to be subjects of research into a health issue that many of the members of the family have experienced.
(C) Determining whether genes actually cause health problems can be a useful step in solving those health problems.
(D) Most people in the general population will remain healthy even after the study is completed.
(E) If genes do cause disease and if this research methodology is used repeatedly, then genetic causes of diseases will be found.

The correct answer is at the end of the post. Now, here are the gimmicks, ones like those that you can expect to see regularly used in creating CR answer choices.
(A) The gimmick in this one involves bringing up something that seems relevant but is rendered irrelevant by a specific thing said in the prompt. If you miss a key detail in a prompt, you might pick this type of choice. The key detail in this case is the prompt’s saying that physicians rather than the volunteers determine which people are experiencing the health issues. So, that people may claim to have health issues when they don’t is irrelevant.

(B) The gimmick used here is the "sounds too weird and unrelated to be the right answer" gimmick. Most of the other answer choices seem related to research and genes, but this one suddenly focuses on families of the volunteers. To get CR questions right, you have to be careful to not eliminate answer choices just because they initially sound weird or unrelated to the argument. Often the most offbeat sounding answer is the OA.

(C) This answer choice uses a scope gimmick. The GMAT will give you answers that sound relevant or mention something that the test taker is likely thinking. They are, however, beyond the scope of the specific conclusion being discussed. The very specific conclusion being discussed in this case is only that the research connects genes with disease. In the real world, people doing such research may be assuming that it will prove useful, but on the GMAT often such an assumption is not considered within the scope of the discussion.

(D) This one uses a "going way overboard while seeming to discuss something relevant" gimmick. Yes, the researchers are assuming that the health of the general population is somehow different from the health of the volunteer group, but to say that the people in the general population will “remain healthy” is going way too far. The researchers are assuming that most of the people in the general population will never experience any health issue?

(E) This one uses a "conclusion that sounds like an assumption" gimmick.
What better way to get someone to choose the wrong answer to an assumption question than to provide an answer choice that seems to be supported by what is said in the prompt. Anyone looking for an answer choice that "sounds right" might get smoked by a conclusion that sounds like an assumption gimmick. The more you carefully look at GMAT critical reasoning questions, the more you will become aware of the types of subtly and not so subtly used gimmicks that are incorporated into their construction. The more aware of those gimmicks you become, the less you will get fooled by them.

Correct Answer: B

Explanation: The researchers are assuming that the genes are more common in the DNA of the volunteers because the genes are related to a particular health issue, not because many members of extended families, people who are likely to have similar genes, are among the volunteer subjects.

2 Responses

1. Hi Marty, I am a non-native test taker, and consider myself to have decent Verbal skills. I have hit a wall with respect to CR – I normally average 2.5 min on CR, which is far too much, and this restricts my score to the 38-40 (best case 41) range on Verbal – I end up guessing blindly on the last few questions, around 4 during a test. I try and distribute these guesses among the CR questions in between by clicking on the longest looking answer choice. What can you suggest to overcome this timing issue? Also I seem to be running out of material to practice CR from as I have already exhausted OG, Veritas online bank, Question Pack 1 and GMATPrep exam questions. Any good sources you discovered during your own preparations?

1. If you are reading the question before the prompt, try reading the prompt first. Doing that can save some time, as otherwise you may end up reading the question twice. Get used to reading and understanding the first time more of the time. People sometimes reread unnecessarily.
Be sure to have a good way to keep track of which answers you have decided are probably incorrect. Having to confirm this repeatedly can suck up time. Clarity is key in getting CR answers. Practice finding conclusions, finding gaps and doing anything else that will result in your seeing more clearly what's going on in the prompts, questions, and answer choices. For more questions, you could try LSAT questions. They are not quite the same as GMAT questions, but they provide practice in seeing clearly. There are also Magoosh, Grockit, and other resources that have some decent CR questions. I found that almost any source has some useful questions and can be a source of insight. Some people use the 800Score verbal CATs for timing practice. Also, Veritas often sells seven tests for fifteen or twenty dollars. You could get the seven tests and use them in any way that you want, doing questions timed or untimed or doing the verbal sections only.
{"url":"https://martymurraycoaching.com/gmat-critical-reasoning-know-the-gimmicks-to-get-right-answers/","timestamp":"2024-11-09T02:43:56Z","content_type":"text/html","content_length":"57818","record_id":"<urn:uuid:189911a6-9d86-4cb9-bc30-6c0309b66a87>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00811.warc.gz"}
[Solved] Let A1, A2, and A3 be three arbitrary events | SolutionInn

Let A1, A2, and A3 be three arbitrary events. Show that the probability that exactly one of these three events will occur is

Pr(A1) + Pr(A2) + Pr(A3) − 2 Pr(A1 ∩ A2) − 2 Pr(A1 ∩ A3) − 2 Pr(A2 ∩ A3) + 3 Pr(A1 ∩ A2 ∩ A3).

Step by Step Answer (rating: 81%, 11 reviews): We can use Fig S11 by relabeling the events A, B and C in the figure as A ...
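As a quick numerical check of the stated identity (an addition, not part of the original solution), one can assign arbitrary probabilities to the eight atoms of the sample space generated by A1, A2, A3 and compare both sides:

```python
import itertools
import random

random.seed(0)

# Sample space: 8 atoms, one per membership pattern (in A1?, in A2?, in A3?).
atoms = list(itertools.product([0, 1], repeat=3))
w = [random.random() for _ in atoms]
total = sum(w)
p = {a: wi / total for a, wi in zip(atoms, w)}  # atom probabilities, sum to 1

def prob(pred):
    """Probability of the event consisting of the atoms satisfying pred."""
    return sum(p[a] for a in atoms if pred(a))

P1 = prob(lambda a: a[0])
P2 = prob(lambda a: a[1])
P3 = prob(lambda a: a[2])
P12 = prob(lambda a: a[0] and a[1])
P13 = prob(lambda a: a[0] and a[2])
P23 = prob(lambda a: a[1] and a[2])
P123 = prob(lambda a: a[0] and a[1] and a[2])

# Left side: exactly one of the three events occurs (exactly one bit set).
exactly_one = prob(lambda a: sum(a) == 1)
# Right side: the inclusion-exclusion-style formula from the problem.
formula = P1 + P2 + P3 - 2 * (P12 + P13 + P23) + 3 * P123

assert abs(exactly_one - formula) < 1e-12
```

Because the check holds for arbitrary atom probabilities, it reflects the underlying indicator identity 1{exactly one} = Σ 1{Ai} − 2 Σ 1{Ai ∩ Aj} + 3 · 1{A1 ∩ A2 ∩ A3}, whose expectation is the stated result.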
{"url":"https://www.solutioninn.com/leta1-a2-anda3-be-three-arbitrary-events-show-that-the","timestamp":"2024-11-14T23:35:06Z","content_type":"text/html","content_length":"81013","record_id":"<urn:uuid:d454dd9c-3aad-4a0b-a79d-180273c7869a>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00340.warc.gz"}
LSMEANS Statement

LSMEANS <model-effects> </ options> ;

The LSMEANS statement computes and compares least squares means (LS-means) of fixed effects. LS-means are predicted population margins; that is, they estimate the marginal means over a balanced population. In a sense, LS-means are to unbalanced designs as class and subclass arithmetic means are to balanced designs. Table 40.5 summarizes the options available in the LSMEANS statement.

If you specify the BAYES statement, the ADJUST=, STEPDOWN, and LINES options are ignored. The PLOTS= option is not available for a maximum likelihood analysis; it is available only for a Bayesian analysis.

If you specify a zero-inflated model (that is, a model for either the zero-inflated Poisson or the zero-inflated negative binomial distribution), then the least squares means are computed only for effects in the model for the distribution mean, and not for effects in the zero-inflation probability part of the model.

Table 40.5: LSMEANS Statement Options

Construction and Computation of LS-Means
  AT          Modifies the covariate value in computing LS-means
  BYLEVEL     Computes separate margins
  DIFF        Requests differences of LS-means
  OM=         Specifies the weighting scheme for LS-means computation as determined by the input data set
  SINGULAR=   Tunes estimability checking

Degrees of Freedom and p-values
  ADJUST=     Determines the method for multiple-comparison adjustment of LS-means differences
  ALPHA=      Determines the confidence level (1 − α)
  STEPDOWN    Adjusts multiple-comparison p-values further in a step-down fashion

Statistical Output
  CL          Constructs confidence limits for means and mean differences
  CORR        Displays the correlation matrix of LS-means
  COV         Displays the covariance matrix of LS-means
  E           Prints the L matrix
  LINES       Produces a "Lines" display for pairwise LS-means differences
  MEANS       Prints the LS-means
  PLOTS=      Requests graphs of means and mean comparisons
  SEED=       Specifies the seed for computations that depend on random numbers

Generalized Linear Modeling
  EXP         Exponentiates and displays estimates of LS-means or LS-means differences
  ILINK       Computes and displays estimates and standard errors of LS-means (but not differences) on the inverse linked scale
  ODDSRATIO   Reports (simple) differences of least squares means in terms of odds ratios if permitted by the link function

For details about the syntax of the LSMEANS statement, see the section LSMEANS Statement of Chapter 19: Shared Concepts and Topics.
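To illustrate why LS-means "estimate the marginal means over a balanced population" (a sketch with invented data, not taken from this documentation), compare a raw arithmetic mean with an LS-style average of cell means in an unbalanced two-factor layout:

```python
# Hypothetical unbalanced design: factor A (a1, a2) crossed with factor B (b1, b2).
# cells[(a, b)] holds the responses observed in that cell.
cells = {
    ("a1", "b1"): [10, 12],          # 2 observations
    ("a1", "b2"): [20, 22, 24, 26],  # 4 observations -> design is unbalanced
    ("a2", "b1"): [11, 13],
    ("a2", "b2"): [21, 23],
}

def cell_mean(a, b):
    vals = cells[(a, b)]
    return sum(vals) / len(vals)

def arithmetic_mean(a):
    """Raw mean of all observations at level a (weights cells by their sizes)."""
    vals = [v for (aa, b), vs in cells.items() if aa == a for v in vs]
    return sum(vals) / len(vals)

def ls_mean(a):
    """LS-style mean: equal-weight average of cell means, i.e. the margin
    one would see in a balanced population."""
    b_levels = ["b1", "b2"]
    return sum(cell_mean(a, b) for b in b_levels) / len(b_levels)

print(arithmetic_mean("a1"))  # 19.0 (pulled toward the larger b2 cell)
print(ls_mean("a1"))          # 17.0 (b1 and b2 weighted equally)
print(ls_mean("a2"))          # 17.0
```

In a balanced design the two quantities coincide; in the unbalanced case above the raw mean for a1 is distorted by the oversized (a1, b2) cell, which is exactly the distortion LS-means are designed to remove.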
{"url":"http://support.sas.com/documentation/cdl/en/statug/65328/HTML/default/statug_genmod_syntax20.htm","timestamp":"2024-11-05T07:07:26Z","content_type":"application/xhtml+xml","content_length":"29883","record_id":"<urn:uuid:0dea8698-c45a-437c-ad40-eeb80d5fa09b>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00682.warc.gz"}
Minimum flow speed of sugars in plant cells
Range 5 to 50 µm/sec
Organism Plants
Reference Kaare H. Jensen & Maciej A. Zwieniecki, Physical Limits to Leaf Size in Tall Trees, Physical Review Letters, PRL 110, 018104 (2013) DOI: 10.1103/PhysRevLett.110.018104 p.4 left column 2nd paragraph PubMed ID 23383844
Primary Vogel S. Living in a physical world. J Biosci. 2004 Dec 29(4):391-7. PubMed ID 15625395
Comments "With typical plant cell sizes in the range of d = 10–100 µm, diffusion and advection of sugars are equally effective over these length scales when the Peclet number Pe = vd/D = 1 [primary source], where v is the flow speed and D is the diffusion coefficient (D = 0.5×10^-9 m^2/s for sucrose [CRC Handbook of Chemistry and Physics, edited by W. M. Haynes (CRC Press, Boca Raton, FL, 2012), 93rd ed.]). [Researchers] therefore expect v ~ D/d = 5–50 µm/s to provide a lower estimate of the minimum flow speed u_min."
Entered by Uri M
ID 108687
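The quoted estimate can be reproduced directly (a short sketch; the variable names are ours): setting the Peclet number Pe = vd/D to 1 gives the threshold speed v = D/d.

```python
# Minimum advective flow speed at which advection matches diffusion over a
# plant-cell length scale d: Pe = v*d/D = 1, so v = D/d.
D = 0.5e-9  # sucrose diffusion coefficient, m^2/s (CRC Handbook value)

for d_um in (10, 100):      # typical plant cell sizes, micrometres
    d = d_um * 1e-6         # convert to metres
    v = D / d               # flow speed at Pe = 1, in m/s
    # -> 50 um/s for d = 10 um, 5 um/s for d = 100 um
    print(f"d = {d_um:3d} um  ->  v = {v * 1e6:.0f} um/s")
```

This recovers the 5–50 µm/s range quoted above, with the smaller cells setting the larger threshold speed.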
{"url":"https://bionumbers.hms.harvard.edu/bionumber.aspx?id=108687&ver=1&trm=Plant+Alocasia+macrorhiza&org=","timestamp":"2024-11-14T15:05:03Z","content_type":"text/html","content_length":"14466","record_id":"<urn:uuid:dc899af6-a497-41ab-b9a0-7ea544c6cc25>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00463.warc.gz"}
Besides frequently visiting the FriCAS Wiki website, there are several ways to stay in contact with the PanAxiom community.

Talk to Us!

There is an irc channel where developers can find other developers. It is: server: irc.freenode.net

The FriCAS Email List

The FriCAS Email List is supported by Google Groups:

FriCAS - computer algebra system ( fricas-devel@googlegroups.com )
FriCAS Commits ( fricas-commit@googlegroups.com )

One place to discuss major new features of FriCAS and other related issues is the FriCAS Colloquium. We have collected a list of some suggested new features in the FriCAS Wish List and there are more described in OldSummer of Code.

Other E-mail Lists

The Axiom mailing lists are:

Discussion of math theory and philosophy related to Axiom
General discussion on Axiom
When you have issues to compile Axiom or with Axiom internals
All legal issues, like licensing (/mailman/listinfo/axiom-legal, archived)

The OpenAxiom project has many mailing lists, archived on the web:

a read-only list where we send announcements about releases or other important events.
a list for people interested in using or building OpenAxiom.
a list for development discussions about OpenAxiom.
a list for bug reports and feature requests.
a list for patch submissions and discussion of particular patches. All patches and all discussion of particular patches should be sent to this list.
a read-only list which tracks checkins to the OpenAxiom SVN repository.
a list where test results of the OpenAxiom testsuites are posted.

Info below is mostly historical and out of date.

Axiom Meetings

The Axiom Workshop 2007 took place at the Research Institute for Symbolic Computation in Hagenberg, Austria, from June 14 to June 16, 2007. A poster for the workshop is available.

The Axiom Workshop 2006 took place at the Research Institute for Symbolic Computation in Hagenberg, Austria, from April 27 to April 29, 2006. A poster for the workshop is available.
Doyen @ ASEE Mid-Atlantic Spring Conference 2006

The Axiom meeting 2005 was held in New York City, April 22, 2005.

News Feeds

You can use a news reader (e.g. Firefox "active" bookmarks) to efficiently keep track of changes going on here. For subscriptions: See MathActionRSS. See MathActionRSSedit.

Axiom and the scientific community

(based on an email from Daniel Augot on Friday, September 19, 2003 5:18 AM)

I think there is an issue for the future life of Axiom, which involves researchers in computer algebra. From some email exchanges with friends, I do not feel that the revival of Axiom as free software will motivate them to go back to Axiom. In the French community, which I know a bit, because I was involved during my PhD thesis, 10 years ago, many people got very involved with Axiom. Many wrote domains, packages etc. for implementing the algorithms they were introducing in their research work. But at this time, Axiom was a very closed software, and the French community was on its own for documentation, help, contacting gurus, etc. Axiom also had its load of defects: a cumbersome system of categories, poor speed of code, compiler and interpreter bizarreness, no way to use Unix pipes and redirection etc. Furthermore NAG did not show any clear sign about the future of Axiom. Even more, there was the promising A#/axiomxl/aldor project, with Basicmath, but it was immature, so the choice between Axiom and Aldor was unclear. As a consequence many were confused and discouraged, and switched to other computer algebra systems, for instance Magma, which offers a large library and is very fast (although it does not offer the rich mechanism of Axiom for constructing domains and categories). Consequently, I think there must be a thinking about the state of Axiom, and clear signs concerning its future, beyond the point of making it publicly available. Will the compiler be fixed? Will it be documented? Will compiled code be faster?
Will researchers in computer algebra be able to easily incorporate their software? Will they be able to redesign the system of categories? Will it be possible to link against efficient C code? etc., etc. Maybe the benefits of the free software will show up, but, after discussion with friends, it will not be enough for switching back to Axiom. Kind regards, and felicitations for the work done. I have been able to download and compile all Axiom, and I am very glad for that. Daniel Augot

(based on an email from Tim Daly, on Friday, September 19, 2003 10:43 AM)

I have had both face-to-face and direct email discussions that the issues you raise are real and need to be addressed. I'll try to give you my current thinking on the subject. I don't expect that Axiom will gain a great deal of use simply because it is free. I have collected about 100 free "computer algebra" systems which I distribute on my Rosetta CD collection. Free CA systems are "a dime a dozen" quite literally. Indeed many of these systems were built by researchers as part of their research work. My experience shows that most of these free systems start with the insight that math "types" and programming "types" are similar. Starting with this idea it becomes clear that you can build a nice, clean system from scratch. It takes about a semester to build up a full, general purpose, polynomial manipulation library in C++ and, indeed, you find that the math and computer types interact very well. Then the insight occurs that the library isn't useful to anyone but the researcher so the second semester of work involves writing a front-end interpreter on the library. Subsequent effort involves trying to convince others that this could be a very useful system given sufficient effort. This is very seductive since it looks like great progress. It has several problems.
First, a local problem is that the research work that is "reduced to practice" using a newly implemented system cannot be effectively used by others (e.g. library systems rarely do simplification and almost never document their algorithms). Second, a local problem is that reduction to practice, that is, "programming", is generally not "valorized". The research is recognized but the year or two spent building a working system is either ignored or considered to be of little worth during tenure discussions. Third, a global problem is that the algorithmic work, while free, is generally useless to others. Either the system is so specialized that it only does one thing well, which makes it into a single-purpose, once-only use tool, or it tries to be general purpose and has such a limited range of algorithms that it quickly reaches a point of frustration for the user. Thus the one great algorithm at the center of the system is buried and lost. Fourth is the "rule of 3". It takes 1 unit of work to get something for yourself. It takes 3x1 units of work to make it so your office neighbor can use it. It takes 3x3x1 units of work so you can use it "in the department" and in courses. It takes 3x3x3x1 units of work to give it to the world for free without support. It takes 3x3x3x3x1 units of work to make it into a commercial product with support, a hotline, lawyers, etc. Most "computer algebra" systems stop at the 3x1 level as there is no interesting new research work beyond the first unit and the 3x1 units are expended as a matter of trying to get the work out to the world. Building your research on top of Axiom or the 3Ms (Mathematica, Maple, Matlab) immediately gives you the benefit of the 81 units of work already done. Systems like the 3Ms get purchased because they are general purpose enough to do virtually anything and hold out the hope that research done with these systems will be picked up and made useful to others. However you tend to lose control of your work.
If it is badly implemented in the 3M world and has your name attached to it you have little choice but to suffer the hit on your reputation. In a free system like Axiom your reputation is yours to make or break. Detailed discussions with researchers highlights another subtle fact. The 3Ms are built on weak theory ground. Practically speaking this has the effect of "limits of scale". You'll find that the more complex the package you build the more difficult these systems become, for reasons not related to your package. The difficulty is compounded if you need to use other "non-core" packages. In some sense, these systems are like Perl which is easy to use, fast to write, hard to scale to large projects, and impossible to maintain (yes, I know this is a religious debate). Axiom started out like any other home-grown system, called Scratchpad. However it was started at a time that major funding was available (computer algebra was considered to be a branch of artificial intelligence). It was heavily funded by both the U.S. government and IBM Research for about 23 years. Many researchers came to visit, many people worked on the system, many algorithms were created in a broad range of areas. This is the "dream realized" for the authors of the many free "library" systems. Fortunately Axiom started out as a "theory" system and not a "library". (See the IBM ran into financial trouble and sold Scratchpad (as Axiom) to raise cash. As a business decision this made sense but as a global decisions it was pointless. Scratchpad is a great system for doing research work and had the support and attention of about 400 researchers worldwide. If you're going to do real math research Scratchpad was definitely the place to work. It required at least a master's degree to learn but was easy to extend if you understood the underlying math. Your work could be integrated and used by the research community. 
As a "product" (Axiom) it had a very limited market with cash-poor clients who could not support Axiom as a commercial product. Axiom could never generate sufficient cash flow to cover the cost of a development team in the commercial, closed source world. And open-source generates no cash. So Axiom is the best place to do research and the worst place to make money. Scratchpad was "open source" before the term existed. People who asked me (while I was at IBM) could get a free copy of the source code. Axiom when it was released followed the standard commercial path of closed source software. This depends on a staff of people to maintain, which depends on a good cash flow, and clearly Axiom couldn't generate the cash flow. So Axiom was cut off from the customers who made it useful and could never survive in a closed source model. Scratchpad was ported onto AKCL, a compiled, hand-optimized version of common lisp specifically developed under contract. I worked closely with Bill Schelter on several detailed features like second-compile optimization of function calling, tail recursive optimizations, memory management, etc. to make Scratchpad perform well. When Axiom became a commercial product it was ported to run on CCL, a byte-code interpreted partial common lisp. This solved the portability problem (AKCL was very hard to port as it compiles to optimized machine code) but basically broke Scratchpad. Function calling and garbage collection optimizations disappeared. Axiom is now back to running on GCL, the open-source version of AKCL. Camm, the GCL lead developer, is on the Axiom maintainer list. As to contacting gurus we on the scratchpad team were told to "circle our chairs" until we came up with something other than computer algebra to work on (they even brought in an industrial psych. to "reprogram" us which I found to be a very painful experience both professionally and emotionally). 
NAG was in a very difficult situation guru-wise as they lost the help of the guys who wrote it. Aldor has great promise but people insist on trying to build the world "from scratch". It may be several years and several failed experiments before it becomes clear that the "library" approach is flawed. In the meantime Axiom and Aldor have committed to supporting cross-compiled compatibility. As to the other issues like "cumbersome categories", hey, it's now open-source and I'm open to ways of improving it. Scratchpad/Axiom evolved to the system it is now because hundreds of people worked on it and improved it. Someone needs to "unify" the Axiom type tree with the mathematics in a much more systematic way. This is about a whole PhD-thesis level of effort. The benefits of such thesis work would be enormous as it would clarify how to correctly build these systems. Axiom's types grew with knowledge of the theory but without benefit of the detailed analysis. Now Axiom is back as open source. That is a necessary condition but not sufficient. We need two things to survive. We need a community and we need a funding model. For signs about its future visit savannah.nongnu.org/projects/axiom and click on the "homepage" link. There are long range plans to unify with theorem proving (ACL2 or MetaPRL), group theory (GAP and Magnus), numerical work (Octave, etc). These are evolving as discussions proceed and the webpage has not kept pace. In addition there is the CATS (computer algebra test suite) effort to unify the test cases from the many systems and put them on a better mathematical footing. The documentation is in process. Virtually all of Axiom is now written in a Literate Programming style and examples of literate programs which combine a PhD thesis (Richard Paul's Robotics work) with a domain (DHMATRIX) exist in the distribution. More work needs to be done both on this domain and on recovering the research papers behind the other domains.
I've contacted several researchers and gotten their permission to use their research papers and integrate them into the documentation. This work continues but it is tedious because I need to find the research and secure the permissions. In addition I'm rewriting the Axiom book as we speak. The current work is rebuilding the book but the later steps involve much rewriting as well as exploiting the power of Active DVI. The online book will likely be the "next announcement", so documentation is being given a priority. Community can only occur if people feel their efforts will bring reward. In most cases, especially open source, this involves recognition. This is a particularly thorny problem as computer algebra systems tend to be the child of mathematicians. Math departments seem to feel that the reduction to practice of math theory is uninteresting. Until the cross breeding of the Computer Science and the Mathematics departments yields a "Computational Mathematics" department I suspect the problem will continue. In the long run, of course, this has to occur as some of the mathematics can no longer be done without a computer and some of the mathematics only exists BECAUSE computers exist. Beyond University issues Axiom needs to build a wider community of users who are not researchers. This involves reaching out to the teaching community. It involves finding ways to make Axiom easier to use and reducing the learning curve. It involves developing a focusing agenda for coordinating and developing teaching materials for the sciences (see footnote2). The MIT suggestion is motivated by the question of who is the Axiom community and how to reach/motivate/support it. As to the funding issue I believe that this will also be a struggle. I've been mulling the idea of creating an Axiom company which would be chartered to work with schools to construct grant requests and administer grants for researchers. This is an outgrowth of the way Scratchpad worked during the IBM days.
Researchers would come and spend time on site writing new domains, learning Axiom, and returning back to their schools. Unfortunately I'm not sufficiently skilled at grant writing to figure out how to make this work. The Axiom company wouldn't own anything and would only be a paper agency with a grant number and financial tracking. An alternative approach is to "side-effect" the grants. That is, encourage and support researchers who are applying for grants that involve Axiom (such as classroom development of courses) in their efforts. This is much harder as the researcher has to decide to make it part of the grant. Clearly I have no creative new ideas about how to get money. However if you take the "30 year view" of computer algebra it is clear that we need to build on the current systems rather than start from scratch. For one thing there is already 30 years of funding investment in Axiom which shows just how expensive it can be to develop a "real" system. For another it is clear that no-one is going to support another 30 years of funding just to achieve the same level as Axiom has reached. So either we expend a great deal of energy creating small, special purpose library systems or we contribute to the larger systems like the 3Ms and Axiom. I would argue that Axiom is better designed to scale to higher mathematics and also better able to support the amplification of the teaching and research work because of the "open source" effect (that is, freely downloadable packages widely available with multiple authors as with Linux). I don't expect people to switch back to Axiom because it is free. I hope they will switch back because it is better, both technically and socially, than the alternatives. In addition I believe that "Computational Mathematics" will eventually arrive as a discipline in its own right just as the Computer Science departments eventually arose. 
As an undergrad 35 years ago I had to convince my math profs to create courses in "computer science" as it was "just a fad". We need to begin to collect the widespread research (enshrined in libraries and journals) and use it to document the algorithms. There needs to be a whole tower of computer algebra theory that can be taught as a field of research combining work in math, computer science, complexity, types, simplification, etc. There needs to be support work for the sciences that need computer algebra. Switching to Axiom isn't really the point (pretend I never said that) but Axiom provides a kernel for the field of developing theory. In 30 years this will be perfectly obvious. Computer algebra both stimulates fields of research and provides answers that can't be arrived at by any non-computer means. Anyway, that's my current thinking on the subject. There is more to come as work on Axiom continues. Tim Daly Footnote: My survey of the computer algebra field shows that systems can be classified in one of three ways. "Library systems" start out with the realization that math types and computer types are similar. A C++ (usually but not always) library gets created. An interpreter is grafted on top of the library. The focus of these systems is "speed". "Engineering systems", like Mathematica, Maple, and Matlab, are systems that try to get "an answer". They seem to ignore math types and sometimes specialize in one general purpose programming type (e.g. lists). You can recognize these systems because operations like subtracting two equal matricies will yield the zero integer. They are general purpose, easy to use, and hard to scale (i.e. build on the work of other non-core packages). "Theory systems" like Axiom or Docon. These systems start with the math types and build up a language based on them. The programming representations are flexible. Subtracting two matricies yields the zero matrix. They are general purpose, hard to learn, but scale very cleanly. 
Footnote2: There is a game called Pontifex which simulates bridge building (for example: http://www.bridgebuilder-game.com). It includes a construction set. Bridges are built and simulated loads (such as trains) are applied to the bridges. Learning occurs. Axiom could be very useful when packaged with games of this type in a classroom setting. Consider a mechanical engineering course where you have software that can compute virtual work loads on beam members (which are basically delta-displacements). You could explain the theory, show how to compute loads of 2 and 3 dimensional lattices, how to apply them to bridges, how to predict bridge successes and failures, and use the game to build the simulated bridges. Game playing and mathematics in the same course. You could do the same in other sciences such as biology. If you look at the forces involved in protein folding you could create models using Axiom, define the forces, and compute results. The challenge would be to create a game that accurately simulates the models. Perhaps in chemistry you could model glue-binding forces for various types of glues (like post-it notes) and construct things with glue. Game playing is a fast growing (and profitable) field. We should bring games into the classroom along with the theory.

Thanks to Professor W. Kendall, Warwick Univ., U.K., for kindly allowing us to include his effort, itovsn3 (in Axiom), in our LiveTeXmacs. itovsn3 supports many functions for computing stochastic integrals, solving stochastic differential equations and others. Also itovsn3 gives several examples. With such a powerful tool for studying stochastic processes, I also made a lecture file about stochastic processes in OldTeXmacs format (more than 100 pages but not completed yet). In the future, I will add a part on how to use Axiom+itovsn3 to solve SDEs. All interested readers are welcome to use it.

itovsn3 (author: Wilfrid S. Kendall)
LiveTeXmacs
(author: chu-ching huang)
ftp://math.cgu.edu.tw/pub/KNOPPIX
ftp://math.cgu.edu.tw/pub/KNOPPIX/StocCal216.iso

Dear root, I and my supervisor have a paper accepted by ICMS2006. http://www.icms2006.unican.es/ They would like to make a list about the software developed or used by the authors. They also can include the software in their DVD. Since we used Axiom in our paper, I am not sure if you are interested in this. We already sent them a description of Aldor. The deadline of giving them a very brief description of the software with links is June 15, 2006. For the DVD project, the deadline is later, which can be found in the forwarded message. Thanks for your time. Xin Li ORCCA University of Western Ontario

-- ---------- Forwarded message ---------- Date: Mon, 12 Jun 2006 15:41:24 +0900 (JST) From: Nobuki Takayama To: xli96@scl.csd.uwo.ca Subject: icms2006_links_to_projects

Dear speakers, contributors, PC and advisory members of icms2006---developer's meeting; The deadline of "Links to Projects" is June 15 (Thu). This article will be included in our proceedings and will also be the master data file to edit our DVD's. You can browse the submitted data for links to projects. We welcome submissions to this article by speakers, contributors, PC and advisory members of icms2006. We accept a description of any mathematical software/documentation projects related to them. The release schedule of our DVD's can be found at (DVD1 and DVD2). The DVD1---testing 0 is now ready to download (800M, VMware player image). The due date for DVD1 (free license) is June 24. The due date for DVD2 (free and non-free licenses) is July 14. We welcome your contributions and I look forward to seeing you at Castro-Urdiales and hopefully also at Madrid soon. Nobuki Takayama

Corrections re Mathematica, http://DesignerUnits.com/axiomphilosophy Thanks for your comments.
Since this is a wiki and editable by everyone, please feel free to make such corrections regarding Mathematica or other computer algebra systems right here.
Notes: Plain calculator with an 8-digit fluorescent display and with the percentage function. The red minus key − clearly stands out. Multiplication and division automatically repeat after pressing the = key; addition and subtraction do not, however. One example, at the back of the calculator, uses this to calculate 2^4 by using 2 × = = =.
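That repeated-equals behaviour can be modelled in a few lines of Python (a hypothetical sketch of the constant-multiplication feature, not the calculator's actual firmware):

```python
# Model of the "constant multiplication" behaviour described above: after
# entering "2 x", the calculator stores 2 as a constant, and every press of
# "=" multiplies the display by it again. So "2 x = = =" shows 2^4 = 16.
def repeated_equals(operand, presses):
    constant = operand        # the first operand is remembered as the constant
    display = operand
    for _ in range(presses):  # each "=" repeats "x constant"
        display *= constant
    return display

print(repeated_equals(2, 3))  # 2 x = = =  ->  16
```

In general, pressing "=" n times after "k ×" yields k^(n+1), which is why three presses give 2^4.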
Top 10 Praxis Core Math Prep Books (Our 2023 Favorite Picks)

Although passing the Praxis Core Math Test may seem difficult, by taking the right steps you can pass it too. One of the basic steps is to choose the right resources and use the best books to prepare for the Praxis Core Math Test. But there are so many Praxis Core Math prep books on the market that choosing among them is difficult, so in this article we will try to guide you to the best Praxis Core Math prep books.

Best Praxis Core Math Prep Books

1- Praxis Core Math for Beginners: The Ultimate Step by Step Guide to Preparing for the Praxis Core Math Test

Are you looking for a good book for the Praxis Core math test? Praxis Core Math for Beginners is not just a book for beginners but can be used by all levels. This book prepares you for test day and boosts your confidence with the strategies it offers. With this book, you do not need a teacher and you can prepare for the exam through self-study according to your own schedule.
Key features of this book are:
• Content fully compliant with the Praxis Core test
• Book content designed by experts
• Step-by-step guide on all topics
• Full coverage of Praxis Core test concepts and topics
• 2 complete practice tests with accurate descriptive answers
• 500 additional practice questions to better understand the concepts
• High-value content
• Helps you schedule a study plan
• Sufficient practice questions
• Easy to read and understand
• Reasonable price
• Includes well-described answers
• Full-length practice tests for aspirants
• Extra online features

Check price on Amazon

• Doesn’t cover all Praxis Core subjects (it’s a Praxis Core Math prep book)

2- Praxis Core Study Guide 2020-2021: Praxis Core Academic Skills for Educators: Math, Reading, and Writing

APEX Test Prep’s Praxis Core Study Guide 2020-2021 is a book written to make your preparation for the Praxis Core exam easier and includes everything you need to pass it. This book is also a comprehensive guide that covers all three sections (reading, writing, and math) of the Praxis Core test.
Let’s take a look at the benefits of this book:
• It provides the tips you need for the test
• It explains the content and examples in a way that is easy for you to understand
• This book covers all the information and concepts needed for the test
• It includes Praxis Core practice test questions to further practice and improve your testing skills
• It provides an accurate and detailed answer to each question
• Sufficient practice questions
• Includes well-described answers
• A detailed review of ALL subjects
• Numerous strategies to guide students

Check price on Amazon

• Some aspirants felt there are some math mistakes

3- Praxis Core Study Guide 2020-2021: Praxis Core Academic Skills for Educators Test Prep Book with Reading, Writing, and Mathematics Practice Exam Questions

Praxis Core Study Guide 2020-2021 is an excellent study guide designed by experts in accordance with the latest Praxis Core tests. This book gives you a quick but complete overview of everything in the Praxis Core exam.

Some of the key features of this book are:
• Full-length practice tests including answer explanations
• Unique test-taking strategies with highlighted key concepts
• Free online resources
• Online flashcards
• 35 test tips available anytime
• Full-length practice tests
• Extra online features
• All sections thoroughly covered
• High-value content
• Easy language

Check price on Amazon

• Some aspirants feel it is better to focus more on math

4- Praxis Core Math Study Guide 2021 – 2022: A Comprehensive Review and Step-By-Step Guide to Preparing for the Praxis Core Math (5733)

It is a comprehensive book that includes everything you need to prepare for the Praxis Core Math exam. This book also includes two complete and simulated Praxis Core Math tests that can help you identify your strengths and weaknesses.

Some of the features of this book to prepare you for the exam:
• The contents of the book are fully compliant with the Praxis Core Math test.
• The contents of the book cover all the topics in the Praxis Core Math exam
• There is a step-by-step guide to all the topics and chapters
• There are many exercises for test-takers to become more familiar with new content
• There are two complete simulated tests with detailed answers
• In-depth answers to each question
• Full-length practice tests for excellence
• Easy language
• Well-organized sections
• Value for money
• High-quality exam-oriented subject material

Check price on Amazon

• Doesn’t cover all Praxis Core subjects (it’s a Praxis Core Math prep book)

5- Praxis Core Study Guide 2020-2021 Secrets – Praxis Core Math (5733), Writing (5723), Reading (5713), Full-Length Practice Test, Step-by-Step Review Video Tutorials

Mometrix Test Preparation’s Praxis Core Study Guide 2020-2021 Secrets is an ideal book for those who want to pass the Praxis Core exam. In this book, the concepts are explained in detail for your better understanding. It has step-by-step video tutorials for mastering difficult concepts and contains tips and strategies that can help you perform better on the test.

Here are some of the strengths of this book:
• Many of the concepts in this book include links to online review videos for a better understanding of complex concepts
• Includes study guides and practice for all three sections of the Praxis Core: reading, writing, and math
• The examples are step-by-step, so you can see exactly what you need to do
• The answers to the questions are clearly explained so that the principles and reasoning behind them are quite clear
• This book also contains sample questions for further practice
• Full-length practice tests for aspirants
• A satisfactory explanation of answers
• A comprehensive review of all test topics
• Online review videos

Check price on Amazon

• Some aspirants felt it could be better organized

6- Praxis Core Study Guide 2020-2021: Praxis Core Academic Skills for Educators: Math 5733, Reading 5713, and Writing 5723

Praxis Core Study Guide 2020-2021 is a useful book for getting an excellent score on the Praxis Core exam. This book is a comprehensive study guide that is a good option for an overview of the contents and for learning test strategies. It includes study guides and practice questions for all three sections of the Praxis Core exam: reading, writing, and math.

Some of the benefits of this book are:
• Covers all the concepts of the Praxis Core test along with their details
• Includes practice test questions
• Has answer explanations for each question to better understand the concepts
• Provides top tips for each test
• Proven test-taking strategies
• A satisfactory explanation of answers
• Well-organized sections
• All sections thoroughly covered
• Sufficient practice questions

Check price on Amazon

• Some aspirants believe that there are errors in the mathematics

7- Praxis Core Academic Skills for Educators, 2nd Ed.: Reading (5712), Writing (5722), Mathematics (5732) Book + Online

REA’s Praxis Core Academic Skills for Educators Test Prep is a complete preparation package that includes everything you need to raise your Praxis Core score. REA is an up-to-date book, and all Praxis Core test standards have been considered in writing it.

The advantages of this book are:
• This package contains in-depth reviews of all the reading, writing, and mathematics content tested on the Praxis Core exam.
• This preparation package includes 6 complete diagnostic practice tests. In fact, for each test, an online test is provided that helps you identify your strengths and weaknesses.
• The tests in this book are timed and give you a detailed scoring analysis so you can easily see where to focus your study.
• 6 full-length practice tests
• Faster online diagnostic tests
• Extra online features
• A detailed review of ALL subjects

Check price on Amazon

• Some aspirants believe that there are errors in the mathematics

8- Praxis Core Study Guide 2020-2021: Praxis Core Academic Skills for Educators Test Prep with Reading, Writing, and Mathematics Practice Questions

This book is a comprehensive study guide for the Praxis Core test that gives you a quick but full review of all Praxis Core test items. It provides a complete review of the reading, writing, and math parts of the Praxis Core test, and can be a good option to help you pass it.

Some of the positive features of this book are:
• Comprehensive content, including all three main parts of the Praxis Core test: reading, writing, and math
• Brief but complete information required for a quick review
• Free online resources provided with this study guide
• Online flashcards
• 35 test tips available anytime
• In-depth answers to each question
• Full-length practice tests for aspirants

Check price on Amazon

• Some aspirants feel that content quality could be improved

9- Prepare for the Praxis Core Math Test in 7 Days: A Quick Study Guide with Two Full-Length Praxis Core Math (5733) Practice Tests

Prepare for the Praxis Core Math Test in 7 Days is a quick study guide and contains only the most important math concepts a student needs to succeed on the Praxis Core Math Test. With this book, you can maximize your score and minimize study time. It also contains examples that will help you learn more accurately and teach you step by step what to do. With the help of this book, you can prepare for the Praxis Core Math Test in just a seven-day period by spending 3 to 5 hours a day.
Some of the strengths of this book are:
• Written by a professional team of experts and teachers
• Covers all the concepts and topics you need for the Praxis Core math test
• Step-by-step guide and helpful tips for better learning Praxis Core Math
• Can be used as a self-study course
• Includes over 500 additional practice Praxis Core Math questions in both multiple-choice and grid-in formats, with topic-grouped answers for better understanding
• Includes 2 complete practice tests with detailed answers
• Great layout
• High-value content
• Helps you schedule a study plan
• Full-length practice tests for excellence
• Includes well-described answers
• Affordable pricing
• Easy language

Check price on Amazon

• Doesn’t cover all Praxis Core subjects (it’s a Praxis Core Math prep book)

10- Praxis Core Math Prep 2021-2022: The Most Comprehensive Review and Ultimate Guide to the Praxis Core Math (5733) Test

Praxis Core Math Prep 2021-2022 is a book that covers everything you need to complete your Praxis Core math preparation. It is a comprehensive book that includes hundreds of examples, sample Praxis Core Math questions, and two complete and realistic Praxis Core math tests: all you need to complete your Praxis Core math preparation. With Praxis Core Math Prep 2021-2022, you can learn basic math structurally, and this book can help you understand basic math skills.

Some of the positive features and strengths of this book are:
• Contains content in accordance with the 2022 Praxis Core test.
• Written by a professional team of Praxis Core Math test experts
• Full coverage of Praxis Core Math test content
• Includes over 2,500 additional Praxis Core Math questions and exercises in both multiple-choice and grid-in formats, with topic-grouped answers to better identify your math weaknesses
• Exercises in various Praxis Core Math topics such as integers, percentages, equations, polynomials, symbols, and radicals to increase your math skills
• Includes 2 complete practice tests (including a variety of new questions) with detailed answers
• 100% aligned with the 2022 Praxis Core test
• 2 full-length practice tests
• Value for money
• Includes well-described answers
• Well-organized content
• Decent presentation

Check price on Amazon

• Doesn’t cover all Praxis Core subjects (it’s a Praxis Core Math prep book)

Praxis Core Math Prep Books Comparison Table (2022)

Rank  Title                                            Publisher                         Pages      Practice tests                Online material  Price
#1    Praxis Core Math for Beginners                   Effortless Math Education         214 pages  2 full-length practice tests  Yes              $19.99
#2    Praxis Core Study Guide 2020-2021                APEX Test Prep                    241 pages  Yes                           No               $24.99
#3    Praxis Core Study Guide 2020-2021                Cirrus Test Prep                  194 pages  2 full-length practice tests  No               $26.75
#4    Praxis Core Math Study Guide 2021 – 2022         Effortless Math Education         217 pages  2 full-length practice tests  No               $16.46
#5    Praxis Core Study Guide 2020-2021 Secrets        Mometrix Media LLC                268 pages  Yes                           Yes              $43.99
#6    Praxis Core Study Guide 2020-2021                Test Prep Books                   269 pages  Yes                           No               $19.03
#7    Praxis Core Academic Skills for Educators        Research & Education Association  560 pages  6 full-length practice tests  Yes              $8.30
#8    Praxis Core Study Guide 2020-2021                Cirrus Test Prep                  194 pages  1 full practice test          Yes              $19.81
#9    Prepare for the Praxis Core Math Test in 7 Days  Effortless Math Education         136 pages  2 full-length practice tests  No               $15.29
#10   Praxis Core Math Prep 2021-2022                  Effortless Math Education         183 pages  2 full-length practice tests  No               $16.26

What is the Praxis Core Math Test?

The Praxis Core Academic Skills for Educators is a test to get an accurate assessment of students’ academic ability for admission to teacher preparation programs in the United States. A good score on this standardized test is very important for admission to teacher preparation programs. This test is designed and run by the Educational Testing Service (ETS), and its math section consists of 56 multiple-choice and grid-in questions that you have 90 minutes to answer.

Praxis Core math questions include:
• Number and Quantity (36%)
• Algebra and Functions (20%)
• Geometry (12%)
• Data Interpretation, Statistics and Probability (32%)
6.889: Algorithms for Planar Graphs and Beyond (Fall 2011)

We discuss recursive divisions and how to obtain them in planar and minor-free graphs. This is one of the main tools used to obtain a linear-time SSSP algorithm and, in fact, once we have a suitable recursive division, the same SSSP algorithm works for both planar and minor-closed classes. However, both the recursive division algorithm and the SSSP analysis require the graph to be of bounded degree, and it turns out that, in general H-minor-free graphs, a reduction to the bounded-degree case as in the planar case is not possible. We will see how to use knitted H-partitions to overcome this issue and obtain a generalized recursive division algorithm for all H-minor-free classes.

[HKRS97] M. R. Henzinger, P. N. Klein, S. Rao, S. Subramanian: Faster shortest path algorithms for planar graphs. In: Journal of Computer and System Sciences, vol. 55(1):pp. 3-23, 1997.
[RW09] B. Reed, D. R. Wood: A linear-time algorithm to find a separator in a graph excluding a minor. In: ACM Transactions on Algorithms, vol. 5(4):pp. 1-16, 2009.
[TM09] S. Tazari, M. Müller-Hannemann: Shortest paths in linear time on minor-closed graph classes, with an application to Steiner tree approximation. In: Discrete Applied Mathematics, vol. 157:pp. 673-684, 2009.
[Taz10, Chapter 5] S. Tazari: Algorithmic Graph Minor Theory: Approximation, Parameterized Complexity, and Practical Aspects. Doctoral Dissertation, Humboldt-Universität zu Berlin, 2010.
Python list[-1]

Kodeclik Blog

What does [-1] mean in Python?

Python lists are sequential collections of entities. They are iterable, meaning you can explore them sequentially by requesting one element at a time from the list. They are also random access, meaning you can request specific elements by specifying the location of the element. Below is a simple Python list of numbers:

numbers = [2,3,-1,56,7,38]

If you would like to print just the first element, you use index 0 (recall that indices begin from zero). If you would like to use the last element, we use index 5 (because this particular list has 6 numbers):

numbers = [2,3,-1,56,7,38]
print("First element is: ",numbers[0])
print("Last element is: ",numbers[5])

First element is: 2
Last element is: 38

As you can see above, it can feel a bit inconvenient to print the last element because it requires you to first know the length of the list (and then subtract 1). You can do it by:

numbers = [2,3,-1,56,7,38]
print("First element is: ",numbers[0])
print("Last element is: ",numbers[len(numbers)-1])

The output will still be:

First element is: 2
Last element is: 38

The “-1” index is really shorthand for what is going on in the second print statement. This means you can shorten the above expression to:

numbers = [2,3,-1,56,7,38]
print("First element is: ",numbers[0])
print("Last element is: ",numbers[-1])

First element is: 2
Last element is: 38

In other words, -1 is the index of the last element. Similarly, -2 is the index of the second-to-last element.

numbers = [2,3,-1,56,7,38]
print("Last element is: ",numbers[-1])
print("Second to last element is: ",numbers[-2])

Last element is: 38
Second to last element is: 7

We can continue this all the way to do:

numbers = [2,3,-1,56,7,38]
print("Last element is: ",numbers[-1])
print("Second to last element is: ",numbers[-2])
print("First element is: ",numbers[-6])

Last element is: 38
Second to last element is: 7
First element is: 2

Again, the last index is a bit awkward.
So you can replace it as follows:

numbers = [2,3,-1,56,7,38]
print("Last element is: ",numbers[-1])
print("Second to last element is: ",numbers[-2])
print("First element is: ",numbers[-1*len(numbers)])

yielding the same output as above.

To summarize, the index -1 is really the last element. Similarly, the index 0 is the first element. You can count forwards from zero and increment the index one by one (yielding positive numbers). Similarly, you can count backwards from -1 and decrement the index by one each step of the way (yielding more negative numbers). Just think of these as convenient ways to iterate over the list, either first-to-last or last-to-first.

Interested in more things Python? Check out our post on Python queues. Also see our blogpost on Python's enumerate() capability. Also, if you like Python+math content, see our blogpost on Magic Squares. Finally, master the Python print function!
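As a quick sanity check of the equivalence described above (index -k and index len(list) - k name the same element), here is a small sketch:

```python
# Verify that numbers[-k] is the same element as numbers[len(numbers) - k]
# for every valid k, using the list from the post.
numbers = [2, 3, -1, 56, 7, 38]
n = len(numbers)
for k in range(1, n + 1):
    assert numbers[-k] == numbers[n - k]

print(numbers[-1], numbers[-2], numbers[-6])  # 38 7 2
```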
How can I set boundary conditions in "Custom Equation"?

xfdywy wrote:
I try to implement the Navier-Stokes equations myself in the custom equation mode. But I find that I can only set Dirichlet or Neumann boundary conditions for each boundary. And I must set one of the two types of boundary condition for each boundary and each variable, which is different from the case when I directly select the Navier-Stokes equations in "select physics".

Dirichlet (prescribed value) and Neumann (prescribed gradient/flux) are the two fundamental boundary conditions. The physics modes translate more "physical" boundary conditions to these; for example, the "no slip" condition is equivalent to zero Dirichlet conditions for the velocities, and a "pressure/outlet" condition sets a Dirichlet condition for the pressure. When entering your own custom equations the system cannot know what kind of physics you are trying to model, and therefore you also have to set your boundary conditions as Dirichlet/Neumann conditions.

xfdywy wrote:
So, my question is how can I implement the correct boundary condition when I want to use the custom equation mode?

As you didn't define what you mean by "correct boundary condition" it is hard to give any suggestions, but as stated above there is nothing wrong with prescribing Dirichlet/Neumann conditions.
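To make the Dirichlet/Neumann distinction concrete outside any particular package (this is a generic finite-difference sketch, not FEATool syntax), consider -u'' = 1 on [0,1] with a Dirichlet condition u(0) = 0 and a Neumann condition u'(1) = 0; the exact solution is u(x) = x - x^2/2:

```python
def solve_poisson(n=200):
    """Solve -u'' = 1 on [0,1] with u(0)=0 (Dirichlet) and u'(1)=0 (Neumann)."""
    h = 1.0 / n
    sub = [0.0] * (n + 1); dia = [0.0] * (n + 1)
    sup = [0.0] * (n + 1); rhs = [0.0] * (n + 1)
    dia[0], rhs[0] = 1.0, 0.0                  # Dirichlet: prescribe u(0) = 0
    for i in range(1, n):                      # interior stencil for -u'' = 1
        sub[i], dia[i], sup[i], rhs[i] = -1.0, 2.0, -1.0, h * h
    sub[n], dia[n], rhs[n] = -1.0, 1.0, 0.0    # Neumann: (u_n - u_{n-1})/h = 0
    for i in range(1, n + 1):                  # Thomas algorithm: eliminate
        m = sub[i] / dia[i - 1]
        dia[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * (n + 1)
    u[n] = rhs[n] / dia[n]
    for i in range(n - 1, -1, -1):             # back-substitute
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / dia[i]
    return u

n = 200
u = solve_poisson(n)
err = max(abs(u[i] - (i / n - (i / n) ** 2 / 2)) for i in range(n + 1))
print(err)  # small: the discrete solution tracks u(x) = x - x^2/2
```

The point of the sketch is the first and last matrix rows: a "physical" wall or outflow condition ultimately becomes exactly one of these two row types, prescribing either the value or the gradient.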
Probability Function

8623 Views | 1 Reply | 0 Total Likes

I am trying to calculate comparative statics for a system of 14 equations. Unfortunately, there is an error return, and while looking for the problem I came across this issue, which is part of the overall project. I am working with a normal probability function that has a normally distributed error term and three exogenous variables, q, m, and a.

\[Delta] = Integrate[w, {x, -Infinity, y}] (* Dismissal Probability Function with Density Function w(.) of \[Epsilon] *)
w = PDF[NormalDistribution[\[Mu], \[Sigma]], x] (* Density Function *)
y = q - (\[Phi]*m)/a (* Error Term y - Normally Distributed with mean and variance Sigma^2 *)
D[\[Delta], q]
D[\[Delta], m]
D[\[Delta], a]

The derivatives (by hand) would be:

but I receive only conditional expressions. Thank you.

1 Reply

then your then your and then the rest. That doesn't quite give you what you are expecting, but it might help you make some progress
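For reference, since Δ depends on (q, m, a) only through the upper limit y, the Leibniz rule gives the hand derivatives the post alludes to (a sketch, using the definitions above):

```latex
\frac{\partial \Delta}{\partial q} = w(y)\,\frac{\partial y}{\partial q} = w(y),
\qquad
\frac{\partial \Delta}{\partial m} = -\frac{\phi}{a}\,w(y),
\qquad
\frac{\partial \Delta}{\partial a} = \frac{\phi m}{a^{2}}\,w(y),
\quad\text{where } y = q - \frac{\phi m}{a}.
```

The conditional expressions Mathematica returns often come from it not knowing the sign of σ; supplying assumptions (for example via Assuming or the Assumptions option to Integrate, with σ > 0) usually collapses them.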
how to measure internal resistance

Warning: I am expecting you to do more than just read this text. Please plot the graph and find the properties of the cell.

You’ve just completed an experiment in class (it is listed as “Method 2” on page 8 of your printed notes) where you built a simple series circuit using a cell, a resistance box and an ammeter. A voltmeter was connected across the resistance box and you recorded the voltage across (TPD) and current through the resistor as you changed the resistance from 0.5 Ω to 1.5 Ω in steps of 0.1 Ω.

The video below shows the same type of experiment, but uses a potato and two different metals in place of a normal cell. Watch the video and note the values of I and V each time the resistance is changed – remember you can pause the video or go back if you miss any.

Now plot a graph with current along the x-axis and TPD along the y-axis. If you don’t have any sheets of graph paper handy, there is a sheet available to download using the button at the end of this post. Or you could try printing out a sheet from a graph paper site, use Excel or download the free LibreOffice.org Calc spreadsheet.

Draw a best-fit straight line for the points on your graph and find the gradient of the line. When calculating the gradient, remember to convert the current units from microamps (uA) to amps (A). The gradient of your straight line will be a negative number. The gradient is equal to -r, where r is the internal resistance of the potato cell used in the video.

You can obtain other important information from this graph:
• Extend your best-fit line so that it touches the y-axis. The value of the TPD where the line touches the y-axis is equal to the EMF of the cell.
(Explanation: on the y-axis, I is zero, so TPD = EMF.)
• Now extend the best-fit line so that it touches the x-axis; the current at that point is the short-circuit current – this is the maximum current that the potato cell can provide when the variable resistor is removed from the circuit altogether and replaced with just a wire.
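The graph-reading procedure above amounts to a straight-line fit of V = EMF − rI. A small sketch (with made-up readings, not the video's data) shows the idea:

```python
# Least-squares fit of V = EMF - r*I to (current, voltage) readings.
# The slope of the line is -r, the V-axis intercept is the EMF, and the
# I-axis intercept is the short-circuit current. Data below are hypothetical
# (generated from EMF = 1.5 V, r = 1.5 ohm), not the potato-cell readings.
currents = [0.10, 0.20, 0.30, 0.40, 0.50]   # A
voltages = [1.35, 1.20, 1.05, 0.90, 0.75]   # V

n = len(currents)
mean_i = sum(currents) / n
mean_v = sum(voltages) / n
slope = sum((i - mean_i) * (v - mean_v) for i, v in zip(currents, voltages)) \
        / sum((i - mean_i) ** 2 for i in currents)
emf = mean_v - slope * mean_i               # intercept on the V axis

r = -slope                                  # internal resistance
short_circuit_current = emf / r             # where the line crosses the I axis
print(emf, r, short_circuit_current)
```

With real readings, remember the unit conversion mentioned above (microamps to amps) before fitting, or the resistance will come out in megohms.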
Honors Program | Department of Mathematics

The Honors Programs in Mathematics are an option for highly motivated and mathematically talented students who are interested in mathematics, engineering, and the sciences. The Mathematics Department offers Honors courses at various levels of sophistication so as to enable any Honors student interested in mathematics to take challenging and stimulating mathematics courses appropriate for his or her background. These programs enable students to obtain an undergraduate mathematics education of the highest level that will prepare them for graduate study or a position in industry or education.

UConn Honors Programs

There are two graduation awards, which you may work towards simultaneously: Honors Scholar in the Major and University Honors Laureate. For general information on these go to the UConn Honors Program web site. Honors credit is available for several upper division Mathematics Department offerings including Undergraduate Seminars (MATH 3094) and the Honors Senior Thesis course (MATH 3796W). In addition, the student may request Honors Conversion credit for any of the mathematics 2000+ level courses with the approval of the course instructor and the student’s advisor.

The Mathematics Department offers designated sections of Honors versions of the standard introductory calculus sequence:
It is expected that this sequence will be accessible to most Honors students in the university whether or not they are mathematics majors. They are a good choice for students seeking an Honors mathematics experience. The basic subject matter of all courses above is similar to that in the corresponding non-honors versions; however, instructors have considerable opportunity to enrich the courses. They are suited for students who will need calculus and differential equations as a basic tool for their future work. These classes are open to students who have taken earlier Honors calculus classes in the same series, or are in the University Honors Program, or have the instructor’s permission. Students in the Honors Program receive Honors credit for these courses.

Elective Courses

Honors math majors are encouraged to take at least one of our first-year graduate courses in analysis (MATH 4110), algebra (MATH 4210), and geometry/topology (MATH 4310), for which Honors students receive Honors credit. Information about these elective opportunities is in the Elective Courses section of our mathematics majors page.

The Honors Thesis

To graduate as an Honors Scholar, the student must complete an Honors thesis under the supervision of faculty, usually, but not necessarily, from the Department of Mathematics. Information about the thesis is in the Senior Thesis section of our mathematics majors page. Note especially the time frame: find an advisor by the spring of your junior year.
Miscellaneous Puzzles

1. At a certain convention, there were 100 politicians. We know the following two statements are true:
   1. At least one politician was honest.
   2. Given any two politicians, at least one of the two was crooked.
   Can it be determined from the above facts how many of the politicians were honest and how many were crooked?

2. Three subjects, A, B, and C, are all perfect logicians, and are all aware that both of the other two are also perfect logicians. The three were shown seven stamps: two red, two yellow, and three green. They were then blindfolded, and a stamp pasted on each of their foreheads. When the blindfolds were removed, A was asked, "Do you know one colour that you definitely do not have?" A answered, "No." B was then asked the same question and also answered, "No." Is it possible, from this information, to deduce the colour of any of the three stamps placed on any of A, B, or C?

3. A bottle of wine cost $10. The wine was worth $9 more than the bottle. How much was the bottle worth?

4. Suppose that you and I have the same amount of money. How much money would I have to give you such that you have $10 more than I do?

Answers are on the answer page.
Numerical Integration in Excel: A How-To Guide

Numerical integration is a crucial mathematical tool used to calculate the area under curves, compute definite integrals, and solve problems in physics and engineering. Excel, a powerful spreadsheet application, offers several methods to perform numerical integration efficiently. In this guide, we will explore various techniques to perform numerical integration in Excel, ensuring that both beginners and advanced users can grasp and utilize these methods effectively. Let’s dive in! 📊

What is Numerical Integration? 🤔

Numerical integration is a computational method to estimate the value of an integral. Unlike analytical integration, which derives the exact integral of a function, numerical integration approximates this value using discrete data points. This is particularly useful when dealing with complex functions that are difficult or impossible to integrate analytically.

Common Methods of Numerical Integration

There are several methods of numerical integration, but we will focus on a few popular techniques that can be easily implemented in Excel:
1. Trapezoidal Rule 🏞️
2. Simpson’s Rule 🔍
3. Midpoint Rule ⚖️
Each method has its advantages and is suitable for different types of functions and data.

Setting Up Excel for Numerical Integration

Before jumping into the methods, let’s set up our Excel sheet.
1. Open Excel and create a new workbook.
2. Label the columns in the first row:
   □ A1: "x"
   □ B1: "f(x)"
   □ C1: "Integration Results"
Now that we have our basic setup, let’s look at how to apply the different numerical integration techniques.

Method 1: Trapezoidal Rule

The Trapezoidal Rule estimates the integral of a function by dividing the area under the curve into trapezoids rather than rectangles.
The formula for the Trapezoidal Rule is:

\[ \text{Area} \approx \frac{b-a}{2n} \left( f(a) + 2 \sum_{i=1}^{n-1} f(x_i) + f(b) \right) \]

where
• \(a\) and \(b\) are the limits of integration
• \(n\) is the number of subintervals
• \(f(x_i)\) is the function value at each subinterval

Steps to Implement Trapezoidal Rule in Excel
1. Enter Data: In Column A, enter your x-values from a to b, spaced evenly based on your chosen \(n\).
2. Calculate f(x): In Column B, use a formula to compute \(f(x)\) for each x-value. For example, if \(f(x) = x^2\), you can enter the formula =A2^2 in cell B2 and drag it down.
3. Calculate the Area: In cell C2, apply the trapezoidal approximation. For example, with x-values in A2:A12 and function values in B2:B12 (so n = 10), the formula is:
=(A12 - A2) / (2 * 10) * (B2 + 2 * SUM(B3:B11) + B12)
Adjust the row references to match the last row of your own data.

Example Table for Trapezoidal Rule

x     f(x)
1.0   1.00
1.1   1.21
1.2   1.44
...   ...
2.0   4.00

(The single integration result goes in C2.)

Important Note: Ensure that your x-values are evenly spaced for accurate results.

Method 2: Simpson's Rule

Simpson's Rule offers higher accuracy by using parabolic segments instead of straight lines. The formula for Simpson's Rule is given by:

\[ \text{Area} \approx \frac{b-a}{3n} \left( f(a) + 4 \sum_{i=1,3,5,\dots}^{n-1} f(x_i) + 2 \sum_{i=2,4,6,\dots}^{n-2} f(x_i) + f(b) \right) \]

Steps to Implement Simpson's Rule in Excel
1. Enter Data: Similar to the Trapezoidal Rule, enter x-values in Column A.
2. Calculate f(x): Use the same formulas as before in Column B.
3. Calculate the Area: Excel ranges cannot step by two, so the odd- and even-indexed sums need a helper such as SUMPRODUCT with MOD. One way to write it, with x-values in A2:A12 and function values in B2:B12 (n = 10, which is even as Simpson's Rule requires):
=(A12 - A2) / (3 * 10) * (B2 + 4 * SUMPRODUCT((MOD(ROW(B3:B11) - ROW($B$2), 2) = 1) * B3:B11) + 2 * SUMPRODUCT((MOD(ROW(B3:B11) - ROW($B$2), 2) = 0) * B3:B11) + B12)

Example Table for Simpson's Rule

x     f(x)
1.0   1.00
1.2   1.44
1.4   1.96
...   ...
2.0   4.00

Important Note: Simpson's Rule requires that the number of intervals \(n\) be even.

Method 3: Midpoint Rule

The Midpoint Rule approximates the area under a curve by using the midpoint of each interval.
The formula is:

\[ \text{Area} \approx \Delta x \sum_{i=1}^{n} f\left( \frac{x_i + x_{i+1}}{2} \right) \]

Steps to Implement Midpoint Rule in Excel
1. Enter Data: Populate x-values in Column A, ensuring they are evenly spaced.
2. Calculate Midpoints: In Column D, calculate the midpoint of each subinterval. For the first midpoint, use =(A2 + A3) / 2 and drag it down to the second-to-last x-row (the last x-value has no following point, so it has no midpoint).
3. Calculate f(midpoint): In Column E, apply your function to each midpoint; for \(f(x) = x^2\) that is =D2^2.
4. Calculate the Area: With x-values in A2:A12 (n = 10) and midpoint function values in E2:E11, use in C2:
=((A12 - A2) / 10) * SUM(E2:E11)

Example Table for Midpoint Rule

x     f(x)   Midpoint   f(midpoint)
1.0   1.00   1.05       1.1025
1.1   1.21   1.15       1.3225
...   ...    ...        ...
2.0   4.00

Mastering numerical integration in Excel can significantly enhance your analytical capabilities, allowing you to solve complex problems more efficiently. Whether you're using the Trapezoidal Rule, Simpson's Rule, or the Midpoint Rule, these methods offer flexibility for a range of applications in mathematics, engineering, and the sciences. 🧮 By following this guide, you can confidently utilize these techniques in your projects, ensuring accurate results and a better understanding of numerical methods. Happy integrating! 🌟
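If you want to sanity-check the spreadsheet results outside Excel, all three rules fit in a few lines of Python (a sketch of my own, not part of this guide), here applied to f(x) = x² on [1, 2], whose exact integral is 7/3:

```python
def trapezoid(f, a, b, n):
    # Trapezoidal Rule: h/2 * (f(a) + 2*sum(interior points) + f(b))
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    return h / 2 * (f(xs[0]) + 2 * sum(f(x) for x in xs[1:-1]) + f(xs[-1]))

def simpson(f, a, b, n):
    # Simpson's Rule: h/3 * (f(a) + 4*sum(odd) + 2*sum(even) + f(b)); n must be even
    if n % 2:
        raise ValueError("Simpson's Rule needs an even number of intervals")
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    odd = sum(f(xs[i]) for i in range(1, n, 2))
    even = sum(f(xs[i]) for i in range(2, n, 2))
    return h / 3 * (f(xs[0]) + 4 * odd + 2 * even + f(xs[-1]))

def midpoint(f, a, b, n):
    # Midpoint Rule: h * sum of f at each subinterval midpoint
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: x ** 2
for rule in (trapezoid, simpson, midpoint):
    print(rule.__name__, rule(f, 1.0, 2.0, 10))
```

With n = 10 the trapezoid estimate overshoots 7/3 ≈ 2.3333 slightly (2.3350), the midpoint rule undershoots (2.3325), and Simpson's Rule is exact for this quadratic.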
{"url":"https://tek-lin-pop.tekniq.com/projects/numerical-integration-in-excel-a-how-to-guide","timestamp":"2024-11-01T20:04:11Z","content_type":"text/html","content_length":"86849","record_id":"<urn:uuid:3949fc57-d349-4049-bbab-6af0f1735754>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00886.warc.gz"}
In the mathematical area of knot theory, a Reidemeister move is any of three local moves on a link diagram. Kurt Reidemeister (1927) and, independently, James Waddell Alexander and Garland Baird Briggs (1926), demonstrated that two knot diagrams belonging to the same knot, up to planar isotopy, can be related by a sequence of the three Reidemeister moves.

[Figure: the Reidemeister moves, of Type I, Type II, Type III, and Type I'.]

Each move operates on a small region of the diagram and is one of three types:
1. Twist and untwist in either direction.
2. Move one loop completely over another.
3. Move a string completely over or under a crossing.
No other part of the diagram is involved in the picture of a move, and a planar isotopy may distort the picture. The numbering for the types of moves corresponds to how many strands are involved, e.g. a type II move operates on two strands of the diagram. One important context in which the Reidemeister moves appear is in defining knot invariants. By demonstrating a property of a knot diagram which is not changed when we apply any of the Reidemeister moves, an invariant is defined. Many important invariants can be defined in this way, including the Jones polynomial. The type I move is the only move that affects the writhe of the diagram. The type III move is the only one which does not change the crossing number of the diagram. In applications such as the Kirby calculus, in which the desired equivalence class of knot diagrams is not a knot but a framed link, one must replace the type I move with a "modified type I" (type I') move composed of two type I moves of opposite sense. The type I' move affects neither the framing of the link nor the writhe of the overall knot diagram. Trace (1983) showed that two knot diagrams for the same knot are related by using only type II and III moves if and only if they have the same writhe and winding number.
Furthermore, combined work of Östlund (2001), Manturov (2004), and Hagge (2006) shows that for every knot type there are a pair of knot diagrams so that every sequence of Reidemeister moves taking one to the other must use all three types of moves. Alexander Coward demonstrated that for link diagrams representing equivalent links, there is a sequence of moves ordered by type: first type I moves, then type II moves, type III, and then type II. The moves before the type III moves increase crossing number while those after decrease crossing number. Coward & Lackenby (2014) proved the existence of an exponential tower upper bound (depending on crossing number) on the number of Reidemeister moves required to pass between two diagrams of the same link. In detail, let ${\displaystyle n}$ be the sum of the crossing numbers of the two diagrams; then the upper bound is ${\displaystyle 2^{2^{2^{.^{.^{n}}}}}}$, where the height of the tower of ${\displaystyle 2}$s (with a single ${\displaystyle n}$ at the top) is ${\displaystyle 10^{1,000,000n}}$. Lackenby (2015) proved the existence of a polynomial upper bound (depending on crossing number) on the number of Reidemeister moves required to change a diagram of the unknot to the standard unknot. In detail, for any such diagram with ${\displaystyle c}$ crossings, the upper bound is ${\displaystyle (236c)^{11}}$. Hayashi (2005) proved there is also an upper bound, depending on crossing number, on the number of Reidemeister moves required to split a link.
{"url":"https://www.knowpia.com/knowpedia/Reidemeister_move","timestamp":"2024-11-07T17:16:06Z","content_type":"text/html","content_length":"96936","record_id":"<urn:uuid:e2342562-9f4f-403b-94ea-4275581fb3a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00827.warc.gz"}
Wolfram|Alpha Examples: Packing & Covering Problems

Examples for Packing & Covering Problems

Packing and covering problems are special optimization problems concerning geometric objects in a given space or region. Many of the problems involve arranging geometric objects (usually identical) into the space or region as densely as possible with no overlap. Wolfram|Alpha can find the best-known solutions for many two-dimensional packing problems. It can also do estimations for packing/covering with everyday-life objects.

Geometric Packing in 2D
Optimize the packing of common 2D geometric figures into a given area.
- Compute properties of a geometric packing
- Specify dimensions of the container
- Specify dimensions of packed objects

Packing & Covering of Objects
Estimate the number of objects required to pack or cover another object.
- Estimate the number of objects required to fill a container
- Estimate the number of objects required to cover a specified area
- Estimate the number of objects required to circle another object
{"url":"https://www6b3.wolframalpha.com/examples/mathematics/geometry/packing-and-covering-problems","timestamp":"2024-11-12T22:21:44Z","content_type":"text/html","content_length":"69112","record_id":"<urn:uuid:c2f64aea-733c-45e2-8361-e0af46eebcdb>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00254.warc.gz"}
The simple Dietz method^[1] is a means of measuring historical investment portfolio performance, compensating for external flows into/out of the portfolio during the period.^[2] The formula for the simple Dietz return is as follows: ${\displaystyle R={\frac {B-A-C}{A+C/2}}}$ where ${\displaystyle R}$ is the portfolio rate of return, ${\displaystyle A}$ is the beginning market value, ${\displaystyle B}$ is the ending market value, and ${\displaystyle C}$ is the net external inflow during the period (flows out of the portfolio are negative and flows into the portfolio are positive). It is based on the assumption that all external flows occur at the half-way point in time within the evaluation period (or are spread evenly across the period, and so the flows occur on average at the middle of the period). To measure returns net of fees, allow the value of the portfolio to be reduced by the amount of the fees. To calculate returns gross of fees, compensate for them by treating them as an external flow, and exclude accrued fees from valuations, i.e. do not reduce the portfolio market value by the fee amount accrued.
1. The simple Dietz method is a variation upon the simple rate of return, which assumes that external flows occur either at the beginning or at the end of the period. The simple Dietz method is somewhat more computationally tractable than the internal rate of return (IRR) method.
2. A refinement of the simple Dietz method is the modified Dietz method,^[3] which takes available information on the actual timing of external flows into consideration.
3. Like the modified Dietz method, the simple Dietz method is based on the assumption of a simple rate of return principle, unlike the internal rate of return method, which applies a compounding principle.
4. Also like the modified Dietz method, it is a money-weighted returns method (as opposed to a time-weighted returns method).
In particular, if the simple Dietz returns on two portfolios over the same period are ${\displaystyle R_{1}}$ and ${\displaystyle R_{2}}$, then the simple Dietz return on the combined portfolio containing the two portfolios is the weighted average of the simple Dietz return on the two individual portfolios: ${\displaystyle R=w_{1}\times R_{1}+w_{2}\times R_{2}}$. The weights ${\displaystyle w_{1}}$ and ${\displaystyle w_{2}}$ are given by: ${\displaystyle w_{i}={\frac {A_{i}+{\frac {C_{i}}{2}}}{A_{1}+A_{2}+{\frac {C_{1}+C_{2}}{2}}}}}$. The method is named after Peter O. Dietz. According to his book Pension Funds: Measuring Investment Performance,^[1] "The method selected to measure return on investment is similar to the one described by Hilary L. Seal in Trust and Estate magazine. This measure is used by most insurance companies and by the SEC in compiling return on investment in its Pension Bulletins.^[4] The basis of this measure is to find a rate of return by dividing income by one-half the beginning investment plus one-half the ending investment, minus one-half the investment income. Thus where A equals beginning investment, B equals ending investment, and I equals income, return R is equivalent to ${\displaystyle R=I\div {\tfrac {1}{2}}(A+B-I)}$. For the purpose of measuring pension fund investment performance, income should be defined to include ordinary income plus realized and unrealized gains and losses."^[1] "The investment base to be used is market value as opposed to book value. There are several reasons for this choice: First, market value represents the true economic value which is available to the investment manager at any point in time, whereas book value is arbitrary. Book value depends on the timing of investments, that is, book value will be high or low depending on when investments were made.
Second, an investment manager who realizes capital gains will increase his investment base as opposed to a manager who lets his gains ride, even though the funds have the same economic value. Such action would result in an artificially lower return for the fund realizing gains and reinvesting if book value were used."^[1] Using ${\displaystyle M_{1}}$ and ${\displaystyle M_{2}}$ for beginning and ending market value respectively, he then uses the following relation: ${\displaystyle M_{2}={M_{1}}+C+I}$ to transform ${\displaystyle R=I\div {\tfrac {1}{2}}({M_{1}}+{M_{2}}-I)}$ into ${\displaystyle R={\frac {{M_{2}}-{M_{1}}-C}{{\tfrac {1}{2}}({M_{1}}+{M_{2}}-{M_{2}}+{M_{1}}+C)}}}$, i.e. ${\displaystyle R={\frac {{M_{2}}-{M_{1}}-C}{{M_{1}}+C/2}}}$. He goes on to rearrange this into: ${\displaystyle {M_{2}}={M_{1}}+C+{RM_{1}}+RC/2}$. This formula "reveals that the market value at the end of any period must be equal to the beginning market value plus net contributions plus the rate of return earned of the assets in the fund at the beginning of the period and the return earned on one-half of the contributions. This assumes contributions are received midway through each investment period, and alternately, that half the contributions are received at the beginning of the period, and half at the end of the period."^[1]
Further reading
• MEASURING INVESTMENT PERFORMANCE^[5]
1. ^ ^a ^b ^c ^d ^e Peter O. Dietz (1966). Pension Funds: Measuring Investment Performance. Free Press.
2. ^ Charles Schwab (18 December 2007). Charles Schwab's New Guide to Financial Independence Completely Revised and Updated: Practical Solutions for Busy People. Doubleday Religious Publishing Group. pp. 259–. ISBN 978-0-307-42041-1.
3. ^ Bernd R. Fischer; Russ Wermers (31 December 2012). Performance Evaluation and Attribution of Security Portfolios. Academic Press. pp. 651–. ISBN 978-0-08-092652-0.
4. ^ Seal, Hilary L. (November 1956). "Pension & Profit Sharing Digest: How Should Yield of a Trust Fund Be Calculated?".
Trust and Estates (XCV): 1047. 5. ^ Jacobson, Harold (2013). MEASURING INVESTMENT PERFORMANCE. Author House. p. 48. ISBN 978-1-4918-3023-9.
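As a quick numerical illustration of the formula above (my own sketch, not part of the article):

```python
def simple_dietz(begin_value, end_value, net_flow):
    """Simple Dietz return: R = (B - A - C) / (A + C/2)."""
    return (end_value - begin_value - net_flow) / (begin_value + net_flow / 2)

# A portfolio starts at 100, receives a 10 inflow mid-period, and ends at 121:
# the gain of 11 is measured against an average invested base of 105.
r = simple_dietz(100.0, 121.0, 10.0)
print(round(r, 6))  # → 0.104762
```

The weighted-average property quoted in the article also holds: combining two portfolios and applying the formula to the totals gives the same result as weighting each portfolio's simple Dietz return by its share of A + C/2.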
{"url":"https://www.knowpia.com/knowpedia/Simple_Dietz_method","timestamp":"2024-11-03T19:55:48Z","content_type":"text/html","content_length":"106636","record_id":"<urn:uuid:be4bfaa4-074d-4521-b3ac-2c12537a9ef0>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00206.warc.gz"}
The two-step optimal estimator and example applications

The two-step filter is a new approach for nonlinear recursive estimation that substantially improves the estimate error relative to the extended Kalman Filter (EKF) or the iterated extended Kalman filter (IEKF). Historically, when faced with an optimal estimation problem involving a set of nonlinear measurements, designers have been forced to choose between optimal, but off-line, iterative batch techniques or sub-optimal, approximate techniques, typically the EKF or IEKF. These techniques linearize the measurements and dynamics to take advantage of the well known Kalman filter equations. While broadly used, these filters typically result in sub-optimal and biased estimates and often can go unstable. The two-step estimator, introduced in 1996, provides a dramatic improvement over these filters for situations with nonlinear measurements. It accomplishes this by dividing the estimation problem (a quadratic minimization) into two steps - a linear first step and a non-linear second step. The result is a filter that comes much closer to minimizing the desired cost, virtually eliminating any biases and dramatically reducing the mean-square error relative to the EKF. This paper presents an overview of the two-step estimator, outlining the derivation of the two-step measurement update and cost function minimization. It also presents the newest time update, resulting in a robust and accurate estimation technique. This presentation is followed by several simple aerospace examples to illustrate the utility of the filter and its improvement over the EKF and IEKF. These include both open loop estimation and closed loop control applications.

All Science Journal Classification (ASJC) codes
• Aerospace Engineering
• Space and Planetary Science
{"url":"https://collaborate.princeton.edu/en/publications/the-two-step-optimal-estimator-and-example-applications","timestamp":"2024-11-10T05:36:42Z","content_type":"text/html","content_length":"49908","record_id":"<urn:uuid:8036a09f-a6a0-45dc-a250-aff0bd300683>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00177.warc.gz"}
Glossary of Terms

All complex subjects have their own terminology that sometimes makes it hard for new people to break into the field. This sometimes includes uncommon words, but more often than not a subject will have very specific meanings for common words - the discussion of errors vs mistakes in this video is a good example of this. This glossary is a reference of some of the uncommon terms and specific definitions of more common words that you will encounter throughout Data Tree and your broader dealings with data. Many of these definitions come from the course materials and experts that helped develop Data Tree. Others come from the CASRAI Dictionary. Those definitions are kindly made available under a Creative Commons Attribution 4.0 International License.

P-value
The probability value (p-value) of a hypothesis test is the probability of getting a value of the test statistic as extreme, or more extreme, than the one observed, if the null hypothesis is true. Small p-values suggest the null hypothesis is unlikely to be true. The smaller it is, the more convincing is the evidence to reject the null hypothesis. In the pre-computer era it was common to select a particular p-value (often 0.05 or 5%) and reject H0 if (and only if) the calculated probability was less than this fixed value. Now it is much more common to calculate the exact p-value and interpret the data accordingly.

Parameter
A parameter is a numerical value of a population, such as the mean. The values are often modelled from a distribution. Then the shape of the distribution depends on its parameters. For example the parameters of the normal distribution are the mean, μ, and the standard deviation, σ. For the binomial distribution, the parameters are the number of trials, n, and the probability of success, θ.

Percentile
The pth percentile of a list is the number such that at least p% of the values in the list are no larger than it.
So the lower quartile is the 25th percentile and the median is the 50th percentile. One definition used to give percentiles is that the p'th percentile is the (p/100)×(n+1)'th observation. For example, with 7 observations, the 25th percentile is the (25/100)×8 = 2nd observation in the sorted list. Similarly, the 20th percentile = (20/100)×8 = 1.6th observation.

Peta
Prefix denoting a factor of 10^15, or a million billion.

Physical data
Data in the form of physical samples. Examples: soil samples, ice cores.

Polar orbiting
A satellite orbit passing above or nearly above both poles on each orbit. Polar orbiting satellites have a lower altitude above the Earth's surface than geostationary satellites and therefore offer increased resolution.

Population
A population is a collection of units being studied. This might be the set of all people in a country. Units can be people, places, objects, years, drugs, or many other things. The term population is also used for the infinite population of all possible results of a sequence of statistical trials, for example, tossing a coin. Much of statistics is concerned with estimating numerical properties (parameters) of an entire population from a random sample of units from the population.

Precision
Precision is a measure of how close an estimate is expected to be to the true value of a parameter. Precision is usually expressed in terms of the standard error of the estimate. Less precision is reflected by a larger standard error.

Primary Data
Data that has been created or collected first hand to answer the specific research question.

Proportion
For a variable with n observations, of which the frequency of a particular characteristic is r, the proportion is r/n. For example if the frequency of replanting was 11 times in 55 years, then the proportion was 11/55 = 0.2 of the years, or one fifth of the years. (See also percentages.)

Provenance
In the case of data, the process of tracing and recording the origins of data and its movements between databases. Data's full history including how and why it got to its present place.
Proxy data
In the case of data, other data that you may use and/or transform when you do not have a direct measurement of the data you require.
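The (p/100)×(n+1)'th-observation convention described in the Percentile entry (the one consistent with its worked examples) can be sketched as follows; this is one common convention among several, and the sketch is my own rather than part of the glossary:

```python
def percentile(values, p):
    """p'th percentile as the (p/100)*(n+1)'th sorted observation,
    linearly interpolating when the position is fractional."""
    xs = sorted(values)
    n = len(xs)
    pos = p / 100 * (n + 1)
    lo = int(pos)            # whole part of the (1-based) position
    frac = pos - lo
    if lo < 1:               # position falls before the first observation
        return xs[0]
    if lo >= n:              # position falls at or beyond the last observation
        return xs[-1]
    return xs[lo - 1] + frac * (xs[lo] - xs[lo - 1])

data = [3, 1, 4, 1, 5, 9, 2]      # n = 7, sorted: [1, 1, 2, 3, 4, 5, 9]
print(percentile(data, 25))       # position 0.25 * 8 = 2, i.e. the 2nd sorted value
```

For the glossary's example, the 25th percentile of 7 observations lands exactly on the 2nd sorted value, while the 20th percentile lands at the fractional position 1.6 and is interpolated.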
{"url":"https://datatree.org.uk/mod/glossary/view.php?id=230&mode=letter&hook=P&sortkey=&sortorder=asc","timestamp":"2024-11-12T00:49:35Z","content_type":"text/html","content_length":"66492","record_id":"<urn:uuid:165084c5-10ec-4700-9498-e915c15974f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00200.warc.gz"}
International Conference "Algebra and Geometry", dedicated to the 65th anniversary of Askold G. Khovanskii (June 4–9, 2012, Moscow)

Toric geometry exhibited a profound relation between algebra and topology on one side and combinatorics and convex geometry on the other side. In the last decades, the interplay between algebraic and convex geometry has been explored and used successfully in a much more general setting: first, for varieties with an algebraic group action (such as spherical varieties) and recently for all algebraic varieties (construction of Newton–Okounkov bodies). The main goal of the conference is to survey recent developments in these directions. The conference is not intended to focus on a narrow set of problems, but rather to present a broad look at recent progress in the field, highlighting new techniques and ideas. Main topics of the conference are:
• Theory of Newton polytopes and Newton–Okounkov bodies
• Toric geometry, geometry of spherical varieties, Schubert calculus, geometry of moduli spaces
• Tropical geometry and convex geometry
• Real algebraic geometry, fewnomial theory and o-minimal structures
• Polynomial vector fields and the Hilbert 16th problem
The conference is dedicated to Askold Georgievich Khovanskii, who will turn sixty-five in June 2012. Askold Khovanskii put his indelible mark on many areas of mathematics: real and complex algebraic geometry, singularity theory, differential equations, topology. He obtained many major results (both alone and with his collaborators) in all areas covered by the conference.
E-mail: Website: https://bogomolov-lab.ru/AG2012/Askoldfest2012.htm
{"url":"https://m.mathnet.ru/php/conference.phtml?confid=312&option_lang=eng","timestamp":"2024-11-09T06:23:23Z","content_type":"text/html","content_length":"8049","record_id":"<urn:uuid:e6a6cb61-0bb3-45fb-9ce6-86c1a4cdb741>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00254.warc.gz"}
Solving quadratic equations Solving quadratic equations resources Quadratic Equations 1 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 10 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 2 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 3 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 4 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. 
Quadratic Equations 5 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 6 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 7 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 8 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Quadratic Equations 9 This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. 
This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Parliamentary debate - Tony McWalter In this mathtutor extension video Tony McWalter MP discusses the relevance of studying quadratic equations. This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Algebra Refresher A refresher booklet on Algebra with revision, exercises and solutions on fractions, indices, removing brackets, factorisation, algebraic fractions, surds, transposition of formulae, solving quadratic equations and some polynomial equations, and partial fractions. An interactive version and a Welsh language version are available. Cwrs Gloywi Algebra An Algebra Refresher. This booklet revises basic algebraic techniques. This is a Welsh language version. Quadratic equations 1 This leaflet explains how to solve a quadratic equation by factorisation. (Engineering Maths First Aid Kit 2.14) Quadratic equations 2 This leaflet explains how quadratic equations can be solved using the formula. (Engineering Maths First Aid Kit 2.15) Quadratic equations This booklet explains how quadratic equations can be solved by factorisation, by completing the square, using a formula, and by drawing graphs. Maths EG Computer-aided assessment of maths, stats and numeracy from GCSE to undergraduate level 2. These resources have been made available under a Creative Commons licence by Martin Greenhow and Abdulrahman Kamavi, Brunel University. Mathematics Support Materials from the University of Plymouth Support material from the University of Plymouth: The output from this project is a library of portable, interactive, web based support packages to help students learn various mathematical ideas and techniques and to support classroom teaching. There are support materials on ALGEBRA, GRAPHS, CALCULUS, and much more.
This material is offered through the mathcentre site courtesy of Dr Martin Lavelle and Dr Robin Horan from the University of Plymouth. University of East Anglia (UEA) Interactive Mathematics and Statistics Resources The Learning Enhancement Team at the University of East Anglia (UEA) has developed a series of interactive resources accessible via Prezi mind maps: Steps into Numeracy, Steps into Algebra, Steps into Trigonometry, Bridging between Algebra and Calculus, Steps into Calculus, Steps into Differential Equations, Steps into Statistics and Other Essential Skills. Solving Quadratic Equations This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. (Mathtutor Video Tutorial) This resource is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd. Solving Quadratic Equations This unit is about the solution of quadratic equations. These take the form ax^2+bx+c = 0. We will look at four methods: solution by factorisation, solution by completing the square, solution using a formula, and solution using graphs. (Mathtutor Video Tutorial) The video is released under a Creative Commons license Attribution-Non-Commercial-No Derivative Works and the copyright is held by Skillbank Solutions Ltd.
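For reference alongside these resources, here is a minimal sketch (my own, not taken from any of the listed materials) of the "solution using a formula" method for ax² + bx + c = 0:

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula.
    cmath.sqrt is used so complex roots are returned when the
    discriminant b**2 - 4*a*c is negative."""
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(solve_quadratic(1, -3, 2))   # roots of x^2 - 3x + 2 = 0: x = 2 and x = 1
```

The same equation factorises as (x − 1)(x − 2) = 0, so the formula and the factorisation method listed in the resources above agree.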
{"url":"https://www.mathcentre.ac.uk/topics/algebra/solving-quadratics/","timestamp":"2024-11-11T20:11:44Z","content_type":"text/html","content_length":"22990","record_id":"<urn:uuid:39c26cd8-6371-4223-afa9-c7271230c351>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00862.warc.gz"}
Lorentz Force: Formula, Unit, Rule & Definition

Colorful Northern Lights can occasionally be seen in the night sky in the polar regions. Vikings used to imagine valkyries riding across the night sky in search of heroes, moonlight refracting off their silver armour. However, the actual cause of the light spectacle lies in solar storms and the Lorentz force.

Experiment on the Lorentz force: ladder swing

But before we turn to large natural phenomena, let's look at the effect of the force on a smaller scale in an experiment. All you need for this is a horseshoe magnet, a ladder swing (a conductor swing) and a voltage source.

Step 1: Connect the ladder swing to the (still switched off) voltage source and position it in the homogeneous magnetic field of the horseshoe magnet.

A magnetic field is called homogeneous when it has the same field strength at every point. In the field line representation you draw it as parallel arrows, which are located at regular intervals from each other and point in the same direction. You can find out more about this in our articles on the magnetic field or the Lorentz force in current-carrying conductors.

Step 2: Turn on the voltage source and see what happens. You will find that the ladder swing moves as if by itself. Now you can try a variation of the experiment: change the orientation of the magnet and see what happens. You can see the different versions of the experiment in the following table:

Attempt 1: The north side of the magnet faces up (Figure 3). Result: the ladder swing moves to the right.
Attempt 2: The south side of the magnet faces up (Figure 4). Result: the ladder swing moves to the left.
Attempt 3: The magnet is horizontal (Figure 5). Result: the ladder swing stays at rest.

You would get a similar result if you positioned the ladder swing differently instead of the magnet.
If you look closely at the drawn field lines, you can see that in the first two experiments the direction of the magnetic field lines and the direction of the current are perpendicular to each other, while in the last experiment they run parallel. So in two cases there is a force that moves the conductor, and in the third there is not. To explain this, you need the mathematical and physical definition of the force acting here.

Lorentz force: formula, definition and unit

So what exactly do the northern lights and the moving conductor swing have in common, such that the same force is at work in both cases? In both cases there are moving charges (the charged particles of the solar storm, and the electrons in the current-carrying conductor) as well as a magnetic field (the Earth's magnetic field, and the homogeneous magnetic field of the horseshoe magnet). The combination of these two components leads to the appearance of a force, the so-called Lorentz force. The force was named after the physicist Hendrik Antoon Lorentz, who discovered it in 1895. Like every force, it is measured in newtons (N).

For a charge q moving with speed v through a magnetic field of flux density B, the magnitude of the Lorentz force is F = q · v · B · sin(α), where α is the angle between the direction of the current and the magnetic field lines. The sine can take values between 0 and 1, and the Lorentz force varies between its minimum and its maximum accordingly. Three examples of the relationship between the angle and the Lorentz force:

- α = 0°: sin(α) = 0, so F = 0 (minimum)
- α = 45°: sin(α) ≈ 0.71, so F ≈ 0.71 · q · v · B
- α = 90°: sin(α) = 1, so F = q · v · B (maximum)

You can also see the angle between the direction of the magnetic field and the direction of the current in the following figure. If the magnetic field lines and the direction of movement of the charge are perpendicular (90°) to one another, the Lorentz force is therefore greatest. At the same time, it can be explained mathematically why the conductor swing does not move in Attempt 3: in the parallel alignment the angle is 0°, so the sine, and with it the Lorentz force, is zero. In the case of perpendicular alignment, you can use a simple rule to determine the direction of the Lorentz force.
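The angle dependence is easy to check numerically with the standard magnitude formula F = q · v · B · sin(α). A minimal sketch (the function name and the sample charge, speed and field values are my own choices, not from the article):

```python
import math

def lorentz_force(q, v, B, alpha_deg):
    """Magnitude of the Lorentz force: F = q * v * B * sin(alpha)."""
    return q * v * B * math.sin(math.radians(alpha_deg))

# An electron-sized charge (1.602e-19 C) at 1e6 m/s in a 0.1 T field:
q, v, B = 1.602e-19, 1.0e6, 0.1
for alpha in (0, 45, 90):
    print(alpha, "deg ->", lorentz_force(q, v, B, alpha), "N")
```

At 0° the force vanishes, matching the motionless conductor swing of Attempt 3; at 90° it reaches its maximum q · v · B.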
Lorentz force: the three-finger rule (UVW rule)

We can therefore state that the strength of the Lorentz force depends on the angle between the magnetic field and the direction of motion. But how can you tell in which direction a charge is deflected in the magnetic field? For this you use the so-called three-finger rule, or UVW rule. The initials UVW come from the German words for cause (Ursache), mediation (Vermittlung) and effect (Wirkung).

Stretch out the thumb, index finger and middle finger of your right hand so that they are perpendicular to each other. Your thumb points in the direction of motion of the charge, i.e. the direction of the current (cause); your index finger points in the direction of the magnetic field lines (mediation). Your middle finger then points in the direction in which the conductor moves, that is, in the direction of the Lorentz force (effect).

With the three-finger rule (UVW rule) you can determine the direction of the Lorentz force on charges that move in a magnetic field perpendicular to the field lines. Your thumb points in the direction of the current flow, your index finger in the direction of the magnetic field lines, and your middle finger in the direction of the Lorentz force.

Depending on the current-direction convention, you use your right or your left hand. Usually one works with the conventional (technical) current direction, in which charge flows from plus to minus in the conductor, and uses the right hand. With the physical (electron-flow) direction, from minus to plus, you use your left hand to determine the direction of the force. You can find out more in the article on the three-finger rule.

Derivation and formula for the Lorentz force on a current-carrying conductor

The Lorentz force acts on the individual electrons in the conductor and deflects them in a certain direction. As a result, the conductor itself moves in this direction. The force on the conductor is therefore the sum of the forces on the individual electrons.
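The three-finger rule is the geometric reading of the standard vector form F = q · (v × B). A small sketch (function names mine) that reproduces the rule for a positive charge: thumb along v, index finger along B, middle finger along F:

```python
def cross(u, v):
    """Cross product u x v for 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def lorentz_direction(q, v, B):
    """F = q * (v x B); for positive q this matches the right-hand
    (UVW) rule: thumb = v, index finger = B, middle finger = F."""
    return tuple(q * c for c in cross(v, B))

# current in +x (thumb), field in +y (index) -> force in +z (middle finger)
print(lorentz_direction(1.0, (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```

Flipping the sign of q flips the force direction, which is why the electron-flow convention uses the left hand instead.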
In the following example you will learn how to use this approach to derive the formula for the force on a current-carrying conductor. If n electrons, each carrying the charge q, move through the conductor at speed v perpendicular to the field, the total force on the conductor is

F = n · q · v · B

But not even physicists go to the trouble of counting electrons individually. Instead, you combine the number of electrons n and their charge q into the total charge Q = n · q, which gives

F = Q · v · B

However, since the speed of the electrons and the total charge in the conductor are unknown, the formula has to be slightly rewritten. You can generally write speed as distance s per time t, and in the case of the conductor swing the distance s is the length L of the conductor. Inserted into the formula, you get a term that depends on the conductor length L and the time t:

F = Q · (L / t) · B

The current is the quotient of the total charge Q and the time t, I = Q / t. Both components already appear in the equation, so you can combine them into the current:

F = B · I · L

You can thus use the current, the conductor length and the magnetic flux density to calculate the force acting on a piece of conductor: the force on a conductor section is the product of the current I, the magnetic flux density B and the length L of the section.

The formula makes sense if you view the force on the conductor as the sum of the Lorentz forces on each individual electron: the force increases when more electrons flow through the conductor, which you can achieve, for example, by increasing the current.

Lorentz force between two current-carrying conductors

So far you have used a horseshoe magnet to generate a homogeneous magnetic field in the experiment. However, a magnetic field of its own also forms around any current-carrying conductor, so the horseshoe magnet is not needed here. As you can see in the figure, this magnetic field forms circles around the conductor. You can determine the direction of the field with the right-hand grip rule.
To do this, form your right hand into a fist and stretch your thumb in the direction of the current flow. Your curled fingers then point in the direction of the magnetic field.

Now you extend the experiment and bring a second conductor of the same length L into the magnetic field of the first conductor. You connect the second conductor to a voltage source so that the current flows in the same direction in both conductors. The two conductors now move towards each other. If you reverse the direction of the current in one of the two conductors, they move away from each other. This movement is triggered by the Lorentz force: each conductor sits in the magnetic field of the other, and the force on both conductors has the same magnitude.

For the force between two parallel current-carrying conductors of length L at distance r, the following applies:

F = μ0 · I1 · I2 · L / (2π · r)

The force is directed towards each other (attractive) if the current direction is the same in both conductors; if you reverse the current direction of one of the two conductors, they move apart.

You can see where this formula comes from in the following derivation. By Ampère's law, the magnetic flux density that the first conductor produces at distance r is

B1 = μ0 · I1 / (2π · r)

So you can calculate the force on the second conductor by inserting this flux density into F = B · I · L:

F2 = B1 · I2 · L = μ0 · I1 · I2 · L / (2π · r)

Analogously, the force on the first conductor results from the magnetic flux density that the second conductor generates:

F1 = B2 · I1 · L = μ0 · I2 · I1 · L / (2π · r)

The mathematical derivation shows that the formulas for both forces are the same.

So far, the charges were set in motion by switching on a voltage, which drives a current through the conductor. You can, however, also set the charges in motion mechanically.

The Lorentz force on moving charges

To illustrate this, let's extend our first experiment a little. You also need a small light bulb, which you put into the circuit in place of the voltage source. Otherwise the setup remains the same.
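The two-conductor formula is easy to evaluate. A minimal sketch (function name and the sample values are my own): for two parallel 1 m conductors carrying 1 A each at a distance of 1 m, the force is μ0/(2π) = 2 × 10⁻⁷ N, which is exactly the configuration behind the classical definition of the ampere:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability in T*m/A (classical value)

def force_between_conductors(I1, I2, L, r):
    """F = mu_0 * I1 * I2 * L / (2 * pi * r); attractive when the
    currents flow in the same direction, repulsive otherwise."""
    return MU_0 * I1 * I2 * L / (2 * math.pi * r)

print(force_between_conductors(1, 1, 1, 1))  # 2e-07
```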
Now take your index finger and carefully pull the conductor swing towards you. The lamp then flickers on briefly and goes out as soon as you stop the mechanical movement. The faster you pull the conductor swing towards you, the brighter the lamp glows. In this case you generate a short-lived voltage with the help of the Lorentz force.

By moving the conductor you also, indirectly, move the electrons inside it. Just as in the earlier experiment, there is a magnetic field and there are moving charge carriers (the electrons). This creates a Lorentz force that acts on the electrons and displaces them along the conductor. As a result (depending on the direction in which you move the conductor), a positive pole forms at one end and a negative pole at the other. A voltage briefly arises between these poles, which makes the lamp light up. One says here that a voltage is induced. If you stop the movement, no charge separation takes place in the conductor and therefore no voltage is generated. Among other things, Lenz's law underlies this experiment; everything you need to know is explained in our article on Lenz's law.

Lorentz force and centripetal force, simply explained

Finally, let's look at one more experiment. Maybe you know from the…
{"url":"https://culturalmaya.com/lorentz-force-formula-unit-rule-definition/","timestamp":"2024-11-05T05:46:08Z","content_type":"text/html","content_length":"56887","record_id":"<urn:uuid:b0418b3a-25bb-44b3-86a4-b84535403edc>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00742.warc.gz"}
implies is used to define implications between logic cases

implies is used to define logic implications. As an example, the following code will ensure that a variable x satisfies a set of linear inequalities if a binary variable d is true. If d is false, the value of x is arbitrary (the constraints could be satisfied, but they do not have to be).

d = binvar(1);
F = implies(d,A*x <= b);

implies is mainly intended for (BINARY -> BINARY) or (BINARY -> Linear constraint), although it can also be used for more general constructions such as (Linear constraint -> Linear constraint), bearing in mind that these models are typically numerically sensitive and may require a lot of binary variables to model. The following code reverses the logic: if a set of linear inequalities is satisfied, the binary variable is forced to be true.

d = binvar(1);
F = implies(A*x <= b, d);

For more examples, check out another logic programming example and the general theory. Since implies is implemented using a big-M approach, it is crucial that all involved variables have explicit bound constraints.
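The big-M reformulation behind implies can be illustrated outside YALMIP. A minimal numerical sketch in plain Python (the function name, M value and toy data are my own, not part of YALMIP): the implication "d = 1 forces a·x ≤ b" is encoded as the single inequality a·x ≤ b + M·(1 − d), which is binding when d = 1 and effectively switched off when d = 0:

```python
def implies_bigM(a, x, b, d, M=1e4):
    """Big-M encoding of 'd == 1 implies a*x <= b':
    with d == 1 the relaxed constraint reduces to a*x <= b;
    with d == 0 it becomes a*x <= b + M, i.e. inactive for large M."""
    return a * x <= b + M * (1 - d)

# d = 1: the inequality must genuinely hold
assert implies_bigM(a=2, x=3, b=10, d=1) is True   # 6 <= 10
assert implies_bigM(a=2, x=6, b=10, d=1) is False  # 12 <= 10 fails
# d = 0: the same x is feasible, the constraint is relaxed
assert implies_bigM(a=2, x=6, b=10, d=0) is True
```

This also shows why explicit bounds matter: if x can grow without bound, no finite M makes the relaxed constraint harmlessly inactive, which is exactly the caveat stated in the documentation.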
{"url":"https://yalmip.github.io/command/implies/","timestamp":"2024-11-09T10:54:51Z","content_type":"text/html","content_length":"30210","record_id":"<urn:uuid:ab28e16c-ee5d-44d2-b1ae-182963bc8cf0>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00568.warc.gz"}
Microbial Growth and Decay: A Commented Review of the Model

Alberto Schiraldi (formerly at the Dept. of Food, Environmental and Nutritional Sciences (DeFENS), University of Milan, Milan, Italy)

Advances in Microbiology (ISSN 2165-3402), Vol. 14 (2024), 1-10. Scientific Research Publishing. https://doi.org/10.4236/aim.2024.141001
Received 29 November 2023; accepted 12 January 2024; published 15 January 2024.
© 2024 by the author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

Contents: 1. Introduction; 2. The Growth Phase; 3. The Decay Phase; 4. Some Interesting Deviations; 5. Conclusions; Declaration about Funds; Conflicts of Interest; References

Abstract. The paper reviews previous publications and reports some comments about a semi-empirical model of the growth and decay process of a planktonic microbial culture. After summarizing and reshaping some fundamental mathematical expressions, the paper highlights the reasons for the choice of a suitable time origin that makes the parameters of the model self-consistent. Besides its potential applications to predictive microbiology studies and to the effects of bactericidal drugs, the model provides a suitable proxy of the fitness of the microbial culture, which can be of interest for studies on the evolution, across some thousand generations, of a Long Term Evolution Experiment.

Keywords: Microbial Cultures; Model; Time Scale; Growth and Decay; Evolution

1. Introduction

Every microbial culture behaves like a factory where substrates become living organisms [1]. This transformation seems to be a peculiar property of living cells (either pro- or eukaryotic), since it does not occur when they are absent. The cells trigger the process, but are themselves accelerators of its evolution. In this perspective, microbial growth looks like an auto-catalyzed process.
However, this is not what really happens, since the growth rate rises to a maximum and then declines toward zero, in spite of the greatly increased number of cells. This behavior is reflected by the so-called growth curve, log(N)-vs-t, which describes the trend of the population density, N, as a function of time, t. Since the first decades of the last century, many authors, including the Nobel laureate Jacques Monod [2], described the "macroscopic" evidence of the growth curve through models ([3] and references therein) that include best-fit adjustable parameters, like the maximum specific growth rate, the lag phase, etc., which were also given some biological meaning. Recent investigations turned their attention to the underlying biochemical mechanism and to the molecular peculiarities that characterize the growth of various microbial species in different media [4] [5], as well as to some collective effects, like quorum sensing and mechanosensing [6] [7]. Finally, the awareness of possible genomic mutations that can either enhance or depress the fitness of a given microbial culture led to a vast specialized literature on the evolution of microbes and the related effects, like the emergence of resistance to antibiotic drugs, or the improved fitness after thousands of generations in Long Term Evolution Experiments (LTEE) [8] [9]. These approaches could mislead one into concluding that the early models are just naïve descriptions of microbial growth. However, in spite of the wide variability of the biochemical cycles within the cells and of the chemical composition of the surrounding medium, all microbial cultures show the same kind of "macroscopic" behavior, namely, the growth curve, which therefore must obey laws that prevail over biological and biochemical differences and still deserves a reliable interpretation [10].
The present paper is a review of previous works [11] - [17] that suggest a new approach to the growth curve, looking at the microbial culture as an overall system (medium + cells) that is thermodynamically unstable and evolves irreversibly. A major novelty of this approach is that it includes the decay phase that follows the growth phase. This extension can be of interest for predictive microbiology applied to food preservation and for pharmaceutical investigations. The quantitative traits of the approach come from the description of an ideal planktonic microbial culture that may serve as a reference for every microbial culture. The empirical parameters that come from the log(N)-vs-t data are theoretically interconnected with one another, showing a self-consistency that is a common feature of every microbial culture. The present paper summarizes the fundamental aspects of the model, the details being reported elsewhere [11] - [17].

2. The Growth Phase

The model deals with a planktonic culture with cells evenly dispersed in the medium. The progress of the growth is synchronous for all the duplication lines stemming from the starting N_o cells, and no cell dies during the growth. The basic assumption of the model is a variable generation time, τ, that allows the sigmoid trend of the growth curve to be reproduced as the best fit of the experimental log(N) data that significantly (confidence > 95%) differ from the starting population log(N_o). This empirical basis therefore reflects a duplication process in action and does not directly concern any earlier adjustment of the cells. This selection of the experimental data differentiates the present model from all previous ones, which propose best fits that lump together data that clearly reflect a no-growth condition and data that indicate an increase of the cell population.
This apparent contradiction comes from the common opinion that the microbes must first "self-adjust" to the medium at the beginning of any plate-count experiment [18], namely, that the growth process actually starts before the onset of cell duplication. However, such a reasonable opinion does not justify the use of a single function to fit data that deal with the increase of the cell population, namely, the experimental evidence of a change of the population density, together with data that indicate that no duplication is occurring. The two conditions (increase and no increase of N) require two different mathematical approaches, while the description of the whole growth process should come from side constraints imposed by the model used. The present model takes such a gap between the no-duplication and duplication phases into account through a wider vision of the issue, stating that the time origin of the whole process (thus including the self-adjustment phase) does not necessarily coincide with the start of the experiment. This statement raises the problem of defining an ideal time origin of the growth. It was noticed that the condition N = 1 is the lowest requirement that obeys the constraint that the system has to host a trigger, without which no duplication is possible. For this reason, the "ideal" time origin, θ = 0, of the growth process should comply with this requirement. In order to single out the "ideal" time origin using the available experimental data, one has to focus on the best evidence of the duplication process, namely, the fastest duplication rate. This corresponds to the largest specific growth rate, Ṅ/N, namely, the flex point of the growth curve. This condition, dubbed "balanced growth" [5], reflects the perfect synchronism of the biochemical activities underlying the cell duplication, namely, with no "self-adjustment" delay: the process is purely cell duplication.
This means that the extrapolation of the straight line tangent to the growth curve at its flex point down to the level log(N) = 0 allows singling out the ideal time origin, θ = 0. It is also worth noticing that the condition log(N) = 0 (namely, N = 1) holds for any log base used for the growth curve. Furthermore, since the argument of the logarithm must be a pure number, a correct expression would indeed be log(N/[N]), where [N] stands for the units used for the microbial population (e.g., CFU, CFU/mL, CFU/g, etc.). This means that the extrapolation process mentioned above can single out the time origin no matter the units of N and the log base, which can be helpful in many practical applications. Figure 1 clarifies this conclusion. This choice creates a self-consistency between the time scale and the progress of the cell duplication. The experimental data to treat must now be those that actually reflect the increase of N, neglecting those that do not significantly differ from the starting value, N_o. These data enter a best-fit routine based on the equation for the cell duplication,

N = N_o · 2^[(θ − θ_o)/τ(θ)]   (1)

where τ is the variable generation time and θ_o is the time lapse that precedes the onset of the duplication process. The experimental evidence indicates that the duplication rate, 1/τ, is null for θ = θ_o and for θ → ∞. A simple function that obeys this condition [16] [17] is:

τ = α/(θ − θ_o) + (θ − θ_o)/β   (2)

Equation (2) allows rewriting Equation (1) in the log_2 scale,

log_2(N/N_o) = β(θ − θ_o)² / [αβ + (θ − θ_o)²]   (3)

Equation (3) states that the largest extent of the cell duplication (for θ → ∞) is N_max = N_o · 2^β, which identifies β as the number of duplication steps experienced by each of the N_o duplication lines. A straightforward algebra [13] shows that the specific duplication rate goes through a maximum, μ = (Ṅ/N)_max, at (θ* − θ_o) = (αβ/3)^(1/2), when log_2(N/N_o)* = β/4.
The value of μ is

μ = (3√3/8) · √(β/α)   (4)

This means that the straight line tangent to the growth curve in the plot log_2(N)-vs-θ corresponds to the equation

log_2(N) = [log_e(2) · (3√3/8) · √(β/α)] · θ = log_e(2) · μ · θ   (5)

This straight line goes through log_2(N) = [log_2(N_o) − β/8] for θ = θ_o, while it crosses the log_2(N_o) and log_2(N_max) levels at θ(0) and θ_end, respectively (Figure 2), with (θ_end − θ_o) = 3(θ* − θ_o) and [θ(0) − θ_o] = (θ* − θ_o)/3. For θ < 0, any cellular activity does not directly aim at the duplication. This is another main difference from all the other models reported in the literature. It is important to remember that the above parameters are properties of the whole system (cells + medium). This means that microbial cultures prepared by pouring a given microbial population into different media will show different values of the parameters. All the above relationships hold for the growth curves of every prokaryotic microbial culture so far checked by the author, including those quite far from planktonic conditions, once the time scale of the experiment is referred to its own time origin θ = 0. When so, one can use the reduced variables θ_R = (θ − θ_o)/(θ* − θ_o) and ξ = log_2(N/N_o)/β to gather all the growth curves in a single ξ-vs-θ_R master plot [13] that corresponds to the equation

ξ = θ_R² / (3 + θ_R²)   (6)

Such a master plot reflects the collective self-consistent behavior underlying the shape of the growth curve of every microbial culture, in spite of the physical, chemical, biochemical and biological peculiarities of the system. Another important relationship, easily achievable through straightforward algebra, is:

μ θ_o = [log_2(N_o) − β/8] / log_e(2)   (7)

which interconnects three fundamental parameters of the model, namely, μ, β and θ_o.
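These interconnections can be verified numerically. A sketch in Python (the values of α and β are invented, for illustration only): it evaluates Equation (3), checks that the flex point sits at (αβ/3)^(1/2) with the curve at β/4, that the maximum slope of log_2(N/N_o) there is (3√3/8) · √(β/α), and that the master-curve relation of Equation (6) holds:

```python
import math

alpha, beta = 2.0, 10.0  # invented parameter values, for illustration only

def log2_ratio(u):
    """Equation (3): log2(N/No) as a function of u = theta - theta_o."""
    return beta * u**2 / (alpha * beta + u**2)

u_star = math.sqrt(alpha * beta / 3)  # predicted flex point

def slope(u):
    """Numerical derivative d[log2(N/No)]/du (central difference)."""
    h = 1e-6
    return (log2_ratio(u + h) - log2_ratio(u - h)) / (2 * h)

# at the flex point the curve has climbed to beta/4 ...
assert abs(log2_ratio(u_star) - beta / 4) < 1e-9
# ... the slope is maximal there ...
assert slope(u_star) > slope(0.9 * u_star) and slope(u_star) > slope(1.1 * u_star)
# ... and equals (3*sqrt(3)/8) * sqrt(beta/alpha) in log2 units
assert abs(slope(u_star) - (3 * math.sqrt(3) / 8) * math.sqrt(beta / alpha)) < 1e-4
# master curve, Equation (6): xi = thetaR^2 / (3 + thetaR^2)
for theta_R in (0.5, 1.0, 2.0):
    xi = log2_ratio(theta_R * u_star) / beta
    assert abs(xi - theta_R**2 / (3 + theta_R**2)) < 1e-9
print("all checks passed")
```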
In particular, Equation (7) states that, for given N_o and β, the shorter the latency phase that precedes the duplication onset, the faster the specific duplication rate. This makes sense as long as θ_o indicates the promptness of the cells to duplicate in the surrounding medium. Moreover, since (μ θ_o) > 0, it follows that β < 8 log_2(N_o). Finally, it can be of some interest to notice that the formal treatment used to describe the duplication process typical of prokaryotic microbes also holds for eukaryotic yeasts and molds that show a sigmoid growth curve with a flex point. What one needs to do is just to put

log_e(N) = log_e(N_o) + [(θ − θ_o)/τ] · log_e(n)   (8)

with n > 1, and replace β with νβ and log_2 with log_e, where ν = log_e(n), in the above equations.

3. The Decay Phase

The experimental evidence shows that most microbial cultures undergo a decline once they have attained a maximum level of the population density. Between the ascending and descending trends of N, a steady intermediate phase or a broad maximum can occur; the literature reports both kinds of trends. Since the relationship between growth and decay can be relevant in food predictive microbiology as well as in antibiotic pharmacology, an overall description seemed of some interest. The above growth model finds a natural extension to the decay phase through the assumption of a basic principle: in the absence of any external adverse agent, death primarily hits the oldest cells [16]. The N_o starting cells of the model planktonic culture have the same age but, after a few duplication steps, the population hosts cells of different ages: the newest born represent 50% of the whole population (no cell death occurs during the growth process) and soon overwhelm the starting N_o ancestors. For this reason, if one accepts the above principle for cell death, the first to die will be just the N_o ancestors, namely, a negligible fraction of the population.
Small fractions will follow, until death hits the younger generations that correspond to major fractions of N_max. Once reported on a log scale, the start of the decay is almost undetectable and remains so for a while, giving the impression of a culture in a steady condition if the average life span of the cells is large compared to the duplication time, τ. Conversely, if the life span is comparable to or lower than τ, the observed trend of N shows a broad maximum between the ascending and descending branches. This approach justifies the "cascade" effect of the decay trend, which shows an increasingly downward slope [16] [17]. Formally, one can describe such behavior through a pseudo-exponential function of the time, t, elapsed after the attainment of the N_max threshold,

N_surv = N_max · exp(−t²/d)   (9)

where d is a constant. To be reconciled with the ascending trend of N, Equation (9) requires an adjustment of the time scale to align it with the selection of the time origin θ = 0 described above. The real start of the decay, θ_s, remains rather uncertain and therefore becomes a further parameter to determine through a best-fit routine based on the expression for the surviving population,

N_surv = N_(θ=θ_s) · 2^[−(θ − θ_s)/τ_d]   (10)

where τ_d is the decay pace. Just like the generation time, τ, the decay pace τ_d depends on the conditions of the medium, which likely worsen for increasing (θ − θ_s). Putting τ_d = d/(θ − θ_s), one finally obtains the expression

N_surv = N_(θ=θ_s) · 2^[−(θ − θ_s)²/d]   (11)

Equation (11) accounts for the above picture of the decay process through a ranking of the cell ages that parallels, in the reverse direction, the growth process [16] [17].
Finally, if one identifies N_(θ=θ_s) = N_max, the number of viable cells throughout the decay phase is

log_2(N_surv) = log_2(N_o) + β − (θ − θ_s)²/d   (12)

To define an expression for the overall (growth + decay) profile of N, one can use an expression like

log_2(N) = log_2(N_o) + β(θ − θ_o)² / [αβ + (θ − θ_o)²] − (θ − θ_o)²/δ   (13)

where δ ≠ d is an adjustable parameter. If δ ≫ (θ_s − θ_o)², Equation (13) leads to practically the same value as Equation (3) for θ = θ_s, namely, log_2(N_max). Otherwise, the curve bends downward after going through a broad maximum below the level [log_2(N_o) + β]. Figure 3 shows some of the expected trends. Applications of Equation (13) to real growth-and-decay data appeared in previous papers, which also report the modified decay trends observed after the addition of bactericidal drugs [16].

4. Some Interesting Deviations

When facing adverse conditions, some microorganisms are able to modify their own phenotype or physiological behavior [19], as in the case of sporulation, or even their genotype, as in the case of mutants that show resistance to antibiotic drugs [16] [17]. When so, the growth-and-decay trend significantly deviates from the above model: an easy-to-detect macroscopic signal that something important is occurring. In other cases, the changes are very subtle and require specific approaches to perceive their effects. These approaches deal with observations of the evolution of a given microbial culture over the course of several thousand generations, as in Lenski's LTEE [8] [9]. This kind of investigation highlights the previous history of the cells, the memory of which seems related to the so-called fitness (or any suitable proxy of it) of the culture. As long as β is related to μ and θ_o (Equation (7)), which are both representative of the efficiency of the cells in a given surrounding medium, β can be an effective proxy of the fitness.
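Equation (13) is easy to explore numerically. A sketch with invented parameter values (N_o = 100, α = 2, β = 10, θ_o = 1, and two choices of δ; none of these numbers come from the paper) showing the two regimes: a very large δ leaves a plateau near log_2(N_o) + β, while a small δ bends the curve into a broad maximum below that level, followed by a decline:

```python
import math

def log2_N(theta, No=100.0, alpha=2.0, beta=10.0, theta_o=1.0, delta=1e6):
    """Equation (13): overall growth-and-decay profile in log2 units.
    All parameter values are invented, for illustration only."""
    u = theta - theta_o
    if u <= 0:
        return math.log2(No)
    return math.log2(No) + beta * u**2 / (alpha * beta + u**2) - u**2 / delta

ceiling = math.log2(100.0) + 10.0  # log2(No) + beta, the pure-growth plateau

# very large delta: the profile settles near the plateau
assert abs(log2_N(60.0, delta=1e9) - ceiling) < 0.1

# small delta: a broad maximum below the plateau, followed by a decline
curve = [log2_N(i * 0.1, delta=200.0) for i in range(1, 1000)]
peak = max(curve)
assert peak < ceiling
assert curve[-1] < peak
print("peak:", round(peak, 2), "plateau:", round(ceiling, 2))
```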
The increase of fitness after some thousand generations in an LTEE [8] (all runs starting from the same N_o level in the same medium) implies an increase of μ and a decrease of θ_o (Figure 4). These changes take place together with possible genomic mutations, which do not interfere with the growth progress [8] [9].

5. Conclusions

In spite of its semi-empirical nature and the assumed idealized conditions, the growth-and-decay model summarized in the present paper seems rather flexible and adequate to describe the behavior of real microbial cultures. It suggests a vision of the evolution of a given culture, in a given medium, that relates the duplication activity to the preceding no-growth phase, thanks to a suitable choice of a virtual time origin. The phenomenological description of the growth curve, through the assumption of a variable generation time, reveals some important interconnections between the parameters of the model (all determined through best-fit treatments of experimental data). These suggest an overall vision of the growth process as the result of a cooperative behavior of the microbes starting from the no-growth phase that precedes the duplication onset. Ranking the population fractions according to their respective ages allows a naïve, but reasonable, interpretation of the steady plateau, or broad maximum, between the rising and declining branches of the population density. The satisfactory check of the model against a number of experimental data sets leaves me confident about its reliability. The proposed approach and related model are easy to use, thanks to the simple mathematical treatment of the experimental data, and have a number of possible applications to the microbial spoilage of food, to pharmaceutical investigations of the efficacy of bactericidal and bacteriostatic drugs, and to studies on the evolution of microbial organisms in chemostat cultures [20].

Declaration about Funds

The author did not receive funds to sustain the work and the publication of the paper.
Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

Cite this paper: Schiraldi, A. (2024) Microbial Growth and Decay: A Commented Review of the Model. Advances in Microbiology, 14, 1-10. https://doi.org/10.4236/aim.2024.141001

References

[1] Neidhardt, F.C. (1999) Bacterial Growth: Constant Obsession with dN/dt. Journal of Bacteriology, 181, 7405-7408. https://doi.org/10.1128/JB.181.24.7405-7408.1999
[2] Monod, J. (1949) The Growth of Bacterial Cultures. Annual Review of Microbiology, 3, 371-394. https://doi.org/10.1146/annurev.mi.03.100149.002103
[3] Good, B.H. and Hallatschek, O. (2018) Effective Models and the Search for Quantitative Principles in Microbial Evolution. Current Opinion in Microbiology, 45, 203-212. https://doi.org/10.1016/j.mib.2018.11.005
[4] Egli, T. (2015) Microbial Growth and Physiology: A Call for Better Craftsmanship. Frontiers in Microbiology, 6, 287-298. https://doi.org/10.3389/fmicb.2015.00287
[5] Schaechter, M., et al. (2006) From Growth Physiology to Systems Biology. 9, 157-161.
[6] Hense, B.A., Kuttler, C., Müller, J., Rothballer, J.M., Hartmann, A. and Kreft, J.U. (2007) Does Efficiency Sensing Unify Diffusion and Quorum Sensing? Nature Reviews Microbiology, 5, 230-239. https://doi.org/10.1038/nrmicro1600
[7] Leiphart, R.J., Chen, D., Peredo, A.P., Loneker, A.E. and Janmey, P.A. (2019) Mechanosensing at Cellular Interfaces. Langmuir, 35, 7509-7519. https://doi.org/10.1021/acs.langmuir.8b02841
[8] Lenski, R.E. (2017) What Is Adaptation by Natural Selection? Perspectives of an Experimental Microbiologist. PLOS Genetics, 13, e1006668. https://doi.org/10.1371/journal.pgen.1006668
[9] Baverstock, K., et al. (2021) The Gene: An Appraisal. 164, 46-62.
[10] Gonze, D., Coyte, K.Z., Lahti, L. and Faust, K. (2018) Microbial Communities as Dynamical Systems. Current Opinion in Microbiology, 44, 41-49. https://doi.org/10.1016/j.mib.2018.07.004
[11] Schiraldi, A. (2017) Microbial Growth in Planktonic Conditions. Cell and Developmental Biology, 6, 185. https://doi.org/10.4172/2168-9296.1000185
[12] Schiraldi, A. (2017) A Self-Consistent Approach to the Lag Phase of Planktonic Microbial Cultures. Single Cell Biology, 6, Article ID: 1000166.
[13] Schiraldi, A. (2020) Growth and Decay of a Planktonic Microbial Culture. International Journal of Microbiology, 2020, Article ID: 4186468. https://doi.org/10.1155/2020/4186468
[14] Schiraldi, A. and Foschino, R. (2021) An Alternative Model to Infer the Growth of Psychrotrophic Pathogenic Bacteria. Journal of Applied Microbiology, 132, 642-653.
[15] Schiraldi, A. and Foschino, R. (2021) Time Scale of the Growth Progress in Bacterial Cultures: A Self-Consistent Choice. RAS Microbiology and Infectious Diseases, 1, 1-8. https://doi.org/10.51520/2766-838X-12
[16] Schiraldi, A. (2021) Batch Microbial Cultures: A Model That Can Account for Environment Changes. Advances in Microbiology, 11, 630-645. https://doi.org/10.4236/aim.2021.1111046
[17] Schiraldi, A. (2022) The Origin of the Time Scale: A Crucial Issue for Predictive Microbiology. Journal of Applied & Environmental Microbiology, 10, 35-42. https://doi.org/10.12691/jaem-10-1-4
[18] Bertrand, R.L. (2019) Lag Phase Is a Dynamic, Organized, Adaptive, and Evolvable Period That Prepares Bacteria for Cell Division. Journal of Bacteriology, 201, e00697-18. https://doi.org/10.1128/JB.00697-18
[19] Balaban, N.Q., Merrin, J., Chait, R., Kowalik, L. and Leibler, S. (2004) Bacterial Persistence as a Phenotypic Switch. Science, 305, 1622-1625. https://doi.org/10.1126/science.1099390
[20] Ziv, N., Brandt, N.J. and Gresham, D. (2013) The Use of Chemostats in Microbial Systems Biology. Journal of Visualized Experiments, 80, e50168. https://doi.org/10.3791/50168-v
Monoids up to coherent homotopy in two-level type theory

When defining a monoid structure on an arbitrary type in HoTT, one should require a multiplication that is not only homotopy-associative but also comes with an infinite tower of higher homotopies. For example, in dimension two one should have a condition analogous to Mac Lane's pentagon for monoidal categories. We call such a structure a monoid up to coherent homotopy. The goal of my internship in Stockholm was to formalize these in Agda. It is well known that infinite towers of homotopies are hard to handle in plain HoTT, so we postulate a variant of two-level type theory, with a strict equality and an interval type. We then adapt the set-theoretic treatment of monoids up to coherent homotopy using operads, as presented by Clemens Berger and Ieke Moerdijk. Our main results are that (a) monoids up to coherent homotopy are invariant under homotopy equivalence, and (b) loop spaces are monoids up to coherent homotopy. In this talk I will present the classical theory of monoids up to coherent homotopy and indicate how two-level type theory can be used to formalize it.

Deducteam seminar
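For concreteness, the first two levels of the coherence tower mentioned in the abstract can be written out as follows (the notation here is illustrative, not taken from the talk):

```latex
% A multiplication \mu : X \times X \to X is homotopy-associative if
% there is an associator, i.e. a dimension-one homotopy
\[
  \alpha_{x,y,z} \;:\; \mu(\mu(x,y),z) \,=\, \mu(x,\mu(y,z)).
\]
% In dimension two, the two composites of associators connecting the
% five bracketings of a fourfold product,
\[
  \mu(\mu(\mu(x,y),z),w) \;=\; \cdots \;=\; \mu(x,\mu(y,\mu(z,w))),
\]
% must themselves be connected by a homotopy (Mac Lane's pentagon),
% and so on in every higher dimension.
```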
How to Convert Inches to Linear Yards

An inch (symbol: in) is a US customary and imperial unit of length equal to 1/12 of a foot or 1/36 of a yard; in 1959 it was defined to be exactly 25.4 millimeters. A double-quote mark (") is often used instead of a double-prime for convenience, and a standard 12-inch ruler or a tape measure, available at any local retailer or home center, is the usual tool for measuring length.

A yard (symbol: yd) is likewise a US customary and imperial unit, equal to 3 feet or 36 inches. The origin of the yard as a unit is unclear, but in 1959 an international agreement standardized it as exactly 0.9144 meters. One meter is therefore about 1.0936 yards, or 3.2808 feet.

A linear yard (also written "lineal yard") measures length only. For rolled goods such as fabric or wallpaper, it is a 36-inch length of the roll at whatever width the roll comes in: if the width is 50", then 1 linear yard is a piece 50 by 36 inches. Because linear yards do not account for width, they can be misleading, and they cannot be converted to square yards (an area measure) unless the width is known. For 54"-wide goods, the rule of thumb is: multiply the number of linear yards by 13.5 (the square feet in one linear yard of 54" material), then divide the result by 9 to get square yards.

Converting Inches and Yards

To convert an inch measurement to a yard measurement, divide the length by 36:

yards = inches ÷ 36

For example, 120 inches is 120 ÷ 36 = 3 1/3 yards, and a piece of fabric 44 inches wide and 78 inches long is 44 ÷ 36 ≈ 1.22 yards (1 yard 8 inches) wide by 78 ÷ 36 ≈ 2.17 yards (2 yards 6 inches) long. Going the other way, multiply yards by 36: 1 yd = 36 in, 2 yd = 72 in, 3 yd = 108 in, and so on. To convert yards to feet, multiply by 3 (so 5 1/3 yards is 16 feet), and to convert inches to linear feet, divide by 12 (1 inch ≈ 0.0833 linear feet).

Prices convert along with the units: fabric priced at $10 per linear yard costs $10 ÷ 0.9144 m/yd ≈ $10.94, roughly $11, per linear meter.

Note that "linear inches" is something else entirely: a term invented by the airline industry to measure baggage, where an item's size in linear inches is the sum of its length, width, and height.
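The length conversions above are easy to script. A minimal sketch in Python (the function names are mine, not from any particular library):

```python
INCHES_PER_YARD = 36      # 1 yard = 3 feet = 36 inches
METERS_PER_YARD = 0.9144  # international yard, defined in 1959

def inches_to_yards(inches):
    """Convert a length in inches to yards (1 yd = 36 in)."""
    return inches / INCHES_PER_YARD

def yards_to_inches(yards):
    """Convert a length in yards to inches."""
    return yards * INCHES_PER_YARD

print(inches_to_yards(120))  # 120 in = 3 1/3 yd
print(yards_to_inches(3))    # -> 108
```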
Linear Yards versus Square Yards

Square yards measure area and linear yards measure length, so converting between them always requires the width of the material. To convert linear feet to square yards, multiply the length by the width to get square feet, then divide by 9. To illustrate, a carpet with a length of 10 feet and a width of 12 feet is 120 square feet, or 120 ÷ 9 ≈ 13.3 square yards. A carpet installer needs this constantly: carpet comes in three roll widths, 12 feet, 13.5 feet, and 15 feet, and a 12-foot roll is 4 yards wide, so each linear yard of it covers 1 yd × 4 yd = 4 square yards. The same logic applies to wallpaper: if a roll is 30 inches wide, a lineal yard of it is a 36-inch length, i.e. a 36-by-30-inch piece.

For fabric, most rolls are 38", 50", 54", or 60" wide. In general, one linear yard of W-inch-wide material is W ÷ 36 square yards, so a linear yard of 55" fabric is about 1.53 square yards. For 54" goods the traditional rules of thumb are:

• Convert 54"-wide goods to square yards: multiply the number of linear yards by 13.5, then divide the result by 9.
• Convert square yards to linear yards: multiply the square-yard quantity by 9, then divide the result by 13.5.

(When estimating linear yards of 54" goods from square footage, divide by 12 rather than 13.5 to include waste, or by 9 for patterns with large repeats.) If the yardage for a particular width is known and the width of the material changes, work backward through the same formulas to determine the new yardage.

There is some variability in how composite material is measured in the US: woven materials, referred to as broadgoods, are sold by the linear yard (LY), some composite material is measured by the square foot (SF), and, to make matters more confusing, unidirectional prepreg is measured by the pound. For the same reason, a weight given in ounces per linear yard cannot be converted to grams per square meter without knowing the material's width, and an area weight says nothing about thickness: a square yard of 1-inch-thick material weighs more than a square yard of 1/2-inch material of the same density.

Cubic and linear measurements are also two different things: linear feet measure length, while cubic feet measure volume. If your material comes in a set width and thickness, the volume of a given total length is simply length × width × thickness. The same idea, area times depth for rectangular, circular, annular, or triangular areas, underlies cubic-yard estimates for landscape material, mulch, fill, gravel, cement, and sand, from which project cost can be figured per cubic foot, cubic yard, or cubic meter.

Finally, in airline "linear inches" (length + width + height), a 20-by-20-by-5-inch suitcase and a 1-by-1-by-43-inch fishing rod are the same size: 45 linear inches each.
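The width-dependent conversion between linear and square yards generalizes to any roll width. A small Python sketch (function names are illustrative):

```python
def linear_to_square_yards(linear_yards, width_inches):
    """Square yards covered by `linear_yards` of material `width_inches` wide.

    One linear yard is 36 inches long, and one square yard is
    36 * 36 = 1296 square inches.
    """
    return linear_yards * 36 * width_inches / 1296

def square_to_linear_yards(square_yards, width_inches):
    """Linear yards of `width_inches`-wide material needed for an area."""
    return square_yards * 1296 / (36 * width_inches)

print(linear_to_square_yards(1, 54))    # one linear yard of 54" goods -> 1.5
print(square_to_linear_yards(1.5, 54))  # -> 1.0
```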
ThmDex – An index of mathematical definitions, results, and conjectures.

ansatz: 'an educated guess'
axiom: a mathematical statement which is accepted as true without a proof, in order to start somewhere and derive a theory
characterization: a result which says that two mathematical statements are equivalent
closed-form expression: often used in the context of real or complex calculus to describe a mathematical expression which consists only of real or complex arithmetic, such that any functions present in the expression are only the elementary functions
conjecture: a result which has not yet been proven but which is strongly suspected to be true
corollary: a result which follows effortlessly from another result
exercise: a light challenge which requires only the application of routine operations already known to the student (compare to 'problem')
factorization: a multiplicative-type algebraic operation (e.g. integer multiplication, binary set intersection, ...) repeatedly applied (compare to 'partition')
gedankenexperiment: 'a thought experiment'
hypothesis: a word deriving from Latin, a synonym for 'assumption'
lemma: a supporting result, something used to reach a theorem or a proposition
parameter: in the context of a collection of mathematical objects, an element of the index set
partition: an additive-type algebraic operation (e.g. integer addition, binary set union, ...) repeatedly applied (compare to 'factorization')
penultimate step: the expression or statement from which the final theorem (or proof) follows with a single step of deduction
portmanteau theorem
postulate: an archaic near-synonym for 'axiom'
proposition: a result of secondary interest
problem: a substantial challenge which may require even the invention of new methods not yet known to the student (compare to 'exercise')
subdefinition: a synonym for a special case or a particular case of a definition
subresult: a synonym for a special case or a particular case of a result
theorem: a result of primary interest
well-defined: often used of a function or mapping recently introduced or defined, meaning that the mapping satisfies the actual definition of a map, being a right-unique and left-total binary relation
Square Feet to Square Miles Conversion (sq ft to sq mi)

How to Convert Square Feet to Square Miles

To convert a measurement in square feet to a measurement in square miles, divide the area by the conversion ratio of 27,878,400 square feet per square mile. Since one square mile is equal to 27,878,400 square feet, you can use this simple formula:

square miles = square feet ÷ 27,878,400

The area in square miles is equal to the area in square feet divided by 27,878,400. For example, here's how to convert 50,000,000 square feet to square miles using the formula above:

square miles = 50,000,000 sq ft ÷ 27,878,400 = 1.793503 sq mi

Square feet and square miles are both units used to measure area. Keep reading to learn more about each unit of measure.

What Is a Square Foot?

One square foot is equivalent to the area of a square with sides that are each 1 foot in length.^[1] One square foot is equal to 144 square inches or 0.092903 square meters. The square foot is a US customary and imperial unit of area, sometimes also referred to as a square ft. Square feet can be abbreviated as sq ft, and are also sometimes abbreviated as ft²; for example, 1 square foot can be written as 1 sq ft or 1 ft². You can use a square footage calculator to calculate the area of a space if you know its dimensions.

What Is a Square Mile?

One square mile is equal to the area of a square with sides that are each 1 mile long, which is roughly 2.59 square kilometers or exactly 640 acres. The square mile is a US customary and imperial unit of area. Square miles can be abbreviated as sq mi, and are also sometimes abbreviated as mi²; for example, 1 square mile can be written as 1 sq mi or 1 mi².
Square Foot to Square Mile Conversion Table

Square Feet          Square Miles
1 sq ft              0.00000003587 sq mi
2 sq ft              0.00000007174 sq mi
3 sq ft              0.00000010761 sq mi
4 sq ft              0.00000014348 sq mi
5 sq ft              0.00000017935 sq mi
6 sq ft              0.00000021522 sq mi
7 sq ft              0.00000025109 sq mi
8 sq ft              0.00000028696 sq mi
9 sq ft              0.00000032283 sq mi
10 sq ft             0.0000003587 sq mi
100 sq ft            0.000003587 sq mi
1,000 sq ft          0.00003587 sq mi
10,000 sq ft         0.000359 sq mi
100,000 sq ft        0.003587 sq mi
1,000,000 sq ft      0.03587 sq mi
10,000,000 sq ft     0.358701 sq mi
100,000,000 sq ft    3.587 sq mi

References
1. Merriam-Webster, square foot, https://www.merriam-webster.com/dictionary/square%20foot
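The division above is trivial to automate. A minimal Python sketch, with the constant derived from 5,280 feet per mile:

```python
SQ_FT_PER_SQ_MI = 5280 ** 2  # 27,878,400 square feet in a square mile

def square_feet_to_square_miles(sq_ft):
    """Convert an area in square feet to square miles."""
    return sq_ft / SQ_FT_PER_SQ_MI

def square_miles_to_square_feet(sq_mi):
    """Convert an area in square miles to square feet."""
    return sq_mi * SQ_FT_PER_SQ_MI

# Example from the text: 50,000,000 sq ft
print(round(square_feet_to_square_miles(50_000_000), 6))  # -> 1.793503
```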
What Is The Mean Symbol On Ti 84? Arithmetic Mean! (2024)

The mean symbol on a TI-84 calculator refers to the statistical function used to calculate the arithmetic mean, or average, of a data set. The mean is a measure of central tendency: the sum of all values in a data set divided by the total number of values. It is commonly used in fields such as mathematics, statistics, economics, and finance to describe a data set's central location. On a TI-84, the mean is accessed through the calculator's statistical functions.

- The mean symbol is a statistical function on a TI-84 calculator.
- Its primary use is computing the arithmetic mean, or average.
- Calculating the mean involves adding all data values and dividing by the number of values.
- The mean function is used in multiple fields, including mathematics, statistics, and finance.

Accessing the mean function on a TI-84 is simple: enter the data values into a list, then run the "1-Var Stats" function within the STAT menu. The calculator computes various statistical measures for the data set, including the mean.

Understanding the Mean Symbol on TI-84: A Comprehensive Guide

On the TI-84, the mean can also be computed directly with the function mean( (as mean(list)), which calculates the average of a data set.

Key takeaways:
- The mean symbol on the TI-84 refers to the average of a data set.
- It is a built-in statistical function, useful for quick calculations.
- The TI-84 is commonly used in schools and colleges for teaching math and statistics.
- Understanding the mean symbol on the TI-84 helps students efficiently analyze and interpret data.

Five facts about the mean symbol on the TI-84:
- The TI-84 is produced by Texas Instruments, a major producer of advanced technology tools for education and engineering.
- The mean can be accessed through the 1-Var Stats function, which calculates a range of statistics including the mean (average) and the standard deviation.
- The mean calculated by the TI-84 is the sum of all data points in a data set divided by the total number of data points, a central measure of the data's tendency.
- In addition to the mean, the TI-84 can calculate other measures of central tendency, such as the median and mode, through its built-in functions.
- The TI-84 is widely used across the United States in high-school and college-level courses, particularly mathematics, statistics, and engineering.

Understanding the TI-84 Calculator

The TI-84 is a popular calculator among students and professionals alike, and the mean symbol is one of the many functions it offers. The sections below walk through the basics of the calculator and what the mean symbol is used for.
Introduction to the TI-84 Calculator

The TI-84 is a graphing calculator manufactured by Texas Instruments. It is a reliable, robust device that can solve a wide range of mathematical problems, which makes it popular with students and professionals alike. Key points:

- The TI-84 is a graphing calculator: it can plot and analyze graphs and functions.
- It has a large display screen for viewing complex computations and equations.
- Its functions cover trigonometry, statistics, probability, and calculus.

Understanding the Mean Symbol on the TI-84

To compute the mean using the TI-84, follow these steps:

- Enter the data set by pressing the STAT button and selecting "1: Edit".
- Once the data is entered, use the arrow keys to highlight the CALC menu, then select "1: 1-Var Stats".
- Press ENTER to calculate the mean; the result is displayed on the screen.

A few more points to keep in mind about the mean on the TI-84:

- The TI-84 calculates the arithmetic mean: the sum of all the values in a data set divided by the number of values.
- The mean symbol is written "x-bar" (x̄).
- The mean is used in a wide range of statistical calculations, most often as a measure of central tendency.

The TI-84 is an amazing tool that provides precise mathematical calculations; to calculate the mean, simply enter your data and follow the steps outlined above.
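The 1-Var Stats output described above is easy to sanity-check off-calculator. A minimal sketch using Python's standard library (the data values here are made up for illustration):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # example data set

n = len(data)                      # sample size, reported as n
x_bar = statistics.fmean(data)     # arithmetic mean, reported as x̄
sx = statistics.stdev(data)        # sample standard deviation, Sx
sigma_x = statistics.pstdev(data)  # population standard deviation, σx

print(n, x_bar, sigma_x)  # 8 5.0 2.0
```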
The mean symbol is just one of the many functions this powerful calculator offers; overall, the TI-84 is an essential tool for anyone who needs to solve complex math problems with ease and efficiency.

What Is the Mean Symbol on the TI-84?

The mean symbol is one of the symbols that most often confuses new TI-84 users. The mean is a statistical measure representing the average value of a set of numerical data: add all the numbers in the data set and divide the sum by the total number of values. The mean describes the central tendency of a data set and is often written "x̄" in mathematics.

When you run 1-Var Stats (STAT → CALC → "1: 1-Var Stats"), the calculator automatically displays the mean (x̄) along with other statistics such as the sample size (n), the sample standard deviation (Sx), and the population standard deviation (σx). Accurate calculations matter when dealing with numerical data, and with just a few keystrokes you get the mean of a data set together with a set of reliable summary statistics.

Basic Calculations on the TI-84

To find the mean of a data set:

- Press the STAT key, select "Edit", and enter the data into the lists.
- Press the STAT key again, use the arrow keys to select the CALC option, and choose "1: 1-Var Stats".
- Press ENTER.

The 1-Var Stats function calculates a range of statistics for the data, including the mean, standard deviation, and median. These statistics are useful in a variety of contexts, including hypothesis testing and data visualization; with the mean symbol and the STAT menu, finding summary statistics has never been easier, so students can save time and focus on the concepts at hand.

Advanced Calculations on the TI-84

The TI-84 is capable of much more than finding the mean. It comes with advanced statistical features that can save you time and effort, especially when working with complex data sets.
Here are some of the features that can take your calculations to the next level:

- Median and mode: The middle value of an ordered data set (the median) and the most frequently occurring value (the mode) are essential in statistics, and the TI-84 can compute both.
- Standard deviation: Standard deviation measures the variability of data around the mean, and the TI-84 can calculate it for any data set.
- Regression analysis: Regression finds the relationship between variables; the TI-84 can fit regression models and use them to predict one variable's value from another's.
- Hypothesis testing: The TI-84 can perform several hypothesis tests, such as t-tests and z-tests, to help determine whether a hypothesis is supported by the data.

In short, the TI-84 is not just a simple calculator that finds the mean; it is a powerful tool for advanced statistical work. Whether you are a student, researcher, or working professional, it can save you time and effort.

Applications of the TI-84

The TI-84 can perform calculations of mean, median, mode, regression, and many other statistical operations. The mean is represented by the symbol "x-bar" (x̄); when you see this symbol, you can assume it refers to the mean.
How To Use Ti-84 In Practice Ti-84 is an incredibly versatile calculator that is perfect for students, teachers, and professionals who need to do complex calculations quickly. Here are some basic steps for using the ti-84: • Start by turning on the calculator and accessing the home screen. • Input your data into the calculator. • Choose the appropriate statistical function for your calculations. • Fill in the necessary data and press enter to receive the result. Real-Life Examples And Use Cases Now, let’s explore some real-world examples and use cases of a ti-84 calculator. • Business: Ti-84 can be used to calculate financial ratios such as return on investment (roi), net present value (npv), and internal rate of return (irr) to make informed decisions. • Science: Ti-84 can be used to analyze scientific experiments and create graphs and charts to visually represent data. • Education: Ti-84 can help students solve complex mathematical problems and check their answers for accuracy. Don’t miss out on the power and convenience of the ti-84 calculator. Whether you’re a student, teacher, or professional, ti-84 can simplify your calculations and make your life easier. Happy What Does the Triangle Symbol Mean on Google Play Points? The triangle symbol meaning on google Play points to a featured collection of apps and games outlined by Google, representing the selected content available for redemption using Play Points. This symbol serves as a visual cue for users to easily identify and access the exclusive offerings tied to the Play Points rewards program. FAQ About The Mean Symbol On Ti 84 What Does The Mean Symbol On Ti 84 Calculator Mean? The mean symbol on a ti 84 calculator represents the average of a set of numbers. How Is The Mean Symbol Calculated On Ti 84? To calculate the mean symbol on ti 84, enter the numbers, press the “stat” key, select “1:edit” enter the data set, select “stat”, select “5:1-var stats”, and press “enter”. 
The mean symbol can be found in the resulting data.
What Is The Difference Between Mean And Median On Ti 84? The mean symbol represents the average of a set of numbers, while the median symbol represents the middle value in a set of data.
How Do I Use The Mean Symbol In Statistical Analysis? The mean symbol is a commonly used tool in statistical analysis to help understand the average value of a set of data and can be utilized in calculations of variance and standard deviation.
Why Is The Mean Symbol Important In Data Analysis? The mean symbol is an important statistical concept used in data analysis as it allows researchers to understand the central tendency of a set of data. It is often used to compare different data sets or to understand if a result is statistically significant.
By now, you have learned a lot about the mean symbol on ti 84 and its significance in calculating statistical data. You have seen how easy and convenient it is to use this feature, even if you are not a math expert. No longer do you have to spend hours manually calculating averages and other statistics. With the ti 84 mean symbol, you can calculate them in seconds! The key takeaway from this post is that the mean symbol on ti 84 is a valuable tool for anyone conducting statistical analysis. It simplifies the calculation process, saves time and produces accurate results. Whether you are a student preparing for an exam or a professional working with data, ti 84’s mean symbol is a must-have feature. So, next time you use ti 84, make sure you use the mean symbol to make your statistical calculations a breeze!
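For the regression feature mentioned in this article, the calculator's LinReg(ax+b) command fits a least-squares line y = ax + b. The same fit can be sketched in a few lines of Python (the data points are illustrative):

```python
def linreg(xs, ys):
    """Least-squares line y = a*x + b, analogous to the calculator's LinReg(ax+b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx          # slope
    b = my - a * mx        # intercept
    return a, b

a, b = linreg([1, 2, 3, 4], [2, 4, 6, 8])   # points on the line y = 2x
```

For these points the fit recovers a = 2 and b = 0 exactly; real data would also come with a correlation coefficient r, which the calculator reports alongside a and b.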
Review and Analysis of Van Wijngaarden and Happer Concerning Radiative Transfer in Earth’s Atmosphere in the Presence of Clouds - CO2 Coalition Review and Analysis of Van Wijngaarden and Happer Concerning Radiative Transfer in Earth’s Atmosphere in the Presence of Clouds Download 2n-Stream Radiative Transfer here Kees de Lange October 2, 2022 The understanding of Earth’s climate depends to a large extent on our knowledge of radiative transfer processes in the atmosphere. Short wavelength radiation in the visible range from the sun enters the atmosphere and finds its way to the surface to warm it. Long wavelength radiation in the infrared range is emitted from the surface to find its way to the universe and cools the planet. The energy balance between these two streams of radiation has a profound influence on the temperature and the conditions to sustain life as we know it on our planet. Atmospheric physics is essential in understanding the relevant processes. The Sun emits radiation in the visible wavelength range. The Sun’s spectrum can be approximated by blackbody radiation at a temperature of ~ 5800 K. This radiation warms the surface of the earth to a temperature of ~255 K if an albedo of 0.30 is assumed [1]. In a similar vein the conditions on the Moon, with a darker surface than Earth, lead to a daytime surface temperature at the equator of ~ 390 K. This temperature drops at the end of the lunar night to ~100 K [2]. Since the Moon has no atmosphere, slow surface heat conduction is the only mechanism to even out these temperatures. On Earth the presence of an atmosphere has important consequences. Lateral atmospheric currents even out temperature differences between Earth’s sun-lit and dark sides relatively quickly. Temperature differences between night and day on Earth are therefore much smaller than on the Moon. 
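The ~255 K figure above follows from a standard energy balance: averaged over the sphere, the absorbed solar flux S(1 - a)/4 must equal the emitted blackbody flux σT⁴. A minimal sketch, assuming the usual textbook values for the solar constant and albedo (these numbers are assumptions, not taken from the review):

```python
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0               # solar constant at Earth's orbit, W m^-2 (assumed)
albedo = 0.30            # planetary albedo (assumed, as in the text)

# Balance absorbed and emitted flux:  S(1 - a)/4 = sigma * T^4
T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"effective temperature: {T_eff:.1f} K")   # close to the ~255 K quoted above
```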
On Earth the balance between short wavelength radiation that warms the surface and long wavelength radiation that cools it is significantly affected by molecular gases in the atmosphere. The atmosphere consists mainly of the diatomic gases nitrogen (78.1 %) and oxygen (20.9 %) that do not possess an electric dipole moment. Hence, the only way in which these gases can interfere with the outgoing long wavelength radiation is via very weak quadrupole-induced absorption. The gases water (H2O) and carbon dioxide (CO2) in the atmosphere do interfere with the outgoing long wavelength radiation. Transfer of infrared radiation is inhibited by electric-dipole induced absorption of these gases, leading to the so-called greenhouse effect. As a result, the global mean surface temperature is ~288 K, approximately 30 K warmer than it would be without these gases. Of course, this greenhouse effect depends on the concentration of these greenhouse gases. Water on our planet is the main greenhouse gas which can occur in different states of aggregation (gas, clusters, liquid micro-droplets, micro-particles of ice), all with their typical infrared absorption spectrum, in concentrations that can vary enormously as a function of local temperature. CO2 occurs at present in a concentration of ~ 420 ppm and is fairly evenly distributed around the globe. In order to understand the climate of our planet, a thorough understanding of radiative transfer of radiation is required. However, in order to treat radiation transfer from the fundamental point of view of atomic, molecular and optical (AMO) physics one is well advised to attack the problem step by step. In this stepwise approach the role of scattering is crucial. As a first logical approach radiative transfer through an atmosphere without clouds should be considered. When that problem can be solved satisfactorily, the role of clouds can be considered next. 
In a previous ground-breaking article [3] Van Wijngaarden and Happer studied the problem of radiation transfer in the atmosphere in the absence of clouds, and hence in the absence of scattering, but in the presence of the five most abundant greenhouse gases water (H2O), carbon dioxide (CO2), ozone (O3), nitrous oxide (N2O) and methane (CH4). This study took satellite observations over a wide range of infrared frequencies as its starting point. In the theoretical description the Schwarzschild Equation was solved and simulations of the experimental results were obtained for three regions on Earth, viz. the Mediterranean, the Sahara, and Antarctica. The correspondence between experimental satellite data and the simulations was truly remarkable [3]. An assessment of this paper is also available [4]. A key result of this work lies in the saturation effects that occur when the concentration of greenhouse gases is increased. Clouds are the Achilles' heel of climate science because complicated scattering processes take place in clouds. Clouds can consist of water molecules that as a function of temperature and pressure occur as a variety of aggregates, ranging from single molecules in the gas phase to different oligomers and molecular ensembles in liquid and solid phases. Clouds can also contain particulate matter of many origins. All these aggregates and particles absorb and scatter infrared radiation at their own typical wavelengths. Scattering of radiation is a complicated phenomenon that depends to a large degree on the wavelength of the incident radiation and on the dimensions of the scattering particles. Well-known elastic scattering processes are Rayleigh scattering, where the wavelength is much larger than the particle size, and Mie scattering, where the scatterers have a diameter similar to or larger than the wavelength of the incident light. The physics of radiation transfer under atmospheric conditions should therefore be considered in great detail.
Since radiative transfer in physics is described by often coupled integro-differential equations, solving these equations under all kinds of physical circumstances is a demanding exercise in mathematical physics at a very high level. The study of such complex equations is not new. An important mathematical-physical paper by G.C. Wick (in German) already dates from 1943 [5], and the ground-breaking book “Radiative Transfer” by Chandrasekhar [6], published in 1960, is still a key reference. In these references atmospheric scattering is discussed employing sophisticated mathematics, but what is lacking is a mathematical framework that can be applied without great difficulty, not just to a single scattering problem, but to a range of different scattering issues. The paper of Van Wijngaarden and Happer aims to fill this gap that still exists after so many years. In their new paper Van Wijngaarden and Happer [7] direct their attention to radiation transfer in the atmosphere, but now with clouds to scatter incoming and outgoing radiation. In this context the role of greenhouse gases is only secondary. The main purpose of this paper is to develop a flexible mathematical-physical framework to deal with all kinds of different scattering processes. Let us turn to its detailed contents now. The most effective direction for long wavelength radiation to leave the atmosphere is vertical. Hence the projection of any direction that makes an angle Θ with the vertical is proportional to cos Θ. By only introducing a cos Θ-dependence, it is implicitly assumed that the relevant streams of radiation possess axial symmetry. Radiative transfer in semi-transparent media involves absorption, emission and scattering. These processes are described with an equation of transfer for I(μ, τ, ϑ), where μ = cos Θ, τ is the optical depth, which is a measure of the altitude above the surface, and ϑ is the relative time (Eq. 4 of ref. [7]).
This intensity I(μ, τ, ϑ) can be thought of as a stream of monochromatic photons at optical depth τ, making various angles Θ with the vertical. In a mathematical sense this equation shows similarities with the Schrödinger equation of quantum mechanics. In this paper the techniques to solve the equation of transfer are borrowed from quantum mechanics, and the description is phrased in terms of slightly modified Dirac bra and ket vectors, using a notation where bra vectors are not simply Hermitian conjugates of ket vectors. Many radiative-transfer variables in 2n-space can be conveniently represented with non-Hermitian matrices. Hence, it is not always possible to express left and right eigenvectors as Hermitian-conjugate pairs. Because of the dependence of the intensity I on μ, a series expansion in terms of the complete orthogonal set of Legendre polynomials Pl(μ) [8] is introduced. This approach is reminiscent of the more familiar Fourier analysis, where an expansion in terms of orthogonal sines and cosines is employed. In this way the power of matrix algebra can be unleashed to solve the equation of transfer. In the equation of transfer an important quantity is the phase function p(μ, μ’), the probability for elastic scattering of incident radiation with direction cosine μ’ to scattered radiation with direction cosine μ. Here a random orientation of the scattering particles is assumed, and inelastic scattering processes are neglected. If the single scattering albedo is less than 1, some of the radiation can be absorbed. Employing their new notation, the equation of transfer (Eq. 4) can now be written in vector form (Eq. 52). If we assume a time-independent atmosphere, and neglect scattering completely, it is pleasing to note that the equation of transfer (Eq. 52) now simplifies to the more familiar Schwarzschild equation (Eq. 65), which describes the transfer of thermal radiation through a cloud-free atmosphere containing greenhouse gases [3]. The more general Eq.
62 which describes a combination of absorption, emission and scattering is much harder to solve. In order to solve Eq. 62 for a completely general combination of absorption, emission and scattering, a 2n-stream method is used. In order to calculate integrals numerically, the Gauss-Legendre quadrature method [9] is employed. A 2n-stream, whose intensity is sampled at the 2n nodes of the Legendre polynomials P2n, allows for 2n parameters, the n independent nodes (occurring as pairs with opposite signs) of the Legendre functions and the corresponding weights. This means that polynomials of degree 2n-1 can be represented exactly [9]. These nodes and weights of the Legendre functions are tabulated in detail (ref [8], pages 916-917), and are thus readily available. The angular dependence of scattering processes is complicated. A well-known example is elastic Rayleigh scattering, where the incident wavelength is much larger than the particle size. The phase function for Rayleigh scattering is given by Eq. (132) and ref [10]. The angular dependence of Rayleigh scattering is not too different from isotropic. Another example is Mie scattering were the scattering particles have a diameter similar to or larger than the wavelength of the incident light. The angular dependence is generally more forward peaked, but depends on the wavelength of the incident light in relation to the particle size. A key result of the paper is Eq. (134) where a 2n-stream is constructed such that the phase function maximizes forward scattering. The proof is found in the Appendix, and is a real tour de force that employs Lagrange multipliers. In general, with the 2n-stream method one can engineer the phase functions that one wishes to use without great difficulty. Since the angular dependence of the phase function is an important issue for the many scattering processes that can occur in the atmosphere, this is an important novel aspect of the present theory. 
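Two ingredients above are easy to check numerically: Gauss-Legendre nodes and weights (tabulated in ref. [8]) integrate low-degree polynomials over [-1, 1] exactly, and the standard Rayleigh phase function p(μ) = (3/4)(1 + μ²) (see ref. [10]) is properly normalized. A sketch using numpy; the choice of m = 4 nodes and the test polynomial μ⁶ are illustrative:

```python
import numpy as np

m = 4
mu, w = np.polynomial.legendre.leggauss(m)    # nodes of P_m and Gauss weights

# m-point Gauss-Legendre quadrature is exact for polynomials of degree <= 2m - 1;
# here: the integral of mu^6 over [-1, 1] equals 2/7.
quad = float(np.sum(w * mu ** 6))

# Rayleigh phase function p(mu) = (3/4)(1 + mu^2), with mu = cos(Theta).
# Check the normalization (1/2) * integral_{-1}^{1} p(mu) dmu = 1.
p = 0.75 * (1 + mu ** 2)
norm = float(0.5 * np.sum(w * p))
print(quad, norm)
```

Roughly speaking, these are the nodes and weights on which a 2n-stream method samples the intensity (with m = 2n); the quadrature returns 2/7 and the normalization returns 1 to machine precision.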
After the extensive development of the new formulation of the theory of radiation transfer in Earth’s atmosphere, the authors take a lot of trouble to apply the theory to many situations involving all kinds of clouds. Since in general changes in time are negligibly slow, the corresponding term in Eq. (52) can be replaced by Eq. (62) which is valid for a steady-state atmosphere. This equation represents an inhomogeneous differential equation that contains absorption, emission and scattering in all possible combinations. In order to get some feeling for the solutions of this equation, the authors first treat the simplified case of non-emissive clouds. These clouds are too cold to emit radiation at frequencies of interest. Under these conditions the right-hand side of Eq. (62) can be set to zero. This assumption leads to a homogeneous differential equation which is easier to solve. Assuming various types of scattering (Rayleigh, isotropic, maximum forward scattering according to Eq. (134)), many examples are discussed. In particular, it can be calculated what fraction of the incident radiation is transmitted through the cloud, and what percentage is absorbed and reflected. Of course the real challenge is in solving the inhomogeneous steady-state Eq. (62), with the right-hand side not equal to zero. This equation describes the general problem of clouds which absorb, emit and scatter incident radiation. A convenient way to solve this inhomogeneous differential equation is with the use of Green’s functions. The physical meaning of the Green’s function G(x0, x) (sometimes called an influence function) is that this formulation describes the effect that a source placed at position x0 has at position x [11]. The total Green’s function of the cloud is given in Eq. (274). The theory developed in the present paper only works for finite absorption and the single scattering albedo ῶ < 1. 
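The full Green's-function solution is beyond a few lines of code, but the simplest limiting case of the homogeneous problem is easy to exhibit: for a purely absorbing, non-emissive layer with scattering switched off, the equation of transfer integrates to Beer-Lambert attenuation, so a beam with direction cosine μ is transmitted with fraction exp(-τ/μ). This limiting case is standard radiative-transfer material, not the paper's 2n-stream solution:

```python
import math

def transmitted_fraction(tau, mu):
    """Fraction of a beam with direction cosine mu = cos(Theta) that passes
    straight through a purely absorbing, non-emissive layer of optical
    depth tau (Beer-Lambert attenuation; scattering switched off)."""
    return math.exp(-tau / mu)

tau = 1.0
for mu in (1.0, 0.5):                      # vertical beam vs. slanted beam
    t = transmitted_fraction(tau, mu)
    print(f"mu = {mu}: transmitted {t:.3f}, absorbed {1 - t:.3f}")
```

A slanted beam (smaller μ) traverses a longer path and is attenuated more strongly; the reflected fraction is zero here only because scattering is switched off. Restoring emission and scattering leads back to the inhomogeneous Eq. (62) and the Green's-function treatment just described.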
However, these methods fail for the somewhat academic case of conservative scattering, when ῶ = 1 and no energy is exchanged between the radiation and scatterers. In a follow-up paper the authors show that minor modifications to the fundamental 2n-scattering theory for ῶ < 1 make it suitable for ῶ = 1 [12]. Within the new formalism all the required integrals can be computed numerically with MATLAB [13]. In addition, MATLAB allows matrix manipulations and plotting of functions and data. The computations performed with this extremely useful mathematical toolbox only require limited coding and can be performed on a laptop. In summary, the authors have produced a remarkable, ground-breaking and most valuable study of radiation transfer in Earth's atmosphere in the presence of clouds, where absorption, emission, and scattering all play a role. Their admirable achievement in mathematical physics, based on advanced atomic, molecular and optical physics, is phrased in terms that are reminiscent of the language and notation familiar from modern quantum mechanics. The novel scientific framework created in this work offers numerous new possibilities for studying radiation transfer processes in the presence of scattering caused by a large variety of molecules and particles. As always, the proof of the pudding will be in the eating. With these novel theoretical techniques now in place there is a strong need for experimental results against which the methods developed by both authors can be tested. This work poses a strong challenge to experimental atmospheric scientists to produce detailed, reliable information that can serve, together with the present theoretical treatment, to improve our much-needed understanding of scattering processes in atmospheric clouds. About the author: Cornelis Andreas “Kees” de Lange Dr. C.A.
de Lange is a Guest Professor in the Faculty of Science at Vrije Universiteit Amsterdam and was a Member of the Senate in The Netherlands from 7 June 2011 until 1 May 2015, as well as the Chairman of The Netherlands Organization for Pensions from 12 May 2009 until 10 January 2011. He received a PhD in Theoretical Chemistry from the University of Bristol (UK) in June 1969, with a dissertation on Nuclear Magnetic Resonance in Oriented Molecules. His CV can be found here, publications list here, and website here. He is a member of the CO2 Coalition.
[1] Murry L. Salby, Atmospheric Physics, Academic Press (1996).
[2] J.-P. Williams, D.A. Paige, B.T. Greenhagen, E. Sefton-Nash, The global surface temperatures of the Moon as measured by the Diviner Lunar Radiometer Experiment, Icarus, Volume 283, 300-325.
[3] W.A. van Wijngaarden, W. Happer: https://co2coalition.org/wp-content/uploads/2022/03/Infrared-Forcing-by-Greenhouse-Gases-2019-Revised-3-7-2022.pdf
[4] C.A. de Lange, Van Wijngaarden and Happer Radiative Transfer Paper for Five Greenhouse Gases Explained.
[5] G.C. Wick, Über ebene Diffusionsprobleme, Z. Physik 121, 702–718 (1943).
[6] S. Chandrasekhar, Radiative Transfer, Dover Publications (January 1, 1960).
[7] W.A. van Wijngaarden, W. Happer, 2n-Stream Radiative Transfer, http://arxiv.org/abs/2205.09713
[8] M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, National Bureau of Standards (1964).
[9] www.dam.brown.edu/people/alcyew/handouts/GLquad.pdf
[10] https://en.wikipedia.org/wiki/Rayleigh_scattering
[11] www.math.arizona.edu/~kglasner/math456/greens.pdf
[12] W.A. van Wijngaarden, W. Happer, 2n-Stream Conservative Scattering, https://arxiv.org/pdf/2207.03978
[13] https://en.wikipedia.org/wiki/MATLAB
Chapter 4.7: Parametric Equations Learning Objectives In this section, you will: • Parameterize a curve. • Eliminate the parameter. • Find a rectangular equation for a curve defined parametrically. • Find parametric equations for curves defined by rectangular equations. Consider the path a moon follows as it orbits a planet, which simultaneously rotates around the sun, as seen in (Figure). At any moment, the moon is located at a particular spot relative to the planet. But how do we write and solve the equation for the position of the moon when the distance from the planet, the speed of the moon’s orbit around the planet, and the speed of rotation around the sun are all unknowns? We can solve only for one variable at a time. In this section, we will consider sets of equations given by parametric equations: we are able to trace the movement of an object along a path according to time. We begin this section with a look at the basic components of parametric equations and what it means to parameterize a curve. Then we will learn how to eliminate the parameter, translate the equations of a curve defined parametrically into rectangular equations, and find the parametric equations for curves defined by rectangular equations. Parameterizing a Curve When an object moves along a curve—or curvilinear path—in a given direction and in a given amount of time, the position of the object in the plane is given by the x-coordinate and the y-coordinate. However, both vary over time and so are functions of time. For this reason, we add another variable, the parameter, upon which both When we parameterize a curve, we are translating a single equation in two variables, such as When we graph parametric equations, we can observe the individual behaviors of (Figure). Thus, the equation for the graph of a circle is not a function. However, if we were to graph each equation on its own, each one would pass the vertical line test and therefore would represent a function. 
In some instances, the concept of breaking up the equation for a circle into two functions is similar to the concept of creating parametric equations, as we use two functions to produce a non-function. This will become clearer as we move forward. Parametric Equations Parameterizing a Curve Parameterize the curve Show Solution If (Figure), and sketch the graph. See the graphs in (Figure). It may be helpful to use the TRACE feature of a graphing calculator to see how the points are generated as The arrows indicate the direction in which the curve is generated. Notice the curve is identical to the curve of Try It Construct a table of values and plot the parametric equations: Finding a Pair of Parametric Equations Find a pair of parametric equations that models the graph of Show Solution To graph the equations, first we construct a table of values like that in (Figure). We can choose values around The graph of (Figure). We have mapped the curve over the interval y-axis. Figure 4. Try It Parameterize the curve given by Show Solution Finding Parametric Equations That Model Given Criteria An object travels at a steady rate along a straight path Show Solution The parametric equations are simple linear expressions, but we need to view this problem in a step-by-step fashion. The x-value of the object starts at x has changed by 8 meters in 4 seconds, which is a rate of x-coordinate as a linear function with respect to time as Similarly, the y-value of the object starts at 3 and goes to y of −4 meters in 4 seconds, which is a rate of y-coordinate as the linear function are expressed in meters and represents time: Using these equations, we can build a table of values for (Figure)). In this example, we limited values of From this table, we can create three graphs, as shown in (Figure). Again, we see that, in (Figure)(c), when the parameter represents time, we can indicate the movement of the object along the path with arrows. 
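The steady-rate example above can be reproduced in a few lines. The rates come from the text (x changes 8 meters in 4 seconds, so 2 m/s; y starts at 3 and changes -4 meters in 4 seconds, so -1 m/s), but the starting x-value did not survive extraction, so the value below is a hypothetical stand-in:

```python
x0, y0 = -2, 3        # x0 is hypothetical; y0 = 3 is given in the text
vx, vy = 2, -1        # rates in m/s, from the text

def position(t):
    """Parametric equations x(t) = x0 + vx*t, y(t) = y0 + vy*t."""
    return x0 + vx * t, y0 + vy * t

# Table of values for t = 0, 1, 2, 3, 4 seconds
for t in range(5):
    x, y = position(t)
    print(t, x, y)
```

Plotting the (x, y) pairs and marking the direction of increasing t with arrows gives the kind of graph described above.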
Eliminating the Parameter In many cases, we may have a pair of parametric equations but find that it is simpler to draw a curve if the equation involves only two variables, such as Eliminating the Parameter from Polynomial, Exponential, and Logarithmic Equations For polynomial, exponential, or logarithmic equations expressed as two parametric equations, we choose the equation that is most easily manipulated and solve for into the second equation. This gives one equation in Eliminating the Parameter in Polynomials Show Solution We will begin with the equation for Next, substitute The Cartesian form is This is an equation for a parabola in which, in rectangular terms, (Figure). In this section, we consider sets of equations given by the functions Try It Given the equations below, eliminate the parameter and write as a rectangular equation for Show Solution Eliminating the Parameter in Exponential Equations Eliminate the parameter and write as a Cartesian equation: Show Solution Substitute the expression into The Cartesian form is The graph of the parametric equation is shown in (Figure)(a). The domain is restricted to (Figure)(b) and has only one restriction on the domain, Eliminating the Parameter in Logarithmic Equations Eliminate the parameter and write as a Cartesian equation: Show Solution Solve the first equation for Then, substitute the expression for The Cartesian form is To be sure that the parametric equations are equivalent to the Cartesian equation, check the domains. The parametric equations restrict the domain on Try It Eliminate the parameter and write as a rectangular equation. Show Solution Eliminating the Parameter from Trigonometric Equations Eliminating the parameter from trigonometric equations is a straightforward substitution. We can use a few of the familiar trigonometric identities and the Pythagorean Theorem. 
First, we use the identities: Solving for Then, use the Pythagorean Theorem: Substituting gives Eliminating the Parameter from a Pair of Trigonometric Parametric Equations Eliminate the parameter from the given pair of trigonometric equations where Show Solution Solving for Next, use the Pythagorean identity and make the substitutions. Figure 8. The graph for the equation is shown in (Figure). Applying the general equations for conic sections (introduced in Analytic Geometry, we can identify Try It Eliminate the parameter from the given pair of parametric equations and write as a Cartesian equation: Show Solution Finding Cartesian Equations from Curves Defined Parametrically When we are given a set of parametric equations and need to find an equivalent Cartesian equation, we are essentially “eliminating the parameter.” However, there are various methods we can use to rewrite a set of parametric equations as a Cartesian equation. The simplest method is to set one equation equal to the parameter, such as Rewriting this set of parametric equations is a matter of substituting Finding a Cartesian Equation Using Alternate Methods Use two different methods to find the Cartesian equation equivalent to the given set of parametric equations. Show Solution Method 1. First, let’s solve the Now substitute the expression for Method 2. Solve the Make the substitution and then solve for Try It Write the given parametric equations as a Cartesian equation: Show Solution Finding Parametric Equations for Curves Defined by Rectangular Equations Although we have just shown that there is only one way to interpret a set of parametric equations as a rectangular equation, there are multiple ways to interpret a rectangular equation as a set of parametric equations. Any strategy we may use to find the parametric equations is valid if it produces equivalency. 
In other words, if we choose an expression to represent Finding a Set of Parametric Equations for Curves Defined by Rectangular Equations Find a set of equivalent parametric equations for Show Solution An obvious choice would be to let The set of parametric equations is See (Figure). Figure 6. Access these online resources for additional instruction and practice with parametric equations. Key Concepts • Parameterizing a curve involves translating a rectangular equation in two variables, x, y, and t. Often, more information is obtained from a set of parametric equations. See (Figure), (Figure), and (Figure). • Sometimes equations are simpler to graph when written in rectangular form. By eliminating • To eliminate (Figure), (Figure), (Figure), and (Figure). • Finding the rectangular equation for a curve defined parametrically is basically the same as eliminating the parameter. Solve for (Figure). • There are an infinite number of ways to choose a set of parametric equations for a curve defined as a rectangular equation. • Find an expression for (Figure). Section Exercises 1. What is a system of parametric equations? Show Solution A pair of functions that is dependent on an external factor. The two functions are written in terms of the same parameter. For example, 2. Some examples of a third parameter are time, length, speed, and scale. Explain when time is used as a parameter. 3. Explain how to eliminate a parameter given a set of parametric equations. Show Solution Choose one equation to solve for 4. What is a benefit of writing a system of parametric equations as a Cartesian equation? 5. What is a benefit of using parametric equations? Show Solution Some equations cannot be written as functions, like a circle. However, when written as two parametric equations, separately the equations are functions. 6. Why are there many sets of parametric equations to represent on Cartesian function? 
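The elimination procedure of exercise 3 (choose one equation, solve for t, substitute) can be checked numerically. The chapter's worked equations did not survive extraction, so the two pairs below are hypothetical stand-ins: a polynomial pair x = t + 2, y = t^2, whose rectangular form is y = (x - 2)^2, and a trigonometric pair x = 5 cos t, y = 3 sin t, whose rectangular form is the ellipse (x/5)^2 + (y/3)^2 = 1:

```python
import math

# Polynomial pair: solve x = t + 2 for t, substitute into y = t^2,
# giving the rectangular form y = (x - 2)^2.
for t in [-3, -1, 0, 0.5, 2, 10]:
    x, y = t + 2, t ** 2
    assert y == (x - 2) ** 2          # parameter eliminated

# Trigonometric pair: x = a*cos(t), y = b*sin(t).  The Pythagorean identity
# cos^2 t + sin^2 t = 1 gives the ellipse (x/a)^2 + (y/b)^2 = 1.
a, b = 5, 3
for k in range(12):
    t = k * math.pi / 6
    x, y = a * math.cos(t), b * math.sin(t)
    assert abs((x / a) ** 2 + (y / b) ** 2 - 1) < 1e-12

print("both rectangular forms check out")
```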
For the following exercises, eliminate the parameter For the following exercises, rewrite the parametric equation as a Cartesian equation by building an For the following exercises, parameterize (write parametric equations for) each Cartesian equation by setting For the following exercises, parameterize (write parametric equations for) each Cartesian equation by using 46. Parameterize the line from 47. Parameterize the line from Show Solution 48. Parameterize the line from 49. Parameterize the line from Show Solution For the following exercises, use the table feature in the graphing calculator to determine whether the graphs intersect. For the following exercises, use a graphing calculator to complete the table of values for each set of parametric equations. 55. Find two different sets of parametric equations for Show Solution answers may vary: 56. Find two different sets of parametric equations for 57. Find two different sets of parametric equations for Show Solution answers may vary: Glossary parameter: a variable, often representing time, upon which two other variables, x and y, depend
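For exercises 46-49 the endpoints were lost in extraction, but the standard recipe for parameterizing the line from a point (x1, y1) to a point (x2, y2) is x(t) = x1 + t(x2 - x1), y(t) = y1 + t(y2 - y1) with 0 <= t <= 1. A sketch with hypothetical endpoints, including a second parameterization of the same segment in the spirit of exercises 55-57:

```python
def segment(A, B):
    """Parameterize the line from point A to point B:
    x(t) = x1 + t*(x2 - x1),  y(t) = y1 + t*(y2 - y1),  0 <= t <= 1."""
    (x1, y1), (x2, y2) = A, B
    return lambda t: (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Hypothetical endpoints (the exercises' own points were lost in extraction)
line = segment((0, 3), (4, -1))
print(line(0), line(0.5), line(1))        # start, midpoint, end

# Exercises 55-57: answers may vary. A second, different parameterization
# of the same segment, obtained by substituting t -> t**2.
line_alt = lambda t: line(t * t)
```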
Our users:
I want to thank you for all your help. Your support in resolving how to do a problem has helped me understand how to do the problems, and actually get the right result. Thanks So Much.
Tara Fharreid, CA
My decision to buy "The Algebrator" for my son to assist him in algebra homework is proving to be a wonderful choice. He now takes a lot of interest in fractions and exponential expressions. For this improvement I thank you.
M.H., Illinois
As a single mom attending college, I found that I did not have much time for my daughter when I was struggling over my Algebra homework. I tried algebra help books, which only made me more confused. I considered a tutor, but they were simply too expensive. The Algebrator software was far less expensive, and walked me through each problem step by step. Thank you for creating a great product.
Michael, OH
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2013-12-26:
• ladder method for teaching multiplication
• how to solve an equation with rational exponents?
• Simplifying expressions under Square Roots test • online factorising • geometry coordinate plane powerpoint • are non linear polynomials really used in everyday life • excel solver simultaneous equation 4 unknowns • CAT tests online KS3 • factor quadratics calculator • gallian abstract algebra solutions • glencoe algebra 1 2004 solving linear equations and formulas • holt california math 6th grade percentages • square roots and variables using absolute value • worksheet, adding and subtracting negative numbers • solving for vertex in a quadratic equation • fraction word problems tutor • math activities and games about factoring of polynomial • how to factor out greatest common divisor in java • Dougal Littell pre algebra Practice Book • probability problem as fraction worksheet, 4th grade • subtracting fractions with roots • exact simplified solution calculator • free 6th grade probability worksheets • lowest common denominator calculator • using TI 30X II find least common multiple • permutaions+combinations+solve+easy+simple • aptitude question with answers • real life hyperbola • ti rom image download • factors affecting the grade in algebra of students • simple addition and subtraction commutative property worksheets • "6th grade" " math game " printable • answers my math homework • free college algebra answers • ti 89 quadratic • polynomial factor calculator • solve by the elimination method on ti 89 • help with solving for y and graphing using m and b • worksheet- system of equations- applications review for 7b test • solving for y quadratic algebra gcd • solving for cubed formula • least common multiplier tool • How do you add equations with fractions • A formula that describes a ratio of percentages • math grade 10 questions algebra simplification • simplifying division variables • "TI84 quadratic" • ks2 area worksheets • rearrange formulae calculator • software to solve matrices • double integral solver online free • math investigatory project in 
elementary • square roots for dummies • mixed number into decimal • examples of square root property • square root decimals chart • addition subtraction integer calculator • equation line calculator • prentice hall physics workbook teacher edition • maths worksheets online year 9 victoria • decimal division for dummies • rational expression solver • radicals in matlab • lowest common denominator practice problems • factor machine polynomials • converting fractions to decimal calculator • mcdougal littell algebra book 2 help • online inequality solver • finding least common denominator • error 13 dimension on ti88 • rational equations worksheets • how to solve a third grade equation • solve quadratic equations by factoring calculator • grade 7 division of decimals • Advanced MAthematics by Richard Brown chapter reviews • 6th grade writing equations • Algebra Solver • 8th grade pre-Algebra • adding fractions with monomials • solve quadratic equation for inverse • online math problems solver • McDougal Littell Algebra 2 textbook answers • teach math gr 10 and tests online free • solving equations using the square • instruction laplace transform ti 89 • online practice algebra problems prime factorization • Multiplying and Dividing Radical Expressions calculator
{"url":"https://mathworkorange.com/math-help-calculator/trigonometry/solving-linear-equations-with.html","timestamp":"2024-11-03T02:49:29Z","content_type":"text/html","content_length":"87502","record_id":"<urn:uuid:0eee8445-0363-4f60-a562-3f7ee86a8038>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00773.warc.gz"}
Addition to 100 with the zero-rule

Students can add tens numbers to 100 using the zero rule. They learn to add tens quickly by removing the zero, adding, and then putting the zero back on the answer. If students can quickly add numbers to 10, they can quickly add tens numbers to 100.

Using a ball, practice adding numbers to 10 quickly. You say an addition problem (with a total of at most 10) and throw the ball to a student. They must answer and throw the ball back. Repeat a few times.

Discuss with students that it is important to use the zero rule to make adding tens numbers easier. Next, discuss that tens numbers always end in a 0, and name all the tens to 100. Show how the zero rule works by showing 1+1 and 10+10 with fingers, fists and the MAB blocks, so students have visual support for their understanding of the zero rule. Next, explain the zero rule using an example. Show that you can park the zeros in an addition problem with tens and solve the easier problem first. You make the addition problem ten times smaller by parking the zero. To solve, you remove the zeros, and then you put the zero back on the answer. Practice this with the students with the three exercises given. Make sure that they park or remove the zero to quickly solve, and then return the zero to the problem and answer.

Check that students are able to add to 100 using the zero rule by asking the following questions:
- Which numbers do you count with the zero rule?
- What do you do with tens if you want to add easily and quickly?
- What do you do with the answer if there are zeros in both addends?

Students first practice with an exercise where the total without zero is already given, and next they must determine the total of the problem without zeros as well as with zeros. Check that students can add tens using the zero rule by asking what to do with tens if you want to add easily and quickly. Ask students why it is useful to be able to do this, and ask what 30+50 is.
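For teachers who want to generate or check zero-rule answers automatically, the procedure can be sketched in a few lines (a hypothetical snippet, not part of the Gynzy lesson):

```python
def add_tens(a, b):
    """Add two tens numbers (30, 50, ...) using the zero rule."""
    # Park the zeros: the problem becomes ten times smaller.
    small_sum = a // 10 + b // 10   # 30 + 50 becomes 3 + 5 = 8
    # Put the zero back on the answer.
    return small_sum * 10           # 8 becomes 80

print(add_tens(30, 50))  # 80
print(add_tens(70, 30))  # 100
```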
Next, students can show addition problems to 100 with their hands. Have them first create a problem without zeros and then calculate what the total with zeros would be. If students have difficulty adding to 100 using the zero rule, you can practice a few more problems with them together. Make sure that students understand the steps of parking the zero, solving, and then replacing the zeros. Start with smaller problems like 20+10 and gradually build up to problems like 70+30. Gynzy is an online teaching platform for interactive whiteboards and displays in schools. With a focus on elementary education, Gynzy’s Whiteboard, digital tools, and activities make it easy for teachers to save time building lessons, increase student engagement, and make classroom management more efficient.
{"url":"https://www.gynzy.com/en-us/library/items/addition-to-100-with-the-zero-rule","timestamp":"2024-11-14T11:54:09Z","content_type":"text/html","content_length":"553255","record_id":"<urn:uuid:77a23c0e-657d-4a1b-ad64-a65f8b6b7f6a>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00291.warc.gz"}
Sathyabama University 2008 B.E Civil Engineering Mechanics of Solids - I - Question Paper
Wednesday, 30 January 2013 11:20

SATHYABAMA UNIVERSITY
(Established under section 3 of UGC Act, 1956)
Course & Branch: B.E - CIVIL (Part Time)
Title of the paper: Mechanics of Solids - I
Semester: I    Max. Marks: 80
Sub.Code: 620PT101 (2007/2007JAN/2008 JAN)
Time: 3 Hours    Date: 15-05-2008    Session: FN

PART A (10 x 2 = 20)
Answer All the Questions

1. Define the terms (i) rigidity modulus (ii) bulk modulus
2. A bar consists of two sections of lengths 200 mm and 300 mm with areas of cross section 400 mm^2 and 500 mm^2 respectively. It is subjected to an axial pull of 100 kN. Take Young's modulus (E) = 200 kN/mm^2. Find the total elongation.
3. Draw the shear force and bending moment diagrams for a cantilever of length L carrying a uniformly distributed load of w per metre length over its entire length.
4. State the relationship between loading, shear force and bending moment.
5. Define section modulus and state its significance.
6. What do you understand by neutral axis and moment of resistance?
7. Find the power that can be transmitted by a shaft rotating at 150 rpm under a torque of 400 N-m.
8. Define stiffness of a helical spring and write an expression for it.
9. A tensile load of 50 kN is gradually applied to a circular bar of 5 cm diameter and 4 m long. If the value of Young's modulus (E) = 2x10^5 N/mm^2, determine the strain energy absorbed by the steel bar.
10. How would you distinguish between a deficient frame and a redundant frame?

PART B (5 x 12 = 60)
Answer All the Questions

11. a. Draw the stress-strain curve and explain clearly the salient points of mild steel. (8)
b. The following data relate to a bar subjected to a tensile test:
Diameter of the bar = 30 mm
Tensile load = 54 kN
Gauge length = 300 mm
Extension of the bar = 0.112 mm
Change in diameter = 0.00366 mm
Calculate (i) Poisson's ratio (ii) Young's modulus. (4)
12.
A compound tube consists of a steel tube of 170 mm external diameter and 10 mm thickness and an outer brass tube of 190 mm external diameter and 10 mm thickness. The two tubes are of the same length. The compound tube carries an axial load of 1 MN. Find the stresses and the load carried by each tube and the amount by which it shortens. Length of each tube is 0.15 m. Take Young's modulus for steel (E[s]) = 200 GN/m^2 and for brass (E[b]) = 100 GN/m^2.
13. Draw the shear force and bending moment diagrams for the beam loaded as shown in the figure. Find the maximum bending moment.
14. A beam 7.5 m long has supports 5 m apart, there being an overhang of 1 m on the left and 1.5 m on the right. There is a point load of 5 kN at each free end, a uniformly distributed load of 8 kN/m over the supported length and 4 kN/m over the overhanging portion on the right. Construct shear force and bending moment diagrams.
15. A hollow rectangular column has external and internal dimensions of 120 cm deep x 80 cm wide and 90 cm deep x 50 cm wide respectively. A vertical load of 200 kN is transmitted in the vertical plane bisecting the 120 cm side and at an eccentricity of 10 cm from the geometric axis of the section. Calculate the maximum and minimum stresses in the section.
16. a. Sketch the shear stress distribution over a circular section. (2)
b. A 400 mm x 150 mm I-girder has 20 mm thick flanges and a 30 mm thick web. Calculate the maximum intensity of shear stress when the shear force at the cross section is 1.6 MN. Also sketch the shear stress distribution across the depth of the beam. Calculate the percentage of shear force carried by the web.
17. A shaft is required to transmit 245 kW power at 240 rpm. The maximum torque may be 1.5 times the mean torque. The shear stress is limited to 40 N/mm^2 and the twist to 1° per metre length. Determine the diameter required if (i) the shaft is solid (ii) the shaft is hollow with external diameter twice the internal diameter. Take modulus of rigidity = 80 kN/mm^2.
18.
Design a close-coiled helical spring of stiffness 20 N/mm. The maximum shear stress in the spring is not to exceed 80 N/mm^2 under a load of 500 N. The diameter of the coil is to be 10 times the diameter of the wire. Take modulus of rigidity = 84 kN/mm^2.
19. a. Derive an expression for the strain energy stored in a body when it is subjected to a tensile force. (4)
b. An unknown weight falls through a height of 10 mm onto a collar rigidly attached to the lower end of a vertical bar 500 cm long and 600 mm^2 in section. If the maximum extension of the rod is to be 2 mm, what is the corresponding stress and the magnitude of the unknown weight? Take Young's modulus (E) = 2 x 10^5 N/mm^2. (8)
20. A truss of span 7.5 m is loaded as shown in the figure. Find the reactions and forces in the members of the truss.
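As an unofficial aid (not part of the question paper), the standard formulas δ = Σ PL/AE and P = 2πNT/60 can be used to sanity-check questions 2 and 7 of Part A:

```python
import math

# Q2: total elongation of a stepped bar under an axial pull, delta = sum(P*L/(A*E)).
P = 100e3            # axial pull, N (100 kN)
E = 200e3            # Young's modulus, N/mm^2 (200 kN/mm^2)
sections = [(200, 400), (300, 500)]   # (length in mm, area in mm^2)
elongation = sum(P * L / (A * E) for L, A in sections)
print(f"Q2 elongation = {elongation:.2f} mm")   # 0.55 mm

# Q7: power transmitted by a shaft, P = 2*pi*N*T/60 with N in rpm and T in N·m.
N = 150
T = 400
power = 2 * math.pi * N * T / 60
print(f"Q7 power = {power / 1000:.2f} kW")      # about 6.28 kW
```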
{"url":"http://www.howtoexam.com/index.php?option=com_university&task=show_paper&paper_id=4634&title=Sathyabama+University+2008+B.E+Civil+Engineering+Mechanics+of+Solids+-+I+-+Question+Paper&Itemid=58","timestamp":"2024-11-04T02:47:50Z","content_type":"application/xhtml+xml","content_length":"38148","record_id":"<urn:uuid:54667f33-1cbf-4bb9-b36d-4ef575f22cee>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00484.warc.gz"}
Roots Of Complex Numbers Worksheet 2024 - NumbersWorksheets.com

Roots Of Complex Numbers Worksheet – The negative numbers worksheet is a great way to start teaching your kids the concept of negative numbers. A negative number is any number that is less than zero. It can be added or subtracted. The minus sign indicates a negative number. You can also write negative numbers in parentheses. Below is a worksheet to get you started. This worksheet has a range of negative numbers from -10 to 10.

Negative numbers are numbers whose value is less than zero

A negative number has a value below zero. It can be shown on a number line, to the left of zero. A positive number is written with a plus sign (+) before it, but writing the sign is optional. If a number is not written with a plus sign, it is assumed to be a positive number.

They are represented by a minus sign

In ancient Greece, negative numbers were not used. They were ignored, as Greek mathematics was based on geometrical methods. When European scholars began translating old Arabic texts from North Africa, they came to recognize negative numbers and embraced them. Today, negative numbers are represented by a minus sign. For more details on the origins and history of negative numbers, read this post. Then, try these examples to see how negative numbers have developed over time.

They can be added or subtracted

As you might already know, positive numbers are easy to add and subtract because their signs behave predictably. A negative number, on the other hand, can have a large absolute value even though its value is small: it measures distance below zero. Negative numbers can still be added and subtracted just like positive ones, although they follow some special rules for arithmetic. You can add and subtract negative numbers using a number line, applying the same rules for addition and subtraction as you do for positive numbers.

They are represented by a number in parentheses

A negative number is often written as a number enclosed in parentheses. In a computer, the negative sign is converted into its binary equivalent, and the two's complement is stored in the corresponding place in memory. The stored bit pattern, read as an unsigned value, looks like a positive number, even though it represents a negative one. If you have any questions about the meaning of negative numbers, you should consult a book on math.

They can be divided by a positive number

Negative numbers can be multiplied and divided like positive numbers. They can also be divided by other negative numbers. The sign of the result depends on the signs of both operands: multiplying or dividing a negative number by a positive number gives a negative result, while multiplying or dividing two negative numbers gives a positive result. To get the correct answer, you must decide which sign your answer should have. It is easier to keep track of a negative number when it is written in brackets.

Gallery of Roots Of Complex Numbers Worksheet
Simplifying Complex Numbers Worksheet
What Is The Square Root Of 15 Slidesharetrick
Question Video Finding The Square Roots Of Complex Numbers In Polar
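The remark about storing negatives via two's complement can be made concrete with a short, generic example (not tied to any particular worksheet):

```python
def twos_complement(n, bits=8):
    """Bit pattern used to store integer n in `bits` bits (two's complement)."""
    # Masking keeps the low `bits` bits, which for a negative n
    # is exactly its two's-complement representation.
    return n & ((1 << bits) - 1)

print(bin(twos_complement(-5)))   # 0b11111011, i.e. 251 when read as unsigned
print(twos_complement(-1))        # 255: all eight bits set
```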
{"url":"https://numbersworksheet.com/roots-of-complex-numbers-worksheet/","timestamp":"2024-11-03T05:57:03Z","content_type":"text/html","content_length":"53598","record_id":"<urn:uuid:33be64a6-18a8-4dd9-9c4c-3ec46c818601>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00559.warc.gz"}
Quantitative Aptitude Practice Arithmetic Aptitude Study Material

Digitization helps students explore and study their academic courses online, as this gives them flexibility in scheduling their learning at their convenience. Kidsfront has prepared unique course material on the Arithmetic Aptitude topic Time, Distance & Speed for Quantitative Aptitude Practice students. This free online study material will help students learn and practice the Time, Distance & Speed topic. It helps students learn every aspect of Time, Distance & Speed and prepare for exams by doing online test exercises as their study progresses in class. Kidsfront provides a unique pattern of learning Arithmetic Aptitude, with free comprehensive study material and plenty of Time, Distance & Speed exercises prepared by a team of professionals. Students can understand the Time, Distance & Speed concept easily and consolidate their learning by doing practice tests regularly until they excel.

Time Distance & Speed

1. a) 750 km  b) 450 km  c) 600 km  d) 400 km
2. a) 21 seconds  b) 27 seconds  c) 34 seconds  d) 19 seconds
3. a) 18 km/hr  b) 15 km/hr  c) 32 km/hr  d) 21 km/hr
4. a) 20 km  b) 29 km  c) 34 km  d) 11 km
5. a) 20 km/hr  b) 26 km/hr  c) 22 km/hr  d) 25 km/hr
6. a) 12.5 km/hr  b) 18 km/hr  c) 15.5 km/hr  d) 15 km/hr
7. a) 250 m  b) 290 m  c) 350 m  d) 360 m
8. a) 88.8 km/hr  b) 78.2 km/hr  c) 82.8 km/hr  d) 72.8 km/hr
9. a) 1:3  b) 3:2  c) 2:3  d) 3:4
10. a) 68 km/hr  b) 58 km/hr  c) 60 km/hr  d) 69 km/hr
{"url":"https://www.kidsfront.com/competitive-exams/study-material/Quantitative+Aptitude+Practice++Tests-Arithmetic+Aptitude-Time+Distance+&+Speed-p1.html","timestamp":"2024-11-04T05:48:25Z","content_type":"text/html","content_length":"89949","record_id":"<urn:uuid:33aa0b60-9a51-40b6-8355-2afd7a90e13c>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00523.warc.gz"}
Using breakeven analysis for better decisions

Professor, Agricultural Economics (Retired)

One of the merits of enterprise budgeting is the value of statistics obtained by producers as they engage in decision-making activities. Some of the easiest and most useful computations that can be obtained from enterprise data are breakeven values. As the name suggests, a breakeven gives the price or yield required for the revenue obtained from the enterprise to equal the costs encumbered to produce that revenue.

How to calculate breakeven prices and yields

For example, a breakeven price, assuming a particular yield, can be used in developing a marketing plan as it presents the price the producer must receive to cover all costs. When all costs are included in the enterprise budget, including opportunity costs of using the producer's capital and time, a breakeven price will provide a return to all contributions to the production process. When not all costs are included, the breakeven value will provide a return to cover whatever costs are included. A breakeven yield, assuming a particular price, identifies the yield the producer must obtain to cover costs involved in producing the enterprise. The formulas to determine these breakeven values are:

(1) Breakeven Price = Total Cost / Expected Yield
(2) Breakeven Yield = Total Cost / Expected Price

Applying breakeven analysis: winter wheat example

For example, using data from a 2023 winter wheat budget for the Nebraska Panhandle (Klein and McClure), breakeven prices and yields are presented in Table 1 for total operating and total costs, including overhead.

Table 1. Breakeven Selling Prices and Yields for Irrigated Hard Red Winter Wheat in the Nebraska Panhandle.

                            Cost Per Acre ($)   Breakeven Price ($/bu, @ 90 bu)   Breakeven Yield (bu, @ $9.00/bu)
1. Total Operating Costs         $491.45                   $5.46                              54.61
2. Total Economic Costs          $658.46                   $7.32                              73.16

The per acre total operating costs (including seed, fertilizer, pesticides, custom services, paid labor, fuel and energy, repairs and maintenance, and interest on operating capital) total $491.45 per acre. Adding general overhead, equipment depreciation and opportunity cost, and land opportunity cost brings the total economic costs to $658.46. Note that opportunity cost for the producer's time is not included. The breakeven wheat price to cover operating expenses is $5.46 ($491.45/90 bu), while the breakeven wheat yield is 54.61 bushels ($491.45/$9.00). Likewise, the breakeven price to cover total economic costs ($658.46/90 bu) is $7.32 and the breakeven wheat yield is 73.16 bushels ($658.46/$9.00).

Profitability goals and breakeven analysis

The formulas for breakeven price and yield are derived from the economic identity:

(3) Profit = Price x Yield – Costs

Setting profit = 0 (price x yield – costs = 0) and solving for price gives the breakeven price in equation (1). This formula can be useful in finding the breakeven price or yield when profit equals a specified number other than zero, such as a profit that includes living expenses. For example, to determine a breakeven price that not only covers the cost of production, but also includes a family living expense of $100 per acre, profit in equation (3) would be set equal to $100, and solving for the breakeven price would give:

(4) Breakeven Price = (Total Cost + 100) / Expected Yield

In the above wheat example, the breakeven price given total economic costs would be $8.43.

Using breakeven analysis for input decisions

This same breakeven concept can be used to assist in other decision-making activities, such as finding the additional yield required to break even when adding an additional amount of nitrogen to a wheat field, at varying prices of nitrogen and at varying prices of wheat.
For example, in the wheat enterprise budget being used in this article, 125 pounds of 32-0-0 is applied to the field by pivot. To determine the increased yield that must be obtained to cover the marginal cost of applying an additional 25 pounds of nitrogen, the following formula may be used:

(5) (additional pounds of N x price of N per pound) / price of wheat per bushel

Doing this calculation for varying prices of N and wheat produces a decision chart similar to Table 2.

Table 2. Increase in Yield Required to Break Even Adding 25 Additional Pounds of Nitrogen (32-0-0) at Varying Prices of Nitrogen.

Price of N                          Price of Wheat ($/bu)
($/lb)      $8.00  $8.50  $9.00  $9.50  $10.00  $10.50  $11.00  $11.50  $12.00
$0.70        2.2    2.1    1.9    1.8    1.8     1.7     1.6     1.5     1.5
$0.80        2.5    2.4    2.2    2.1    2.0     1.9     1.8     1.7     1.7
$0.90        2.8    2.6    2.5    2.4    2.3     2.1     2.0     2.0     1.9
$1.00        3.1    2.9    2.8    2.6    2.5     2.4     2.3     2.2     2.1
$1.10        3.4    3.2    3.1    2.9    2.8     2.6     2.5     2.4     2.3
$1.20        3.8    3.5    3.3    3.2    3.0     2.9     2.7     2.6     2.5
$1.30        4.1    3.8    3.6    3.4    3.3     3.1     3.0     2.8     2.7

If nitrogen is priced at $1 per pound, increasing nitrogen by 25 pounds per acre requires an additional wheat yield of 2.8 bushels per acre, assuming the price of wheat is $9 per bushel. As the price of nitrogen increases, a greater yield must be obtained to break even. As the price of wheat increases, a lower yield is required to make the additional fertilizer break even at any given price of nitrogen. These results mirror the economic principles that say when the price of an input increases, less should be used in the production process, and, as the price of the output increases, more inputs should be used to maximize profit.

Enterprise budgeting and breakeven analysis made simpler

To create the enterprise budget necessary to determine breakeven prices and yields requires time and effort. Fortunately, the Center for Agricultural Profitability (CAP) has developed the Agricultural Budget Calculator (ABC) to make this process much easier.
ABC can be accessed free of charge at https://agbudget.unl.edu. Online and in-person training sessions can be accessed at https://cap.unl.edu. The base ABC program guides the user in developing economic and cash enterprise budgets. Breakeven tables and several reports useful in decision-making are available from completed enterprise budgets in ABC.

Klein, Robert and Glennis McClure. 2023 Budget 82-Wheat-Winter, Panhandle, No Till, in Rotation, 90-bushel Yield, Pivot Irrigated Electric, 800 GPM 35 PSI, 6 acre/inches. https://cap.unl.edu/budgets/crops-2022/2023-nebraska-crop-budgets-112122-final.pdf, page 94.
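The breakeven formulas and tables in the article can be double-checked with a few lines of code (a sketch using the article's numbers; the function names are my own and this is not the ABC tool):

```python
def breakeven_price(total_cost, expected_yield, target_profit=0.0):
    """Price per bushel needed to cover cost, plus an optional profit target."""
    return (total_cost + target_profit) / expected_yield

def breakeven_yield(total_cost, expected_price):
    """Yield per acre needed to cover cost at a given price."""
    return total_cost / expected_price

def extra_yield_needed(lbs_n, price_n, price_wheat):
    """Extra bushels needed to pay for additional nitrogen (Table 2 logic)."""
    return lbs_n * price_n / price_wheat

# Table 1 values for total economic costs of $658.46/acre at 90 bu and $9.00/bu:
print(round(breakeven_price(658.46, 90), 2))        # 7.32
print(round(breakeven_yield(658.46, 9.00), 2))      # 73.16
# Adding a $100/acre family-living target:
print(round(breakeven_price(658.46, 90, 100), 2))   # 8.43
# Table 2: 25 extra pounds of N at $1.00/lb with wheat at $9.00/bu:
print(round(extra_yield_needed(25, 1.00, 9.00), 1)) # 2.8
```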
{"url":"https://cap.unl.edu/management/using-breakeven-analysis-better-decisions","timestamp":"2024-11-10T21:08:03Z","content_type":"text/html","content_length":"90231","record_id":"<urn:uuid:0a89f404-8859-4e71-b683-3ea76fe19417>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00266.warc.gz"}
δ-presence, for when being in the dataset is sensitive - Ted is writing things

Remember \(k\)-map? We used this definition when the attacker didn't know who was in the dataset. Let's go back to this setting, with a slightly different scenario. You're no longer a doctor studying human sexual behavior. You're still a doctor, but this time, you're specialized in treating a particular chronic disease. Instead of running a survey, you're running a clinical trial for a new drug to treat this disease. Similarly, you want to share the data with other people.

At first glance, these two settings look similar — but there is a crucial difference. Which information is sensitive, exactly? For the survey, the answers of each participant are sensitive, as they reveal intimate details. But for the clinical study, being in the dataset is the sensitive information. If someone figures out that you've taken part in the study, they learn that you suffer from this disease.

So, what does it change in practice? Suppose that your dataset contains the following records:

ZIP code   age

You do a little research on who lives in ZIP code 85535. You learn that in this ZIP code:

• 5 people have ages between 10 and 19;
• 5 people have ages between 20 and 29;
• 10 people have ages between 30 and 39;
• 10 people have ages between 40 and 49;
• and 20 people are 50 or older.

Transforming this part of your dataset to have it satisfy \(5\)-map is easy:

ZIP code   age
85535      10-19
85535      10-19
85535      10-19
85535      10-19
85535      10-19
85535      40-49
…

But what has gone wrong there? An attacker, using only public data, knows that there are 5 people aged between 10 and 19 in ZIP code 85535. Then, by looking at your de-identified dataset, the attacker can figure out that all of them are part of your data. Thus, they all have this specific disease. The attacker learned something sensitive about individuals, without re-identifying any record. Just like in the example of \(l\)-diversity, we need yet another definition.
Introducing… \(\delta\)-presence!

Remember what we counted for our previous privacy definitions? For each combination of quasi-identifier attributes:

• for \(k\)-anonymity, we counted the number of records in the dataset;
• and for \(k\)-map, we counted the number of records in the larger population.

What went wrong in our leading example? For certain attributes, these numbers were equal. To detect this, we now compute the ratio between those two numbers. Then, the \(\delta\) in \(\delta\)-presence is the largest ratio across the dataset.

Consider the dataset above. The ratio for the records (85535, 10-19) is \(5/5=1\), and the ratio for the records (85535, 40-49) is \(1/10=0.1\). Thus, since we defined \(\delta\) as the greatest ratio, we have \(\delta=1\). Since the \(k\) of \(k\)-map is always larger than the \(k\) of \(k\)-anonymity, this is the maximum possible value of \(\delta\). Saying that a dataset satisfies \(1\)-presence gives zero guarantees.

Avoiding \(\delta=1\) is not the only interesting goal. We also want this value to be small. The lower, the better. Consider what it means if \(\delta=0.95\). The attacker might learn that their target has a 95% chance of being in the dataset. It's not quite a 100% certainty, but it still can be problematic. For example, it might be more than enough for an insurance company to deny you coverage.

How do we get to a lower \(\delta\) in our previous example? One solution would be to generalize the age further:

ZIP code   age
85535      10-39
85535      10-39
85535      10-39
85535      10-39
85535      10-39
85535      40-49

Then, the ratio for the records (85535, 10-39) becomes \(5/(5+5+10)=0.25\). The ratio for record (85535, 40-49) is still \(0.1\), so \(\delta=0.25\). (Assuming that no other record in the dataset has ZIP code 85535, and all other records have a smaller ratio.)

\(\delta\)-presence was first proposed by Nergiz et al. in a 2007 paper (pdf). In this paper, the definition is a bit different.
The authors compute not only the largest ratio, but also the smallest one. The \(\delta\) parameter hides two parameters \(\left(\delta_{\text{min}},\delta_{\text{max}}\right)\). This was done to protect against the symmetric attack: hiding that someone is not in the dataset. I never encountered a situation where this is a real concern, so I simplified it a bit for this post.

\(\delta\)-presence in practice

\(\delta\)-presence is computed from the ratios between quantities used in \(k\)-anonymity and \(k\)-map. While \(k\)-anonymity is very easy to compute, \(k\)-map is much harder. As such, \(\delta\)-presence has very similar practical characteristics to \(k\)-map. Since you don't typically have access to the full larger dataset, you can't compute \(\delta\) exactly. You can use a pessimistic approximation if your data is a sample of a larger dataset that you own. You can also do the work of estimating \(\delta\)-presence by hand.

What about statistical approximations? Nergiz et al. proposed an interesting method in a followup paper (pdf). Unfortunately, two of its requirements make it hardly usable in practical scenarios.

• First, to run the algorithm, you need to "describe your beliefs about the world" (in a statistical sense). Unless you're a statistician, this is not something you can really do.
• Second, computing the algorithm exactly is very expensive. The authors propose a lot of approximations to make it tractable… But then, using them makes the results even more uncertain.

Finally, if you still want to use this algorithm, you would also likely have to implement it yourself. I don't know of any available software that does it for you.

Like \(k\)-map, in theory, it often makes sense to use \(\delta\)-presence. It's a pity that both definitions are so difficult to use in practice! Having simpler (and more usable) approximation algorithms would be great… Which is why I have done some research work in that direction.
And the results of this work will be the topic of a future post! =)
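As an aside, the ratio computation from the worked example above can be sketched in a few lines of code (my own illustration, not code from the post or from Nergiz et al.):

```python
from collections import Counter

def delta(dataset, population):
    """Largest dataset-to-population count ratio over quasi-identifier values."""
    d, p = Counter(dataset), Counter(population)
    return max(d[q] / p[q] for q in d)

# 5 people aged 10-19 live in ZIP 85535, and all 5 appear in the dataset.
data = [("85535", "10-19")] * 5 + [("85535", "40-49")]
pop = [("85535", "10-19")] * 5 + [("85535", "40-49")] * 10
print(delta(data, pop))     # 1.0 — no guarantee at all

# After generalizing ages to 10-39 (20 such people live in the ZIP code):
data2 = [("85535", "10-39")] * 5 + [("85535", "40-49")]
pop2 = [("85535", "10-39")] * 20 + [("85535", "40-49")] * 10
print(delta(data2, pop2))   # 0.25
```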
{"url":"https://desfontain.es/blog/delta-presence.html","timestamp":"2024-11-11T10:52:46Z","content_type":"text/html","content_length":"21289","record_id":"<urn:uuid:4db99efa-8f68-44fe-a5ce-ded9fb56d471>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00520.warc.gz"}