Shapes of molecules
VSEPR stands for Valence Shell Electron Pair Repulsion and it is a useful model for predicting the shapes and bond angles of molecules.
Electron pairs are negatively charged so they will repel each other if they get too close. The basic principle of VSEPR is that pairs of electrons will arrange themselves in space so as to cause the
least amount of repulsion possible.
Bonding pairs vs lone pairs
Pairs of electrons can either be classified as bonding pairs (if they are located in a covalent bond shared between two atoms) or lone pairs (if they are left over after bonding has taken place). The
important thing is that lone pairs repel more than bonding pairs. The order is as follows:
Lone pair-lone pair repulsion > lone pair-bonding pair repulsion > bonding pair-bonding pair repulsion
The reason for this difference lies in the shapes of the orbitals these electrons occupy. A bonding pair is localised in a bond between two nuclei, so it is more constrained in where it can be. It takes up less angular space than a lone pair, which is both more spread out and held closer to its nucleus. Let's look at some examples.
Linear – BeH2
Beryllium has 2 outer electrons, so when it bonds to 2 hydrogens it has no electrons left over.
The bonding pairs arrange themselves to be as far away from each other as possible, which in this case means 360/2 = 180 degrees apart.
Trigonal planar – BH3
Boron has 3 outer electrons, so when it bonds to 3 hydrogens it has no electrons left over. The 3 bonding pairs of electrons arrange themselves 360/3 = 120 degrees apart.
Tetrahedral – CH4
Carbon has 4 outer electrons, so when it bonds to 4 hydrogens, there are no electrons left over. The bonds arrange themselves as far apart as possible in three dimensions: 109.5 degrees from each other.
Pyramidal – NH3
Nitrogen has 5 outer electrons, so when it bonds to 3 hydrogens, there are 2 electrons (i.e. one lone pair) left over.
Now we can use a little trick to find the bond angle. The pyramidal structure is like a tetrahedral structure (since it has 4 pairs of electrons in total), except that it has 3 bonding pairs and 1 lone pair instead of 4 bonding pairs. Since lone pairs repel more than bonding pairs, we can treat the lone pair as reducing the bond angle by about 2.5 degrees compared to the tetrahedral structure. In this case:
109.5 – 2.5 = 107 degrees
V-shaped – H2O
Water is an example of a v-shaped molecule (also called non-linear or bent). Oxygen has 6 outer electrons, so when it bonds to 2 hydrogens, it has 4 electrons left over (i.e. 2 lone pairs). As before, we can consider it to be like a tetrahedral structure (4 electron pairs in total), but 2 of these are now lone pairs, so they repel more. The bond angle therefore decreases by 2 × 2.5 degrees compared to a tetrahedral structure:
109.5 – (2 x 2.5) = 104.5 degrees
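The examples above all follow one simple rule of thumb: start from the ideal tetrahedral angle and subtract roughly 2.5 degrees per lone pair. Here is a minimal sketch of that heuristic (the function name and the 2.5-degree constant are just this post's approximation, not an exact law):

```python
# Rough VSEPR bond-angle heuristic for molecules with 4 electron
# domains (tetrahedral parent geometry): each lone pair squeezes
# the remaining bonding pairs by about 2.5 degrees.
TETRAHEDRAL_ANGLE = 109.5
LONE_PAIR_SQUEEZE = 2.5  # approximate reduction per lone pair

def approx_bond_angle(bonding_pairs: int, lone_pairs: int) -> float:
    """Approximate bond angle for a central atom with 4 electron domains."""
    if bonding_pairs + lone_pairs != 4:
        raise ValueError("heuristic only covers 4 electron domains")
    return TETRAHEDRAL_ANGLE - LONE_PAIR_SQUEEZE * lone_pairs

print(approx_bond_angle(4, 0))  # CH4 -> 109.5
print(approx_bond_angle(3, 1))  # NH3 -> 107.0
print(approx_bond_angle(2, 2))  # H2O -> 104.5
```

Note that this is purely a memorisation aid for the tetrahedral family; it does not apply to the 2-, 3-, 5- or 6-domain geometries discussed elsewhere in this post.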
Trigonal bipyramidal – PF5
This has two bond angles: 120 degrees between the three equatorial bonds, and 90 degrees between the axial bonds and the equatorial plane. The structure is like a trigonal planar structure with two additional bonds passing through the centre of the molecule at right angles to the other 3 bonds.
Octahedral
When there are 6 bonding pairs, all of them arrange themselves 90 degrees from each other.
nForum – Discussion Feed (spectral theory)
zskoda comments on "spectral theory" (24179)
Urs, while it is good that the spectral theorem is included in the functional analysis table of contents, and that it has the functional analysis toc bar, I do not like that spectral theory is also included and also has this toc bar. My understanding is that spectral theory is a much wider subject on the relation between the possibly categorified and possibly noncommutative function spaces (sheaf categories, noncommutative analogues) and the specific "singular" features of those, like prime ideals, certain special objects in abelian categories, points of spectra in the operator framework, etc. In any case, from the $n$POV it is NOT a part of functional analysis, though some manifestations are. Likewise, the concept of a space is not a subject of functional analysis, though some spaces are defined in the language of operator algebras. I find spectral theory on an equal footing with space, "quantity", etc. Of course, the entry currently does not reflect this much (though it has a section on spectra in algebraic geometry), but it eventually will! Thus I will remove it from the functional analysis contents.
One should also point out that using generators in the proof of Giraud's reconstruction theorem of a site out of a topos is a variant of the spectral idea: just as points form certain spaces, so generators of various kinds generate or form a category. This is behind many spectral constructions (including the recent Orlov spectrum, which is very laconic but stems from that) and reconstruction theorems; and if the category corresponds to coherent sheaves over a variety, then often the geometric features of the variety give certain contributions to the spectrum.
Getting the sum of the absolute values of each axis of a Vector3
Title says it all, really.
I want to get the sum of the absolute values of each axis in a Vector3, such that this Vector3(-5, 3.2, -7) would give me 15.2. Is there an elegant/best practiced way to do this?
Vector3 vec = new Vector3(-5f, 3.2f, -7f);
float total = Mathf.Abs(vec.x) + Mathf.Abs(vec.y) + Mathf.Abs(vec.z);
This gives you the 15.2 value you expect.
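For reference outside Unity, the sum of the absolute values of the components is just the L1 (Manhattan) norm of the vector. A NumPy equivalent (not part of the original answer, just an illustration):

```python
import numpy as np

vec = np.array([-5.0, 3.2, -7.0])

# Sum of the absolute values of each component = the L1 (Manhattan) norm.
total = np.abs(vec).sum()

# Equivalent built-in: ord=1 selects the L1 norm.
total_alt = np.linalg.norm(vec, ord=1)
print(total, total_alt)
```

Both expressions give 15.2 for this vector.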
Experimental design question
The Dunnett test is characterized by a particular minimum critical t-value (t), specified for a given number of treatments (k) and degrees of freedom (nk − k, where n is the number of replicates, for a balanced experiment). If you hold n constant, how does t change as a function of k when k becomes large? Either a closed-form equation (if there is one), or else a big table of Dunnett critical values on which I can do a regression, would be fine. I'm looking for a bigger table than I can find through a Google search or in a statistics textbook. I've tried using the standard table from statistics textbooks and it's not detailed enough for me to really see the behavior at large k.
The basic issue here is I want to plot exactly how the critical value changes, for comparisons against a single control group, as you add more and more ‘test’ groups.
How to go larger?
1. One reason it's not published is that your statistics package has it built in. So you might be able to find a function, e.g., dunnett_critical_value(n, alpha, t)…
2. If you don't have the function to calculate it, why bother going to look up the formula? Graph the curve for n from 3 to 12…
What shape is that curve? Extrapolate the curve to 15, 20… you can see the value stops changing much, so doubling the number of groups doesn't change it too much.
Interpretation: the law of large numbers kicking in… it's really a numerical statement of what's happening until you have large numbers…
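If no printed table goes high enough, one option is to compute the two-sided Dunnett critical value directly by Monte Carlo and plot it against k yourself. With equal group sizes, the k treatment-vs-control t statistics share the control mean and a pooled variance estimate (pairwise correlation 1/2), so the critical value is the (1 − α) quantile of max|T_i|. A rough numpy sketch (the function name, α, and simulation size are illustrative choices, not a library API):

```python
import numpy as np

def dunnett_crit(k, df, alpha=0.05, nsim=200_000, seed=0):
    """Monte Carlo two-sided Dunnett critical value for k
    treatment-vs-control comparisons with equal group sizes.

    Each comparison statistic is T_i = (Z_i - Z_0) / (sqrt(2) * S),
    where Z_0..Z_k are iid standard normal group means and
    S^2 ~ chi2(df)/df is the shared pooled-variance estimate --
    sharing Z_0 and S reproduces the 1/2 pairwise correlation.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((nsim, k + 1))          # control + k test groups
    s = np.sqrt(rng.chisquare(df, size=nsim) / df)  # pooled sd estimate
    t = (z[:, 1:] - z[:, :1]) / (np.sqrt(2.0) * s[:, None])
    return np.quantile(np.abs(t).max(axis=1), 1.0 - alpha)

# The critical value grows only slowly as k increases:
for k in (1, 2, 5, 10, 20):
    print(k, round(dunnett_crit(k, df=60), 3))
```

Plotting these values shows exactly the flattening described above: each doubling of k adds less and less to the critical value. (Recent SciPy versions also ship `scipy.stats.dunnett` for running the test itself on data.)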
The rotational spectrum of ^12C[2]HD in the ground and excited bending states: an improved ro-vibrational global analysis
A&A, Volume 559, November 2013, Article Number A125, 9 pages
Section: Atomic, molecular, and nuclear data
DOI: https://doi.org/10.1051/0004-6361/201322375
Published online: 26 November 2013
A&A 559, A125 (2013)
The rotational spectrum of ^12C[2]HD in the ground and excited bending states: an improved ro-vibrational global analysis
^1 Dipartimento di Chimica “Giacomo Ciamician”, Università di Bologna, via F. Selmi 2, 40126 Bologna, Italy
e-mail: claudio.degliesposti@unibo.it; luca.dore@unibo.it
^2 Dipartimento di Chimica Industriale “Toso Montanari”, Università di Bologna, viale del Risorgimento 4, 40136 Bologna, Italy
e-mail: luciano.fusina@unibo.it; filippo.tamassia@unibo.it
Received: 26 July 2013
Accepted: 11 October 2013
Rotational transitions of ^12C[2]HD were recorded in the range 100–700 GHz for the vibrational ground state and for the bending states v[4] = 1 (Π), v[5] = 1 (Π), v[4] = 2 (Σ^+ and Δ), v[5] = 2 (Σ^+
and Δ), v[4] = v[5] = 1 (Σ^−, Σ^+ and Δ), v[4] = 3 (Π and Φ) and v[5] = 3 (Π and Φ). The transition frequencies measured in this work were fitted together with all the infrared ro-vibrational
transitions involving the same bending states available in the literature. The global fit allowed a very accurate determination of the vibrational, rotational, and ℓ-type interaction parameters for
the bending states up to v[4] + v[5] = 3 of this molecule. The results reported in this paper provide a set of information very useful for undertaking astronomical searches in both the mm−wave and
the infrared spectral regions. The parameters from the global fit can be used to calculate accurate rest frequencies for rotational transitions in the ground state or in excited vibrational states
involving the bending modes. Pure rotational transition frequencies up to 1 THz are listed.
Key words: molecular data / methods: laboratory: molecular / techniques: spectroscopic / catalogs
© ESO, 2013
1. Introduction
Acetylene can be found in several astronomical environments: in molecular clouds (Lacy et al. 1989), in massive young stellar objects and planet forming zones (Lahuis & van Dishoeck 2000; Bast et al.
2013), in circumstellar envelopes of AGB stars (Ridgway et al. 1976; Matsuura et al. 2006; Fonfría et al. 2008), and it has been identified in cometary comae (Mumma et al. 2003) as well. This
unsaturated hydrocarbon may react with radicals – atomic C, CN, and CH – to form complex molecules in starless cores (Herbst 2005), it plays a key role in the formation of circumstellar carbon chain
molecules (Cherchneff & Glassgold 1993) and it is even a possible precursor of benzene in a carbon-rich PPN (Woods et al. 2002).
However, ^12C[2]H[2] does not have a permanent dipole moment and so cannot be detected by (sub-)millimetre telescopes; interstellar acetylene has therefore been detected by observing its
vibration-rotation bands. The only detectable sub-millimetre features could be those due to some P-branch high-J transitions of the ν[5] ← ν[4] difference band in the THz region (Yu et al. 2009). On
the other hand, non-centrosymmetric isotopologues of acetylene, such as the subject of this paper ^12C[2]HD, do have a small permanent electric dipole moment. The first astronomical detection of ^12C
[2]HD is recent, namely it has been observed in Titan’s atmosphere through the Composite InfraRed Spectrometer (CIRS) mounted on the Cassini spacecraft (Coustenis et al. 2008). From these
observations it was possible to derive the D/H ratio on Titan, which was previously determined only through the transitions of the CH[4]/CH[3]D pair.
Molecules containing less abundant isotopes are very relevant from an astrophysical point of view. Several species containing D, ^13C, ^15N, ^18O, among the most important, provide a tool to assess
isotopic ratios in several astronomical environments (see for instance Herbst 2003; Caselli & Ceccarelli 2012; Bézard 2009). The D/H isotopic ratio is of particular interest for several reasons. It
is an important experimental constraint on the Big Bang models, as deuterium was formed in abundance only in this event. It can also provide key information on the chemical processes that lead to the
formation of complex organic molecules (Sandford et al. 2001). Although the cosmic D/H ratio is of the order of 10^-5, abundances of a few percent with respect to their parent species
can be produced in the interstellar medium through isotopic fractionation mechanisms (Herbst 2003). It is known that in cold dense interstellar clouds D-enrichment proceeds through gas phase
ion-molecule exothermic reactions, but also through gas-grain chemistry (Sandford et al. 2001). Alternative routes for achieving D-H fractionation in more energetic environments and of interest for
complex molecules are: a) gas-phase unimolecular photodissociation; and b) ultraviolet photolysis in D-enriched ice mantles (Sandford et al. 2001).
Laboratory rotational spectra have been observed in the past for several monodeuterated species of acetylene, including also ^13C containing isotopologues (Wlodarczak et al. 1989; Matsumura et al.
1980; Cazzoli et al. 2008; Degli Esposti et al. 2013). Rotational transitions of ^12C[2]HD were measured up to 418 GHz for the ground vibrational state (GS) and for the v[4] = 1 and v[5] = 1 excited
states, ν[4] and ν[5] being the trans and cis bending modes, respectively (Wlodarczak et al. 1989). Pure rotational transitions in the GS up to J′′ = 10, around 650 GHz, were observed for ^12C[2]HD,
H^12C^13CD and H^13C^12CD (Cazzoli et al. 2008). In the same study, ab initio calculations were performed at various levels in order to predict the electric dipole moment for these species and the
equilibrium structure of acetylene. The calculated dipole moment does not show sizable variations upon isotopic substitution of one carbon atom, and is approximately 0.01 D for all monodeuterated
isotopologues in the GS. The ν[5] ← ν[4] difference band and associated hot bands for ^12C[2]HD have been recorded recently in the far-infrared (FIR) region, between 60 and 360 cm^-1, using the
synchrotron radiation at the Canadian Light Source (Predoi-Cross et al. 2011). The same band for the H^12C^13CD and H^13C^12CD isotopologues was also detected and analysed.
As far as the infrared (IR) region is concerned, several papers have been published on ^12C[2]HD. The most recent are the investigations of the bending states up to v[4] + v[5] = 3 (Fusina et al.
2005a) and of the stretching-bending bands in the 1800–4700 cm^-1 spectral region (Fusina et al. 2005b). In both cases the spectra were recorded by Fourier transform infrared spectroscopy (FTIR). A
study of the integrated band intensities in the 25–2.5 μm window was also published (Jolly et al. 2008).
In the present paper we report on the observation of the pure rotational transitions of ^12C[2]HD up to 657 GHz, that is, in bands 3, 6, 7, 8 and 9 of the Atacama Large Millimeter Array (ALMA). A
total of 168 transitions were assigned in the GS and in various excited vibrational bending states. The rotational lines detected in this work were fitted together with the FIR (Predoi-Cross et al.
2011) and IR lines (Fusina et al. 2005a). The spectroscopic parameters obtained from the final global fit are determined with an excellent accuracy.
The high accuracy of the millimetre- and submillimetre-wave data presented in this paper, together with the increasing sensitivity of new observation systems, such as ALMA, will favour the
observation of transitions belonging to this species.
It should also be stressed that the dipole moment of ^12C[2]HD is strongly enhanced by the bending vibrations. Therefore, considering that some chemically rich regions, e.g. IRC+10216 (Cernicharo et
al. 2011), show a high degree of vibrational excitation, this will facilitate the detection of emission lines in the bending states ν[4] or ν[5] of ^12C[2]HD. In Sect. 3.1, the dipole moment
variation in excited bending states is discussed.
2. Experimental details
The sample of ^12C[2]HD was purchased from CDN Isotopes (98.9% purity). The rotational spectra of ^12C[2]HD were observed in selected frequency regions between 100 and 700 GHz, using a
source-modulation mm-wave spectrometer which employs Gunn oscillators (RPG Radiometer Physics GmbH and J. E. Carlstrom Co.) as main radiation sources covering the fundamental frequency range 75–124
GHz. Higher frequencies were generated using three different frequency multipliers (VDI – Virginia Diodes, Inc. and RPG). The Gunn oscillators were phase-locked to the suitable harmonic of the
frequency emitted by a computer-controlled frequency synthesizer (Schomandl), referenced to an external rubidium frequency standard (SRS Stanford Research System). This guaranteed an absolute
accuracy of ca. 20 Hz to the frequency scale. A liquid-helium-cooled InSb detector (QMC Instr. Ltd.) was employed. The Gunn oscillators were frequency modulated at 6 kHz, and the detected signals
were demodulated by a lock-in amplifier tuned at twice the modulation frequency, so that the second derivative of the actual spectrum profile was detected by the computer-controlled acquisition system.
Transition frequencies were recovered from a line-shape analysis of the spectral profile (Dore 2003); their accuracy, estimated by repeated measurements, was in the range 5–30 kHz depending on the
signal-to-noise ratio of the recorded lines.
The absorption cell was a 3.5 m long, 10 cm in diameter glass tube equipped with polyethylene windows. A double pass arrangement based on a wire grid polarizer and a roof mirror (Ziurys et al. 1994;
Dore et al. 1999) was employed to increase the absorption path. Sample pressures of a few tens of mTorr were employed during the measurements.
3. Results and discussion
3.1. Rotational analysis
Fig. 1
The J = 5 ← 4 transition of ^12C[2]HD in the v[5] = 1, ground, and v[4] = 1 vibrational states. The low-frequency components of each ℓ-doublet are displayed. Fourteen scans are co-added; the total integration time is 850 s.
Table 1
Dipole moments, vibrational term values, populations and intensity calculations for several excited vibrational bending states of ^12C[2]HD.
The very small dipole moment of ^12C[2]HD was first determined by Matsumura et al. (1980), who performed Stark-effect measurements on the J = 2 ← 1 rotational transition of ^12C[2]HD in the v[4] =
1,3 and v[5] = 1,3 states. The obtained dipole moment values are: − 0.02359(5) D for v[4] = 1, 0.05601(9) D for v[5] = 1, − 0.09077(26) D for v[4] = 3 and 0.1472(21) D for v[5] = 3. These
experimental values led to an extrapolated GS dipole moment of 0.01001(15) D, since the dipole moments of the v[4] = 1 and v[4] = 3 states have opposite sign with respect to that of the GS. The increase of
the dipole moment values due to vibrational excitation causes a considerable intensity enhancement of the excited state rotational lines, so that their detection is much easier than expected from an
evaluation of the population factors. At room temperature (kT = 207 cm^-1), the population factors are N[4]/N[0] = exp ( − 518/kT) = 0.0819 and N[5]/N[0] = exp ( − 677/kT) = 0.0378, which should
reduce the intensity of the rotational lines by a factor of 12 for v[4] = 1 and of 26 for v[5] = 1. It should be noted that the bending states are doubly degenerate. However, the factor 2, which
doubles the population of v[4] = 1 and v[5] = 1, has no effect on the intensity of the rotational lines. In fact, the ℓ-type doubling removes the degeneracy of the levels and rotational transitions
are allowed only within each set. On the other hand, from the experimental μ values one can calculate the following ratios: (μ[4]/μ[0])^2 ≃ 5.6 and (μ[5]/μ[0])^2 ≃ 31, which partly compensate the
unfavourable population factors of the excited states. The intensity ratio between rotational lines in the excited bending states and in the GS can be calculated as $I_v/I_0 = (\mu_v/\mu_0)^2\,{\rm e}^{-E_v/kT}$ (1). The
rotational transitions in v[4] = 1 are expected to be less intense than those of the GS by only a factor 12/5.6 ≃ 2, whereas the lines in v[5] = 1 should be even stronger than those in the GS by a
factor 31/26 ≃ 1.2. Figure 1 shows a 250 MHz frequency scan in which the J = 5 ← 4 transitions of v[5] = 1, ground, and v[4] = 1 states are simultaneously present. The experimental intensity ratios
nicely agree with the predicted ones. In any case, for any combination of bending vibrational quanta, the dipole moment can be calculated by applying the usual expression for the vibrational dependence
$\mu_v = \mu_0 + \sum_i \delta\mu_i\, v_i$ (2), as reported in Eq. (11) of Matsumura et al. (1980). The values of the parameters δμ[i] are: δμ[4] = − 0.03360(13) D and δμ[5] = 0.04600(17) D, as given in Table 3 of the same
reference. For mixed excitations of ν[4] and ν[5] there is a less significant enhancement on the dipole moment, and therefore on the intensity, since δμ[4] and δμ[5] have opposite sign and comparable
magnitude. Indeed, only transitions in the combination v[4] + v[5] could be detected, whereas in higher-order mixed vibrational excitations they were too weak to be observed.
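Eqs. (1) and (2) can be checked numerically against the factors quoted above. A small sketch, using the dipole moments and term values given in this section:

```python
import math

kT = 207.0  # cm^-1, room temperature

# (dipole moment / D, vibrational term value / cm^-1), from this section
states = {
    "GS":     (0.01001, 0.0),
    "v4 = 1": (0.02359, 518.0),
    "v5 = 1": (0.05601, 677.0),
}

mu0 = states["GS"][0]
for name, (mu, E) in states.items():
    # Eq. (1): line intensity relative to the corresponding GS line
    ratio = (mu / mu0) ** 2 * math.exp(-E / kT)
    print(f"{name}: I_v/I_0 = {ratio:.2f}")
```

This reproduces the factors quoted in the text: the v[4] = 1 lines come out roughly a factor of 2 weaker than the GS lines, and the v[5] = 1 lines about 1.2 times stronger.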
A summary of the intensity predictions at 300 K and 500 K for rotational lines of some vibrationally excited bending states is reported in Table 1. A large temperature effect is evident: at 500 K the transitions in vibrational states up to the doubly excited ν[4] and ν[5] are stronger than the ground-state ones.
A very limited number of excited-state transitions of ^12C[2]HD have previously been observed only for v[4] = 1,3 (Π) and v[5] = 1,3 (Π) (Wlodarczak et al. 1989; Matsumura et al. 1980). We have
considerably enlarged the previous data-set by measuring more than 150 new line frequencies, spanning J values from 1 to 10, corresponding to transitions in the ground state, in the first excited
vibrational bending states v[4] = 1 and v[5] = 1 (Π), in the doubly excited v[4] = 2 and v[5] = 2 states (Σ^+ and Δ), in the combination state v[4] = v[5] = 1 (Σ^−, Σ^+ and Δ) and in the triply
excited v[4] = 3 and v[5] = 3 states (Π and Φ). The search for the rotational lines was guided by predictions based on the spectroscopic constants determined in a previous analysis of 4888 IR and FIR
data and 21 rotational transitions available at that time (Predoi-Cross et al. 2011). On average, the newly observed lines were found some hundreds of kHz away from the initial predictions. The
measured transition frequencies are listed in Table 2, along with predictions of unobserved low-J transitions and of the ones occurring at higher frequency up to 1 THz.
The term values of the observed rotational levels in the ground and excited vibrational states are in the range 2 − 2140 cm^-1. With the exception of the ground state, multiplets of rotational lines
were always observed for each J + 1 ← J transition, because of ℓ-type resonance effects. Before performing the final ro-vibrational global analysis (see Sect. 3.2), a series of state-by-state
least-squares fits were done to check the consistency of the MW measurements. The pure rotational spectra have been analysed using the formalism originally developed by Yamada et al. (1985) and
already employed to fit the excited-state rotational spectra of a large number of linear carbon chains (see for example Bizzocchi & Degli Esposti 2008, and references therein). The model is slightly
different from the one used to perform the global ro-vibrational analysis (see Sect. 3.2 for a detailed description), because the vibrational dependence of the various parameters is neglected, so
that effective constants for each vibrational state were determined by these preliminary fits. Briefly, rotational and vibrational ℓ-type resonance effects have been treated by diagonalization of
ro-vibrational matrices with off-diagonal elements which include q[t], $qtJ$, r[tt′] and $rtt′J$ spectroscopic parameters (t and t′ being equal to 4 or 5). The vibrational energy differences between
the interacting ℓ sublevels (Δℓ[t] = ± 2) of each doubly or triply excited vibrational state have been expressed through the effective values of the constants g[44], g[55] and g[45], which produce ℓ-dependent energy contributions. In addition, the ℓ-dependence of the rotational and quartic centrifugal distortion constants has been taken into account through the γ^tt′ and δ^tt′ parameters, respectively. Generally, not all of the required constants can be statistically determined from the rotational transitions of a single vibrational state, and some assumptions necessarily had to be made. The ℓ-type doubling parameters q[t] and $qtJ$ were fitted for the v[t] = 1 (t = 4 or 5) and v[t] = 3 states (where degenerate ℓ = ± 1 levels do exist), but constrained to interpolated values
for the v[t] = 2 states, in order to avoid high correlations with the g[44] and g[55] constants. The latter constant has a rather large value (ca. 5.2 cm^-1), so that rotational ℓ-type resonance
effects are weak in the v[5] = 3 state, where no splitting of the | ℓ | = 3 lines could be observed. As far as the v[4] = v[5] = 1 combination state is concerned, where rotational and vibrational ℓ
-type resonance effects are simultaneously present, q[t] and $qtJ$ constants were held fixed at the values determined for the singly excited bending states, while the parameters involved in the
vibrational ℓ-type resonance, namely g[45], r[45] and $r45J$ were refined. The various state-by-state fits were repeated iteratively in order to achieve full consistency of the obtained parameters.
The results of the eight least-squares fits performed are collected in Tables 3 and 4.
Table 2
Measured and predicted^a transition frequencies (MHz) of ^12C[2]HD in the ground and excited bending states^b.
3.2. Global ro-vibrational analysis
Ro-vibrational transitions involving the vibrational states presently studied, except the v[4] = 3 state (Φ), were already observed in the FIR and IR regions (Predoi-Cross et al. 2011; Fusina et al.
2005b). They have been fitted together with the rotational transitions presently measured. The model Hamiltonian adopted for the global analysis represents an extension up to three quanta of the
bending excitation, i.e. v[4] + v[5] = 3, of the Hamiltonian for a molecule with two bending vibrations which has been described in detail by Herman et al. (1991). It was already used for ^13C[2]HD (
Degli Esposti et al. 2013) and ^12C[2]HD (Fusina et al. 2005a) itself. The term values of the ro-vibrational levels of the transitions were obtained by diagonalizing the appropriate energy matrix
containing the following vibrational ($G^0$) and rotational ($F$) diagonal contributions: the vibrational term value $G^0(v_4, \ell_4, v_5, \ell_5)$ and the rotational term
$F = B_v\,[M - k^2] - \bigl[D_0 + \beta_4 v_4 + \beta_5 v_5 + \delta_{44} v_4^2 + \delta_{45} v_4 v_5 + \delta_{55} v_5^2 + \delta^{44} \ell_4^2 + \delta^{45} \ell_4 \ell_5 + \delta^{55} \ell_5^2\bigr][M - k^2]^2 + \bigl[H_0 + h_4 v_4 + h_5 v_5\bigr][M - k^2]^3$,
with M = J(J + 1) and k = ℓ[4] + ℓ[5].
Table 3
Effective spectroscopic constants determined from state-by-state fits of the rotational transitions measured for the ground and v[4] = 1, v[5] = 1, v[4] = v[5] = 1 states of ^12C[2]HD^a.
Table 4
Effective spectroscopic constants determined from state-by-state fits of the rotational transitions measured for the v[4] = 2, v[4] = 3, v[5] = 2 and v[5] = 3 states of ^12C[2]HD^a.
Vibrational and rotational ℓ-type resonances are expressed by off-diagonal matrix elements (Herman et al. 1991) containing the following parameters:
$r_{45} = r_{45}^0 + r_{445}(v_4 + 1) + r_{455}(v_5 + 1) + r_{45}^J M$,
$q_t = q_t^0 + q_{tt} v_t + q_{tt'} v_{t'} + q_t^J M + q_t^{JJ} M^2 + q_t^k (k \pm 1)^2$,
$\rho_t = \rho_t^0 + \rho_{tt} v_t + \rho_{tt'} v_{t'} + \rho_t^J M + \rho_t^{JJ} M^2$,
and $\rho_{45} = \rho_{45}^0 + \rho_{45}^J M$.
The global fit included 5317 IR and 168 MW transitions. Overlapping lines were given zero weight. The uncertainty for the FIR and IR data was in the range 5.0 × 10^-5 − 2.0 × 10^-4 cm^-1, and 1.0 × 10^-7 cm^-1 for the MW data. The MW blended lines were given an uncertainty of 2.0 × 10^-7 cm^-1. Finally, 489
IR transitions, 9.2%, were excluded in the final fit because they were overlapping (271) or their observed-calculated values (218) were larger than 5 times their estimated experimental uncertainty.
For the 3ν[4](Φ) state, the J′′ = 3 e/f components were not resolved. For the 3ν[5](Φ) state, none of the e/f components of the rotational lines were resolved. Pairs of overlapping lines were
identified by the same frequency (see Table 2), and their observed − calculated values were derived by comparing the experimental frequency with the average of the frequencies calculated for
the two components. Sixty-nine statistically well-determined parameters, which are collected in Table 5, were refined with a final rms value of 3.06 × 10^-4 cm^-1 for the IR data and 19 kHz for the
MW data. A few parameters in the model not reported in Table 5 were nevertheless allowed to vary during the fitting procedure, but they turned out to be statistically undetermined and were constrained to
zero. Some of the parameters are highly correlated, i.e. $\omega_4^0$, $x_{44}^0$ and $y_{444}$; $g_{44}^0$ and $y_{444}$; $\gamma^{44}$ and $\gamma_{444}$. The results in Table 5 can be compared with those obtained from the previous
analysis, see Table 2 of Predoi-Cross et al. (2011). Sixty-two parameters are common to both sets. The inclusion of the MW data allowed the determination of 7 additional constants. Three of these are
related to the ν[4] bending states, partly because experimental data for the v[4] = 3(Φ) state have been obtained for the first time. Values and signs of all the common parameters are consistent in
both sets: the differences between new and old values are in the range 0–1% (31 parameters), 1–10% (18 parameters), 10–50% (7 parameters), 50–100% (5 parameters), and 1 parameter, h[4], differs by
306%. The inclusion of the MW data also yields a significant improvement of the precision of the main constants which contribute to the rotational energy. For example, the values of B[0], α[t], and
$qt0$ constants are ca. 5 times more precise than those determined previously (Predoi-Cross et al. 2011). It should be pointed out that in the present global analysis 60 additional IR transitions
were discarded in the final fit, compared with the previous fit (Predoi-Cross et al. 2011), adopting the same rejection limits. The discarded lines are scattered over most of the bands. Considering
the high accuracy of the assigned MW transitions, this result is particularly pleasing since it confirms the validity of the calibration of the IR data, which span a wide wavenumber range, from about
90 to 2100 cm^-1.
Table 5
Spectroscopic parameters (in cm^-1) for the bending states of ^12C[2]HD resulting from the simultaneous fit of all rovibrational and rotational transitions involving levels up to v[4] + v[5] = 3^a.
4. Conclusions
Rotational lines of ^12C[2]HD were detected in the range 100–700 GHz. They were analysed in a global fit together with the IR transitions reported in the literature. Sixty-nine parameters
(rotational, vibrational, ro-vibrational, and ℓ-type interaction) were determined with high precision. The analysis of the ℓ-type resonances which affect the pure rotational spectra allows the
determination of the small vibrational energy differences ΔG between different ℓ-sublevels of a given vibrational state, which can be directly calculated using the effective values of the fitted g[tt
′], r[45] and B[v] constants in Tables 3 and 4.
Table 6
Vibrational energy differences ( cm^-1) between ℓ-sublevels of doubly and triply excited bending states of ^12C[2]HD, as resulting from the rotational analysis and the global rovibrational analysis.
The reliability of these results can be checked by comparison with the corresponding values from the global analysis. Table 6 shows that the agreement between the two sets of ΔG values is very good,
differences being mostly less than one percent, thus confirming that an accurate treatment of ℓ-type resonances in the pure rotational spectra can provide precise information on the vibrational
energy. The set of spectroscopic constants determined in this work is the most accurate and consistent available in the literature. From these constants it is possible to derive very accurate
predictions for IR and MW spectra useful for astronomical searches. An extensive list of rotational frequencies up to 1 THz is reported in Table 2 for all the observed vibrational states, whose
energies are reported in Table 1. The line strength of each transition can be calculated by the simple formula (Lafferty & Lovas 1978): S(J+1 ← J) = [(J+1)^2 − ℓ^2] / (J+1) · μ[v]^2. (3)
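As a quick sanity check, the Lafferty & Lovas line-strength form S = [(J+1)^2 − ℓ^2]/(J+1) · μ[v]^2 can be evaluated in a few lines. This is an illustrative sketch (not the authors' code); the squared vibrational dipole moment μ[v]^2 is assumed to be the multiplicative factor, with its values those reported in Table 1:

```python
def line_strength(J, ell, mu_v=1.0):
    """Line strength S(J+1 <- J) = ((J+1)^2 - ell^2) / (J+1) * mu_v^2
    for a rotational transition of a linear molecule (Lafferty & Lovas form).
    mu_v is the vibrational dipole moment; with mu_v = 1 only the
    rotational factor is returned."""
    return ((J + 1) ** 2 - ell ** 2) / (J + 1) * mu_v ** 2

# J = 5 <- 4 lines of Fig. 1: ell = 0 in the ground state,
# ell = 1 in the v4 = 1 and v5 = 1 bending states.
print(line_strength(4, 0))  # 5.0
print(line_strength(4, 1))  # 4.8
```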
The authors acknowledge the Università di Bologna and the Ministero della Ricerca e dell’Università for financial support under the grant PRIN09 “High-resolution Spectroscopy for Atmospherical and
Astrochemical Research: Experiment, Theory and Applications”. The authors also thank Prof. G. Di Lonardo for helping the analysis of the infrared spectra.
All Tables
Table 1
Dipole moments, vibrational term values, populations and intensity calculations for several excited vibrational bending states of ^12C[2]HD.
Table 2
Measured and predicted^a transition frequencies (MHz) of ^12C[2]HD in the ground and excited bending states^b.
Table 3
Effective spectroscopic constants determined from state-by-state fits of the rotational transitions measured for the ground and v[4] = 1, v[5] = 1, v[4] = v[5] = 1 states of ^12C[2]HD^a.
Table 4
Effective spectroscopic constants determined from state-by-state fits of the rotational transitions measured for the v[4] = 2, v[4] = 3, v[5] = 2 and v[5] = 3 states of ^12C[2]HD^a.
All Figures
Fig. 1
The J = 5 ← 4 transition of ^12C[2]HD in the v[5] = 1, ground, and v[4] = 1 vibrational states. The low-frequency components of each ℓ-doublet are displayed. Fourteen scans are co-added; the total integration time is 850 s.
Trigonometry: Using Tangents
Lesson Video: Trigonometry: Using Tangents Mathematics
Learn how to find the size of an angle in a right triangle using the tangent ratio. Apply this knowledge to finding either the opposite or adjacent side and also in the context of word problems.
Video Transcript
In this video, we’re going to look at the trigonometric ratio tangent. And we’re going to see how it can be used in order to find the length of missing sides or the size of missing angles in
right-angled triangles. Now, first let’s have a look at what the tangent ratio is. So you see I’ve drawn a right-angled triangle. And I’ve labeled one of the two other angles with the Greek letter 𝜃.
Now often, the first step in any problem involving trigonometry is identifying the names of the three sides in relation to the angle we’re interested in. So in this case, that’s in relation to this
angle 𝜃.
In a right-angled triangle, remember, the longest side, the side that’s opposite the right angle, is always called the hypotenuse. So that is this side here. The other two sides in a right-angled
triangle are called the opposite and the adjacent. And those names depend on their position relative to this angle 𝜃. So the opposite is the side opposite the angle 𝜃. It’s the one side that isn’t
involved in making that angle. So for this triangle, it’s this side here. Finally, the adjacent is the side between the angle and the right angle. So that’s this third side here.
Now, trigonometry is all about the different ratios that exist between these pairs of sides for specific values of this angle 𝜃. You’ve probably already met the two trigonometric ratios sine and
cosine, or sin and cos as they’re often abbreviated to. In this video, we’re looking at the tangent ratio, or tan as it’s also known. So the tangent ratio for a particular angle 𝜃 is the ratio
between the opposite and the adjacent sides. And so it’s always calculated by dividing the length of the opposite side by the length of the adjacent side. If you’re familiar with SOHCAHTOA in
trigonometry, then it’s this TOA part here. The tan of an angle is equal to O, the opposite, divided by A, the adjacent.
So if we know the length of one of these two sides and we know the angle, we can use this relationship here in order to work out the length of the other side. The other thing we might want to do is
work out the size of the angle when we know both the opposite and the adjacent. And in order to do that, we need something called the inverse tangent. Now, this is expressed using this notation here,
tan, and then a superscript negative one. And this tells us that the angle 𝜃 is equal to the inverse tan of the opposite over the adjacent. What this means is if I know what the ratio is between
those two pairs of sides, it enables me to work backwards to work out what the angle is that creates that ratio. So those are the two key facts that we need to remember throughout this video. We’ll
now see how to apply them to working out some missing lengths and some missing angles in right-angled triangles. So here is our first problem.
We’re given a diagram of a right-angled triangle. And we’re asked to find the value of 𝑥 to the nearest tenth.
Looking at the diagram, we can see that 𝑥 represents a missing side. And we’re also given the length of one side and the size of one of these angles in addition to the right angle. So my first step
with any trigonometry problem is always to label up the three sides with their names, the hypotenuse, the adjacent, and the opposite. So I’ve just used the first letter of each of those words to
label them up. Now, let’s recall that definition of the tangent ratio. And, remember, it was that tan of 𝜃, the angle, is equal to the opposite divided by the adjacent. So what I’m gonna do is I’m
gonna write down this ratio for this specific triangle. I’m going to replace 𝜃, which is the angle with 38 degrees. And I’m gonna replace the opposite with four because that’s its length in the
diagram. And I’m gonna replace the adjacent with 𝑥 because that’s the label that’s been given to it.
So now, I have this statement here, tan of 38 is equal to four over 𝑥. So this is an equation that I can solve in order to work out the value of this missing letter 𝑥. So 𝑥 is in the denominator of a
fraction on the right-hand side. So in order to bring it out of the denominator, I’m going to multiply both sides of this equation by 𝑥. So when I do that, I have 𝑥 tan 38 is equal to four. Now in
order to work out what 𝑥 is, the next step is just to divide both sides of this equation by tan 38. Tan 38 is just a number. So I can do that. So this gives me 𝑥 is equal to four divided by tan 38.
Now, we are going to need a calculator in order to answer this question. Your calculator has the values of sin, cos, and tan for all of these different angles already programmed into it. So I can
type tan 38 into my calculator. And it will recall that value for me.
There are some angles — 30 degrees, 45 degrees, 60 degrees — for which the values of these sin, cos, and tan ratios are actually relatively straightforward. And they can be written down exactly in
terms of surds. So for those angles, it is possible to do trigonometry without a calculator. But we do need it in this example. So in my calculator, I'm gonna type four divided by tan 38. And
when I do that, it tells me that 𝑥 is equal to 5.119766 and so on, this decimal here. Now, it is worth just pointing out here that the angle that I was given was measured in degrees. So I needed to
make sure that my calculator was in degree mode. And that’s always the case when doing trigonometry. You need to make sure your calculator is in the same mode as the way that the angle is measured in
the question.
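The calculation just described can be reproduced in a couple of lines; this is an illustrative Python sketch, where converting degrees to radians plays the role of "degree mode":

```python
import math

# tan 38 must be evaluated with the angle in degrees,
# so convert to radians before calling math.tan
x = 4 / math.tan(math.radians(38))
print(x)            # 5.1197...
print(round(x, 1))  # 5.1
```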
Now this question asked me for the value of 𝑥 to the nearest tenth. So I need to round my answer. So I have that 𝑥 is equal to 5.1. So a reminder of what we did then, we recalled the definition of
that tangent ratio. Then we substituted the values given in the question to the relevant places and then solved the resulting equation in order to find the value we were looking for. Okay the next
question, we’re given a diagram of a right-angled triangle. And we’re given two sides this time. The question asks us to find the value of 𝜃 to the nearest degree. So this time, we’re being asked to
find an angle rather than a side length.
So my first step is gonna be to label the three sides of this right-angled triangle with the letters representing their names. So there they are. You do have to be slightly careful with this because,
remember, triangles can be drawn in lots of different orientations. The hypotenuse is just always the side opposite the right angle. But you just have to think carefully about the adjacent and the
opposite depending on which angle has been labeled. So let’s recall our definition of the tangent ratio. And here it is. Tan of 𝜃, the angle, is equal to the opposite divided by the adjacent. So I’m
gonna use this ratio. And I’m gonna substitute the known values of the opposite and the adjacent. So this tells me that tan of this angle is equal to 5.2 divided by 2.8.
Now as we’re looking to calculate an angle this time, we need to use the inverse tan function that says given that I know what the ratio is, I need to work backwards to work out the angle that that
ratio belongs to. So this tells me that the angle 𝜃 is equal to the inverse tan of 5.2 over 2.8. Now, you could perhaps go straight to that stage of working out if you preferred. So at this stage,
I’m gonna need to use my calculator in order to evaluate this. Now that inverse tan button is often located directly above the tan button. You often have to press shift to get to it. But that will
depend on the particular calculator that you have. So evaluating that on my calculator gives me this decimal value here. But I’m asked to find 𝜃 to the nearest degree, so I need to round my answer.
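For illustration, the same inverse-tangent step in Python (math.atan returns radians, so the result is converted back to degrees):

```python
import math

theta = math.degrees(math.atan(5.2 / 2.8))
print(theta)         # roughly 61.7
print(round(theta))  # 62
```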
So then this tells me that 𝜃 must be equal to 62 degrees. So in this question, we began in exactly the same way. But because it was an angle that we were finding this time as opposed to the length of
a side, we needed to use that inverse tan function. Okay, our final question is a worded problem. So let’s read it through carefully.
A ladder leans against a wall making an angle of 15 degrees with the wall. The base of the ladder is 0.5 meters from the base of the wall. And the question we’re asked is, how far up the wall does
the ladder reach?
So with a worded question like this, if I’m not given a diagram, I would always draw my own to start off with. So we’re gonna have a diagram of a ladder, a wall, and a floor. And we’re making the
assumption here that the wall is vertical and the floor is horizontal. That seems a reasonable assumption for this question. So here is a sketch of the wall, the floor, and the ladder. Because we
assume the wall is vertical and the floor horizontal, we know that we have a right angle here. Now we need to put on the information we’re given. So we’re told the ladder makes an angle of 15 degrees
with the wall. So this angle here is 15 degrees. And we’re also told the base of the ladder is 0.5 meters from the base of the wall. So this measurement here is 0.5 meters.
Now what we’re asked to find is how far up the wall the ladder reaches. So we’re being asked to find this measurement here, which I’ll call 𝑦 meters. So I’ve got my diagram. And I can see that
actually it’s just a problem about a right-angled triangle. So we’re gonna approach it in exactly the same way as the previous ones. I’m gonna start off by labeling the three sides as always. So I
have the hypotenuse, the adjacent, and the opposite. Now let’s recall that tangent ratio that we’re going to need in this question. So I have that tan of the angle 𝜃 is equal to the opposite over the
adjacent. You’d be becoming familiar with that by now. So as in the previous questions, I’m gonna write down this ratio again. But I’m gonna fill in the information I know.
So I know that the angle 𝜃 is 15. And I know in this case that the opposite is 0.5. So I have that tan of 15 is equal to 0.5 over 𝑦. Now I need to solve this equation in order to work out the value
of 𝑦. So 𝑦 is in the denominator of this fraction. So I’m gonna multiply both sides by 𝑦 in order to bring it up into the top numerator. And when I do that, I have 𝑦 tan 15 is equal to 0.5. Now
remember tan 15 is just a number. So I can divide both sides of the equation by it. So I’ll have 𝑦 is equal to 0.5 over tan 15. Now this is the stage where I reach for my calculator in order to
evaluate this. And it tells me that 𝑦 is equal to 1.86602 and so on.
Now I need to choose a sensible way to round that answer cause I haven’t been asked for a specific level of accuracy. So the other measurement of 0.5 seems to be given to the nearest tenth. I’ll do
the same level of rounding for this value of 𝑦 here. So that will give me that 𝑦 is equal to 1.9. And to answer the question how far up the wall does the ladder reach, I put the units back in. It
reaches 1.9 meters up this wall.
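As before, a quick illustrative check of the ladder calculation in Python:

```python
import math

# y = 0.5 / tan(15 degrees), the height the ladder reaches
y = 0.5 / math.tan(math.radians(15))
print(round(y, 1))  # 1.9 (metres up the wall)
```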
So to summarize then, in this video, we’ve seen the definition of tangent as the ratio between the opposite and adjacent sides of a right-angled triangle. We’ve seen how to apply both tangent and
inverse tangent to problems involving right-angled triangles in order to find a missing side or a missing angle. And we’ve seen how to apply this to a worded problem.
Research Guides: Data Visualization: Getting Started
Average - a term for the median, mean, or mode of a dataset, often used to describe the most common value.
Bins - consecutive numeric ranges or intervals used in histograms, such as 20-30, 0-100, or 0-2.
Causation - a relationship of cause and effect in which change in one variable always results in change in another variable.
Categoric Information - information that is sorted into groups according to the presence of named characteristics, such as age or nationality.
Correlation - term used when one observes a relationship or connection between two or more elements. A correlation does not indicate that one variable causes another to happen. Positive correlation
means the variables increase together, while a negative correlation means that one variable decreases as the other variable increases. Correlation is much more common than causation.
Data point - a single piece of data or information.
Density - the compactness or volume of information associated with a geographic location, variable, or element.
Distribution - the range of values or intervals in a dataset.
Frequency - the number of times a variable, instance, or number appears in a dataset.
Interval - the distance between two numbers.
Key - a text box, usually located in the lower right-hand corner of a graph, that provides contextual information needed to interpret the graph, such as what specific colors, lines, or shapes represent.
Legend - see key.
Outliers - a value that falls outside the expected range of a dataset and is numerically exceptional in comparison to the other values.
Quartiles - the three values that divide a numeric dataset into four equal parts.
Range - the difference in values from the maximum to the minimum.
Skew - a numeric distribution is considered skewed if the majority of data points fall above or below the median.
X-axis - The horizontal line of a graph where either numbers or categories are listed. Always label your axis.
Variables - a characteristic or object that can be counted.
Y-axis - The vertical line of a graph where either numbers or categories are listed. Always label your axis.
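Several of the numeric terms above (mean, median, mode, range, quartiles, outliers) can be illustrated with Python's built-in statistics module; the small dataset here is invented purely for illustration:

```python
import statistics

data = [2, 4, 4, 5, 7, 9, 30]  # 30 is an outlier that skews the mean

print(statistics.mean(data))            # mean, pulled upward by the outlier
print(statistics.median(data))          # 5
print(statistics.mode(data))            # 4 (the most frequent value)
print(max(data) - min(data))            # 28 (the range)
print(statistics.quantiles(data, n=4))  # the three quartiles: [4.0, 5.0, 9.0]
```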
Fontichiaro, K., Lennex, A., & Oehrli, J. A. (Eds.). (2017). Creating Data Literate Students. Michigan Publishing.
Dimension Space
Philosophy and History of Dimensional Analysis
My dissertation comprises a philosophical examination and exploitation of the fundamental principles of dimensional analysis and a partial history of its development. Dimensional analysis is a
logical method largely, though not exclusively, used by physicists to derive functional equations and estimate quantity values. This method is based on the fact that our physical equations are
quantitative and so specify equivalences between dimensional quantities. Roughly, we can consider quantity dimensions as kinds of quantities, with only quantities of like dimension being
commensurable, e.g. mass, length, charge.
In “The Π-Theorem as a Guide to Symmetries and the Argument Against Absolutism” I use a fundamental result of dimensional analysis, the Π-theorem, to define a class of quantity transformations which
are both empirical and dynamical symmetries. As such, they license an argument against absolutism, a commitment to the fundamentality of intrinsic quantity magnitudes relative to quantity relations
(ratios). Earlier symmetry arguments on the basis of quantity transformations like global mass doublings are party to counterexamples. These single quantity dimension transformations cannot be both
empirical and dynamical symmetries. I show that dimensional analysis diagnoses this problem in a general way and provides an algorithm for generating true quantity symmetries. Generally such
symmetries involve the transformation of multiple quantity dimensions, which is usually encapsulated in the transformation of some physical constant, in this case G. This raises a question regarding
the nomological status of the values of the physical constants: Is it a law of nature that the gravitational constant has the magnitude it does? I do not settle the question, but I do set out some of
the positions and show how they interact with the absolutism-comparativism debate.
Another paper from the dissertation “Metaphysics and Methodology in Dimensional Analysis 1914-1917” chronicles a debate in the methodological foundations of dimensional analysis with a particular
focus on the positions of Richard Tolman and Percy Bridgman. The methodological debate yields a metaphysical one: Are quantity dimensions natural kinds (as I glossed above)? Or are they mere
conventional devices meant to guide use in unit transformations? After analyzing the dialectic between quantity dimension realism and quantity dimension conventionalism I put forward a third position
which seems to survive their mutual critique: quantity dimension functionalism. Under the functionalist position, there is a minimum number of basic quantity dimensions needed for an adequate model
of kinds of physical phenomena: three for mechanical phenomena and one additional dimension each for thermal and electromagnetic phenomena. Empiricism tells against postulating unnecessary additional basic dimensions (e.g. treating both forces and masses as basic). However, conventionalism still reigns when it comes to the identities of the basic quantity dimensions. Either force or mass may be treated as basic and the other as derived; this makes no difference to mechanics. A vector space representation is used to illustrate quantity dimension functionalism.
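The vector space representation can be made concrete: each quantity dimension is an exponent vector over a chosen basis, and multiplying quantities adds their vectors. The basis choice (M, L, T below) is exactly the conventional element the functionalist position allows; this sketch is an illustration, not the dissertation's own formalism:

```python
# Quantity dimensions as exponent vectors over the basis (M, L, T).
MASS   = (1, 0, 0)
LENGTH = (0, 1, 0)
TIME   = (0, 0, 1)

def times(a, b):
    """Multiplying two quantities adds their dimension vectors."""
    return tuple(x + y for x, y in zip(a, b))

def power(a, k):
    """Raising a quantity to a power scales its dimension vector."""
    return tuple(k * x for x in a)

# Force is derived here (M L T^-2), but the basis could equally have
# included force and derived mass -- the conventional choice in the text.
FORCE = times(MASS, times(LENGTH, power(TIME, -2)))
print(FORCE)  # (1, 1, -2)

# The gravitational constant G has dimensions of F L^2 / M^2.
G = times(FORCE, times(power(LENGTH, 2), power(MASS, -2)))
print(G)  # (-1, 3, -2), i.e. m^3 kg^-1 s^-2
```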
A paper recently published in Theoria, “The Nature of the Physical and the Meaning of Physicalism” leverages some of my work on dimensional analysis to provide a quantitative account of the physical
that avoids “Hempel’s dilemma”. On this view, a physical object is any object which can be described by dimensional quantities. This account makes physicalism into an empirical claim that will or
will not be borne out by experience.
Teaching Students to Use Math Strategies
Teaching students to use math strategies ensures that they have a plan for problem-solving, but there are many other reasons and rewards. Students gain confidence when they realize they have
strategies for solving to success with the accompanying understanding. In this post, I will share easy ways to integrate math strategies into our math lessons.
Easy Ways To Teach Math Strategies
Teaching students to use a math strategy is simple! First, we choose a process for solving to introduce based on the level of development of our students. Math strategies all bring support to
problem-solving, but some are more supportive than others. Some strategies support students working in the concrete, some in the representational, and some in the abstract. The more strategies we
give students, the more empowered and confident they become across the math continuum of development.
The Math Warm-Up
An easy way to introduce and practice strategies whole group is with a math warm-up. Once a strategy has been introduced, the math warm-up is where students can have a chance to share a strategy
that they use to solve and why while also listening to their peers do the same! In the kindergarten warm-up below, students are given a concrete representation of addition. Students can share their
understanding of the visual addition problem out loud expressing the process of how to solve for the sum. As we do this, we invite other strategies for solving and model them. For example, a
student may say that they counted the crayons to find the sum. Another student may say that they counted up from 6 three times. Perhaps another student used a ten frame or drew dots, etc. All of
these simple strategies are explained and shared in the math chat.
Here is another example of a kindergarten math chat. As you can see, there is variety in the levels of CPA (Concrete, Pictorial, and Abstract). This allows students to join in the math warm-up at any level of understanding. For more on math warm-ups, you can click the pictures or check out THIS POST.
Math Vocabulary to Support Strategies for Solving
Another significant addition to your math block (see what I did there) is explicit vocabulary instruction. I know, cue the violins. Time is tight, and we teach vocabulary during reading, so we are good, right? Well… let me try to convince you that you'll love how your students gain insight and better express their process in solving when they have the correct vocabulary! First, just having a
small focus board (or area of your whiteboard) can help you quickly and easily add a few words to support the concept and skill you are teaching. Starting with the big overarching math strand we
have a visual definition card (pictured below). Available for grades K-4. (shown is grade 2)
Then as a math warm-up on “Wordy Wednesday,” we add one word or multiple words that enhance our understanding of the skill. Below you can see some of the words for the different strands of second
grade. (available K-4)
Explicit Math Strategy Instruction
During the course of one school year, I introduce multiple strategies to my groups day in and day out, which ends up being roughly 40 different strategies. In order to keep these straight, I created
visuals of each strategy so we can “name it and claim it” as we go. I introduce a strategy. Show the visual. Model the process of using it. I call these our math strategies.
As we move through the school year we apply math in many different situations and modalities. Students are always growing and developing mathematically. We want to push them to make new
connections, discoveries and to apply math in more complex situations. We want to move from very supported to less supported strategies for solving.
Just like we want to post the visual vocabulary cards, we also want to remind students of the many strategies they have to solve. As we learn a new process, we add it. Here’s an example of 6 math
strategies for solving an addition problem.
As students grow through the year we introduce them to the appropriate strategies for solving.
Math Tools
Along with introducing math strategies for solving, we also want to give students the tools to carry out their problem-solving. Just like the math strategies, these can support a wide range of
levels and learning. When students are ready, they move to less supportive tools or math mats. Having this resource both in the small group setting as well as available for students during
independent practice is recommended. To see more about how I use these math tools, check out THIS POST.
Math Word Problems with Math Strategies
Word problems can be an area where students struggle. With these math strategies word problem tasks, students can solve and show their strategies in an easy, fun format. Although the mats have some
basic strategies for each strand, students can always solve any way they want! Use in small group, independent practice, homework, or a center rotation! You can print on copy paper, laminate, or
throw in a pocket sleeve and you will be solving word problems using strategies in no time! Available K-2. Linked after the example pictures.
Math Strategies on YouTube
If you like to hear things on repeat, I have a YOUTUBE VIDEO explaining what this post was about! I hope you find some fresh ways to incorporate math strategies in your learning!
3 Comments
1. For your units, do you have a Canadian version of the money, temperature, and measurement components? Thanks in advance!
2. Dear Reagan,
Is it still possible to purchase your Ipad Essentials Unit Analysing Weather from January 2015?
1. Unfortunately, the co-author of this resource works for Apple now and contractually we are not allowed to sell it any longer. I am so sorry!
Smoothed Jackknife Empirical Likelihood for Weighted Rank Regression with Censored Data
Longlong Huang, Karen Kopciuk and Xuewen Lu*
Department of Mathematics and Statistics, University of Calgary, Canada
Submission: October 31, 2017; Published: April 25, 2018
*Corresponding author:
Xuewen Lu, Department of Mathematics and Statistics, University of Calgary, Calgary, AB, T2N 1N4, Canada, Tel: +1-4032206620; E-mail:
How to cite this article: Longlong Huang, Karen Kopciuk, Xuewen Lu. Smoothed Jackknife Empirical Likelihood for Weighted Rank Regression with Censored Data. Biostat Biometrics Open Acc J. 2018; 6(2):
To make inference for the semiparametric accelerated failure time (AFT) model with right censored data, which may contain outlying response or covariate values, we propose a smoothed jackknife
empirical likelihood (JEL) method for the U-statistic obtained from a weighted smoothed rank estimating function. The jackknife empirical likelihood ratio is shown to be a standard chi-squared
statistic. The new method improves upon the inference of the normal approximation method and possesses desirable important properties of easy computation and double robustness against influence of
both outlying response and covariates. The advantages of the new method are demonstrated by simulation studies and data analyses. We illustrate our method by reanalyzing two data sets: the Stanford
Heart Transplant Data and Multiple Myeloma Data.
Keywords: Accelerated failure time model; Jackknife empirical likelihood; Normal approximation; Outlying observations; Rank estimation; Robustness
Abbreviations: AFT: Accelerated Failure Time; JEL: Jackknife Empirical Likelihood; UWJEL: Unweighted JEL; WJEL: Weighted JEL; SSD: Sample Standard Deviation; AESD: Average Estimated Standard
Deviation Of Estimators; MSE: Mean Squared Error Of Estimators; CP: Coverage Probability; LC: Length Of Confidence Intervals; JCP: Joint Coverage Probability; MCP: Marginal Coverage Probability; HGB:
Haemoglobin; LogBUN: Logarithm Of Blood Urea Nitrogen; NSERC: National Science and Engineering Council
A primary interest of survival analysis is often to understand the relationship between survival times and covariates measured on study participants, such as physical and biological measurements and
medical conditions. Typically, survival data are not fully observed on all subjects, but rather some response values are censored. For i = 1, ..., n, let T[i] represent the survival time for the i^th subject, X[i] be the associated p-dimensional vector of covariates, C[i] denote the censoring time, and δ[i] denote the event indicator, i.e., δ[i] = I(T[i] ≤ C[i]), which takes the value 1 if the event time is observed, or 0 if the event time is censored. Conditional on the covariates for the i^th subject, C[i] is assumed to be independent of the failure time T[i]. We define y[i] as the minimum of the survival time and the censoring time, i.e., y[i] = min(T[i], C[i]). Then, the observed data are in the form (y[i], δ[i], x[i]), i = 1, ..., n, which are assumed to be an independent and identically distributed (i.i.d.) sample from (y, δ, X), where y = min(T, C) and δ = I(T ≤ C).
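This observed-data construction is easy to simulate. The sketch below draws a log-normal AFT outcome with exponential censoring (both distributional choices are invented purely for illustration) and records the triples (y[i], δ[i], x[i]):

```python
import math
import random

random.seed(1)
beta0 = 0.5  # illustrative true regression coefficient

data = []
for _ in range(200):
    x = random.gauss(0, 1)                        # scalar covariate x_i
    t = math.exp(beta0 * x + random.gauss(0, 1))  # survival time T_i
    c = random.expovariate(0.5)                   # censoring time C_i, independent of T_i
    y = min(t, c)                                 # observed time y_i
    delta = int(t <= c)                           # event indicator delta_i
    data.append((y, delta, x))

print(sum(d for _, d, _ in data) / len(data))  # observed event proportion
```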
The Cox proportional hazards (PH) model is the most prominent regression model used in survival analysis. However, when the proportional hazards assumption is not satisfied, the Cox PH model can
produce incorrect regression parameter estimates. The accelerated failure time (AFT) model provides a useful alternative to the Cox PH model, where its covariate effects on the log transformed
survival time T can be directly interpreted in terms of the regression coefficients. Because of this physical interpretability, the AFT model is more appealing than the PH model [1]. Following the
definition in Kalbfleisch & Prentice [1] and Heller [2], the AFT model is defined as

log T[i] = β[0]^T x[i] + ε[i], i = 1, ..., n, (1)

where the ε[i]'s are i.i.d. random errors with an unknown distribution function and are independent of the covariates x[i]'s, β[0] is the unknown true p-dimensional regression parameter to be estimated, and a^T denotes the transpose of a vector or a matrix a. In data analysis, the regression residual, ε^β = log(T) − β^T x, can be very large in magnitude for small failure times, which is an
indication that estimation and inference are sensitive to small failure times. Rank regression has been shown to be an effective method to regain robustness with respect to the outlying log survival
times. For example, Fygenson & Ritov [3] proposed a monotonic rank estimating function
Where, r^β[i] = log(y[i])-β[T]'x[i]. This estimating function is not continuous in β because of the indicator function I(r[i]^β > r[j]^β). This discontinuity creates difficulties in the derivation of
the asymptotic distribution and computation of the estimator of β To overcome these difficulties, Heller [2] developed a smoothed rank estimating function, which is monotonic and continuous with
respect to the parameter vector. Moreover, to reduce the influence of outlying covariate values, he introduced a weight function in the smoothed rank estimating function. Heller's weighted smoothed
rank estimating function for estimating β[0] is given by
where the weight w_ij is symmetric and is chosen to reduce the influence of outlying covariate values on the estimator of β_0 and its asymptotic variance. The function Φ(·) is a local cumulative distribution function and is usually taken to be the standard normal distribution function. It is a smooth approximation to the indicator function I(r_i(β) > r_j(β)) in (2) and ensures that S_n(β; w) is differentiable in β and has bounded influence. The bandwidth h is used for smoothing and satisfies the conditions h → 0, nh → ∞, and nh^4 → 0 as n → ∞. In practice, as suggested by Heller [2], h can be set proportional to n^{-0.26}; this exponent provides the fastest rate of convergence while satisfying the above bandwidth conditions.
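The smoothing idea can be sketched directly: replace the indicator I(r_i > r_j) by the standard normal CDF evaluated at the scaled residual difference. This is an illustration of the smoothing device, not the authors' code; the sample size and residual values below are made up.

```python
import math

def Phi(x):
    """Standard normal CDF, the local c.d.f. used to smooth the indicator."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def smoothed_indicator(ri, rj, h):
    """Smooth approximation to I(r_i > r_j); tends to the indicator as h -> 0."""
    return Phi((ri - rj) / h)

n = 100
h = n ** -0.26  # bandwidth of the order suggested by Heller [2]
print(smoothed_indicator(1.0, 0.0, h))  # close to 1 (r_i clearly above r_j)
print(smoothed_indicator(0.0, 1.0, h))  # close to 0
print(smoothed_indicator(0.5, 0.5, h))  # exactly 0.5 at a tie
```

Unlike the raw indicator, this function is differentiable in the residuals, which is what makes S_n(β; w) differentiable in β.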
Heller [2] demonstrated that the weighted smoothed rank estimating function is not only robust to outlying survival times, but also to outlying covariate values; that is, it possesses the double robustness property. Heller [2] also obtained the asymptotic normal distribution of the resulting estimator. Portnoy [4] proposed a censored regression quantile method, which can be used to analyze the conditional survival function without requiring specific distributional assumptions on the errors. Jin et al. [5] provided simple and reliable methods for implementing the Gehan rank estimator in the AFT model.
In this paper, we will develop an empirical likelihood based inference method and investigate its theoretical and numerical properties. We aim at improving the finite sample performance of the normal approximation (NA) method. In the last two decades or so, the empirical likelihood (EL) method has become an attractive inference method for a number of statistical problems. In contrast to the NA method, the EL method has many nice features; for example, it combines the reliability of nonparametric methods with the effectiveness of the likelihood approach, and it does not require variance calculations. Its applications can be found in many publications. Owen [6,7] introduced EL confidence intervals and regions for parameters; Qin & Lawless [8] established Wilks' theorem for EL in an estimating equation setting; Owen [9] investigated EL for linear regression; and Qin & Zhou [10] suggested EL inference for the area under the ROC curve.
For the AFT model, Zhou [11] considered EL based on the censored empirical likelihood and Zhao [12] studied EL based on Fygenson & Ritov [3] estimating equation. Recently, Jing et al. [13] proposed
the jackknife empirical likelihood (JEL) method, which combines the jackknife and the empirical likelihood for U-statistics. The most important property of the JEL method is its simplicity, which
overcomes computational difficulty in an optimization problem with many nonlinear equations when sample size gets large. Based on the Gehan estimator studied by Jin et al. [5], Zhou [11] used EL
method to obtain confidence regions. Our work is motivated by two survival data sets with outlying covariate values which have not been taken into account in the published analyses by researchers
using both NA and EL methods.
Another strong motivation comes from both the double robustness property of the weighted smoothed rank estimator and the appealing finite sample properties of the JEL. In this paper, we will develop
a new smoothed JEL method in the regression setting for the AFT model, where the parameters of interest may be multi-dimensional and are contained in some smoothed estimating functions, which involve
a smoothing parameter and are multi-dimensional U statistics. Jing et al. [13] considered JEL for making inference about a one-dimensional parameter, whose estimator is directly defined by a
U-statistic. Hence, their method cannot be immediately applied to our case. We will extend their method to smoothed estimating functions containing a multi-dimensional parameter vector and provide a
rigorous justification for the proposed method. Via simulation studies and two real data examples, we will show that the proposed JEL method not only inherits the double robustness property of the NA method, but also possesses finite sample properties superior to those of the NA method. Hence, this new method provides a very useful and reliable tool for survival data analysis, which is easy for
practitioners to adopt in their research work. The paper is organized as follows. In Section 2, we apply the JEL to the U-statistic derived from the weighted smoothed rank estimating function and
show that the JEL ratio follows a standard chi-squared distribution. In Section 3, we conduct simulation studies to compare the performance of the JEL, NA and other competitors. In Section 4, we
reanalyze two real data sets using our method: the Stanford Heart Transplant Data and the Multiple Myeloma Data, and compare the new method to other methods used in the literature. Section 5 includes
our conclusion and discussion. Technical proofs are given in the Appendix.
Before we introduce our new inference method for the AFT model defined in (1), we state the result of the NA method given in Theorem 2 of Heller [2] in the following paragraph. For the AFT model, under conditions C1–C4 given in the Appendix, the weighted smoothed rank estimating function vector S_n(β; w) given in (3) is a monotone field, is differentiable in β, and has bounded influence. The regression estimator β̂ is the solution of S_n(β; w) = 0 and is asymptotically normal.
To provide the NA based inference for β_0, one needs to estimate the variance-covariance matrix of the estimator; the 100(1 − α)% NA confidence region is then

R_NA = {β : n(β̂ − β)^T Σ̂^{-1}(β̂ − β) ≤ χ²_{α,p}},

where χ²_{α,p} is the upper α-quantile of the chi-squared distribution with p degrees of freedom. When p = 1, the confidence region R_NA becomes the confidence interval β̂ ± z_{α/2} ŝe(β̂), where z_{α/2} is the upper (α/2)-quantile of the standard normal distribution. Taking w_ij ≡ 1, we obtain the unweighted smoothed rank estimating function S_n(β) as follows:
The estimator of β_0 based on this estimating function is its zero solution, and Theorem 1 in Heller [2] provides the asymptotic distribution of this estimator. By imposing weights in the smoothed rank estimating function, Theorem 2 of Heller [2] stated above indicates that its bounded influence provides stability to the regression estimator.
We can rewrite S_n(β; w) given in (3) as a U-statistic U_n(β) with a symmetric kernel function. The kernel has expectation of order O(h²) when evaluated at β_0, and U_n(β) can be considered as a smoothed version of its analogue in (2). The jackknife pseudo-values are defined by

V_i(β) = n U_n(β) − (n − 1) U_{n−1}^{(−i)}(β),  i = 1, ..., n,

where U_{n−1}^{(−i)}(β) is the U-statistic computed on the sample of n − 1 observations formed from the original data set by deleting the i-th data value. Using these jackknife pseudo-values V_i(β), we define the following jackknife empirical likelihood function,
The smoothed jackknife empirical log-likelihood ratio becomes
Jing et al. [13] proposed JEL for a single parameter using one-dimensional U-statistics and obtained several asymptotic results. We can easily extend their results to p-dimensional U-statistics and
then to the smoothed JEL under the current setup. We state our main result in the following theorem, and relegate the proofs to the Appendix.
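The pseudo-value construction can be illustrated with a scalar degree-2 U-statistic, a stand-in for the paper's vector-valued kernel. A known identity makes the sketch easy to check: the mean of the jackknife pseudo-values reproduces the U-statistic exactly. The kernel and data below are purely illustrative.

```python
from itertools import combinations

def u_stat(data, kernel):
    """Degree-2 U-statistic: average of the symmetric kernel over all pairs."""
    pairs = list(combinations(data, 2))
    return sum(kernel(a, b) for a, b in pairs) / len(pairs)

def jackknife_pseudo_values(data, kernel):
    """V_i = n*U_n - (n-1)*U_{n-1}^{(-i)}, with the i-th observation deleted."""
    n = len(data)
    U_n = u_stat(data, kernel)
    return [n * U_n - (n - 1) * u_stat(data[:i] + data[i + 1:], kernel)
            for i in range(n)]

# Illustrative kernel: h(x, y) = (x - y)^2 / 2, whose U-statistic is the
# unbiased sample variance.
data = [1.0, 2.0, 4.0, 8.0]
V = jackknife_pseudo_values(data, lambda x, y: (x - y) ** 2 / 2)
print(sum(V) / len(V))  # equals U_n: the pseudo-value mean reproduces U_n
```

The JEL then treats these pseudo-values as approximately independent observations, which is what reduces the nonlinear EL optimization to the standard one-sample form.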
Theorem 1: Under conditions C1–C4, with a suitably defined p×1 mean vector and p×p variance-covariance matrix evaluated at β_0, the smoothed jackknife empirical log-likelihood ratio at β_0 converges in distribution to a chi-squared distribution with p degrees of freedom as n → ∞. An asymptotic 100(1 − α)% level JEL confidence region for β_0 is given by the set of parameter values at which the ratio does not exceed χ²_{α,p}.
To construct confidence intervals or regions for a single element or a q-dimensional (1 ≤ q < p) subvector β^(1) of the regression parameter vector, a profile jackknife empirical log-likelihood ratio can be defined. Since its derivation is similar to that in [14], we omit it here.
Theorem 2: Under the conditions for the NA method stated in Theorem 1, denote β_01 as the true value of β^(1); then the limiting distribution of the profile jackknife empirical log-likelihood ratio is chi-squared with q degrees of freedom. That is,
The idea of smoothed empirical likelihood is not new in the literature; for example, Whang [15] considered a smoothed empirical likelihood method to make inference for quantile regression models, where the estimating function is actually a U-statistic of degree one only and JEL is not needed. Here, our work further extends smoothed EL to smoothed JEL for the popular AFT model with possibly outlying response or covariate values in survival analysis, and provides an exemplar inference procedure for other types of rank regression models, for which similar theoretical results can be established accordingly.
In this section, two simulation studies are conducted to compare the performance of the NA method and other competitive methods to that of the smoothed JEL method under different contaminating
conditions on survival times and covariate values. To evaluate sensitivity of the NA and JEL methods to the weights used in the smoothed rank estimating functions, results are also compared between
the weighted and unweighted smoothed rank estimating functions under different censoring rates and sample sizes. In the first simulation study, we fit a model with a one-dimensional continuous
covariate, and then compare the bias of estimators, sample standard deviation of estimators, average estimated standard deviation of estimators, mean squared error of estimators, the coverage
probability and average length of the confidence intervals. In the second simulation study, a model with a two-dimensional covariate is fitted, and the performance of different methods is compared in
terms of joint coverage probabilities and marginal coverage probabilities.
In the following AFT model with a one-dimensional covariate,
where the true regression parameter β_0 = 2. The censoring time C was generated from a uniform distribution U(0, c), where c determines the censoring rate (cr). We chose different values of c to produce cr = 25%, 50%, 75%, respectively. The sample size was taken to be n = 30 and n = 100, respectively. Various combinations of error distribution, censoring rate and sample size provide a comprehensive
comparison between the NA and JEL methods. In order to compare the proposed method with its competitors in the literature, we also consider the quantile regression of Portnoy [4], the Gehan rank
regression of Jin et al. [5] and the empirical likelihood method of Zhou [10] in simulation studies. For ease of presentation, we construct the error term ε as follows. Let e be an arbitrary continuous random variable with cumulative distribution function F_e(t), let μ_e = E(e), and let the τ-th quantile of e be given by F_e^{-1}(τ). For any σ > 0, we define the error term ε as a location-scale transform of e such that Var(ε) = σ² and the τ-th quantile of ε equals 0.
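The role of c in setting the censoring rate can be checked numerically. The sketch below is a Monte Carlo illustration with an assumed N(0,1) covariate and N(0,1) error, not the paper's exact configurations; it only demonstrates that larger c (later censoring) lowers the censoring rate.

```python
import math
import random

def censoring_rate(c, n=20000, beta0=2.0, seed=0):
    """Monte Carlo estimate of the censoring rate when C ~ U(0, c) in an
    AFT model log(T) = beta0 * X + eps. X and eps are N(0, 1) here purely
    for illustration; the paper varies the error distribution."""
    rng = random.Random(seed)
    censored = 0
    for _ in range(n):
        T = math.exp(beta0 * rng.gauss(0, 1) + rng.gauss(0, 1))
        C = rng.uniform(0, c)
        censored += T > C  # event time exceeds censoring time
    return censored / n

# Larger c means later censoring, hence a lower censoring rate.
print(censoring_rate(0.5), censoring_rate(50.0))
```

In practice one would tune c (e.g., by bisection on such an estimate) until the target cr of 25%, 50% or 75% is reached.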
In the above AFT model, denote the k-th quantile of ε by t_k; then the k-th conditional quantile of Y = log(T) given X equals β_0 X + t_k. Since the parameter of interest is the slope parameter in both the quantile regression and the log-linear AFT model, we can compare the performance of different methods by aiming at the same target parameter β_0. We choose four different error distributions corresponding to four types of skewness in the simulation. Through the aforementioned process, they are generated from the standard normal, chi-squared and extreme value (EV) distributions, where N(0,1) is the standard normal distribution and symmetric, χ²(df) is the chi-squared distribution with df degrees of freedom and skewed to the right (smaller df results in larger skewness and vice versa), and EV is skewed to the left and can be generated from an exponential distribution by e = log(u), where u follows an exponential distribution. In this simulation setup, two different simulation scenarios were considered for the four error distributions. In the first scenario, the univariate covariate X was generated from the standard normal distribution N(0,1), and the error term ε was generated as above with τ = 0.5, which represents median regression.
In the second scenario, a data set with a contaminated covariate and a contaminated error distribution was generated to test the robustness of the NA and JEL methods. Specifically, 95% of the values of the covariate X were generated from the standard normal distribution N(0,1) and the other 5% were generated from the normal distribution N(−5,1); similarly, 95% of the error term ε values were generated as in the first scenario and the other 5% were generated with a larger σ than in Table 1.
We conducted simulation studies to compare the performance of the quantile regression method of Portnoy [4] (QR), the Gehan rank estimation method of Jin et al. [5] (JinNA), Heller's unweighted
estimation method (UWNA), Heller’s weighted estimation method (WNA), the empirical likelihood method of Zhou [10] (ZhouEL), and the proposed unweighted JEL (UWJEL) and weighted JEL (WJEL) methods, in
terms of bias (Bias) of estimators for β[0] = 2, sample standard deviation of estimators (SSD), average estimated standard deviation of estimators (AESD), mean squared error of estimators (MSE),
coverage probability (CP) of 95% confidence intervals, average length of confidence intervals (LC). QR, JinNA and ZhouEL were implemented using R functions crq, lss and RankRegTest in the R packages
quantreg, lss and emplik, respectively.
There were B = 1000 replications in each simulation setting. We did 50th percentile quantile regression (i.e., median regression with τ = 0.5) under low and medium censoring rates (cr = 25%, 50%); the results are denoted by QR_0.5, which guarantees that the estimates of the coefficients could be obtained. When the censoring rate is high (cr = 75%), we did 25th percentile quantile regression, denoted by QR_0.25, for the same
reason. Weighted and unweighted smoothed rank regression functions were investigated for both NA and JEL methods with both uncontaminated and contaminated data. Since Portnoy's, Jin’s and Zhou's
methods are available for unweighted cases only, and extending their methods to weighted cases is not trivial and beyond the scope of the current research, we did not consider them in our
simulation studies. The results are reported in Tables 2-5. We observe that QR method has relatively larger MSE, lower coverage probabilities and wider confidence intervals. Jin’s Gehan rank
estimation method is similar to Heller's UWNA. Zhou’s EL approach is suited to the data with low censoring rate and large sample size. The coverage probabilities of JEL are closest to the nominal
level 95% compared to those of NA and other competitors in most cases. This is particularly true for small sample sizes (e.g. n = 30) and high censoring rates (e.g. cr = 75%). It implies that JEL
generally has better coverage probability than NA and other methods. On the other hand, the average length of JEL is slightly longer than that of NA and other methods. Also, when the sample size
increases or the censoring rate decreases, the average length becomes shorter. Whether or not the covariates are contaminated, the weighted methods perform well; with contaminated covariates they perform much better than the unweighted methods (to abuse terminology somewhat, we use "weighted methods" to refer to the JEL and NA methods based on the weighted smoothed estimating function). The weights do not necessarily reduce the bias of the unweighted methods, but they provide a more accurate approximation to the sampling distributions of the estimators, and thus result in better coverage probability.
In this simulation study, we simulate the model with a two dimensional covariate as follows:
where β_0 = (β_01, β_02)^T = (2, −1)^T. Similar to the one-dimensional simulation, this simulation setup also has two scenarios. We considered three different error distributions, i.e., e ~ N(0,1), EV and χ²(df = 6), respectively. In the first scenario, X_1i and X_2i were generated from the standard normal distribution N(0,1), X_1i and X_2i are independent, and the error term ε was generated as in the one-dimensional simulation. In the second scenario, 95% of the X_1i and X_2i values were generated from the standard normal distribution N(0,1) and the other 5% were generated from the normal distribution N(−5,1); similarly, 5% of the error term ε values were contaminated as before.
Besides Bias, SSD and AESD, we also report the joint coverage probability (JCP) of 95% confidence regions for β_0 from the JinNA, ZhouEL, NA and JEL methods, respectively, and the marginal coverage
probability (MCP) of 95% confidence intervals for each parameter component from all the methods. Again the results in Tables 6-11 show that in most cases, JEL has better performance than other
methods. Especially when the censoring rate gets large and the sample size is small, QR was not stable and produced extraneous estimates, the JCP and MCP based on JEL is much closer to the nominal
level than all the other methods. This again indicates that JEL has better inference precision than NA and other methods. When the covariates are contaminated, the weighted NA and JEL methods work
better than the corresponding unweighted methods. The weighted methods perform better even when there is no contamination in the covariates. Hence, the weighted JEL method is the best choice in all
the circumstances, and it is recommended for use in practice.
In this section, two real data sets are used to illustrate our method and to compare with other methods in the literature. The data sets include the Stanford Heart Transplant Data and the Multiple
Myeloma Data. Previous authors have analyzed these data sets in their work, for example, Jin et al. (2003) and Zhou (2005), but they didn't take outlying covariate values into consideration.
Following their analyses, we consider a single continuous covariate in the first data set and two continuous covariates in the second data set. We first demonstrate that outlying covariate values
exist in these data sets, and then we apply the proposed weighted JEL method for analysis.
The Stanford Heart Transplant Data can be found in Miller & Halpern [16], and is obtained by using attach(stanford2) inside the R survival package. In short, the Stanford heart transplant program
began in October 1967 and a total of 184 patients received heart transplants. The information contained in the data set include: survival time in days; an indicator of whether the patient was dead or
alive by February 1980; the age of the patient in years at the time of transplant; and the T5 mismatch score, which makes a distinction between deaths primarily due to rejection of the donor heart by
the recipient's immune system and non-rejection related deaths. For 27 of the 184 transplant patients, the T5 mismatch scores were missing because the tissue typing was never completed. Following
Miller & Halpern's [16] suggestion, the five patients with survival times less than 10 days were deleted in order to compare our new methods with existing methods. In the end, there were 152 cases with
a complete data record, which we will use to fit the following
where T is the survival time and X_i is the age of the i-th patient at their heart transplant. In this data set, the censoring rate is cr = 36%, with 55 people still alive at the end of the observation period and 97 deceased individuals. First, we use box plots to check for outliers in the observed responses Y_i and covariates X_i. Figure 1 clearly shows that there are some outliers (small values) in the ages of the patients; therefore, the weighted JEL is desirable. The results from the fitted model based on the weighted approaches are as follows: the 95% WJEL confidence interval for β is (−0.0890, −0.0210) with length 0.068 and the 95% WNA confidence interval is (−0.0872, −0.0202) with length 0.067. They are very close, except for a slightly longer length by JEL. Both confidence intervals indicate a significant negative association between age and survival time in this patient population.
When we ignore the outliers and use the unweighted methods, the 95% UWJEL confidence interval for β is (−0.0436, −0.0035) and the 95% UWNA confidence interval for β is (−0.0465, −0.0044). These results are quite different from those based on the weighted methods; however, they are very similar to those of Zhou [10], who reported an estimate of −0.0253: the 95% confidence interval based on the censored empirical likelihood ratio is (−0.0446, −0.0030) and the 95% Wald confidence interval is (−0.0462, −0.0044). This is not surprising, since Zhou's method does not take the outliers into consideration and is an unweighted method.
The Multiple Myeloma Data were reported by Krall et al. [17] and can be obtained from SAS [18]. The data set contains information on survival times and nine covariates: survival time, censoring status, logarithm of blood urea nitrogen (LogBUN), haemoglobin (HGB), Platelet, Age, LogWBC, Frac, LogPBM, Protein, and SCalc. Out of a total of 65 observations, 17 were censored, and the censoring rate is cr = 26%. Following Jin et al. [5], we fit an AFT model with these two covariates: Log(BUN) and HGB. To improve numerical efficiency, we also standardize the original covariates to have zero mean and unit variance. The fitted model is
Where X[li] is the standardized score of Log(BUN), and X[2i] is the standardized score of HGB.
From Figure 2, we notice that the standardized scores of Log(BUN) have outliers (large values). Using the weighted methods, we obtain the weighted estimates of the regression coefficients. The test statistics for the joint hypothesis (β_1, β_2) = (0, 0) based on the WJEL method and the WNA method are 11.34 and 11.21, and the corresponding p-values are 0.003 and 0.004, respectively, both indicating a jointly significant effect of Log(BUN) and HGB on survival time. For the unweighted methods, the test statistics for (β_1, β_2) = (0, 0) based on the UWJEL method and the UWNA method are 16.909 and 16.666, and the corresponding p-values are 0.00021 and 0.00024, respectively. The estimated regression coefficients are similar to the results of Jin et al. (2003), which did not consider outliers and weights: their estimates were (−0.532, 0.292)
with estimated standard errors (0.146, 0.169). These results once again substantiate our claim that when covariates have outliers (e.g. the standardized score of LogBUN), the rank estimates of
regression coefficients under the weighted and unweighted methods may be quite different [19-21].
In this paper, we have developed the smoothed JEL method, a new inference method for the regression parameters in the AFT model with right censored data containing outlying response or covariate
values. Based on the weighted smoothed rank estimation function proposed by Heller [2], jackknife and empirical likelihood are integrated to yield the new method. The proposed smoothed JEL method
preserves not only important features of empirical likelihood, but also the double robustness of the weighted smoothed rank estimation. Another advantage of the new method is that it can be easily
implemented in a standard software environment and used by practitioners, for example, the R package such as emplik already exists to maximize the empirical likelihood functions and to obtain the
values of the test statistics. Simulation studies indicate that the smoothed JEL method outperforms the NA and other competitors in the sense of improving accuracy of inferences for regression
parameters. This is especially evident when sample size is small or censoring rate is high. Two real data sets with outlying covariate values are reanalyzed using the new method and the results show
superiority of the new method over those in the literature. Therefore, in practice, when the AFT model is used for survival data analysis, especially when there are outlying observations in either
responses or covariates, the proposed method deserves first consideration. Our method can also be extended to other types of rank regression models; we leave this for future research.
The authors acknowledge with gratitude the support for this research via Discovery Grants from the Natural Sciences and Engineering Research Council (NSERC) of Canada. They are grateful to Dr. Heller for providing his Fortran program, which helped them to develop R programs for the numerical studies.
Deep Learning in Python: High Level Comparison of TensorFlow and Keras
Deep learning is useful for unstructured data. It allows machines to learn from complex data. In our previous post, we covered why deep learning is powerful. In this post, we will cover an introduction to the TensorFlow and Keras open-source libraries in Python for deep learning models, and how to use a combination of both to ease tasks.
The use of these two libraries still requires the data to go through preparation steps like loading the dataset, imputing missing values, treating categorical variables, normalizing data and creating
a validation set.
TensorFlow is an open-source software library created by the Google Brain team. It eases the process of acquiring data, training models, serving predictions, and refining future results. TensorFlow can run on both CPU and GPU.
It uses data flow graphs for numerical computation - structures that describe how data moves through a graph, or a series of processing nodes. Nodes in the graph represent mathematical operations.
Edges in the graph represent the multidimensional data arrays (tensors) communicated between them.
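The node-and-edge picture can be made concrete with a toy evaluator. This is an illustration of the dataflow idea only, not how TensorFlow is implemented; real tensors are multidimensional arrays, while plain floats keep the sketch short.

```python
# A toy dataflow graph: nodes are operations, edges carry values.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self):
        # Pull values along incoming edges, then apply this node's operation.
        return self.op(*(n.run() for n in self.inputs))

const = lambda v: Node(lambda: v)                  # source node (no inputs)
add = lambda a, b: Node(lambda x, y: x + y, a, b)  # math-op node
mul = lambda a, b: Node(lambda x, y: x * y, a, b)

# Graph for (2 + 3) * 4, evaluated by traversing the graph.
graph = mul(add(const(2.0), const(3.0)), const(4.0))
print(graph.run())  # 20.0
```

The point is that the computation is described first as a structure and evaluated afterwards, which is what lets a framework schedule it on CPU or GPU.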
It excels at numerical computing. It combines computational algebra with compilation optimization techniques, making it easy to calculate many mathematical expressions that would be difficult to compute otherwise.
It offers more advanced operations than Keras. This comes in very handy if you are doing research or developing special kinds of deep learning models. It offers more control over your network. For example, you can decide which variables should be trainable and which should not.
But with more flexibility, TensorFlow is not as easy to use. The flexibility comes at the cost of complexity. This is where Keras helps to simplify some tasks. In the next part, we will cover Keras and explain how it differs from TensorFlow.
The TensorFlow team is, however, spending a lot of effort on developing easier-to-use APIs, and TensorFlow is evolving rapidly. It has a very large community which supports its development, and it has gained a lot of momentum lately.
There is also Keras. But you cannot use Keras without an underlying back-end engine like TensorFlow, CNTK, or Theano.
It is a high-level neural network API, written in Python. Keras is extremely user-friendly and comparatively easier than TensorFlow. With Keras, you do not need to know the back-end in detail.
If you want to quickly build and test a neural network with minimal lines of code, choose Keras. You can build simple or very complex neural networks within a few minutes with Keras. Keras also runs
seamlessly on CPU and GPU.
Keras supports almost all the models of a neural network – fully connected, convolutional, pooling, recurrent, embedding, etc. Furthermore, these models can be combined to build more complex models.
In short, using Keras is like building with Lego blocks: an out-of-the-box solution. It allows developers to avoid building from scratch what is already available and commonly used.
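The "Lego blocks" style can be sketched in plain Python: a model is just an ordered chain of layer callables. This mirrors the block-by-block way Keras models are assembled but is an illustration only, not the Keras API itself.

```python
# Minimal "Sequential"-style composition: layers are callables applied in order.
class Sequential:
    def __init__(self, layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

# Two toy "layers": a fixed scaling and a ReLU activation.
scale = lambda factor: (lambda x: [v * factor for v in x])
relu = lambda x: [max(0.0, v) for v in x]

model = Sequential([scale(2.0), relu])
print(model([-1.0, 0.5]))  # [0.0, 1.0]
```

Real Keras layers carry trainable weights, but the composition principle — stack blocks, get a model — is the same.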
Keras provides all the general-purpose functionality for building a deep learning model, but provides fewer operations than TensorFlow.
If you want to support custom layers, you can use Keras, but this will have the same complexity as TensorFlow.
Using Tensorflow with Keras
Keras + TensorFlow = easier neural network construction! (All you ever need)
Keras is now integrated as part of the TensorFlow core API via the "tf.keras" module. This means you can define your model using Keras, and then drop down to TensorFlow if you need specific TensorFlow functionality.
TensorFlow is complex in some areas. Keras makes a great model-definition add-on for TensorFlow, as it simplifies many tasks. Keras enables quick POCs and experiments before launching into a full-scale build process.
Keras can be used to quickly build base models. These base models if built using TensorFlow will take time to build from scratch.
However, if you need low-level changes to your model, Keras will not work (as it is not as flexible) and you will need TensorFlow. TensorFlow allows you to deep dive and control low-level details.
Essentially, you can code your model and training procedures using the easy-to-use Keras API, and then add custom implementations to the model or training process using pure TensorFlow.
In this article, we discussed how TensorFlow and Keras break down foundational barriers to developing deep learning models. Knowing which to use, depending on the task at hand, increases development efficiency. Deep learning has rapidly advanced solutions to complex artificial intelligence problems.
Do you use deep learning in your business? Share by leaving us a comment. If you require more information or assistance on predictive analytics, contact us. We want to be an extension of our clients.
We assist to systemise processes and decision making. Subscribe to our newsletter for regular feeds.
Did you find this blog post helpful? Share the post! Have feedback or other ideas? We'd love to hear from you.
The integral equation

P ∫_c K(ζ′, ζ) / (ζ′ − ζ) φ(ζ′) dζ′ = h(ζ) φ(ζ) + f(ζ)

is shown to have simple solutions obtained by standard and elementary methods if h and K have appropriate analytic properties. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/70799/2/JMAPAQ-7-12-2121-1.pd
The lattice potential Korteweg-de Vries equation (LKdV) is a partial difference equation in two independent variables, which possesses many properties that are analogous to those of the celebrated
Korteweg-de Vries equation. These include discrete soliton solutions, Backlund transformations and an associated linear problem, called a Lax pair, for which it provides the compatibility condition.
In this paper, we solve the initial value problem for the LKdV equation through a discrete implementation of the inverse scattering transform method applied to the Lax pair. The initial value used
for the LKdV equation is assumed to be real and decaying to zero as the absolute value of the discrete spatial variable approaches large values. An interesting feature of our approach is the solution
of a discrete Gel'fand-Levitan equation. Moreover, we provide a complete characterization of reflectionless potentials and show that this leads to the Cauchy matrix form of N-soliton solutions
In an attempt to understand the conditions under which the neutron transport equation has solutions, and the properties of those solutions, a number of existence and uniqueness theorems are proved.
One finds that the properties of the solution are closely related to the boundedness of the source as well as to certain velocity-space integrals of the scattering kernel. Both time-dependent and time-independent equations are considered, as are the time-dependent and time-independent adjoint equations. Although only a very few of all possible existence and uniqueness theorems for these equations are considered here, the work may serve as a guide to the treatment of similar problems. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/70329/2/JMAPAQ-4-11-1376-1.pd
We study the stochastic evolution of four species in cyclic competition in a well mixed environment. In systems composed of a finite number $N$ of particles these simple interaction rules result in a
rich variety of extinction scenarios, from single species domination to coexistence between non-interacting species. Using exact results and numerical simulations we discuss the temporal evolution of
the system for different values of $N$, for different values of the reaction rates, as well as for different initial conditions. As expected, the stochastic evolution is found to closely follow the
mean-field result for large $N$, with notable deviations appearing in proximity of extinction events. Different ways of characterizing and predicting extinction events are discussed. Comment: 19 pages, 6 figures, submitted to J. Stat. Mech.
Academic Development Programmes (ADPs), or Extended Curriculum Programmes (ECPs), continue to play a central role in increasing access to previously marginalised students in higher education in South
Africa. Using Archer's morphogenetic approach, this study examines how a group of ADP students 'made their way' through their engineering undergraduate studies. Twelve students in their fourth
year of study were interviewed three times and selected university documents were analysed. The authors found that the fragmented curriculum, shortened consolidation and examination periods, and
unfavourable examination timetables potentially constrained the students' aspirations. In addition, the mainstream students and lecturers' ideas about ADP students worsened their experience of
marginalisation and exception. We also found that students experienced the mainly black student enrolment of the ADP as racial discrimination. The findings indicate that students found themselves in
enormously constrained circumstances, but they also exhibited what Archer calls 'corporate agency' and different 'modes of reflexivity' to overcome some of these constraints. We argue that
the establishment of Academic Development Programmes as separate from mainstream curricula, while enabling access to some extent, may have unintended consequences of also constraining the students
for whom they are designed.
An optical vector matrix multiplication scheme that encodes the matrix elements as a holographic mask consisting of linear diffraction gratings is proposed. The binary, chrome on glass masks are
fabricated by e-beam lithography. This approach results in a fairly simple optical system that promises both large numerical range and high accuracy. A partitioned computer generated hologram mask
was fabricated and tested. This hologram had diagonally separated outputs, compact facets, and symmetry about the axis. The resultant diffraction pattern at the output plane is shown. Since the
grating fringes are written at 45 deg relative to the facet boundaries, the many on-axis sidelobes from each output are seen to be diagonally separated from the adjacent output signals.
The problem of the flux to a spherical trap in one and three dimensions, for diffusing particles undergoing discrete-time jumps with a given radial probability distribution, is solved in general,
verifying the Smoluchowski-like solution in which the effective trap radius is reduced by an amount proportional to the jump length. This reduction in the effective trap radius corresponds to the
Milne extrapolation length. Comment: Accepted for publication, in press.
A Green's function for the elastic wave equation, which satisfies certain boundary conditions on the surface of a homogeneous half-space, is derived by means of the Fourier transformation. This
half-space Green's function is then applied to the computation of radiative effects due to the earth's surface when a radiating source is located on or within that surface. The results obtained are
to be taken as an extension of a previous and similar formulation for the infinite medium due to Case and Colwell. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/70190/2/
We implement the concept of Wilson renormalization in the context of simple quantum mechanical systems. The attractive inverse square potential leads to a $\beta$ function with a nontrivial ultraviolet
stable fixed point and the Hulthen potential exhibits the crossover phenomenon. We also discuss the implementation of the Wilson scheme in the broader context of one dimensional potential problems.
The possibility of an analogue of Zamolodchikov's $C$ function in these systems is also discussed. Comment: 16 pages, UR-1310, ER-40685-760. (Additional references included.)
Ktb Noise Calculator
In various industries and environmental assessments, understanding noise levels is crucial for ensuring safety and compliance with regulations. The Ktb Noise Calculator is a specialized tool designed
to help users calculate noise levels based on distance from the source and source level in decibels (dB). By applying the formula NL = SL - 20 * log10(D), where NL is the noise level, SL is the
source level, and D is the distance, this calculator provides valuable insights into noise reduction and propagation. This article will delve into the importance of the Ktb Noise Calculator, guide
you on how to use it, and address common questions.
The Ktb Noise Calculator serves several key functions:
1. Noise Assessment: It helps in assessing how noise levels decrease with distance from the source, which is essential for environmental noise studies and urban planning.
2. Regulatory Compliance: Many industries have noise regulations that require accurate measurement and reporting. This calculator aids in complying with such standards by providing precise noise
level data.
3. Safety and Comfort: In settings such as workplaces or residential areas, managing noise levels is critical for maintaining safety and comfort. Accurate noise level calculations help in
implementing noise control measures effectively.
4. Design and Engineering: Engineers and designers use noise level data to optimize the placement of equipment and design noise mitigation solutions, ensuring that noise levels remain within
acceptable limits.
5. Cost Efficiency: By understanding noise levels, businesses can make informed decisions about investments in noise reduction technologies or modifications, leading to potential cost savings.
How to Use
Using the Ktb Noise Calculator is straightforward. Follow these steps:
1. Input the Distance: Enter the distance from the noise source in meters. This distance is crucial as noise levels typically decrease as the distance increases.
2. Input the Source Level: Enter the source level in decibels (dB). This value represents the initial noise level emitted by the source.
3. Calculate: Click the "Calculate Noise Level" button. The calculator will apply the formula NL = SL - 20 * log10(D) to determine the noise level at the given distance.
4. Interpret Results: Review the calculated noise level (NL) displayed by the calculator. This result indicates the expected noise level at the specified distance from the source.
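The four steps above can be sketched in a few lines of Python (the function name here is illustrative, not part of any published tool):

```python
import math

def noise_level(source_level_db, distance_m):
    # NL = SL - 20 * log10(D): free-field spreading loss with distance
    return source_level_db - 20 * math.log10(distance_m)

# A 100 dB source measured from 10 m away:
print(noise_level(100.0, 10.0))  # 80.0
```

Note that doubling the distance lowers the level by about 6 dB, since 20 · log10(2) ≈ 6.02.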
1. What does the Ktb Noise Calculator do?
The Ktb Noise Calculator determines the noise level at a specific distance from a noise source based on the source level in decibels.
2. Why is the distance important in noise calculations?
Distance affects the noise level due to the principle that sound intensity decreases with increasing distance from the source.
3. How does the calculator use the formula NL = SL - 20 * log10(D)?
This formula accounts for the reduction in noise level as sound propagates through the environment. It calculates the decrease in sound intensity based on distance.
4. Can the calculator be used for different types of noise sources?
Yes, the calculator can be used for various noise sources as long as the source level and distance are known.
5. What units are used in the calculation?
The source level is in decibels (dB), and the distance is in meters. The result is also in decibels (dB).
6. Is it necessary to convert units before using the calculator?
No, the calculator handles the units as long as the inputs are provided correctly.
7. How accurate is the noise level calculation?
The calculation is accurate if the input values are correct and the assumptions of the formula are met (e.g., free field conditions).
8. Can the calculator help in noise abatement projects?
Yes, by providing precise noise level data, the calculator aids in planning and implementing effective noise control measures.
9. Are there any limitations to the Ktb Noise Calculator?
The calculator assumes ideal conditions and does not account for factors such as obstacles or atmospheric conditions that can affect sound propagation.
10. Where can I access the Ktb Noise Calculator?
The Ktb Noise Calculator is available online through various engineering and environmental websites or can be built as a custom tool for specific needs.
The Ktb Noise Calculator is an invaluable tool for anyone involved in noise assessment and management. By providing a clear method for calculating noise levels based on distance and source level, it
aids in regulatory compliance, safety, and effective noise management. Understanding how to use this calculator and interpreting its results can significantly enhance noise control strategies and
contribute to a more comfortable and compliant environment. Whether for professional use or personal projects, mastering this tool ensures accurate and reliable noise level assessments.
A Simple Note on Ionic Product of Water
Water plays a significant role in our lives. Two atoms of hydrogen along with a single atom of oxygen make up a molecule of water, which is why H2O is the formula for water. This helps us
understand the basic chemistry of water. As we move forward, we shall focus on the formula for the ionic product of water, the significance of the ionic product of water and the ionic
product of water equation.
Ionic Product of Water
It can be said that pure water is a weak electrolyte. Thus, it ionises only to a limited extent, producing hydroxyl ions and protons. Since the ionisation of water is so limited, only a
small fraction of water molecules dissociate into H+ and OH- ions. Therefore, the concentration of the un-ionised water molecules remains essentially
constant.
Here, the constant for equilibrium can be called ‘K’ and explained through the following equation –
• K = [H3O+][OH-] / [H2O]²
• K[H2O]² = [H3O+][OH-]
Thus, implying
K[H2O]² = constant = Kw
From the above equations, it can be inferred that
Kw = [H3O+][OH-]
Which can also be expressed as
Kw = [H+][OH-]
Thus, one may conclude that at a given temperature this constant Kw is called the ionic product of water.
Kw is the ion-product of water and can be defined as the product of the concentrations of the hydroxide ions and hydrogen ions. It can further be noted that the value of Kw is very small, since
the equilibrium favours the reactants.
Further, one can note that according to experiments, at a temperature of twenty-five degrees Celsius the value of Kw is 1.0 × 10⁻¹⁴.
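As a quick illustrative check of these relations (not from the source): in pure water [H+] = [OH-], so each concentration is the square root of Kw, which gives the familiar neutral pH of 7.

```python
import math

Kw = 1.0e-14  # ionic product of water at 25 degrees Celsius

# In pure water, ionisation produces equal amounts of H+ and OH-,
# so [H+] = [OH-] = sqrt(Kw).
h_conc = math.sqrt(Kw)
pH = -math.log10(h_conc)

print(h_conc, pH)  # approximately 1e-07 mol/L and pH 7
```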
Thus, we hope that with this the formula for the ionic product of water, the value for the ionic product of water and the ionic product of water equation has become a little clearer.
For understanding the significance of the ionic product of water, one can first try to understand how the human plasma is similar to an aqueous solution. Now, there are certain rules of chemistry
such as the constancy of the ionic product of water and the electrical neutrality principle which are to be obeyed. With the help of these rules, one can understand the balance of acid-base in the
body. As per the principle of electroneutrality, it can be understood that the plasma is supposed to be neutral electrically. Further, there has to be constancy in the ionic product of water. This
implies how the hydrogen ions plasma concentration is based upon the plasma ionic composition.
As observed from the above discussion, the understanding of various significant concepts such as the formula for the ionic product of water, the significance of the ionic product of water and the
ionic product of water equation can now be understood in a clear and comprehensive manner. The value of Kw that has been determined through experiments is also mentioned. Thus, it can be concluded
that with the help of the above sections one can learn about the formula for the ionic product of water, the significance of the ionic product of water and the ionic product of water equation.
What does CCSS mean by “know from memory?”
Knowing from memory means not having to think about it.
Two of the best standards from the Common Core State Standards are on our home page:
By end of Grade 2, know from memory all sums of two one-digit numbers and
By the end of Grade 3, know from memory all products of two one-digit numbers.
These standards name the most important elementary math skills of all, because they are the foundation of all further work in mathematics. But what does it mean to say students know math facts “from
memory?” It means that students don’t have to stop to figure it out. Say for example a student is adding nine plus seven. A student can figure that out by thinking that because 9 is one more than 8
and 7 is one less than 8, the answer to 9+7 would be the same as 8+8, which is 16. This is a smart strategy for figuring out the answer, but knowing it from memory means the student simply remembers
the answer is 16.
So if second grade students know from memory the sums of all single digit numbers, they can answer any of those problems without hesitation, without having to stop and think about them. That takes
practice, to build up the neural connections, so that students remember the answers instantly without some intervening thought process. That’s what Rocket Math is specifically designed to do.
Practicing figuring out the answer to facts is NOT the same thing as recalling them from memory. So any practice procedure that allows students a long time to answer facts, allows hesitations, will
not be very helpful in achieving that status of “knowing from memory.”
The peer practice procedures in Rocket Math require the “checker” to follow a “correction procedure” whenever there is a hesitation. If the student has to stop even for a second to “think about it”
they need more practice on that fact to commit it fully to memory. The “correction procedure” provides that extra needed practice. Having students complete worksheets on their own will NEVER
eliminate that “stopping to figure it out.” That is why the oral peer practice in Rocket Math is essential. And that is why Rocket Math really will help students come to “know from memory” all sums
of two one-digit numbers.
2 thoughts on “What does CCSS mean by “know from memory?””
1. This topic of “know from memory” is something I have been digging into as a special educator. I wonder what your thoughts are about whether certain accommodations from these “know from memory”
standards would actually be modifying the curriculum?
For example, if we used “extra time to respond” and the student had to use their fingers or some other method to count, would they then not be doing the standard?
This relates to where I’m at in middle school math, but I think that it’s reflected in the continuum of the common core maths.
□ Actually, your example is very clear that it is not “knowing from memory.” You are describing “deriving from a strategy” or what I call, “figuring it out.” When you know it from memory, when
you recall the answer, then you stop having to “figure it out.” These are two very different things. I used to ask workshop participants to imagine sitting next to me in a bar and asking me
for my name. If I said, “Wait a second, I have it here on my driver’s license,” they would likely move away from me while wondering what kind of traumatic brain injury I had sustained! They
would very likely say, “OMG, that man doesn’t know his own name.” The purpose of the verbal rehearsal that is a daily part of Rocket Math is to cement these facts in memory. Then when a
student says to themselves, “8 times 7 is,” the answer pops into their mind with no effort. It takes quite a bit of practice to achieve that. However, the ability to instantly recall the
answers to basic math facts makes doing mathematical computation a relative breeze. It makes seeing relationships among numbers very obvious. It makes reducing fractions and finding common
denominators easy. That’s why the Common Core thinks “knowing from memory” is so worthwhile. It’s why I began promoting Rocket Math in the first place.
[CP2K-user] [CP2K:20016] Output file was overwritten by warnings
ma455...@gmail.com ma455173220 at gmail.com
Tue Mar 12 00:20:02 UTC 2024
I've encountered situations where errors or warnings cause the entire
output file to be overwritten (as attached). I'm curious about the cause of
this and how to prevent it from happening.
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/13f7fbdb-b71d-407c-8399-668829f13fe6n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://lists.cp2k.org/archives/cp2k-user/attachments/20240311/cb9d89b8/attachment-0001.htm>
-------------- next part --------------
JOBID| JOBFS 110619495.gadi-pbs
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1471, 1471) = 1.000000000000114
The deviation from the expected value 1 is 1.141E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1471, 1471) = 1.000000000000129
The deviation from the expected value 1 is 1.288E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1905, 1905) = 1.000000000000106
The deviation from the expected value 1 is 1.059E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1687, 1687) = 1.000000000000107
The deviation from the expected value 1 is 1.070E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1481, 1481) = 1.000000000000108
The deviation from the expected value 1 is 1.079E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1432, 1432) = 1.000000000000115
The deviation from the expected value 1 is 1.146E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (2017, 2017) = 1.000000000000105
The deviation from the expected value 1 is 1.048E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1971, 1971) = 1.000000000000101
The deviation from the expected value 1 is 1.006E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1718, 1718) = 1.000000000000106
The deviation from the expected value 1 is 1.059E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (2132, 2132) = 1.000000000000114
The deviation from the expected value 1 is 1.141E-13
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1557, 1557) = 1.000000000000100
The deviation from the expected value 1 is 1.001E-13
*** WARNING in fm/cp_fm_diag.F:337 :: Check of matrix diagonalization ***
*** failed in routine check_diag ***
1 Broy./Diag. 0.20E+00 35.8 0.06576023 -2903.1684802823 -2.90E+03
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1554, 1554) = 1.000000000000104
The deviation from the expected value 1 is 1.044E-13
*** WARNING in fm/cp_fm_diag.F:337 :: Check of matrix diagonalization ***
*** failed in routine check_diag ***
The eigenvectors returned by ELPA are not orthonormal
Matrix element (1555, 1555) = 1.000000000000122
The deviation from the expected value 1 is 1.217E-13
*** WARNING in fm/cp_fm_diag.F:337 :: Check of matrix diagonalization ***
*** failed in routine check_diag ***
Abramowitz and Stegun
The informal name for the classical Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. As it predates Tcl by about a quarter of a century, there is no direct Tcl
connection (though there is the NIST connection), but being a classical reference, it gets mentioned every now and then when something numerical is done on this wiki. Hence some pointers for those
who do not themselves own a copy may be useful.
The original book can be viewed online at [L1 ]. See also the Wikipedia article [L2 ].
The successor to this book is the Digital Library of Mathematical Functions [L3 ] of BIG SCIENCE: National Institute of Standards and Technology, but that is currently (2008) rather far from
Geometric problems in machine learning
We present some problems with geometric characterizations that arise naturally in practical applications of machine learning. Our motivation comes from a well known machine learning problem, the
problem of computing decision trees. Typically one is given a dataset of positive and negative points, and has to compute a decision tree that fits it. The points are in a low dimensional space, and
the data are collected experimentally. Most practical solutions use heuristic algorithms. To compute decision trees quickly, one has to solve optimization problems in one or more dimensions
efficiently. In this paper we give geometric characterizations for these problems. We present a selection of algorithms for some of them. These algorithms are motivated from practice, and have been
in many cases implemented and used as well. In addition, they are theoretically interesting, and typically employ sophisticated geometric techniques. Finally we present future research directions.
Original language English (US)
Title of host publication Applied Computational Geometry
Subtitle of host publication Towards Geometric Engineering - FCRC 1996 Workshop, WACG 1996, Selected Papers
Editors Dinesh Manocha, Ming C. Lin
Publisher Springer Verlag
Pages 121-132
Number of pages 12
ISBN (Print) 354061785X, 9783540617853
State Published - 1995
Event 1st ACM Workshop on Applied Computational Geometry, WACG 1996 held as part of 2nd Federated Computing Research Conference, FCRC 1996 - Philadelphia, United States
Duration: May 27 1996 → May 28 1996
Publication series
Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume 1148
ISSN (Print) 0302-9743
ISSN (Electronic) 1611-3349
Other 1st ACM Workshop on Applied Computational Geometry, WACG 1996 held as part of 2nd Federated Computing Research Conference, FCRC 1996
Country/Territory United States
City Philadelphia
Period 5/27/96 → 5/28/96
All Science Journal Classification (ASJC) codes
• Theoretical Computer Science
• General Computer Science
Dive into the research topics of 'Geometric problems in machine learning'. Together they form a unique fingerprint.
C++ Program Archives - coderz.py
Accessing Elements of a 2-D Array: The most basic type of multidimensional array in C++ is a two-dimensional array. One way to think of it is as an array of arrays. A two-dimensional array is also
called a matrix. Let us take a look at accessing its elements. There are 2 ways to access the […]
December 13, 2022 | C++ | No comments
Milk pooling and blending#
Pooling and blending operations involve the 'pooling' of various streams to create intermediate mixtures that are subsequently blended with other streams to meet final product specifications.
These operations are common to the chemical processing and petroleum sectors where limited tankage may be available, or when it is necessary to transport materials by train, truck, or pipeline to
remote blending terminals. Similar applications arise in agriculture, food, mining, wastewater treatment, and other industries.
This notebook considers a simple example of a wholesale milk distributor to show how non-convexity arises in the optimization of pooling and blending operations. Non-convexity is due to the presence of
bilinear terms that are the product of two decision variables where one is a scale-dependent extensive quantity measuring the amount or flow of a product, and the other is scale-independent intensive
quantity such as product composition. The notebook then shows how to develop and solve a convex approximation of the problem, and finally demonstrates solution the use of Couenne, a solver
specifically designed to find global solutions to mixed integer nonlinear optimization (MINLO) problems.
# install dependencies and select solver
%pip install -q amplpy pandas matplotlib numpy scipy
SOLVER_LO = "cbc"
SOLVER_GNLO = "couenne"
from amplpy import AMPL, ampl_notebook
ampl = ampl_notebook(
modules=["coin"], # modules to install
license_uuid="default", # license to use
) # instantiate AMPL object and register magics
Using default Community Edition License for Colab. Get yours at: https://ampl.com/ce
Licensed to AMPL Community Edition License for the AMPL Model Colaboratory (https://colab.ampl.com).
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from itertools import product
Problem: Pooling milk for wholesale blending and distribution#
A bulk distributor supplies custom blends of milk to several customers. Each customer has specified a minimum fat content, a maximum price, and a maximum amount for the milk they wish to buy. The
distributor sources raw milk from local farms. Each farm produces a milk with a known fat content and cost.
The distributor has recently identified more affordable sources of raw milk from several distant farms. These distant farms produce milk grades that can be blended with milk from the local farms. However,
the distributor only has one truck with a single tank available for transporting milk from the distant farms. As a result, milk from the distant farms must be combined in the tank before being
transported to the blending station. This creates a 'pool' of uniform composition which is then blended with local milk to meet customer requirements.
The process is shown in the following diagram. The fat content and cost of raw milk is given for each farm. For each customer, data is given for the required milk fat content, price, and the maximum
demand. The arrows indicate pooling and blending of raw milk supplies. Each arrow is labeled with the amount of raw milk.
What should the distributor do?
• Option 1. Do nothing. Continue operating the business as usual with local suppliers.
• Option 2. Buy a second truck to transport raw milk from the remote farms to the blending facility without pooling.
• Option 3. Pool raw milk from the remote farms into a single truck for transport to the blending facility.
customers = pd.DataFrame(
    {
        "Customer 1": {"min_fat": 0.045, "price": 52.0, "demand": 6000.0},
        "Customer 2": {"min_fat": 0.030, "price": 48.0, "demand": 2500.0},
        "Customer 3": {"min_fat": 0.040, "price": 50.0, "demand": 4000.0},
    }
).T

suppliers = pd.DataFrame(
    {
        "Farm A": {"fat": 0.045, "cost": 45.0, "location": "local"},
        "Farm B": {"fat": 0.030, "cost": 42.0, "location": "local"},
        "Farm C": {"fat": 0.033, "cost": 37.0, "location": "remote"},
        "Farm D": {"fat": 0.050, "cost": 45.0, "location": "remote"},
    }
).T

local_suppliers = suppliers[suppliers["location"] == "local"]
remote_suppliers = suppliers[suppliers["location"] == "remote"]
│ │min_fat │price│demand│
│Customer 1 │0.045 │52.0 │6000.0│
│Customer 2 │0.030 │48.0 │2500.0│
│Customer 3 │0.040 │50.0 │4000.0│
│ │ fat │cost│location │
│Farm A │0.045│45.0│local │
│Farm B │0.03 │42.0│local │
│Farm C │0.033│37.0│remote │
│Farm D │0.05 │45.0│remote │
Option 1. Business as usual#
The normal business of the milk distributor is to blend supplies from local farms to meet customer requirements. Let \(L\) designate the set of local suppliers, and let \(C\) designate the set of
customers. Decision variable \(z_{l, c}\) is the amount of milk from local supplier \(l\in L\) that is mixed into the blend sold to customer \(c\in C\).
The distributor's objective is to maximize profit
\[ \begin{align*} \text{profit} & = \sum_{(l, c)\ \in\ L \times C} (\text{price}_c - \text{cost}_l) z_{l,c} \end{align*} \]
where \((l, c)\ \in\ L\times C\) indicates a summation over the cross-product of two sets. Each term, \((\text{price}_c - \text{cost}_l)\), is the net profit of including one unit of raw milk from
supplier \(l\in L\) in the blend delivered to customer \(c\in C\).
The amount of milk delivered to each customer \(c\in C\) can not exceed the customer demand.
\[ \begin{align*} \sum_{l\in L} z_{l, c} & \leq \text{demand}_{c} & \forall c\in C \end{align*} \]
Let \(\text{fat}_l\) denote the fat content of the raw milk produced by farm \(l\), and let \(\text{min_fat}_c\) denote the minimum fat content required by customer \(c\), respectively. Assuming
linear blending, the model becomes
\[ \begin{align*} \sum_{(l,c)\ \in\ L \times C} \text{fat}_{l} z_{l,c} & \geq \text{min_fat}_{c} \sum_{l\in L} z_{l, c} & \forall c \in C \end{align*} \]
This is a standard linear blending problem that can be solved by linear optimization (LO).
model = AMPL()
model.eval(
    r"""
# define sources and customers
set L;
set C;

param price{C};
param demand{C};
param min_fat{C};
param cost{L};
param fat{L};

# define local -> customer flowrates
var z{L cross C} >= 0;

maximize profit: sum{l in L, c in C} z[l, c]*(price[c] - cost[l]);

subject to demand_req{c in C}:
    sum{l in L} z[l, c] <= demand[c];

subject to fat_content{c in C}:
    sum{l in L} z[l, c] * fat[l] >= sum{l in L} z[l, c] * min_fat[c];
"""
)
model.set_data(customers, "C")
model.set_data(local_suppliers.drop(columns=["location"]), "L")
model.option["solver"] = SOLVER_LO
model.solve()
# report results
print(f"\nProfit = {model.obj['profit'].value():0.2f}\n")
# create dataframe of results
z = model.var["z"]
L = model.set["L"]
C = model.set["C"]
print("Blending Plan")
Z = pd.DataFrame(
    [[l, c, round(z[l, c].value(), 1)] for l in L.members() for c in C.members()],
    columns=["supplier", "customer", ""],
)
Z = Z.pivot_table(index="customer", columns="supplier")
Z["Total"] = Z.sum(axis=1)
Z["fat content"] = [
    sum(z[l, c].value() * suppliers.loc[l, "fat"] for l in L.members())
    / sum(z[l, c].value() for l in L.members())
    for c in C.members()
]
Z["min_fat"] = customers["min_fat"]
Profit = 81000.00
Blending Plan
│ │ │Total │fat content │min_fat│
│ supplier │Farm A│Farm B│ │ │ │
│ customer │ │ │ │ │ │
│Customer 1 │6000.0│0.0 │6000.0│0.045 │0.045 │
│Customer 2 │0.0 │2500.0│2500.0│0.030 │0.030 │
│Customer 3 │2666.7│1333.3│4000.0│0.040 │0.040 │
Option 2. Buy an additional truck#
The distributor can earn a profit of 81,000 using only local suppliers. Is it possible to earn a higher profit by also sourcing raw milk from the remote suppliers?
Before considering pooling, the distributor may wish to know the maximum profit possible if raw milk from the remote suppliers could be blended just like local suppliers. This would require acquiring
and operating a separate transport truck for each remote supplier, and is worth knowing if the additional profit would justify the additional expense.
The linear optimization model presented in Option 1 extends directly to include both local and remote suppliers.
model = AMPL()
model.eval(
    r"""
# define sources and customers
set S;
set C;

param price{C};
param demand{C};
param min_fat{C};
param cost{S};
param fat{S};

# define supplier -> customer flowrates
var z{S cross C} >= 0;

maximize profit: sum{s in S, c in C} z[s, c]*(price[c] - cost[s]);

subject to demand_req{c in C}:
    sum{s in S} z[s, c] <= demand[c];

subject to quality{c in C}:
    sum{s in S} z[s, c] * fat[s] >= sum{s in S} z[s, c] * min_fat[c];
"""
)
model.set_data(customers, "C")
model.set_data(suppliers.drop(columns=["location"]), "S")
model.option["solver"] = SOLVER_LO
model.solve()
# report results
print(f"\nProfit = {model.obj['profit'].value():0.2f}\n")
# create dataframe of results
z = model.var["z"]
S = model.set["S"]
C = model.set["C"]
print("Blending Plan")
Z = pd.DataFrame(
    [[s, c, round(z[s, c].value(), 1)] for s in S.members() for c in C.members()],
    columns=["supplier", "customer", ""],
)
Z = Z.pivot_table(index="customer", columns="supplier")
Z["Total"] = Z.sum(axis=1)
Z["fat content"] = [
    sum(z[s, c].value() * suppliers.loc[s, "fat"] for s in S.members())
    / sum(z[s, c].value() for s in S.members())
    for c in C.members()
]
Z["min_fat"] = customers["min_fat"]
Profit = 122441.18
Blending Plan
│ │ │Total │fat content │min_fat│
│ supplier │Farm A│Farm B│Farm C│Farm D│ │ │ │
│ customer │ │ │ │ │ │ │ │
│Customer 1│0.0 │0.0 │1764.7│4235.3│6000.0│0.045 │0.045 │
│Customer 2│0.0 │0.0 │2500.0│0.0 │2500.0│0.033 │0.030 │
│Customer 3│0.0 │0.0 │2352.9│1647.1│4000.0│0.040 │0.040 │
Sourcing raw milk from the remote farms significantly increases profits. This blending, however, requires at least two trucks to keep the sources of milk from the remote suppliers separated until
they reach the blending facility. Note that the local suppliers are completely replaced by the lower cost remote suppliers, even to the extent of providing "product giveaway" by surpassing the minimum requirements of Customer 2.
Option 3. Pool delivery from remote suppliers#
Comparing Option 1 with Option 2 shows there is significantly more profit to be earned by purchasing raw milk from the remote suppliers. But that option requires an additional truck to keep the
supplies separated during transport.
Because only one truck with a single tank is available for transport from the remote farms, the pool and blending problem is to combine purchases from the remote suppliers into a single pool of
uniform composition, transport that pool to the distribution facility, then blend with raw milk from local suppliers to meet individual customer requirements. Compared to option 2, the profit
potential may be reduced due to pooling, but without the need to acquire an additional truck.
Pooling problem#
There are several mathematical formulations of the pooling problem in the academic literature. The formulation used here is called the "p-parameterization," in which the pool composition is represented by a new decision variable \(p\). The other additional decision variables are \(x_r\), the amount of raw milk purchased from remote supplier \(r\in R\), and \(y_c\), the amount of the pooled milk included in the blend delivered to customer \(c\in C\).
The profit objective is the difference between the income received for selling blended products and the cost of purchasing raw milk from local and remote suppliers.
\[ \begin{align*} \text{Profit} & = \sum_{(l,c)\ \in\ L \times C} (\text{price}_c - \text{cost}_l)\ z_{l,c} + \sum_{c\in C} \text{price}_c y_{c} - \sum_{r\in R} \text{cost}_r x_{r} \end{align*} \]
The product delivered to each customer from local farms and the pool can not exceed demand.
\[ \begin{align*} \sum_{l\in L} z_{l, c} + y_{c} & \leq \text{demand}_{c} & \forall c\in C \end{align*} \]
Purchases from the remote farms and the amounts delivered to customers from the pool must balance.
\[\begin{split} \begin{align*} \sum_{r\in R}x_{r} & = \sum_{c\in C} y_{c} \\ \end{align*} \end{split}\]
The average milk fat composition of the pool, \(p\), must satisfy an overall balance on milk fat entering the pool from the remote farms and the milk fat delivered to customers.
\[ \begin{align*} \sum_{r\in R}\text{fat}_{r}\ x_{r} & = \underbrace{p \sum_{c\in C} y_{c}}_{\text{bilinear}} \end{align*} \]
Finally, the milk fat required by each customer \(c\in C\) satisfies a blending constraint.
\[ \begin{align*} \underbrace{p y_{c}}_{\text{bilinear}} + \sum_{(l,c)\ \in\ L \times C} \text{fat}_{l}\ z_{l,c} & \geq \text{min_fat}_{c}\ (\sum_{l\in L} z_{l, c} + y_{c}) & \forall c \in C \end
{align*} \]
The last two constraints include bilinear terms which are the product of the decision variable \(p\) with decision variables \(y_c\) for all \(c\in C\).
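To see why these terms cause trouble, note that the product of two variables is neither convex nor concave. A midpoint check on the prototype function \(f(p, y) = p y\) makes this concrete (a small self-contained sketch):

```python
# f(p, y) = p * y, the prototype of the bilinear terms p * y_c
f = lambda p, y: p * y
midpoint = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

# Convexity fails: the function at the midpoint lies above the chord
A, B = (0.0, 1.0), (1.0, 0.0)
assert f(*midpoint(A, B)) > (f(*A) + f(*B)) / 2

# Concavity fails too: here the function at the midpoint lies below the chord
A, B = (0.0, 0.0), (1.0, 1.0)
assert f(*midpoint(A, B)) < (f(*A) + f(*B)) / 2
print("p*y is neither convex nor concave")
```

Because the feasible set defined by such constraints is non-convex, local solvers can stall at any of the local maxima seen in the plot below.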
Summarizing, the blending and pooling problem is to find a solution to maximize profit, where
\[\begin{split} \begin{align*} \max\ \text{Profit} = & \sum_{(l,c)\ \in\ L \times C} (\text{price}_c - \text{cost}_l)\ z_{l,c} + \sum_{c\in C} \text{price}_c y_{c} - \sum_{r\in R} \text{cost}_r x_{r}
\\ \text{s.t.}\qquad & \sum_{l\in L} z_{l, c} + y_{c} \leq \text{demand}_{c} & \forall c\in C \\ & \sum_{r\in R}x_{r} = \sum_{c\in C} y_{c} \\ & \underbrace{p y_{c}}_{\text{bilinear}} + \sum_{(l,c)\
\in\ L \times C} \text{fat}_{l}\ z_{l,c} \geq \text{min_fat}_{c}\ (\sum_{l\in L} z_{l, c} + y_{c}) & \forall c \in C \\ & \sum_{r\in R}\text{fat}_{r}\ x_{r} = \underbrace{p \sum_{c\in C} y_{c}}_{\
text{bilinear}} \\ & p, x_r, y_c, z_{l, c} \geq 0 & \forall r\in R, c\in C, l\in L \end{align*} \end{split}\]
Before attempting a solution to this problem, let's first consider the implications of the bilinear terms.
Why are bilinear problems hard?#
Bilinearity has a profound consequence on the nature of the optimization problem. To demonstrate this point, we consider a function obtained by fixing \(p\) in the milk pooling problem and solving
the resulting linear optimization problem for the maximum profit \(f(p)\).
model_p_fixed = AMPL()
model_p_fixed.eval(
    r"""
# define sources
set L;
set R;

# define customers
set C;

# define flowrates
var x{R} >= 0;
var y{C} >= 0;
var z{L cross C} >= 0;

param p >= 0;
param price{C};
param demand{C};
param min_fat{C};
param cost{L union R};
param fat{L union R};

maximize profit: sum{l in L, c in C} z[l, c]*(price[c] - cost[l])
    + sum{c in C} price[c]*y[c]
    - sum{r in R} cost[r]*x[r];

subject to customer_demand{c in C}:
    y[c] + sum{l in L} z[l, c] <= demand[c];

subject to pool_balance:
    sum{r in R} x[r] = sum{c in C} y[c];

subject to pool_quality:
    sum{r in R} fat[r]*x[r] = p*sum{c in C} y[c];

subject to customer_quality{c in C}:
    p*y[c] + sum{l in L} fat[l]*z[l,c] >= min_fat[c]*(sum{l in L} z[l, c] + y[c]);
"""
)
model_p_fixed.set_data(customers, "C")
model_p_fixed.set_data(local_suppliers.drop(columns=["location"]), "L")
model_p_fixed.set_data(remote_suppliers.drop(columns=["location"]), "R")
model_p_fixed.option["solver"] = SOLVER_LO
# solve milk pooling problem for a fixed pool composition p
def f(p):
    model_p_fixed.param["p"] = p
    model_p_fixed.solve()
    return model_p_fixed
p_plot = np.linspace(0.025, 0.055, 200)
f_plot = [f(p).obj["profit"].value() for p in p_plot]
fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(p_plot, f_plot)
ax.set_title("Milk Pooling")
ax.set_xlabel("Pool composition p")
ax.set_ylabel("Profit f")
In contrast to linear or other convex optimization problems, the objective function for this bilinear optimization problem is a non-convex function of a decision variable \(p\) denoting the
composition of the blending pool. In fact, when profit is plotted as a function of \(p\), there are three local maxima separated by two local minima.
Convex Approximation#
The cause of the non-convexity in the milk pooling problem is the bilinear terms \(p y_c\) for \(c\in C\) that appear in the constraints. A linear approximation can be obtained by introducing decision variables \(w_c\) to take the place of the bilinear terms \(p y_c\) in the model expressions. The result is a new, but incomplete, linear optimization problem
\[\begin{split} \begin{align*} \max\ \text{Profit} = & \sum_{(l,c)\ \in\ L \times C} (\text{price}_c - \text{cost}_l)\ z_{l,c} + \sum_{c\in C} \text{price}_c y_{c} - \sum_{r\in R} \text{cost}_r x_{r}
\\ \text{s.t.}\qquad & \sum_{l\in L} z_{l, c} + y_{c} \leq \text{demand}_{c} & \forall c\in C \\ & \sum_{r\in R}x_{r} = \sum_{c\in C} y_{c} \\ & w_c + \sum_{(l,c)\ \in\ L \times C} \text{fat}_{l}\ z_
{l,c} \geq \text{min_fat}_{c}\ (\sum_{l\in L} z_{l, c} + y_{c}) & \forall c \in C \\ & \sum_{r\in R}\text{fat}_{r}\ x_{r} = \underbrace{p \sum_{c\in C} y_{c}}_{\text{bilinear}} \\ & w_c, x_r, y_c, z_
{l, c} \geq 0 & \forall r\in R, c\in C, l\in L \end{align*} \end{split}\]
Just adding variables isn't enough. Also needed are constraints that force \(w_c\) to be close or equal to \(p y_c\), that is \(w_c \approx p y_c\) for all \(c\in C\). This process produces a convex approximation to the original problem. Because the approximation relaxes the original constraints, a solution to the convex approximation will over-estimate the potential profit.
From the problem formulation, the values of \(y_c\) are bounded between 0 and demand of customer \(c\), and the value of \(p\) is bounded between the minimum and maximum milk fat concentrations of
the remote farms.
\[\begin{split} \begin{align*} 0 \leq\ & y_c \leq \text{demand}_c\ & \forall c\in C \\ \min_{r\in R} \text{fat}_r \leq\ & p \leq \max_{r\in R} \text{fat}_r \\ \end{align*} \end{split}\]
Representing the bounds on \(p\) and \(y_c\) as
\[\begin{split} \begin{align*} \underline{p} & \leq p \leq \bar{p} \\ \underline{y}_c & \leq y_c \leq \bar{y}_c & \forall c\in C \end{align*} \end{split}\]
the McCormick envelope on \(w_c\) is given by a system of four inequalities. For each \(c\in C\),
\[\begin{split} \begin{align*} w_c & \geq \underline{y}_c p + \underline{p} y_c - \underline{p}\underline{y}_c \\ w_c & \geq \bar{y}_c p + \bar{p} y_c - \bar{p}\bar{y}_c \\ w_c & \leq \bar{y}_c p + \underline{p} y_c - \underline{p}\bar{y}_c \\ w_c & \leq \underline{y}_c p + \bar{p} y_c - \bar{p}\underline{y}_c \\ \end{align*} \end{split}\]
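These four inequalities are valid for the exact product \(w = p y\) everywhere in the bounding box. The quick numerical check below illustrates this, using the fat range of the remote farms and one customer's demand as assumed bounds:

```python
import itertools

# Assumed bounds taken from the data tables: fat range of the remote farms,
# and the demand of Customer 1 as a representative bound on y
p_lo, p_hi = 0.033, 0.050
y_lo, y_hi = 0.0, 6000.0

def mccormick_holds(p, y, tol=1e-9):
    w = p * y  # the exact bilinear value must lie inside the envelope
    return (w >= y_lo * p + p_lo * y - p_lo * y_lo - tol
            and w >= y_hi * p + p_hi * y - p_hi * y_hi - tol
            and w <= y_hi * p + p_lo * y - p_lo * y_hi + tol
            and w <= y_lo * p + p_hi * y - p_hi * y_lo + tol)

# check the envelope on a grid covering the box
ps = [p_lo + k * (p_hi - p_lo) / 10 for k in range(11)]
ys = [y_lo + k * (y_hi - y_lo) / 10 for k in range(11)]
assert all(mccormick_holds(p, y) for p, y in itertools.product(ps, ys))
print("envelope contains p*y at all sampled points")
```

Each inequality follows from the sign of a product such as \((p - \underline{p})(y - \underline{y}) \geq 0\), which is why the envelope is valid over the whole box.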
The features to note are:
• Use of a rule to specify bounds on the decision variables y[c].
• Creating a new decision variable p with bounds.
• Creating a new set of decision variables w[c] to replace the bilinear terms.
• Using McCormick envelopes for variables w[c].
The result of these operations is a linear model that will provide an upper bound on the profit. Hopefully the resulting solution and bound will be a close enough approximation to be useful.
def milk_pooling_convex():
    m = AMPL()
    m.eval(
        r"""
# define sources
set L;
set R;

# define customers
set C;

param price{C};
param demand{C};
param min_fat{C};
param cost{L union R};
param fat{L union R} default 0;

# define flowrates
var x{R} >= 0;
var y{c in C} >= 0 <= demand[c];
var z{L cross C} >= 0;

# composition of the pool
var p >= min{r in R} fat[r] <= max{r in R} fat[r];

# w[c] to replace bilinear terms p * y[c]
var w{C} >= 0;

maximize profit: sum{l in L, c in C} z[l, c]*(price[c] - cost[l])
    + sum{c in C} price[c]*y[c]
    - sum{r in R} cost[r]*x[r];

subject to customer_demand{c in C}:
    y[c] + sum{l in L} z[l, c] <= demand[c];

subject to pool_balance:
    sum{r in R} x[r] = sum{c in C} y[c];

subject to pool_quality:
    sum{r in R} fat[r]*x[r] = sum{c in C} w[c];

subject to customer_quality{c in C}:
    w[c] + sum{l in L} fat[l]*z[l,c] >= min_fat[c]*(sum{l in L} z[l, c] + y[c]);

# McCormick envelope on w[c]
subject to mccormick_ll{c in C}: w[c] >= y[c].lb0*p + p.lb0*y[c] - p.lb0*y[c].lb0;
subject to mccormick_hh{c in C}: w[c] >= y[c].ub0*p + p.ub0*y[c] - p.ub0*y[c].ub0;
subject to mccormick_lh{c in C}: w[c] <= y[c].ub0*p + p.lb0*y[c] - p.lb0*y[c].ub0;
subject to mccormick_hl{c in C}: w[c] <= y[c].lb0*p + p.ub0*y[c] - p.ub0*y[c].lb0;
"""
    )
    m.set_data(customers, "C")
    m.set_data(local_suppliers.drop(columns=["location"]), "L")
    m.set_data(remote_suppliers.drop(columns=["location"]), "R")
    m.option["solver"] = SOLVER_LO
    return m
def report_solution(m, model_title):
    x = m.var["x"]
    y = m.var["y"]
    z = m.var["z"]
    # p is a fixed parameter in the p-fixed model, a decision variable otherwise
    if model_title == "Milk Pooling Model":
        p = m.param["p"]
    else:
        p = m.var["p"]
    L = m.set["L"]
    R = m.set["R"]
    C_model = m.set["C"]

    # Supplier report
    S = suppliers.copy()
    for l in L.members():
        for c in C_model.members():
            S.loc[l, c] = z[l, c].value()
    for r in R.members():
        S.loc[r, "Pool"] = x[r].value()
    S = S.fillna(0)
    S["Amount"] = S[list(C_model.members())].sum(axis=1) + S["Pool"]
    S["Expense"] = S["Amount"] * S["cost"]

    # Customer report
    C = customers.copy()
    for c in C_model.members():
        for l in L.members():
            C.loc[c, l] = z[l, c].value()
    for c in C_model.members():
        C.loc[c, "Pool"] = y[c].value()
    C = C.fillna(0)
    C["Amount"] = C[list(L.members())].sum(axis=1) + C["Pool"]
    C["fat delivered"] = (
        sum(C[l] * S.loc[l, "fat"] for l in L.members()) + C["Pool"] * p.value()
    ) / C["Amount"]
    C["Income"] = C["Amount"] * C["price"]

    print(model_title)
    print(f"\nPool composition = {p.value():0.2f}")
    print(f"Profit = {m.obj['profit'].value():0.2f}")
    print("\nSupplier Report\n")
    print(S)
    print("\nCustomer Report\n")
    print(C)
m_convex = milk_pooling_convex()
m_convex.solve()
report_solution(m_convex, "Milk Pooling Model - Convex Approximation")
Milk Pooling Model - Convex Approximation
Pool composition = 0.04
Profit = 111411.76
Supplier Report
│ │ fat │cost│location│Customer 1│Customer 2│Customer 3│ Pool │ Amount │ Expense │
│Farm A│0.045│45.0│local │2500.0 │0.0000 │0.0 │0.0000 │2500.0000│112500.0000 │
│Farm B│0.030│42.0│local │0.0 │1029.4118 │0.0 │0.0000 │1029.4118│43235.2941 │
│Farm C│0.033│37.0│remote │0.0 │0.0000 │0.0 │4852.9412│4852.9412│179558.8235 │
│Farm D│0.050│45.0│remote │0.0 │0.0000 │0.0 │4117.6471│4117.6471│185294.1176 │
│ │min_fat│price│demand│Farm A│ Farm B │ Pool │Amount│fat delivered │ Income │
│Customer 1│0.045 │52.0 │6000.0│2500.0│0.0000 │3500.0000│6000.0│0.0421 │312000.0│
│Customer 2│0.030 │48.0 │2500.0│0.0 │1029.4118│1470.5882│2500.0│0.0359 │120000.0│
│Customer 3│0.040 │50.0 │4000.0│0.0 │0.0000 │4000.0000│4000.0│0.0400 │200000.0│
The convex approximation of the milk pooling model estimates an upper bound on profit of 111,412 for a pool composition \(p = 0.040\). The plot below compares this solution to what was found in an exhaustive search over values of \(p\).
fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(p_plot, f_plot)
ax.set_title("Milk Pooling")
ax.set_xlabel("Pool composition p")
ax.plot(m_convex.var["p"].value(), m_convex.obj["profit"].value(), "ro", ms=10)
ax.axhline(m_convex.obj["profit"].value(), color="r", linestyle="--")
ax.axvline(m_convex.var["p"].value(), color="r", linestyle="--")
ax.annotate(
    "convex approximation",
    xy=(m_convex.var["p"].value(), m_convex.obj["profit"].value()),
    xytext=(0.036, 106000),
    arrowprops=dict(shrink=0.1, width=1, headwidth=5),
)
Text(0.036, 106000, 'convex approximation')
At this stage the calculations find the maximum profit for a given value of \(p\). The challenge, of course, is that the optimal value of \(p\) is unknown. The following cell computes profits over a range of \(p\).
The convex approximation clearly misses the mark in its estimate of profit and pool composition \(p\). Without the benefit of the full scan of profit as a function of \(p\), the only check on the profit estimate would be to compute the solution of the model for the reported value of \(p\). This is done below.
p = m_convex.var["p"].value()
m_convex_profit = m_convex.obj["profit"].value()
m_est = f(p)
m_est_profit = m_est.obj["profit"].value()
fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(p_plot, f_plot)
ax.set_title("Milk Pooling")
ax.set_xlabel("Pool composition p")
ax.plot(p, m_convex_profit, "ro", ms=10)
ax.axhline(m_convex_profit, color="r", linestyle="--")
ax.axvline(p, color="r", linestyle="--")
ax.annotate(
    "convex approximation",
    xy=(p, m_convex_profit),
    xytext=(0.036, 106000),
    arrowprops=dict(shrink=0.1, width=1, headwidth=5),
)
ax.plot(p, m_est_profit, "go", ms=10)
ax.axhline(m_est_profit, color="g", linestyle="--")
ax.annotate(
    "local maxima",
    xy=(p, m_est_profit),
    xytext=(0.045, 105000),
    arrowprops=dict(shrink=0.1, width=1, headwidth=5),
)
The result shows that if the pooled milk transported from the remote farms has a fat content \(p = 0.04\), then a profit of 100,088 is realized. This is better than the 81,000 earned for business as usual with just local suppliers, but falls short of the 122,441 earned if the remote milk supply could be transported without pooling.
The following cell presents a full report of the solution.
report_solution(m_est, "Milk Pooling Model")
Milk Pooling Model
Pool composition = 0.04
Profit = 100088.24
Supplier Report
│ │ fat │cost│location│Customer 1│Customer 2│Customer 3│ Pool │ Amount │ Expense │
│Farm A│0.045│45.0│local │6000.0 │0.0 │0.0 │0.0000 │6000.0000│270000.0000 │
│Farm B│0.030│42.0│local │0.0 │0.0 │0.0 │0.0000 │0.0000 │0.0000 │
│Farm C│0.033│37.0│remote │0.0 │0.0 │0.0 │3823.5294│3823.5294│141470.5882 │
│Farm D│0.050│45.0│remote │0.0 │0.0 │0.0 │2676.4706│2676.4706│120441.1765 │
│ │min_fat│price│demand│Farm A│Farm B│ Pool │Amount│fat delivered │ Income │
│Customer 1│0.045 │52.0 │6000.0│6000.0│0.0 │0.0 │6000.0│0.045 │312000.0│
│Customer 2│0.030 │48.0 │2500.0│0.0 │0.0 │2500.0│2500.0│0.040 │120000.0│
│Customer 3│0.040 │50.0 │4000.0│0.0 │0.0 │4000.0│4000.0│0.040 │200000.0│
With regard to the practical impact, the results of using this particular convex approximation are mixed. The approximation successfully produced a value for the pool composition \(p\) which yields a profit of 100,088. However, the reported value of \(p\) is actually the smallest of the three local maxima for this problem. This discrepancy may have large consequences for the choice of suppliers.
Global Nonlinear Optimization (GNLO) solution with Couenne#
The final version of this milk pooling model returns to the bilinear formulation with pool composition \(p\) as a decision variable. The following AMPL implementation needs to specify a solver
capable of solving the resulting problem. This has been tested with nonlinear solvers ipopt and couenne. Pre-compiled binaries for these solvers can be downloaded from AMPL.
def milk_pooling_bilinear():
    m = AMPL()
    m.eval(
        r"""
# define sources
set L;
set R;

# define customers
set C;

param price{C};
param demand{C};
param min_fat{C};
param cost{L union R};
param fat{L union R};

# define flowrates
var x{R} >= 0;
var y{C} >= 0;
var z{L cross C} >= 0;

# composition of the pool
var p >= min{r in R} fat[r] <= max{r in R} fat[r];

maximize profit: sum{l in L, c in C} z[l, c]*(price[c] - cost[l])
    + sum{c in C} price[c]*y[c]
    - sum{r in R} cost[r]*x[r];

subject to customer_demand{c in C}:
    y[c] + sum{l in L} z[l, c] <= demand[c];

subject to pool_balance:
    sum{r in R} x[r] = sum{c in C} y[c];

subject to pool_quality:
    sum{r in R} fat[r]*x[r] = p*sum{c in C} y[c];

subject to customer_quality{c in C}:
    p*y[c] + sum{l in L} fat[l]*z[l,c] >= min_fat[c]*(sum{l in L} z[l, c] + y[c]);
"""
    )
    m.set_data(customers, "C")
    m.set_data(local_suppliers.drop(columns=["location"]), "L")
    m.set_data(remote_suppliers.drop(columns=["location"]), "R")
    m.option["solver"] = SOLVER_GNLO
    return m
m_global = milk_pooling_bilinear()
m_global.solve()
fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(p_plot, f_plot)
ax.set_title("Milk Pooling")
ax.set_xlabel("Pool composition p")
ax.plot(p, m_convex_profit, "ro", ms=10)
ax.axhline(m_convex_profit, color="r", linestyle="--")
ax.axvline(p, color="r", linestyle="--")
ax.annotate(
    "convex approximation",
    xy=(p, m_convex_profit),
    xytext=(0.036, 106000),
    arrowprops=dict(shrink=0.1, width=1, headwidth=5),
)
ax.plot(p, m_est_profit, "go", ms=10)
ax.axhline(m_est_profit, color="g", linestyle="--")
ax.annotate(
    "local maxima",
    xy=(p, m_est_profit),
    xytext=(0.045, 105000),
    arrowprops=dict(shrink=0.1, width=1, headwidth=5),
)
m_global_p = m_global.var["p"].value()
m_global_profit = m_global.obj["profit"].value()
ax.plot(m_global_p, m_global_profit, "bo", ms=10)
ax.axhline(m_global_profit, color="b", linestyle="--")
ax.annotate(
    "global maxima",
    xy=(m_global_p, m_global_profit),
    xytext=(0.025, 95000),
    arrowprops=dict(shrink=0.1, width=1, headwidth=5),
)
report_solution(m_global, "Milk Pooling Model - Bilinear")
Milk Pooling Model - Bilinear
Pool composition = 0.03
Profit = 102833.33
Supplier Report
│ │ fat │cost│location│Customer 1│Customer 2│Customer 3│ Pool │ Amount │ Expense │
│Farm A│0.045│45.0│local │6000.0 │0.0 │2333.3333 │0.0000 │8333.3333│375000.0000 │
│Farm B│0.030│42.0│local │0.0 │0.0 │0.0000 │0.0000 │0.0000 │0.0000 │
│Farm C│0.033│37.0│remote │0.0 │0.0 │0.0000 │4166.6667│4166.6667│154166.6667 │
│Farm D│0.050│45.0│remote │0.0 │0.0 │0.0000 │-0.0000 │-0.0000 │-0.0000 │
│ │min_fat│price│demand│ Farm A │Farm B│ Pool │Amount│fat delivered │ Income │
│Customer 1│0.045 │52.0 │6000.0│6000.0000│0.0 │0.0000 │6000.0│0.045 │312000.0│
│Customer 2│0.030 │48.0 │2500.0│0.0000 │0.0 │2500.0000│2500.0│0.033 │120000.0│
│Customer 3│0.040 │50.0 │4000.0│2333.3333│0.0 │1666.6667│4000.0│0.040 │200000.0│
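Collecting the profit figures reported throughout this notebook gives a compact summary of the scenarios (values copied verbatim from the solver output above):

```python
profits = {
    "Option 1: local suppliers only": 81000.00,
    "Option 2: extra trucks, no pooling": 122441.18,
    "convex approximation (upper bound)": 111411.76,
    "profit at the approximation's p = 0.04": 100088.24,
    "Option 3: pooling, global optimum": 102833.33,
}

# How much does the convex relaxation over-estimate the true pooling optimum?
gap = (profits["convex approximation (upper bound)"]
       - profits["Option 3: pooling, global optimum"]) / profits["Option 3: pooling, global optimum"]
print(f"relaxation gap = {100 * gap:.1f}%")  # relaxation gap = 8.3%
```

The ordering Option 2 > convex bound > global pooling optimum > Option 1 is exactly what theory predicts: pooling can only reduce profit relative to unrestricted transport, and the relaxation can only over-estimate it.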
Concluding Remarks#
The solution for the bilinear pooling model reveals several features of the problem.
• For the given parameters, pooling raw materials for shipment from remote suppliers yields the most profitable solution, but that solution is only possible because there are local suppliers to
augment the pool blend to meet individual customer requirements.
• Customer 2 receives a blend with 3.3% fat, exceeding the requirement of 3%. This results in some "give away" of product quality in return for the economic benefits of pooling.
Suggested Exercises#
This simple model demonstrates practical issues that arise in modeling non-convex problems. Take time to explore the behavior of the model under parameter changes and the nature of the solution.
1. Examine the model data and explain why the enhanced profits are observed only for a particular range of values in \(p\).
2. Think carefully about the non-convex behavior observed in the plot of profit versus parameter \(p\). Why are the local maxima located where they are, and how are they related to problem data?
Test your hypothesis by changing the problem data. What happens when the product specification for Customer A is set equal to 0.045? What happens when it is set to 0.04?
3. Revise the AMPL model using "cbc", "gurobi_direct", "ipopt", and "bonmin" to find a solution. Did you find a solver that could solve this nonlinear problem?
4. The above analysis assumed unlimited transport. If the truck turns out to have a limit of 4,000 units of milk, write the mathematical constraint necessary to introduce that limit into the model.
Add that constraint to the AMPL model and discuss the impact on profit.
Bibliographic Notes#
Pooling and blending is a large-scale, high-value, fundamental logistics problem for the process and refining industries. The prototypical examples are the pooling and blending of crude oils to meet the feedstock constraints of refineries, and the pooling of refinery products for pipeline delivery to distribution terminals.
Haverly (1978) is a commonly cited small benchmark problem for the pooling and blending of sulfurous fuels.
There is an extensive literature on pooling and blending. The following encyclopedia entry explains the history of the pooling problem, how it leads to multiple local minima and other pathological
behaviors, and approaches to finding practical solutions.
Recent research overviews include
Gupte, A., Ahmed, S., Dey, S. S., & Cheon, M. S. (2013). Pooling problems: relaxations and discretizations. School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta,
GA. and ExxonMobil Research and Engineering Company, Annandale, NJ. http://www.optimization-online.org/DB_FILE/2012/10/3658.pdf
The current state-of-the-art appears to be a formulation of the pooling problem as a mixed-integer quadratically-constrained quadratic optimization on a given network.
Ceccon, F., & Misener, R. (2022). Solving the pooling problem at scale with extensible solver GALINI. Computers & Chemical Engineering, 107660. https://arxiv.org/pdf/2105.01687.pdf
Applications for pooling and blending are probably underappreciated. In particular, what role might pooling and blending problems have in projects like the World Food Programme (WFP)?
How much can you save with solar? - Energy News 247
How we calculated total annual electricity cost for a property with no solar panels
We started by using a total annual consumption figure of 3,500kWh. This is an example figure based on standard MCS calculations (with the customer at home half the day) and is also the mean, non-population-weighted British electricity consumption according to UK Government statistics.
To calculate how much it costs to consume 3,500kWh of electricity, we took the July 2024 regional average price cap figures from Ofgem, which are:
Average Unit Rate – 22.36p/kWh
Average Standing Charge – 60.12p/day
The total cost of electricity, for a property with no solar panels, with the above assumptions works out to be:
Cost of Imported Electricity + Standing Charge
(3,500kWh x 22.36p) + (60.12p x 365) = £1,002
How we calculated total annual electricity cost and savings for a property with a 10 solar panel system, no battery storage. On Flexible Octopus for import, and Fixed Outgoing for export
We took the same total annual consumption figure as above, and assume the 10 x 440W panels generate, annually, 4,400kWh of electricity.
Based on MCS standardised calculations and guidance, a property with total annual consumption between 3,500 – 3,999 kWh, and a total annual generation between 4,200 – 4,499kWh, with a panel only
system, self consumes 22% of the energy generated. This works out to be: 4,400kWh x 22% = 968kWh
This means the remainder (4,400kWh – 968kWh) of 3,432kWh is assumed to be exported back to the grid. We assume the exported energy is paid for at a rate of 15p/kWh. This is our July 2024 Fixed Outgoing Tariff export rate.
Since total annual consumption is assumed to be 3,500kWh, of which 968kWh is consumed from electricity generated by the solar panels, the remainder (2,532kWh) needs to be imported from the grid.
To calculate the total annual electricity cost of this property, we will again use the July 2024 average Ofgem price cap figures. To recap, these are a 22.36p/kWh unit rate and a 60.12p/day standing charge, plus the mentioned Fixed Outgoing tariff export rate of 15p/kWh.
The total annual electricity cost works out to be:
Cost of imported electricity + standing charge – (Export Savings)
(2,532kWh x 22.36p) + (60.12p x 365) – (3,432kWh x 15p) = £271
Versus a property with no solar panel system installed, the total savings are:
Total Annual Elec bill of property with no solar – Total Annual Elec bill of property with solar
£1,002 – £271 = £731 (73% saved)
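The bill arithmetic above can be packaged into a small helper function (a sketch; rates in pence as quoted, bills returned in pounds) that reproduces both figures:

```python
def annual_bill(import_kwh, unit_rate_p, standing_p_per_day, export_kwh=0.0, export_rate_p=0.0):
    """Annual electricity bill in pounds: imports plus standing charge minus export income."""
    pence = import_kwh * unit_rate_p + 365 * standing_p_per_day - export_kwh * export_rate_p
    return pence / 100.0

no_solar = annual_bill(3500, 22.36, 60.12)                 # ~ £1,002
with_panels = annual_bill(2532, 22.36, 60.12, 3432, 15.0)  # ~ £271
print(round(no_solar), round(with_panels), round(no_solar - with_panels))  # 1002 271 731
```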
How we calculated total annual electricity cost and savings for a property with a 10 panel system, 5kWh battery storage. On Flexible Octopus for import, and Fixed Outgoing for export
Based on MCS standardised calculations and guidance, a property with total annual consumption between 3,500 – 3,999 kWh, and a total annual generation between 4,200 – 4,499kWh, with a 10 solar panel
system AND a battery storage with 4.1 – 5.1kWh usable capacity is expected to self consume 54% of the energy generated.
Assuming the same total annual generation as above, the new self-consumed electricity is:
4,400kWh x 54% = 2,376kWh
This means that the remainder (2,024kWh) is exported to the grid.
Assuming the same total annual consumption as above, the new imported electricity is:
Total Annual Consumption – Self Consumption: 3,500kWh – 2,376kWh = 1,124kWh
Assuming the same unit rate, standing charge and export rate as above, the total annual electricity cost works out to be:
Imported electricity + standing charge – (Export Savings)
(1,124kWh x 22.36p) + (60.12p x 365) – (2,024kWh x 15p) = £167
Versus a property with no solar panel system installed, the total savings here are:
Total Annual Elec bill of property with no solar – Total Annual Elec bill of property with solar
£1,002 – £167 = £835 (83% saved)
How we calculated total annual electricity cost and savings for a property with a 10 panel system and 5kWh battery storage, on the Octopus Flux tariff for import and export
All assumptions remain the same as above, except for the import rate, export rate and standing charge, as the tariff has changed here.
We have used a weighted average import rate of 18.86p/kWh, a weighted average export rate of 16.33p/kWh, and a standing charge of 58.03p/day.
The standing charge is our July 2024 rate for the Octopus Flux tariff.
The weighted import and export rates were obtained by looking at the average expected usage patterns for customers on the Octopus Flux tariff. This found that, on average, we expect our customers to
import 46% of their total import at the night rate, 46% of their total import at the day rate and 7% of their total import at the peak rate.
While on average, of the total exported electricity, our customers export 0% at the night rate, 82% at the day rate and 18% at the higher paying peak rate.
We then took our July 2024 Octopus Flux rates (in p/kWh) which are:
Import / Export
Night: 13.42p / 4.82p
Day: 22.36p / 15p
Peak: 31.31p / 22.71p
And used this to calculate the weighted average import and export rate.
With all other assumptions remaining the same, the total annual electricity bill for this property works out to be:
Cost of Imported electricity + standing charge – (export savings)
(1,124kWh x 18.86p) + (58.03p x 365) – (2,024kWh x 16.33p) = £93
Versus a property with no solar panel system installed, the total savings here are:
Total Annual Elec bill of property with no solar – Total Annual Elec bill of property with solar
£1,002 – £93 = £909 (91% saved)
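The same pattern reproduces the battery-storage bills quoted above on both tariffs (a sketch; the helper function is hypothetical, rates in pence as quoted):

```python
def annual_bill(import_kwh, unit_rate_p, standing_p_per_day, export_kwh, export_rate_p):
    """Annual electricity bill in pounds: imports plus standing charge minus export income."""
    pence = import_kwh * unit_rate_p + 365 * standing_p_per_day - export_kwh * export_rate_p
    return pence / 100.0

battery_flexible = annual_bill(1124, 22.36, 60.12, 2024, 15.0)  # ~ £167 on Flexible Octopus
battery_flux = annual_bill(1124, 18.86, 58.03, 2024, 16.33)     # ~ £93 on Octopus Flux
print(round(battery_flexible), round(battery_flux))  # 167 93
```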
A fast network-decomposition algorithm and its applications to constant-time distributed computation
A partition (C[1],C[2],…,C[q]) of G=(V,E) into clusters of strong (respectively, weak) diameter d, such that the supergraph obtained by contracting each C[i] is ℓ-colorable is called a strong (resp.,
weak) (d,ℓ)-network-decomposition. Network-decompositions were introduced in a seminal paper by Awerbuch, Goldberg, Luby and Plotkin in 1989. Awerbuch et al. showed that strong (d,ℓ)
-network-decompositions with d=ℓ=exp{O(√(log n log log n))} can be computed in distributed deterministic time O(d). Even more importantly, they demonstrated that network-decompositions can be used for a great variety of applications in the message-passing model of distributed computing. The result of Awerbuch et al. was improved by Panconesi and Srinivasan in 1992: in the latter result d=ℓ=exp{O(√(log n))}, and the running time is O(d) as well. In another remarkable breakthrough Linial and Saks (in 1992) showed that weak (O(log n), O(log n))-network-decompositions can be computed in distributed randomized time O(log^2 n). Much more recently Barenboim (2012) devised a distributed randomized constant-time algorithm for computing strong network decompositions with d=O(1). However, the
parameter ℓ in his result is O(n^{1/2+ϵ}). In this paper we drastically improve the result of Barenboim and devise a distributed randomized constant-time algorithm for computing strong (O(1), O(n^ϵ))-network-decompositions. As a corollary we derive a constant-time randomized O(n^ϵ)-approximation algorithm for the distributed minimum coloring problem, improving the previously best-known O(n^{1/2+ϵ}) approximation guarantee. We also derive other improved distributed algorithms for a variety of problems. Most notably, for the extremely well-studied distributed minimum dominating set problem
currently there is no known deterministic polylogarithmic-time algorithm. We devise a deterministic polylogarithmic-time approximation algorithm for this problem, addressing an open problem of Lenzen
and Wattenhofer (2010).
• Coloring
• Distributed algorithms
• Dominating sets
• Local algorithms
• Spanners
Enterohepatic circulation model
Enterohepatic circulation (EHC) occurs when drugs circulate from the liver to the bile in the gallbladder, followed by entry into the gut when the bile is released from the gallbladder, and
reabsorption from the gut back to the systemic circulation. The presence of EHC results in longer apparent drug half-lives and the appearance of multiple secondary peaks. Various pharmacokinetic
models have been used in the literature to describe EHC concentration-time data.
The reference below provides a basic review of the EHC process and modeling strategies implemented to represent EHC.
Okour, M., & Brundage, R. C. (2017). Modeling Enterohepatic Circulation. Current Pharmacology Reports, 3(5), 301–313. doi:10.1007/s40495-017-0096-z
Mlxtran structural model
We present two examples of EHC models, both based on a two-compartment model with IV bolus administration and implemented with PK macros.
Example with switch function
In the first example, the gallbladder emptying is modelled with a switch function: the emptying rate constant follows a simple piece-wise function, implemented with if/else statements. Hence,
gallbladder emptying occurs within time windows, where the rate constant outside these windows is assumed to be zero.
The Mlxtran code for the structural model reads:
input = {V, Cl, Q, V2, kbile, kempt, ka_gut, Tfirst, Tsecond}
if t>Tfirst & t<Tfirst+duration
kemptying = kempt
elseif t>Tsecond & t<Tsecond+duration
kemptying = kempt
else
kemptying = 0
end
compartment(cmt=1, volume=V, concentration=Cc)
iv(adm=1, cmt=1)
elimination(cmt=1, k)
peripheral(k12, k21)
transfer(from=1, to=3, kt=kbile)
transfer(from=3, to=4, kt=kemptying)
transfer(from=4, to=1, kt=ka_gut)
output = {Cc}
The parameters kbile, kempt and ka_gut describe respectively the transfer rate between the central compartment and the bile, between the bile and the gut, and between the gut and the central compartment.
The parameter kemptying is the rate constant controlling the gallbladder emptying process, and its value changes with time as displayed below:
Here two windows of duration duration=1 have been defined starting at times Tfirst and Tsecond. It is of course possible to extend this model with additional emptying windows, if more than two
secondary peaks are noticeable in the data, and to change the value of the parameter duration.
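As a sanity check of the switch-function model, the same four compartments (central, peripheral, bile, gut) can be integrated with a simple Euler scheme in Python. The parameter values below are illustrative only, not fitted values from any dataset; the point is that a secondary concentration rise appears after the first emptying window:

```python
# Illustrative rate constants (assumptions, not fitted values)
V, k, k12, k21 = 10.0, 0.3, 0.2, 0.1      # central volume, elimination, distribution
kbile, kempt, ka_gut = 0.5, 2.0, 1.0      # bile transfer, emptying, gut reabsorption
Tfirst, Tsecond, duration = 5.0, 10.0, 1.0

def kemptying(t):
    # piece-wise rate: kempt inside the two emptying windows, zero elsewhere
    if Tfirst < t < Tfirst + duration or Tsecond < t < Tsecond + duration:
        return kempt
    return 0.0

dt, T = 0.001, 15.0
A1, A2, A3, A4 = 100.0, 0.0, 0.0, 0.0     # IV bolus of 100 into the central cmt
cc = []                                    # central concentration Cc = A1/V
t = 0.0
while t < T:
    ke = kemptying(t)
    dA1 = -(k + k12 + kbile) * A1 + k21 * A2 + ka_gut * A4
    dA2 = k12 * A1 - k21 * A2
    dA3 = kbile * A1 - ke * A3
    dA4 = ke * A3 - ka_gut * A4
    A1 += dA1 * dt; A2 += dA2 * dt; A3 += dA3 * dt; A4 += dA4 * dt
    cc.append(A1 / V)
    t += dt
```

Plotting cc against time should show the characteristic secondary peak shortly after Tfirst, once the gut contents are reabsorbed into the central compartment.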
Example with sine function
In the second example, gallbladder emptying is modeled with a sine function, assumed to equal zero outside the gallbladder emptying times. This model accounts for multiple cycles of gallbladder emptying.
The Mlxtran code for the structural model reads:
input = {V, Cl, Q, V2, kbile, kempt, ka_gut, phase, period}
kemptying = max(0, kempt * (cos(2*3.14*(t+phase)/period)))
compartment(cmt=1, volume=V, concentration=Cc)
iv(adm=1, cmt=1)
elimination(cmt=1, k)
peripheral(k12, k21)
transfer(from=1, to=3, kt=kbile)
transfer(from=3, to=4, kt=kemptying)
transfer(from=4, to=1, kt=ka_gut)
output = {Cc}
This model assumes fixed intervals between gallbladder release times, with the parameter period setting the time between successive releases. The parameter phase represents the time of the beginning of the first gallbladder release. The value of the parameter kemptying over time is then:
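The truncated-cosine rate is easy to express outside Mlxtran as well. Here is the same function as a Python sketch with illustrative parameter values, using math.pi where the model code approximates π as 3.14:

```python
import math

kempt, phase, period = 2.0, 0.0, 12.0     # illustrative values

def kemptying(t):
    # positive half of a cosine cycle, clipped to zero between releases
    return max(0.0, kempt * math.cos(2 * math.pi * (t + phase) / period))
```

With phase = 0 the rate peaks at kempt at the start of each period and stays at zero for the half-period in between.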
Variation with oral administration
If the drug administration is oral instead of a bolus, the model can be adapted by replacing the macro iv(adm=1, cmt=1) with absorption(adm=1, cmt=1, ka), which defines a first-order absorption with
a rate ka. While ka represents the absorption of the drug, ka_gut represents its reabsorption after storage in the gallbladder. These two physiological processes are not strictly equivalent, as the
bile stored in the gallbladder is released into the duodenum. Depending on the drug, ka and ka_gut can be set as separate or identical rate constants.
input = {V, ka, Cl, Q, V2, kbile, kempt, ka_gut, phase, period}
kemptying = max(0, kempt * (cos(2*3.14*(t+phase)/period)))
compartment(cmt=1, volume=V, concentration=Cc)
absorption(adm=1, cmt=1, ka)
elimination(cmt=1, k)
peripheral(k12, k21)
transfer(from=1, to=3, kt=kbile)
transfer(from=3, to=4, kt=kemptying)
transfer(from=4, to=1, kt=ka_gut)
output = {Cc}
Exploration with Mlxplore
The Mlxplore script below compares the two EHC models with IV bolus administration, along with the two-compartment model with no EHC.
input = {V, Cl, Q, V2, kbile, kempt, ka_gut, Tfirst,Tsecond, phase, period}
; --- Model with no EHC ---
compartment(cmt=1, volume=V, concentration=Cc_no_EHC)
iv(adm=1, cmt=1)
elimination(cmt=1, k)
; --- EHC model with two constant rate emptying windows ---
if t>Tfirst & t<Tfirst+1
kemptying = kempt
elseif t>Tsecond & t<Tsecond+1
kemptying = kempt
else
kemptying = 0
end
compartment(cmt=3, volume=V, concentration=Cc_EHC_1)
iv(adm=1, cmt=3)
elimination(cmt=3, k)
peripheral(k34=k12, k43=k21)
transfer(from=3, to=5, kt=kbile)
transfer(from=5, to=6, kt=kemptying)
transfer(from=6, to=3, kt=ka_gut)
; --- EHC model with sine emptying rate ---
kemptying_2 = max(0, kempt * (cos(2*3.14*(t+phase)/period)))
compartment(cmt=7, volume=V, concentration=Cc_EHC_2)
iv(adm=1, cmt=7)
elimination(cmt=7, k)
peripheral(k78=k12, k87=k21)
transfer(from=7, to=9, kt=kbile)
transfer(from=9, to=10, kt=kemptying_2)
transfer(from=10, to=7, kt=ka_gut)
Tfirst = 3
adm = {type=1, time=0, amount=1000}
list={Cc_no_EHC, Cc_EHC_1, Cc_EHC_2, kemptying, kemptying_2}
p1 = {y={Cc_no_EHC, Cc_EHC_1, Cc_EHC_2}, ylabel='Concentration', xlabel='Time'}
p2 = {y={kemptying, kemptying_2}, ylabel='Gallbladder emptying rate', xlabel='Time'}
We obtain the following profiles for the drug concentration in log scale on the top plot below, with no EHC (red), EHC with switch function (blue) and sine function (green). The bottom plot
represents the gallbladder emptying rate.
Essay # 5 - Constructing Polar Graphs on The Geometer's Sketchpad
by Shannon Umberger
Most trigonometry and pre-calculus students study graphs of polar equations. They know how to graph lines and circles on polar planes. The students also know the general forms of equations that will
result in the graphs of limacons, cardioids, and lemniscates. And they know what these graphs look like and can graph them on a polar plane.
If you ask one of these students to construct a line or a circle in The Geometer's Sketchpad, if they've had experience with GSP, they can do it with ease. But try asking them to construct a limacon,
cardioid, or lemniscate, and they probably have no idea. They may even think that it cannot be done. But it can!! This essay explains a couple of ways to construct these types of graphs in GSP.
Limacon with an Extra Loop
Construct a circle and a point A outside the circle.
Construct a point B on the circle (NOT the one that was used to construct the circle). Construct segment AB.
Construct a circle with center at point B and passing through point A.
Construct the locus of this circle as point B moves around the first circle. The locus is a limacon with an extra loop!!
So how do you construct just the outline of the limacon with an extra loop? Start again with a circle, a point A outside the circle, and a point B on the circle (but NOT the one that was used to
construct the circle).
Construct a radius of the circle with point B as the endpoint on the circle. Construct a line perpendicular to this radius at point B (this line is tangent to the circle at point B).
Construct a perpendicular line to the tangent line through point A. Call the intersection of this line and the tangent line point P. This point P is sometimes called the "pedal" of the circle.
Construct the locus of point P as point B moves around the circle. The result is the limacon with an extra loop!
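The pedal construction can also be verified numerically. A classical result says the pedal of a circle of radius R, taken with respect to a point A at distance d from the centre, is the limacon r = |R − d·cos θ| measured from A. The Python sketch below uses illustrative values R = 1 and d = 1.5 (A outside the circle, as in this section), computes the foot of the perpendicular P, and compares its distance from A with that formula:

```python
import math

R, d = 1.0, 1.5          # circle radius; d > R puts A outside the circle
A = (d, 0.0)             # pedal point A placed on the x-axis

def pedal_point(theta):
    # B on the circle; the tangent at B is perpendicular to the radius OB
    B = (R * math.cos(theta), R * math.sin(theta))
    tx, ty = -math.sin(theta), math.cos(theta)     # unit tangent direction at B
    # foot of the perpendicular dropped from A onto the tangent line
    s = (A[0] - B[0]) * tx + (A[1] - B[1]) * ty
    return (B[0] + s * tx, B[1] + s * ty)

def pedal_radius(theta):
    P = pedal_point(theta)
    return math.hypot(P[0] - A[0], P[1] - A[1])
```

For every theta, pedal_radius(theta) agrees with |R − d·cos(theta)|, which is exactly the polar equation of a limacon with an inner loop whenever d > R.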
Cardioid
Construct a circle and a point A on the circle.
Construct a point B on the circle (NOT the one that was used to construct the circle). Construct segment AB.
Construct a circle with center at point B and passing through point A.
Construct the locus of this circle as point B moves around the first circle. The locus is a cardioid!!
So how do you construct just the outline of the cardioid? Start again with a circle, a point A on the circle, and a point B on the circle (but NOT the one that was used to construct the circle).
Construct a radius of the circle with point B as the endpoint on the circle. Construct a line perpendicular to this radius at point B (this line is tangent to the circle at point B).
Construct a perpendicular line to the tangent line through point A. Call the intersection of this line and the tangent line point P. This point P is sometimes called the "pedal" of the circle.
Construct the locus of point P as point B moves around the circle. The result is the cardioid!
Limacon without an Extra Loop
Construct a circle and a point A inside the circle.
Construct a point B on the circle (NOT the one that was used to construct the circle). Construct segment AB.
Construct a circle with center at point B and passing through point A.
Construct the locus of this circle as point B moves around the first circle. The locus is a limacon without an extra loop!!
So how do you construct just the outline of the limacon without an extra loop? Start again with a circle, a point A inside the circle, and a point B on the circle (but NOT the one that was used to
construct the circle).
Construct a radius of the circle with point B as the endpoint on the circle. Construct a line perpendicular to this radius at point B (this line is tangent to the circle at point B).
Construct a perpendicular line to the tangent line through point A. Call the intersection of this line and the tangent line point P. This point P is sometimes called the "pedal" of the circle.
Construct the locus of point P as point B moves around the circle. The result is the limacon without an extra loop!
Lemniscate
Construct a hyperbola and a point A at the center of the hyperbola.
Construct a point B on the hyperbola. Construct segment AB.
Construct a circle with center at point B and passing through point A.
Construct the locus of this circle as point B moves around the hyperbola. The locus is a lemniscate!!
Is there an algebraic approach for the topological boundary (defect) states?
2392 views
There are many free fermion systems that possess topological edge/boundary states. Examples include quantum Hall insulators and topological insulators. No matter chiral or non-chiral, 2D or 3D,
symmetry protected or not, their microscopic origins are similar. Explicitly speaking, when placing such a system on a geometry with open boundary in one spatial dimension (say the $x$-axis), and
closed boundary in other spatial dimensions, the bulk model Hamiltonian is always reduced to one or several copies of the following 1D Hamiltonian along the open-boundary $x$-axis direction (see B.
Zhou et al., PRL 101, 246807) $$H_\text{1D}=-i\partial_x\sigma_1+k_\perp\sigma_2+(m-\partial_x^2+k_\perp^2)\sigma_3,$$ where $\sigma_{1,2,3}$ are the three Pauli matrices, and $k_\perp$
denotes the momentum perpendicular to $x$-axis (and could be extended to a matrix in higher dimensions). The existence of the topological edge state is equivalent to the existence of edge modes of
$H_\text{1D}$ on an open chain.
It was claimed that the edge modes exist when $m<0$. After discretizing and diagonalizing $H_\text{1D}$, I was able to check the above statement. But my question is whether there is a simple
argument that allows one to judge the existence of the edge mode by looking at the differential operator $H_\text{1D}$ without actually solving the differential equation. I believe there should be a
reason if the edge mode is robust.
PS: I am aware of but not satisfied with the topological argument that the bulk band has non-trivial topology, which can not be altered without closing the bulk gap, thus there must be edge states on
the boundary. Is it possible to argue from the property of $H_\text{1D}$ without directly referring to the bulk topology?
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user Everett You
How about computing the Chern index?
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user DaniH
Here is an algebraic approach to understand the edge state. Let us start from a generic Dirac Hamiltonian for the bulk fermions in the $d$-dimensional space. $$H=\sum_{i=1:d}\mathrm{i}\partial_i\
alpha^i+m(x_i)\beta,$$ where $\alpha^i$ and $\beta$ are anti-commuting gamma matrices ($\{\alpha^i,\alpha^j\}=2\delta^{ij}$, $\{\alpha^i,\beta\}=0$, $\beta\beta=1$), and $m(x_i)$ is the topological
mass that varies in the space. The boundary of a topological insulator would correspond to a nodal interface where $m(x_i)$ goes from positive to negative (or vice versa). Let us consider a smooth
boundary where $m$ changes along the $x_1$ direction, meaning that $m\propto x_1$ in the vicinity of the boundary.
So we can focus along the $x_1$ direction, and study the following 1D effective Hamiltonian $$H_\text{1D}=\mathrm{i}\partial_1\alpha^1+x_1 \beta.$$ The existence of the boundary mode in $H$ would
correspond to the existence of the zero mode around $x_1=0$ in $H_\text{1D}$.
To proceed, we define an annihilation operator $$a=\frac{1}{\sqrt{2}}(x_1+\eta\partial_1),$$ with $\eta\equiv\mathrm{i}\beta\alpha^1$, which is analogous to the well-known annihilation operator $a=
(x+\partial_x)/\sqrt{2}$ of the harmonic oscillator. The matrix $\eta$ has the following properties: (i) $\eta^{\dagger}=\eta$ and (ii) $\eta\eta=1$, which can be derived from the algebra of $\alpha^
1$ and $\beta$. Then the creation operator will be $a^\dagger=(x_1-\eta\partial_1)/\sqrt{2}$, and one can show that $$[a,a^\dagger]=\eta.$$ Furthermore, the squared Hamiltonian can be written as
$$H_\text{1D}^2=2 a^\dagger a,$$ whose eigenstates are the same as $H_\text{1D}$, with the eigenvalues squared. So a zero mode in $H_\text{1D}$ would correspond to a zero mode in $H_\text{1D}^2$ as
well. Because the spectrum of $H_\text{1D}^2$ is positive definite, its zero mode is also its ground state.
From $\eta\eta=1$, we know the eigenvalues of $\eta$ can only be $\pm1$. Then in the $\eta=+1$ subspace, we retrieve the familiar commutation relation of boson operators $[a,a^\dagger]=+1$ (note that
$a$ commute with $\eta$, so it will not carry any state out of the $\eta=+1$ subspace). Then it becomes obvious that $H_\text{1D}^2=2a^\dagger a$ is simply counting the boson number (with a factor
2). So the zero mode of $H_\text{1D}^2$ exists and is just the boson vacuum state, defined by $a|0\rangle=0$ in the $\eta=+1$ subspace. The spacial wave function of $|0\rangle$ will just be the same
as the ground state of a harmonic oscillator, which is a Gaussian wave packet $\exp(-x_1^2/2)$ exponentially localized at $x_1=0$. However in the $\eta=-1$ subspace, the commutation relation is
reversed $[a,a^\dagger]=-1$, meaning that one may redefine the annihilation operator to $b=a^\dagger$ (with $[b,b^\dagger]=+1$ now), so that the spectrum of the Hamiltonian $H_\text{1D}^2=2bb^\dagger
=2b^\dagger b+2$ is now bounded by 2 from below and has no zero mode. Therefore by making connection to the harmonic oscillator, we have demonstrated that
1. the zero mode of $H_\text{1D}$ exist,
2. its internal (flavor) wave vector is given by the eigenvectors of $\eta=+1$,
3. its spacial wave function is exponentially localized around $x_1=0$.
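These three statements can be checked in a concrete $2\times 2$ representation. The choice $\alpha^1=\sigma_x$, $\beta=\sigma_z$ below is an assumption (any pair of anticommuting Pauli matrices works the same way); with it $\eta=i\beta\alpha^1=-\sigma_y$, the $\eta=+1$ spinor is $(1,-i)$, and the predicted zero mode is $\psi(x_1)=e^{-x_1^2/2}(1,-i)^T$. The short Python sketch verifies $H_\text{1D}\psi\approx 0$ pointwise:

```python
import math

def psi(x):
    # predicted zero mode: Gaussian profile times the eta = +1 spinor (1, -i)
    g = math.exp(-x * x / 2)
    return (g + 0j, -1j * g)

def H_psi(x, h=1e-4):
    # H = i * sigma_x * d/dx + x * sigma_z, derivative by central difference
    pL, pR = psi(x - h), psi(x + h)
    d0 = (pR[0] - pL[0]) / (2 * h)
    d1 = (pR[1] - pL[1]) / (2 * h)
    p = psi(x)
    # sigma_x swaps the two components; sigma_z = diag(1, -1)
    return (1j * d1 + x * p[0], 1j * d0 - x * p[1])

# largest residual of H psi on a grid straddling the mass domain wall
residual = max(abs(c) for i in range(61) for c in H_psi(i * 0.1 - 3.0))
```

The residual should sit at the level of the finite-difference truncation error, consistent with a zero mode exponentially localized around $x_1=0$.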
Having these results, we can obtain the boundary effective Hamiltonian by projecting the bulk Hamiltonian $H$ to the boundary mode Hilbert space, which is the eigen space of $\eta=+1$. So we define
the projection operator $\mathcal{P}_1=(1+\eta)/2\equiv(1+\mathrm{i}\beta\alpha^1)/2$, and apply that to the bulk Hamiltonian $H\to H_{\partial}=\mathcal{P}_1 H\mathcal{P}_1$. According the
anti-commuting property of the gamma matrices, $\alpha^1$ and $\beta$ can not survive projection, and the rest of the matrices $\alpha^i$ ($i=2:d$) all commute through the projection $\mathcal{P}$,
and hence persist to the boundary Hamiltonian $$H_\partial=\sum_{i=2:d}\mathrm{i}\partial_i\tilde{\alpha}^i,$$ which describes the gapless edge modes on the boundary. $\tilde{\alpha}^i$ denotes the
restriction of the matrix $\alpha^i$ to the $\mathrm{i}\beta\alpha^1=+1$ subspace (the projection will half the Hilbert space dimension). Therefore by the projection operator $\mathcal{P}_i=(1+\
mathrm{i}\beta\alpha^i)/2$, we can push the Dirac Hamiltonian to the mass domain wall perpendicular to any $x_i$-direction, and obtained the effective boundary Hamiltonian.
This approach can be applied to calculate the effective Hamiltonian in the topological mass defects as well. Starting from the bulk Hamiltonian will multiple topological mass terms $m_j$, $$H=\sum_{i
=1:d}\mathrm{i}\partial_i\alpha^i+\sum_{j}m_j\beta^j,$$ where $m_j$ is a vector field in the space with topological defects (like monopoles, vortex lines, domain walls etc.). We can use dimension
reduction procedure to eliminate the dimension of the problem by one each time, until we reach the desired dimension. In each step, we first deform the topological defect (by scaling it) to its
anisotropic limit, and treat the problem along the anisotropy dimension as a 1D problem. By using the projection operator as described above, we can project the Hamiltonian to the remaining
dimensions, and hence reduce the problem dimension by one.
For example, if the mass field scales with the coordinate as $m_1\propto x_1$, $m_2\propto x_2$, ..., then the projection operator should be (up to a normalization factor) $\mathcal{P}\propto(1+\
mathrm{i}\beta^1\alpha^1)(1+\mathrm{i}\beta^2\alpha^2)\cdots$. The low-energy fermion modes in the topological defect will be given by those eigenstates of $\mathcal{P}$ with non-zero eigenvalues.
This approach can be further applied to calculate the effective Hamiltonian in the gauge defects, such as gauge fluxes and gauge monopoles. Let us start by considering threading a flux $\phi$ in a 2D
topological insulator, which amounts to digging a circular hole and putting the flux inside the hole.
It will be convenient to switch to the polar coordinate and rewrite the bulk Hamiltonian as $$H=\mathrm{i}\partial_r\alpha^r+\frac{1}{r}(\mathrm{i}\partial_\theta-A_\theta)\alpha^\theta+m\beta,$$
where the $(\alpha^r,\alpha^\theta)$ are rotated from $(\alpha^1,\alpha^2)$ by $$\left[\begin{matrix}\alpha^r\\\alpha^\theta\end{matrix}\right]=\left[\begin{matrix}\cos\theta&\sin\theta\\-\sin\theta&
\cos\theta \end{matrix}\right]\left[\begin{matrix}\alpha^1\\\alpha^2\end{matrix}\right].$$ $A_\theta$ denotes the gauge connection that integrates up to the flux $\int_0^{2\pi} A_\theta \mathrm{d}\
theta=\phi$ through the hole. To obtain the fermion spectrum around the hole, we need to push the bulk Hamiltonian to the circular boundary by the projection $\mathcal{P}=(1+\mathrm{i}\beta\alpha^r)/
2$ (which is $\theta$ dependent). Only $\alpha^\theta$ will survive the projection and be restricted to $\tilde{\alpha}^\theta$ in the $\mathrm{i}\beta\alpha^r=+1$ subspace. So the low-energy
effective Hamiltonian around the flux is (assuming the hole radius is $r=1$) $$H_\phi=(\mathrm{i}\partial_\theta-A_\theta)\tilde{\alpha}^\theta=\Big(n+\frac{1}{2}-\frac{\phi}{2\pi}\Big)\tilde{\alpha}
^\theta.$$ In the last equality, we have plugged in the wave function $|n\rangle=e^{\mathrm{i}n\theta}|\mathrm{i}\beta\alpha^r(\theta)=+1\rangle$ labeled by the angular momentum quantum number $n\in\
mathbf{Z}$. The shift $1/2$ comes from the spin connection (the fermion accumulates Berry phase of $\pi$ as $\mathrm{i}\beta\alpha^r$ winds around the hole). From $H_\phi$ we can see that only $\
pi$-flux ($\phi=\pi$) can trap fermion zero modes (at $n=0$) in 2D gapped Dirac fermion systems.
A gauge monopole defect (of unit strength) in 3D can be considered as the end point of a $2\pi$-flux tube. Suppose the flux tube is placed along the $x_3$ direction in a topological insulator, with
the flux $\phi(x_3)$ changing from $2\pi$ to $0$ across $x_3=0$. The effective Hamiltonian along the tube will be $$H=\mathrm{i}\partial_3\tilde{\alpha}^3+m(x_3)\tilde{\alpha}^\theta,$$ where $m(x_3)
=n+\frac{1}{2}-\phi(x_3)/(2\pi)$ plays the role of a varying mass. $\tilde{\alpha}^\theta$ and $\tilde{\alpha}^3$ are restrictions of $\alpha^\theta$ and $\alpha^3$ in the $\mathrm{i}\beta\alpha^r=
+1$ subspace.
Only the angular momentum $n=0$ sector has a sign change in the mass $m(x_3)$, which leads to the zero mode trapped by the monopole. The zero mode is therefore given by the projection $\mathcal{P}=
(1+\mathrm{i}\beta\alpha^r)(1+\mathrm{i}\alpha^\theta\alpha^3)/4$. Using the bulk boundary correspondence, if the monopole traps a zero mode in the bulk of a 3D TI, then its surface termination,
which is a $2\pi$ flux, will also trap a zero mode on the TI surface. So we conclude that the $2\pi$-flux can trap fermion zero modes in 2D gapless Dirac fermion systems.
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user Everett You
That's a very nice post. Did you read all that somewhere, or did you figure it out yourself?
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user Adam
@Adam Thanks. I learned this algebraic approach from Prof. Xiao-Liang Qi at Stanford. Then I generalized the method to any topological defects, and applied to my recent work (arxiv.org/abs/1402.4151
). I just remember this post here recently, and decided to share my new understandings with you all.
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user Everett You
Very nice approach!
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user Heidar
Excellent answer, that's something I was awaiting for since a long time: how to nicely understand edge states without referring to lattice Hamiltonian and computers. That's a pity you can not promote
yourself to be the best answerer :-)
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user FraSchelle
I think I understand what you mean when you say that you're not satisfied with the “nontrivial bulk topology argument” when it comes to thinking about edge states. The Chern number (for time-reversal
breaking) and $\mathbb{Z}_{2}$ invariant (for time-reversal symmetric) systems, as DaniH suggested, does indeed give you information about the edge states; the Chern number and $\mathbb{Z}_{2}$
invariant give the number and parity of edge states respectively. But these computations once again rely on directly dealing with the nontrivial bulk topology. It seems that you are more interested
in explicitly seeing what's happening at the edge. There could (possibly) be many ways of doing this; one very popular one that I am aware of is the Jackiw-Rebbi solution. I know you want a simple
argument without any calculations; don't worry, the below calculations are only there to make a point in the end. Consider a 2D Dirac model with a spatially varying mass term: $$H=-iv_{F}\left(\
sigma_{x}\partial_{x}+\sigma_{y}\partial_{y}\right)+m(x)\sigma_{z}$$ where $\lim_{x\rightarrow\pm\infty}m(x)=\pm m_{0}$ and the sign of $m(x)$ on either side of $x=0$ stays the same; in that case we
must have $m(0)=0$. If you consider the analogy of this generic Dirac model to topologically nontrivial systems, you would have a topologically nontrivial (trivial) system for $x<0$ $(x>0)$. Using
the same boundary conditions you described $k_{y}$ is still a good quantum number. Therefore in the above Hamiltonian we can replace $i\partial_{y}\rightarrow k_{y}$; writing explicitly in matrix
form we get $$H=\left(\begin{array}{cc} m(x) & -iv_{F}(\partial_{x}-k_{y})\\ -iv_{F}(\partial_{x}+k_{y}) & -m(x) \end{array}\right).$$ You can solve for the solutions $\Psi(x)=\left(\psi_{1}(x),\psi_
{2}(x)\right)^{T}\equiv\left(u(x),v(x)\right)^{T}e^{ik_{y}y}$ with energy $E(k_{y})=v_{F}k_{y}$ as $$\left(\begin{array}{cc} m(x) & -iv_{F}(\partial_{x}-k_{y})\\ -iv_{F}(\partial_{x}+k_{y}) & -m(x) \
end{array}\right)\left(\begin{array}{c} u(x)\\ v(x) \end{array}\right)=E\left(\begin{array}{c} u(x)\\ v(x) \end{array}\right).$$ Looking at the zero energy solution (by picking the most convenient
$k_{y}=0$) we get a set of first-order coupled differential equations $$m(x)u(x)-iv_{F}\partial_{x}v(x)=0$$ and $$-iv_{F}\partial_{x}u(x)-m(x)v(x)=0$$ The equation for $v(x)$ after elimination is $$\
partial_{x}^{2}v(x)=\left(\frac{m(x)}{v_{F}}\right)^{2}v(x)+\frac{1}{m(x)}\partial_{x}v(x)\partial_{x}m(x).$$ The general solution would be $$v(x)=C_{1}\sinh\left(-\frac{1}{v_{F}}\int dx\; m(x)\
right)+C_{2}\cosh\left(-\frac{1}{v_{F}}\int dx\; m(x)\right).$$ Implementing the physically relevant boundary conditions ($\lim_{x\rightarrow\pm\infty}v(x)=0$) we have $$v(x)\propto\exp\left(-\frac
{1}{v_{F}}\int dx\; m(x)\right).$$ For the simple (but slightly unphysical) case of $m(x)=m_{0}(2\theta(x)-1)$ it can be verified that we get a simple expression $$v(x)\propto\exp\left(-\frac{m_{0}}
{v_{F}}|x|\right).$$ showing the state localized at the edge. You can get a more physical expression for $v(x)$ (i.e. one that is smooth at $x=0$) by choosing an $m(x)$ which changes less abruptly at $x=0$.
I realize that you are looking for a simple mathematical argument to see the existence of edge states without solving the model. Although I performed some trivial calculations above, the conclusion
is that when a parameter (in this case $m(x)$) in the model crosses its critical value (critical point in the phase diagram) at a certain point in real space you are expected to see an edge state in
the vicinity of that point. This is in no way a proof; I provided only one example!
I have one last comment on “I believe there should be a reason if the edge mode is robust.” Whether the robustness of the edge states can be determined from the model alone depends on the complexity
of the model. For example, the robustness of the edge states in topological insulators comes from time-reversal symmetry. When you write down the Hamiltonian for (say) the HgTe/CdTe quantum well
using the Bernevig-Hughes-Zhang (BHZ) model as a $4\times4$ matrix (which hold approximately at the $\Gamma$ point) you are constructing your model such that it respects time-reversal symmetry; it's
not the other way around where the robustness is a consequence of the model. When dealing with the BHZ model, the robustness of edge states can be argued by enforcing Kramer's theorem on the
dispersion of the edge states. You can read more on this in: What conductance is measured for the quantum spin Hall state when the Hall conductance vanishes?. Scroll all the way down until you see
the question in the block quote “Also: Why is there only a single helical edge state per edge? Why must we have at least one and why can't we have, let's say, two states per edge?”
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user NanoPhys
Thanks a lot for your detailed answer. I did not know that the zero mode can be shown in such a clear way. But I am still wondering if the index theorem is applicable here to interprete your proof of
the zero mode.
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user Everett You
Why do you want to have an understanding of the gapless edge states without using bulk topology? If you allow me to use the bulk topology, an argument is that you can continuously move the edge and
consider that as an adiabatic parameter which interpolate two systems. To be more precise, you can consider a sphere with part of it in one topological state A and the rest in another state B, such
as vacuum. The interface between A and B is a circle. Now if you start from B on the whole sphere, and create a small island of A, then enlarge A gradually, the boundary circle moves across the whole
sphere and shrink to zero again. You can view the whole procedure as an interpolation between B phase and A phase, and due to the bulk topological invariant there must be gap closing during such a
procedure, which is the reason of gapless edge states.
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user Phynics
Thanks for your explanation ^_^. I think I am getting confortable with the bulk topology argument now. Is this argument provable in the context of Chern-Simons theory?
This post imported from StackExchange Physics at 2017-11-18 23:27 (UTC), posted by SE-user Everett You
Potential and Kinetic Energy J S 2 | Basic Science Lesson Note
Topic: Potential and Kinetic Energy
Class: J S S 2
Subject: Basic Science
Time: 40 minutes
Instructional Objectives
By the end of the lesson, students should be able to:
1. Define potential and kinetic energy.
2. Differentiate between potential and kinetic energy.
3. Use formulas to determine potential and kinetic energy.
Instructional Materials
• Chalkboard
• Diagrams showing examples of objects in motion and at rest
• Objects such as a ball and a book for demonstrations
• Calculator (for solving energy problems)
Lesson Development
Step 1: Introduction (5 minutes)
• Start by asking students if they’ve ever seen a ball rolling down a hill or a book resting on a table.
• Explain that these are examples of objects that either possess energy in motion or store energy in a position.
• Introduce the terms potential energy and kinetic energy and state that both are forms of mechanical energy.
Step 2: Definitions (10 minutes)
1. Potential Energy (PE):
□ Define potential energy as the energy stored in an object due to its position or state.
□ Example: A book on a shelf has potential energy because of its height above the ground.
2. Kinetic Energy (KE):
□ Define kinetic energy as the energy an object possesses due to its motion.
□ Example: A ball rolling down a hill has kinetic energy because it is in motion.
Step 3: Differences Between Potential and Kinetic Energy (5 minutes)
• Explain that potential energy is stored energy based on an object’s position, while kinetic energy is the energy of movement.
• Potential energy depends on height and mass, while kinetic energy depends on speed and mass.
Potential Energy              | Kinetic Energy
Stored energy due to position | Energy of motion
Depends on height and mass    | Depends on speed and mass
Step 4: Formulas for Calculating Energy (15 minutes)
• Write the formulas on the board and explain how each is used.
1. Formula for Potential Energy (PE): PE = mgh, where:
m = mass of the object (kg)
g = acceleration due to gravity (9.8 m/s²)
h = height above the ground (m)
2. Formula for Kinetic Energy (KE): KE = ½mv², where:
m = mass of the object (kg)
v = velocity of the object (m/s)
• Provide an example for each formula:
• Example for Potential Energy: Calculate the PE of a 2 kg book sitting on a 3 m high shelf.
• Example for Kinetic Energy: Calculate the KE of a 2 kg ball moving at 4 m/s.
PE = mgh = 2 × 9.8 × 3 = 58.8 J
KE = ½mv² = ½ × 2 × 4² = 16 J
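The two worked examples can be checked with a short script (a sketch for the teacher; the function names are my own, and g is taken as 9.8 m/s² as stated above):

```python
def potential_energy(mass_kg, height_m, g=9.8):
    # PE = m * g * h
    return mass_kg * g * height_m

def kinetic_energy(mass_kg, velocity_ms):
    # KE = 1/2 * m * v^2
    return 0.5 * mass_kg * velocity_ms ** 2

pe_book = potential_energy(2, 3)  # 2 kg book on a 3 m shelf -> about 58.8 J
ke_ball = kinetic_energy(2, 4)    # 2 kg ball moving at 4 m/s -> 16.0 J
```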
Step 5: Class Activity (5 minutes)
• Ask students to calculate the potential and kinetic energy of various objects using different given values of mass, height, and velocity.
• Discuss their answers and explain corrections where necessary.
Conclusion (3 minutes)
• Recap the definitions and differences between potential and kinetic energy.
• Emphasize the formulas used to calculate them and encourage students to practice more problems at home.
Evaluation
• Define potential and kinetic energy.
• Differentiate between potential and kinetic energy.
• Calculate the potential energy of a 5 kg object placed 4 m above the ground.
• Calculate the kinetic energy of a 3 kg object moving at a velocity of 6 m/s.
Take-Home Assignment
• Find an object at home and determine its potential and kinetic energy at different points in time.
This lesson will help students grasp the concepts of energy in different forms and how to apply formulas to calculate potential and kinetic energy.
|
{"url":"https://mrojajuni.com/potential-and-kinetic-energy-j-s-2-basic-science-lesson-note/","timestamp":"2024-11-04T17:08:47Z","content_type":"text/html","content_length":"235626","record_id":"<urn:uuid:e9108581-e0d7-41d3-9e1a-995f0538debd>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00527.warc.gz"}
|
estimate of the variability
Random phenomena have variability. In some cases we know what this is based on existing literature, previous studies or theoretical calculations, but at other times we need to estimate the size of
the variability, usually from data gathered in an experiment or study. For example, we might calculate the interquartile range of our measured data and use that as an estimate of the variability of
the underlying phenomenon.
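As a sketch of that example (not from the glossary itself), Python's standard library can compute the quartiles needed for an interquartile-range estimate:

```python
import statistics

def interquartile_range(data):
    # statistics.quantiles with n=4 returns the three quartile cut points
    # (using its default "exclusive" interpolation method).
    q1, _median, q3 = statistics.quantiles(data, n=4)
    return q3 - q1

sample = [2.1, 2.4, 2.4, 2.7, 3.0, 3.2, 3.8, 4.0, 4.1, 5.6]
spread_estimate = interquartile_range(sample)  # a robust estimate of variability
```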
Used on page 32
|
{"url":"https://alandix.com/glossary/hcistats/estimate%20of%20the%20variability","timestamp":"2024-11-06T18:19:52Z","content_type":"application/xhtml+xml","content_length":"9887","record_id":"<urn:uuid:32705515-029f-45c0-978e-e879904dc8dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00555.warc.gz"}
|
Seeing the whole Physically-Based picture.
Subtitle: Building our rendering on solidly shaky grounds.
Physically-Based Rendering has won. There is no question about it: after an initial period of reluctance even artists have been converted, and I don't think you can find many rendering systems nowadays, either offline or real-time, that haven't embraced PBR. And PBR has even proved able to adapt to multiple art styles, beyond strict adherence to photorealism.
But, really, how much physics is there in our PBR renderers? Let's have a look.
- Optics and Photometry.
Starting from the top, we have to define our physical framework. Physical theories are models, made to "fit" reality in order to make predictions. Different models are appropriate for different contexts and problems. For rendering, we work with a framework called "geometrical optics".
In G.O. light is composed of multiple frequencies which are assumed to be independent. Light travels in straight lines (in homogeneous media). It changes direction at changes of media (changes of
IOR), where it can be absorbed, reflected or refracted. It travels instantaneously and it follows the path of least time (Fermat's principle).
Is this a good framework? It's already making a lot of assumptions, and we know it cannot model all light behavior even when it comes to things that are easily visible: diffraction, interference,
fluorescence, phosphorescence. But we say that these phenomena are not that common in everyday materials, and we might be right.
That's not all though; even before we start rendering our first triangle, we make more assumptions. First, we define a color space, usually a trichromatic one, because of the visual system's metamerism. Fine, but we know that's not correct for rendering. We know spectral rendering can yield sometimes dramatically different results, but we trust our artists to tune lighting, materials, and post-processing in the right way (even if the two things shouldn't be related) to generate nice images even if we restrict ourselves to RGB. Or at least, we hope.
- Scattering
Next, we have to define what happens when the light "hits" something (an IOR discontinuity). Well, who knows, light is really hard! Some electrons... resonate? Get polarized? Please let it not be
something to do with quantum stuff... Anyhow, eventually they scatter some energy back... waves? particles? There is some interference at around the atomic level. Who knows, luckily, we have another
framework that comes to rescue: microfacet theory.
Surfaces are made of microfacets, like a microscopic landscape, light rays hit, bounce around and eventually come out. If we integrate the behavior of said microfacets over a small area, we can
compute a scattering probability (BRDF) from the distribution of the microfacets themselves and a lot of math and voila', rendering happens.
Over a small area? How small, by the way? Well, Naty Hoffman and Eric Heitz say around the order of magnitude of the projected area of a pixel. I say, around the order of magnitude of a light
wavelength, and then the projected area thing is antialiasing applied "after". So probably it's the pixel thing that's right.
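As a concrete taste of what the microfacet machinery produces, here is the GGX/Trowbridge-Reitz normal distribution term (the distribution the post mentions later when it says "we went with GGX"). A minimal sketch, with my own function name and an `alpha` roughness parameterization:

```python
import math

def ggx_ndf(n_dot_h, alpha):
    # GGX / Trowbridge-Reitz normal distribution function:
    #   D(h) = alpha^2 / (pi * ((n.h)^2 * (alpha^2 - 1) + 1)^2)
    # n_dot_h: cosine between the macro-normal and the half vector;
    # alpha: roughness parameter (many engines use alpha = roughness^2).
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```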
What are these microfacets made of? Ideal reflectors obeying only the Fresnel law for how much light is reflected and how much refracted. The refracted part gets into the material (for dielectrics,
that somehow allow this behavior), scatters some more and eventually comes out. If it comes out still "near enough" we call that "diffuse" reflection.
Otherwise, we call that subsurface scattering. But how does the light scatter inside the material? It hits particles. Microflakes? But microfacet based diffuse models (e.g. Oren-Nayar) simply swap
the facets from ideal reflectors to ideal diffusers (Lambert)...
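The Fresnel term itself is rarely evaluated exactly either; real-time renderers commonly use Schlick's approximation (again a sketch, with an illustrative name):

```python
def fresnel_schlick(cos_theta, f0):
    # Schlick's approximation to Fresnel reflectance:
    #   F(theta) = F0 + (1 - F0) * (1 - cos(theta))^5
    # f0 is the reflectance at normal incidence (roughly 0.04 for common dielectrics).
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```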
Regardless. We know all these things! We have blog posts, Siggraph talks, and books. Physics... And this still is well in that "geometrical optics" framework. Rays of light hit things. So much so
that we can create raytracers to brute-force these microscopic interactions and create our own BRDFs!
But is it still reasonable to use geometrical optics for these interactions? They seem to be quite... small. Maybe diffraction now matters? It turns out it does, and it's a well-known thing (if you read the papers from the sixties... Beckmann-Spizzichino), but we sweep it under the rug.
And well, we can't really derive the equations from the microfacets, that integral is itself hard, so the BRDFs that we use introduce, typically, a bunch of other assumptions. For example, they might
disregard the fact that light can bounce around multiple times before "coming out".
But who cares, nice pictures can be generated with this theory, and that's what matters. Moreover, someone did try to fit the resulting equations to real-world materials, right? The MERL database? I
wonder how much error there is in that. Or how much it samples "well" real-world materials. Or how perceptual is the error metric used in estimating the error... Better to not think too much.
- Fiat Lux!
Are we done now? Far from it! In practice, we cannot just use the BRDF and brute-force light rays, not for real-time rendering, we're not Arnold. We need to compute a few more integrals!
We need to integrate over the light source, and over the surface area that is "seen" by the pixel, we're considering (pixel footprint). And that is incredibly hard, so hard we don't even try before
having introduced a bunch more assumptions and approximations.
First of all, when we talk about pixel footprint, we really mean that we consider some statistics of the surface normals. We don't consider the fact that, for example, the "view rays" themselves
change (and the light ones too), or that the surface normals don't really exist as an entity separate from actual surface geometry (which would cause shadowing and all other fun things). We assume
these effects to be small.
Then, when we talk about light, we mostly mean simple geometric shapes that emit light from their surface. For example, a sphere. At each point, the light is emitted equally in all directions, and
most often, it's also emitted with the same intensity over the surface.
And even then it's not enough to compute everything in closed-form. In fact, the more complex the light is, typically, the more approximated the BRDF will become. And then we'll fit these
approximated BRDFs to the "real" one, and sum everything up. And sprinkle some of that pixel footprint thing on top somehow as well, but really that's done once on the "real" BRDF, even if we never
actually use that!
So we have an approximation for very small lights, and maybe a good one for spheres, one for lines and capsules with some more handwaving, even more for polygonal lights, especially if textured, and lastly one for far, "environment" light... We have approximations for "diffuse" and for "specular", for each of these. And maybe for static versus dynamic objects? A lot of math and different approximations.
We compare them and make sure that more-or-less the material looks the same under different kinds of light, and call it a day... The most ambitious of us might even export a scene to a path-tracer and compute some sort of ground-truth, to visually make sure things are at least reasonable...
- We're done, right?
So... we get our final image. In all its Physically Based, 60fps, HDR glory! Spectacular.
Year after year people come up with better equations, tighter approximations, and we make shinier pixels as a result.
Is that all? Of course not! We are just getting started!
In practice, materials are not just one surface... They can have layers! And they are never optically uniform! They sparkle! They are anisotropic, they have scratches. Really, look around, look at
most things. Most things are sparkly and anisotropic, due to the way they are fabricated.
And nothing is a surface, really. It's mostly volume and particles. Even... air! So we need fog and volumetric models. But that's not just about the light that scatters in the air back to our virtual
cameras, we should also consider how this scattering affects lighting at surfaces. Our rays of light are not that straight anymore! Participating volumes make our light sources more "diffuse".
Bigger. All of them, also things like environment lighting! And... that should affect shadows too right?
And now that we think about shadows... all this complexity and unknowns are still only for what we call "direct" lighting! What about global illumination? What about the million other hacks and
assumptions that we rely upon to render each or our frames?
- Conclusions
So. How much physics is there in a frame, really? And more importantly, what's the point of all this? Should we be ashamed of not knowing physics that well? Should we do physics more? Less?
I don't know. I personally do not know physics well and I'm not too ashamed. A lot of what we've been doing is reasonable, really. We went with GGX because its "tail" helps. All the lighting
improvements served our products. All the assumptions, individually, looked reasonable.
But, there is a value I think in looking at our math and our approximations holistically, now that we are getting so good at photorealism.
Perhaps there is not too much value for example in going off the deep end of complexity when we think of BRDFs, if we can't then integrate them with complex lighting, or, in order to do so, we have
to approximate them again.
Similarly, the features we focus on should be evaluated. Is it more important to have non-uniform emission in our light sources, or a different "tail" in GGX/T-R? Anisotropic surfaces or sparkles?
Spectral sampling? Thin-film? Non-lambertian diffuse? Of which kind? Accurate energy conservation or multi-bounce in microfacets?
Is it better to use the best possible approximation for a given integral, even if we end up with many different ones, or should we just use a bunch of spherical gaussians, or LTCs and such, but keep
the same representation everywhere? And in general, is most of our error still in the materials, or in the lights? This is very hard to tell from just looking at artist-made pictures, because artists
will compensate any way they can!
But even more importantly - How much can we keep relying on simplifying assumptions in order to make our math work?
I suspect to answer these questions we'll need more data. Acquire from the real world. Brute force solutions. Then look at the data and understand what matters, what matters perceptually, what errors
are we committing end-to-end, and what we should approximate better and how...
And we should not assume that because somewhere we have a bit of physics, we are doing things correctly. We are, after all, a field that forgot for decades basic things like color spaces and gamma.
|
{"url":"http://c0de517e.blogspot.com/2019/05/seeing-whole-physically-based-picture.html","timestamp":"2024-11-07T07:26:48Z","content_type":"text/html","content_length":"111853","record_id":"<urn:uuid:3afc737f-2372-4fca-823b-b1907af79bb6>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00518.warc.gz"}
|
7th grade inequalities worksheet
7th grade inequalities worksheet Related topics: highest common factor of 57 and 93
is there any program that shows step by step solution to a mathematical problem
convert square meter to square feet
calculating root of a whole number
example of mathematical algebraic expressions poem
quadratic formula program for the ti-82 / ti-83
mcdougal littell geometry illinois version online
seventh class maths
gcse maths algebra worksheets
least common multiple printable
first degree equations and inequalities
Author Message
DynaJT Posted: Friday 20th of Sep 12:16
Hey, A couple of days back I began solving my mathematics assignment on the topic Algebra 2. I am currently unable to finish the same because I am unfamiliar with the fundamentals of
algebra formulas, function range and radical expressions. Would it be possible for anyone to aid me with this?
AllejHat Posted: Friday 20th of Sep 21:06
You seem to be more freaked out than confused. First you need to control your senses . Do not panic. Sit back, relax and look at the books with a clear mind. They will seem tough if you
think they are tough. 7th grade inequalities worksheet can be easily understood and you can solve almost every equation with the help of Algebrator. So relax.
Bet Posted: Saturday 21st of Sep 11:07
It is good to know that you wish to improve your math and are taking efforts to do so. I think you should try Algebrator. This is not exactly a tutoring tool but it provides solutions
to math problems in a very descriptive manner. And the best thing about this software product is that it is very user friendly. There are a lot of demos given under various topics which
are quite helpful to learn the subject. Try it and wish you good luck with math.
cmithy_dnl Posted: Sunday 22nd of Sep 18:31
I am a regular user of Algebrator. It not only helps me complete my homework faster, the detailed explanations offered makes understanding the concepts easier. I strongly advise using
it to help improve problem solving skills.
|
{"url":"https://softmath.com/algebra-software/radical-equations/7th-grade-inequalities.html","timestamp":"2024-11-14T07:45:24Z","content_type":"text/html","content_length":"39301","record_id":"<urn:uuid:0d58fd69-e8b7-4979-951a-31eab4d97b49>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00144.warc.gz"}
|
Notes - Quantum Information HT24, Basic measurements
What does it mean for two orthonormal bases $\{ \vert \psi _ n \rangle : n = 0, \cdots, d-1\}$ and $\{ \vert \psi _ n’ \rangle : n = 0, \cdots, d-1\}$ to be equal up to global phase?
For each $n$, $\exists \gamma _ n \in [0, 2\pi]$ such that
\[|\psi'_n \rangle = e^{i\gamma_n} |\psi_n \rangle\]
What are the possible types of basic measurements on a $d$-dimensional quantum system?
Orthonormal bases in $\mathbb C^d$, up to global phases.
• We have a quantum system in a state $ \vert \psi \rangle$
• We perform the basic measurement corresponding to the orthonormal basis $\{ \vert \psi _ k \rangle : k = 0, \cdots, d-1\}$.
What is the probability we obtain the outcome $ \vert \psi _ k\rangle$, and what is the state of the system immediately after the measurement?
\[p_k = |\langle \psi_k | \psi\rangle|^2\]
If we obtain the outcome $ \vert \psi _ k\rangle$, the state of the system immediately after the measurement is $ \vert \psi _ k\rangle$.
Can you state the Born rule?
• We have a quantum system in a state $ \vert \psi \rangle$
• We perform the basic measurement corresponding to the orthonormal basis $\{ \vert \psi _ k \rangle : k = 0, \cdots, d-1\}$.
Then the probability we obtain the outcome $ \vert \psi _ k\rangle$ is
\[p_k = |\langle \psi_k | \psi\rangle|^2\]
Suppose $\{ \vert \psi _ n \rangle : n = 0, \cdots, d-1\}$ and $\{ \vert \psi _ n’ \rangle : n = 0, \cdots, d-1\}$ are orthonormal bases equal up to global phase. What can we say about the
probabilities of obtaining each outcome $\psi _ k$ and $\psi’ _ k$, and the post-measurement state?
The probabilities are the same
\[|\langle \psi_n | \psi\rangle|^2 = |\langle \psi'_n | \psi \rangle|^2\]
and the post-measurement state will always be the same.
What is the computational basis for $\mathbb C^d$?
The orthonormal basis given by
\[\\{|k \rangle : k = 0, \cdots, d - 1\\}\]
What is the Fourier basis for $\mathbb C^d$?
The orthonormal basis given by
\[|e_k \rangle := \frac{1}{\sqrt d} \sum^{d-1}_{m=0} \exp\left[ \frac{2\pi i m k}{d} \right] |m\rangle\]
Can you define the vectors $ \vert +\rangle$ and $ \vert -\rangle$, and how do these relate to the Fourier basis?
\[\begin{aligned} |+\rangle &:= \frac{|0\rangle + |1\rangle}{\sqrt 2} \\\\ |-\rangle &:= \frac{|0\rangle - |1\rangle}{\sqrt 2} \\\\ \end{aligned}\]
These are the Fourier basis vectors in dimension $2$.
Suppose we have the ONB $\{ \vert \phi _ 0\rangle, \vert \phi _ 1\rangle, \vert \phi _ 2\rangle\}$ where
\[\begin{aligned} |\phi_0 \rangle &:= \frac{1}{\sqrt 2} (|0\rangle + i|2\rangle) \\\\ |\phi_1 \rangle &:= |1\rangle \\\\ |\phi_2 \rangle &:= \frac{1}{\sqrt 2} (|0\rangle - i|2\rangle) \end{aligned}\]
What state gives rise to the following probability distribution
\[\begin{aligned} p(0) &= 1 / 5 \\\\ p(1) &= 1 /3 \\\\ p(2) &= 7/15 \end{aligned}\]
for each of the outcomes?
Use the ONB and give coefficients that are square roots of the corresponding probabilities, so
\[|\psi\rangle = \sqrt{1/5} |\phi_0\rangle + \sqrt{1/3} |\phi_1\rangle + \sqrt{7/15} |\phi_2\rangle\]
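The Born-rule arithmetic for this example can be verified numerically (a sketch; the vectors are written in computational-basis coordinates):

```python
import math

# The ONB {phi_0, phi_1, phi_2} from above, in computational-basis coordinates.
phi0 = [1 / math.sqrt(2), 0, 1j / math.sqrt(2)]
phi1 = [0, 1, 0]
phi2 = [1 / math.sqrt(2), 0, -1j / math.sqrt(2)]

# |psi> = sqrt(1/5)|phi_0> + sqrt(1/3)|phi_1> + sqrt(7/15)|phi_2>
coeffs = [math.sqrt(1 / 5), math.sqrt(1 / 3), math.sqrt(7 / 15)]
psi = [sum(c * v[i] for c, v in zip(coeffs, (phi0, phi1, phi2))) for i in range(3)]

def born_probability(basis_vec, state):
    # Born rule: p_k = |<phi_k|psi>|^2 (the bra conjugates its entries).
    amplitude = sum(b.conjugate() * s for b, s in zip(basis_vec, state))
    return abs(amplitude) ** 2

probs = [born_probability(v, psi) for v in (phi0, phi1, phi2)]
# probs is approximately [1/5, 1/3, 7/15]
```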
|
{"url":"https://ollybritton.com/notes/uni/part-a/ht24/quantum-information/notes/notes-quantum-information-ht24-basic-measurements/","timestamp":"2024-11-05T10:29:42Z","content_type":"text/html","content_length":"508154","record_id":"<urn:uuid:9246675c-a76b-458a-acff-91adea78f154>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00308.warc.gz"}
|
Tax Shield Formula How to Calculate Tax Shield with Example - CSA-Creuzet
Tax Shield Formula How to Calculate Tax Shield with Example
Worldwide, the volume of leveraged buyouts and management buyouts has increased in recent years. Covenants are conditions or restrictions that an organisation has to agree to in order to obtain a loan. To start, an organisation may have to agree to refrain from an action such as selling back its assets. Moreover, some covenants require the company to maintain ratios such as the debt coverage ratio or the debt-to-equity ratio.
• Furniture manufacturing has more systematic risk than furniture retailing.
• Worldwide in recent years, the volume of leveraged buyouts and management buyouts has increased.
• Remember, when you use tax shields like depreciation, the taxes are deferred, not avoided, so they’ll eventually need to be paid.
• S corporations and general partnerships are not taxed at the business level.
Tax Shield for Medical Expenses
This arbitrage-free value is different from the discounted value of Fernandez . The different approaches that can be used to de-lever and re-lever betas start from the same principle—that a firm’s
value is equal to the present value of its cash flows. However, there are divergences in terms of how tax shields should be discounted depending on the financing strategy of a firm.
• If the investor still pays $1,000 of his initial equity capital, in addition to borrowing $4,000 at the terms above, the investor can purchase 5 units of investment for $5000 total.
• In following through the example on this page you should use the assumption that the tax rate is zero.
• They are wedded to measuring each piece of the capital structure at its nominal outstanding value and then attaching net of tax cost of capital to the different items.
• Rm is the market rate, and this is the market’s long-term average return.
• The equipment would cost $75,000, and she has the cash for it.
Both Gear and Nogear also work in high-paying jobs and are subject to personal marginal tax rates of 45%. Both Lev and Nolev also work in high-paying jobs and are subject to personal marginal tax rates of 45%. Unrestricted negative gearing is allowed in Australia, New Zealand and Japan. Negative gearing laws allow income losses on investment properties to be deducted from a tax-payer's pre-tax personal income.
What Is the Formula for Tax Shield?
Since Company A has no non-operating expenses to factor in, its taxable income remains at $35m. The value of a tax shield can be calculated as the total amount of the taxable interest expense
multiplied by the tax rate. The intuition here is that the company has an $800,000 reduction in taxable income since the interest expense is deductible.
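As a sketch of that calculation (the function name is my own, and the 21% rate below is an assumed example, not a figure from the article):

```python
def interest_tax_shield(interest_expense, tax_rate):
    # Tax shield = deductible interest expense * marginal tax rate
    return interest_expense * tax_rate

# $800,000 of deductible interest at an assumed 21% corporate rate
savings = interest_tax_shield(800_000, 0.21)  # about $168,000 of tax saved
```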
• In terms of developed and emerging markets, there are different determinants of capital structure.
• Financial effect expressed as a difference between cost of equity and cost of debt.
• The tax savings are calculated as the amount of interest multiplied by the tax rate.
|
{"url":"https://csa-creuzet.com/tax-shield-formula-how-to-calculate-tax-shield/","timestamp":"2024-11-04T02:33:02Z","content_type":"text/html","content_length":"186321","record_id":"<urn:uuid:925012c1-8fb1-4d4d-9ce4-40f946afa5ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00487.warc.gz"}
|
REV4 - Editorial
Bit Manipulation
Find the largest number of consecutive set bits in a given number.
The problem becomes easy once you find the logic.
You can find the maximum number of consecutive set bits by incrementing a count whenever the current bit is 1 and, when the bit is 0, updating the current maximum and resetting the count:

count = 0, max = 0
while n is not 0
    if LSB is 1    // LSB - Least Significant Bit
        count = count + 1
    else
        if count > max
            max = count
        count = 0
    shift n right by 1
if count > max
    max = count
The complexity of this solution is O(log n).
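A sketch of this O(log n) approach in Python (the function name is my own):

```python
def max_consecutive_set_bits(n):
    count = best = 0
    while n:
        if n & 1:            # LSB is 1: extend the current run
            count += 1
            best = max(best, count)
        else:                # LSB is 0: the run is broken
            count = 0
        n >>= 1
    return best

# max_consecutive_set_bits(0b110111) -> 3
```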
You can first convert the number into its binary form and store it as a string. Then, using a for loop, do the same thing given above but instead of checking the LSB you can check each character in
the binary string.
The binary representation can be found in O(log n) and finding the maximum number of consecutive set bits is also O(log n). So this approach also takes O(log n).
But we can do better than O(log n) using the approach discussed here. If the maximum number of consecutive set bits is k, then this approach has a complexity O(k).
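One well-known way to get that O(k) behaviour (my own sketch of the idea; the linked solution may differ in detail) is to repeatedly AND the number with its own shift, which shortens every run of set bits by one per iteration, so the loop runs exactly k times:

```python
def max_consecutive_set_bits_fast(n):
    count = 0
    while n:
        n &= n >> 1   # every run of 1s loses one bit per iteration
        count += 1
    return count
```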
Author’s solution can be found here.
First Tester’s solution can be found here.
Both of the solutions above use O(log n) approach.
Second Tester’s solution can be found here.
This Solution uses the O(k) approach.
|
{"url":"https://discusstest.codechef.com/t/rev4-editorial/14760","timestamp":"2024-11-10T23:58:59Z","content_type":"text/html","content_length":"21913","record_id":"<urn:uuid:aca96559-60ed-4730-ba00-8a4af331879f>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00642.warc.gz"}
|
Structure Learning of H-colorings
We study the structure learning problem for graph homomorphisms, commonly referred to as H-colorings, including the weighted case which corresponds to spin systems with hard constraints. The learning
problem is as follows: for a fixed (and known) constraint graph H with q colors and an unknown graph G=(V,E) with n vertices, given uniformly random H-colorings of G, how many samples are required to
learn the edges of the unknown graph G? We give a characterization of H for which the problem is identifiable for every G, i.e., we can learn G with an infinite number of samples. We focus particular
attention on the case of proper vertex q-colorings of graphs of maximum degree d where intriguing connections to statistical physics phase transitions appear. We prove that when q>d the problem is
identifiable and we can learn G in poly(d,q)× O(n^2n) time. In contrast for soft-constraint systems, such as the Ising model, the best possible running time is exponential in d. When q≤ d we prove
that the problem is not identifiable, and we cannot hope to learn G. When q<d-√(d) + Θ(1) we prove that even learning an equivalent graph (any graph with the same set of H-colorings) is
computationally hard---sample complexity is exponential in n in the worst-case. For the q-colorings problem, the threshold for efficient learning seems to be connected to the uniqueness/
non-uniqueness phase transition at q=d. We explore this connection for general H-colorings and prove that under a well-known condition in statistical physics, known as Dobrushin uniqueness condition,
we can learn G in poly(d,q)× O(n^2n) time.
|
{"url":"https://api.deepai.org/publication/structure-learning-of-h-colorings","timestamp":"2024-11-09T03:19:59Z","content_type":"text/html","content_length":"155153","record_id":"<urn:uuid:daf325e9-fe78-4433-9855-76be55ae581a>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00130.warc.gz"}
|
Meters per Second to Knots Conversion (m/s to kn)
Meters per Second to Knots Converter
Enter the speed in meters per second below to convert it to knots.
Do you want to convert knots to meters per second?
How to Convert Meters per Second to Knots
To convert a measurement in meters per second to a measurement in knots, multiply the speed by the following conversion ratio: 1.943844 knots/meter per second.
Since one meter per second is equal to 1.943844 knots, you can use this simple formula to convert:
knots = meters per second × 1.943844
The speed in knots is equal to the speed in meters per second multiplied by 1.943844.
For example,
here's how to convert 5 meters per second to knots using the formula above.
knots = (5 m/s × 1.943844) = 9.719222 kn
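The conversion is a single multiplication; as a quick sketch:

```python
MPS_TO_KNOTS = 1.943844  # knots per meter-per-second

def mps_to_knots(mps):
    return mps * MPS_TO_KNOTS

# mps_to_knots(5) is about 9.71922 kn, matching the worked example above
```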
Meters per second and knots are both units used to measure speed. Calculate speed in meters per second or knots using our speed calculator or keep reading to learn more about each unit of measure.
What Are Meters per Second?
Meters per second are a measurement of speed expressing the distance traveled in meters in one second.
The meter per second, or metre per second, is the SI derived unit for speed in the metric system. Meters per second can be abbreviated as m/s, and are also sometimes abbreviated as m/sec. For
example, 1 meter per second can be written as 1 m/s or 1 m/sec.
In the expressions of units, the slash, or solidus (/), is used to express a change in one or more units relative to a change in one or more other units.^[1] For example, m/s is expressing a change
in length or distance relative to a change in time.
Meters per second can be expressed using the formula:
v[m/s] = d[m] / t[sec]
The velocity in meters per second is equal to the distance in meters divided by time in seconds.
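A minimal sketch of that definition:

```python
def speed_mps(distance_m, time_s):
    # v [m/s] = d [m] / t [s]
    return distance_m / time_s

# e.g. 100 m covered in 8 s -> 12.5 m/s
```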
Learn more about meters per second.
What Is a Knot?
One knot is equal to a speed of one nautical mile per hour,^[2] or one minute of latitude per hour.
Knots can be abbreviated as kn, and are also sometimes abbreviated as kt. For example, 1 knot can be written as 1 kn or 1 kt.
Learn more about knots.
Meter per Second to Knot Conversion Table
Table showing various meter per second measurements converted to knots.
Meters Per Second Knots
1 m/s 1.9438 kn
2 m/s 3.8877 kn
3 m/s 5.8315 kn
4 m/s 7.7754 kn
5 m/s 9.7192 kn
6 m/s 11.66 kn
7 m/s 13.61 kn
8 m/s 15.55 kn
9 m/s 17.49 kn
10 m/s 19.44 kn
11 m/s 21.38 kn
12 m/s 23.33 kn
13 m/s 25.27 kn
14 m/s 27.21 kn
15 m/s 29.16 kn
16 m/s 31.1 kn
17 m/s 33.05 kn
18 m/s 34.99 kn
19 m/s 36.93 kn
20 m/s 38.88 kn
21 m/s 40.82 kn
22 m/s 42.76 kn
23 m/s 44.71 kn
24 m/s 46.65 kn
25 m/s 48.6 kn
26 m/s 50.54 kn
27 m/s 52.48 kn
28 m/s 54.43 kn
29 m/s 56.37 kn
30 m/s 58.32 kn
31 m/s 60.26 kn
32 m/s 62.2 kn
33 m/s 64.15 kn
34 m/s 66.09 kn
35 m/s 68.03 kn
36 m/s 69.98 kn
37 m/s 71.92 kn
38 m/s 73.87 kn
39 m/s 75.81 kn
40 m/s 77.75 kn
1. National Institute of Standards and Technology, NIST Guide to the SI, Chapter 6: Rules and Style Conventions for Printing and Using Units, https://www.nist.gov/pml/special-publication-811/
2. NASA, Knots Versus Miles per Hour, https://www.grc.nasa.gov/WWW/K-12/WindTunnel/Activities/knots_vs_mph.html
|
{"url":"https://www.inchcalculator.com/convert/meter-per-second-to-knot/","timestamp":"2024-11-14T23:34:40Z","content_type":"text/html","content_length":"69686","record_id":"<urn:uuid:d48de37c-b726-4947-b646-94205e1ac226>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00633.warc.gz"}
|
What’s new in version 0.9¶
What’s new in version 0.8¶
Patch release 0.8.2 handles deprecations in dependencies.
What’s new in version 0.7¶
• Added query function to support analysing set-based data.
• Fixed support for matplotlib >3.5.2 (#191. Thanks @GuyTeichman)
What’s new in version 0.6¶
What’s new in version 0.5¶
What’s new in version 0.4.4¶
• Fixed a regression which caused the first column to be hidden (#125)
What’s new in version 0.4.3¶
• Fixed issue with the order of catplots being reversed for vertical plots (#122 with thanks to Enrique Fernandez-Blanco)
• Fixed issue with the x limits of vertical plots (#121).
What’s new in version 0.4.2¶
• Fixed large x-axis plot margins with high number of unique intersections (#106 with thanks to Yidi Huang)
What’s new in version 0.4.1¶
• Fixed the calculation of percentage which was broken in 0.4.0. (#101)
What’s new in version 0.4¶
What’s new in version 0.3¶
• Added from_contents to provide an alternative, intuitive way of specifying category membership of elements.
• To improve code legibility and intuitiveness, sum_over=False was deprecated and a subset_size parameter was added. It will have better default handling of DataFrames after a short deprecation period.
• generate_data has been replaced with generate_counts and generate_samples.
• Fixed the display of the “intersection size” label on plots, which had been missing.
• Trying to improve nomenclature, upsetplot now avoids “set” to refer to the top-level sets, which are now to be known as “categories”. This matches the intuition that categories are named, logical
groupings, as opposed to “subsets”. To this end:
□ generate_counts (formerly generate_data) now names its categories “cat1”, “cat2” etc. rather than “set1”, “set2”, etc.
□ the sort_sets_by parameter has been renamed to sort_categories_by and will be removed in version 0.4.
What’s new in version 0.2.1¶
• Return a Series (not a DataFrame) from from_memberships if data is 1-dimensional.
What’s new in version 0.2¶
• Added from_memberships to allow a more convenient data input format.
• plot and UpSet now accept a pandas.DataFrame as input, if the sum_over parameter is also given.
• Added an add_catplot method to UpSet which adds Seaborn plots of set intersection data to show more than just set size or total.
• Shading of subset matrix is continued through to totals.
• Added a show_counts option to show counts at the ends of bar plots. (#5)
• Defined _repr_html_ so that an UpSet object will render in Jupyter notebooks. (#36)
• Fix a bug where an error was raised if an input set was empty.
|
{"url":"https://upsetplot.readthedocs.io/en/stable/changelog.html","timestamp":"2024-11-11T22:35:15Z","content_type":"text/html","content_length":"27587","record_id":"<urn:uuid:93553531-247e-475d-b0ff-16f5ab025b1a>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00370.warc.gz"}
|
Java Project 5
Implement a Triangle class as specified as follows.
-a: double
-b: double
-c: double
+getAngleA(): int
+getAngleB(): int
+getAngleC(): int
+findArea(): double
+findPerimeter(): double
+print(): void
The class contains three double data fields a, b, c for the three sides of the triangle. The constructor initializes the three sides with the parameters. The methods getAngle*() return the angles of
the triangle, rounded to the nearest degree. The methods findArea() and findPerimeter() return the area and perimeter of the triangle, respectively. The print() method prints the three sides, three
angles, area and
perimeter of the triangle to the console.
Write a test program (Project5.java) that creates two Triangle objects. On the console, prompt the user to enter the three sides of the first triangle. Set the sides of the second triangle so that it
is similar to the first triangle and its area doubles the area of the
first. Print the two triangles. For example,
Enter a => 3.0
Enter b => 5.0
Enter c => 4.0
Triangle 1:
a = 3.0, b = 5.0, c = 4.0
A = 37, B = 90, C = 53
area = 6.0, perimeter = 12.0
Triangle 2:
a = 4.242640687119286, b = 7.0710678118654755, c = 5.656854249492381
A = 37, B = 90, C = 53
area = 12.000000000000005, perimeter = 16.970562748477143
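The angle and area methods follow from the law of cosines and Heron's formula. As a language-neutral sketch of that math (shown in Python for brevity, although the assignment itself is Java), reproducing the sample run above:

```python
import math

def angles(a, b, c):
    """Angles opposite sides a, b, c via the law of cosines, rounded to whole degrees."""
    A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    B = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    C = math.degrees(math.acos((a*a + b*b - c*c) / (2*a*b)))
    return round(A), round(B), round(C)

def area(a, b, c):
    """Heron's formula: area from the three side lengths."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(angles(3.0, 5.0, 4.0))  # (37, 90, 53), as in the sample run
print(area(3.0, 5.0, 4.0))    # 6.0
# a similar triangle with double the area has every side scaled by sqrt(2)
print(3.0 * math.sqrt(2))     # ≈ 4.2426, side a of the second triangle
```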
|
{"url":"https://codifytutor.com/marketplace/java-project-5-cd69a904-2fed-416e-9017-c45292c2817e","timestamp":"2024-11-02T11:13:20Z","content_type":"text/html","content_length":"24493","record_id":"<urn:uuid:c9ebc7ca-c8e4-4283-af62-7d7b5e676e37>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00062.warc.gz"}
|
Normalize Scores and Count High Achievers per Subject - TensorGym Exercises
Given a 2D list `StudentScores` where each row represents a student's scores in different subjects, and subjects are represented by columns, normalize the scores for each subject using the softmax
function. Return a one-dimensional tensor representing how many students achieved a normalized score greater than 0.3 for each subject.
For each subject (column), apply the softmax function to normalize the scores. Then count the number of students for each column (subject) who achieved a normalized score greater than 0.3. Note:
Softmax is a mathematical function that converts a vector of real numbers into a vector of probabilities, where the probabilities are proportional to the exponentials of the input numbers and sum up
to 1. Check 'Implement Softmax Function' problem to see the formula and to practice implementing it.
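A plain-Python sketch of one possible solution (the function name and the use of column-wise lists are my own; an actual TensorGym submission would work with tensors):

```python
import math

def high_achievers_per_subject(student_scores, threshold=0.3):
    """Softmax-normalize each column (subject), then count entries above threshold."""
    n_subjects = len(student_scores[0])
    counts = []
    for j in range(n_subjects):
        col = [row[j] for row in student_scores]
        m = max(col)                          # subtract the max for numerical stability
        exps = [math.exp(v - m) for v in col]
        total = sum(exps)
        probs = [e / total for e in exps]     # softmax: probabilities summing to 1
        counts.append(sum(p > threshold for p in probs))
    return counts

scores = [[1.0, 2.0],
          [3.0, 4.0],
          [5.0, 6.0]]
print(high_achievers_per_subject(scores))  # [1, 1]: only the top scorer exceeds 0.3
```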
|
{"url":"https://tensorgym.com/exercises/6","timestamp":"2024-11-09T19:39:37Z","content_type":"text/html","content_length":"39445","record_id":"<urn:uuid:04507c5b-296e-4669-a9b5-35025893abe6>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00140.warc.gz"}
|
Fixed Income Section 2 - Page 2 of 14 - Avatto
6. A horizon yield is the internal rate of return between the total return and the:
7. Bond A has a maturity of 8 years. Bond B has a maturity of 4 years. All else equal:
9. Holding all other characteristics the same, the bond exposed to the lowest level of reinvestment risk is most likely the one selling at:
10. Analyst 1: If the investment horizon is short, reinvestment risk will dominate the market price risk. Analyst 2: If the investment horizon is short, market price risk will dominate the
reinvestment risk. Which analyst’s statement is most likely correct?
|
{"url":"https://www.avatto.com/cfa-level-1/cfa-practice/fixed-income/fixed-sec-2/page/2/","timestamp":"2024-11-12T12:40:45Z","content_type":"text/html","content_length":"172181","record_id":"<urn:uuid:94db6cd8-34f3-41f2-b672-46f90edfb2a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00671.warc.gz"}
|
The density of NaOH solution is 1.2 g cm−3. The molality of thi... | Filo
The density of NaOH solution is 1.2 g cm⁻³. The molality of this solution is ______ (round off to the nearest integer).
[Use: Atomic masses: Na = 23, O = 16, H = 1; Density of H₂O = 1.0 g cm⁻³]
Let the volume of water taken be x mL, and assume (as the given data implies) that the solution volume is also x mL.
So mass of solution = 1.2x g
and mass of water (solvent) = x g (density of water = 1 g cm⁻³)
So mass of solute (NaOH) = 1.2x − x = 0.2x g
Molality = moles of solute / kg of solvent = (0.2x / 40) / (x / 1000) = 5 m
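The arithmetic can be checked numerically. The sketch below assumes, as the worked solution does, that the NaOH is dissolved in x mL of water (density 1 g cm⁻³) and that the solution volume is also x mL; the result is independent of x:

```python
def molality_naoh(x_ml, solution_density=1.2, molar_mass_naoh=40.0):
    """Molality (mol solute per kg solvent) for the setup described above."""
    mass_solution = solution_density * x_ml  # grams
    mass_water = 1.0 * x_ml                  # grams, since water has density 1 g/cm^3
    mass_naoh = mass_solution - mass_water   # 0.2 * x grams
    moles_naoh = mass_naoh / molar_mass_naoh
    return moles_naoh / (mass_water / 1000.0)

print(molality_naoh(100))  # 5.0 mol/kg, for any choice of x
```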
Questions from JEE Mains 2021 - PYQs
Updated On Jul 15, 2023
Topic Some Basic Concepts of Chemistry
Subject Chemistry
Class Class 11
|
{"url":"https://askfilo.com/chemistry-question-answers/the-density-of-mathrmnaoh-solution-is-12-mathrm~g-mathrm~cm-3-the-molality-of","timestamp":"2024-11-09T13:11:19Z","content_type":"text/html","content_length":"447085","record_id":"<urn:uuid:33cf713b-7406-46f3-a1ba-e893089617f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00721.warc.gz"}
|
Danang, Setyadi (2015) THE DIFFERENTIATE BETWEEN THE RESULT OF MATHEMATICS USING METACOGNITIVE APPROACH AND MECHANISTIC APPROACH IN SET (HIMPUNAN). Proceeding of International Conference On Research,
Implementation And Education Of Mathematics And Sciences 2015 (ICRIEMS 2015), Yogyakarta State University, 17-19 May 2015. ISSN 978-979-96880-8-8
ME - 2.pdf
Generally, students are not active in mathematics lessons, and this lack of engagement affects their mathematics results. Because of that, we look for another approach that can turn students into active learners, for example a metacognitive approach. Teaching and learning with a metacognitive approach embeds in students an awareness of how to build, monitor, and control what they already know, what they need to do, and how to do it. The aim is to find out whether or not there is a difference in mathematics results between students taught with a metacognitive approach and those taught with a mechanistic approach. The research design is a quasi-experiment, held at SMP Negeri 2 Pabelan: grade VII C, the experiment class, was taught using the metacognitive approach, and grade VII B, the control class, was taught using the mechanistic approach. The results show that the metacognitive approach is better than the mechanistic approach, because it embeds students' awareness of their own learning: students are more aware during lessons, concentrate better and listen to the teacher's explanation, and find new problems and ways to solve them while learning mathematics. Keywords: metacognition, metacognitive approach
|
{"url":"http://eprints.uny.ac.id/22963/","timestamp":"2024-11-12T05:32:57Z","content_type":"application/xhtml+xml","content_length":"24476","record_id":"<urn:uuid:6312b343-84b1-40f9-a508-963c054f8c88>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00503.warc.gz"}
|
Transform the Target Variable
Table of Contents
The Objective Transforming the Target Variable
There are three problems that can occur in a machine learning project that we can tackle by transforming the target variable:
1) Improve the results of a machine learning model when the target variable is skewed.
2) Reduce the impact of outliers in the target variable
3) Using the mean absolute error (MAE) for an algorithm that only minimizes the mean squared error (MSE)
In the next part of this article, we dive a little deeper into all three points.
During the article, you will see some transformations that we are using. If you are more deeply interested in different transformations then you can also visit the article that is dedicated to
transformations of features in general.
Improve the Results of a Machine Learning Model when the Target Variable is Skewed
A skewed distribution of the target variable means that the distribution differs from the normal distribution, also known as the Gaussian distribution. The normal distribution is a symmetrical
bell-shaped distribution that is characterized by its mean and standard deviation.
Machine learning algorithms that are based on statistics like linear regression can be applied with greater confidence when the distribution of the target variable follows the normal distribution as
much as possible. Therefore a possibility to achieve better performance and more accurate predictions is to transform the target variable to a more Gaussian distribution.
The following plot shows an original distribution of the target variable that is positively skewed (the measured skewness is positive). After applying the log transformation the distribution gets
negatively skewed (skewness is negative) but with a much lower value for the skewness. Therefore the distribution is more normal. Applying the Box-Cox transformation we achieve the most normal
distribution with a skewness of -0.01.
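As a minimal numerical sketch of this effect (NumPy only, not the article's Colab code; the data here is synthetic lognormal data rather than the article's dataset), a log transform brings the sample skewness of a positively skewed target close to zero:

```python
import numpy as np

def skewness(x):
    """Sample skewness: the third standardized central moment."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

rng = np.random.default_rng(0)
y = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # positively skewed target

print(round(skewness(y), 2))          # strongly positive before the transform
print(round(skewness(np.log(y)), 2))  # close to 0 after the log transform
```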
To improve the results of the machine learning model I would suggest applying the Box-Cox transformation to the target variable. The complete program code for the transformations and also for
additional transformations is available at the following Google Colab
Other machine learning algorithms like tree-based models or neural networks work well with any data even when data is not normally distributed.
Reduce the Impact of Outliers
Transforming the target variable reduces the impact of outliers because the distribution changes with the transformation. Outliers for example at the extreme end of the distribution can affect the
mean and standard deviation of the data, making it harder to build an accurate model. By transforming the target variable, we can change the distribution of the data so that the outliers are less
extreme or have less influence on the overall distribution.
Additionally, some transformations, such as the square root or log transformation, can be particularly effective at reducing the impact of outliers because they tend to compress the extreme values in
the distribution.
The following picture shows the distribution with outliers for the original dataset and after applying the square root transformation and log transformation. From the comparison of the skewness, you
see that after the transformations the distributions are more like a Gaussian distribution, and the influence of the outliers is reduced as the difference between the outliers and the mean of the
overall dataset is reduced.
Using the MAE for a Machine Learning Algorithm that only Minimizes the MSE
The transformation of the target variable also influences the error function used in a machine-learning project, and it can be used to make an algorithm approximate an error function it does not support directly. For example, if the regression model only allows minimizing the MSE, we can log-transform the target variable and get close to the solution we would obtain with the MAE error measure. (MSE + log transform ≈ MAE)
The Easiest Way to Transform the Target Variable
After you learned the main objectives to transform the target variable, I will show you in this section different ways how to transform the target variable in Python.
To give you a little sneak peek, the easiest way to transform the target variable is to use the sklearn TransformedTargetRegressor function.
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import cross_val_score
import numpy as np

# create a pipeline that standardizes the data before using the HuberRegressor
pipeline = Pipeline(steps=[
    ('scaler', StandardScaler()),
    ('regressor', HuberRegressor())
])

# use TransformedTargetRegressor to log transform the target variable
# (log1p/expm1 is one reasonable transform pair; the original snippet
# did not show which pair was used)
model = TransformedTargetRegressor(regressor=pipeline,
                                   func=np.log1p,
                                   inverse_func=np.expm1)

# and calculate the rmse of all cv runs
rmse_log = np.sqrt(-cross_val_score(model, X, y,
                                    scoring='neg_mean_squared_error'))
In the following end-to-end example, you see three different ways to compute the RMSE with cross-validation for an example dataset:
1) Calculate the RMSE of all cv runs without any transformation.
2) Only use the log-transformed target variable and calculate the RMSE of all cv runs.
3) Use TransformedTargetRegressor to log transform the target variable and calculate the RMSE of all cv runs.
1. RMSE of run without transformation: [52.12826305 55.59135539 57.49187307 55.25159682 53.97513891]
2. RMSE of run with only log transforming y: [0.4266049 0.40561625 0.43106019 0.39170138 0.40330288]
3. RMSE of run with log transforming and retransforming y: [51.72329989 57.02066486 57.56083704 56.23000048 55.03754777]
When we compare the RMSE in the three different ways we see that when you only transform the target variable (2), you are not able to compare the errors of different runs (1), for example with
different parameters to find the best parameter combination or to test if transforming the target variable improves the results.
Only if you use the TransformedTargetRegressor with the transformer function and inverse transformer function (3), the results of the cross-validation have the same scale compared to the results when
no transformer is applied (1).
|
{"url":"https://datasciencewithchris.com/transform-the-target-variable/","timestamp":"2024-11-08T14:17:26Z","content_type":"text/html","content_length":"57573","record_id":"<urn:uuid:1845b36e-07a1-4906-a4af-23c4750c866a>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00385.warc.gz"}
|
Add Significance Level and Stars to Plot in R
Add Significance Level and Stars to Plot in R. In this article, I'll show you how to use the ggsignif package to annotate a ggplot2 plot with significance levels in the R programming language.
Boxplot with Stars for Significance
The R programming language’s box-and-whisker plot with significance levels is demonstrated in the following R code.
To do this, we must first produce an illustrative data collection.
set.seed(123)  # makes the example reproducible; the printed values below match this seed
data <- data.frame(group = rep(LETTERS[1:3], each = 100),
                   value = c(rnorm(100),
                             rnorm(100, 3),
                             rnorm(100, - 5)))
group value
1 A -0.56047565
2 A -0.23017749
3 A 1.55870831
4 A 0.07050839
5 A 0.12928774
6 A 1.71506499
In this tutorial, we’ll use the ggplot2 software to plot our data. We must first install and load the ggplot2 package before we can use any of its features.
The code below can be used to create a boxplot without significance levels in the next step:
ggp_box <- ggplot(data,
                  aes(x = group,
                      y = value)) +
  geom_boxplot()
ggp_box
Assume we wish to determine whether the various boxplots (i.e., the various groups in our data) differ considerably.
Let’s also assume that we want to include the significance levels in our image.
The ggsignif package needs to be installed and loaded first.
With the help of the ggsignif package, created by Constantin Ahlmann-Eltze and Indrajeet Patil, you can add group-wise comparisons to your ggplots2 graphs.
We can use the geom_signif function (or, alternatively, the geom_stat function) as demonstrated below to achieve this.
We must specify the groups we want to compare when using the geom_signif function.
ggp_box +
geom_signif(comparisons = list(c("A", "B")))
By using the aforementioned syntax, a ggplot2 boxplot compares groups A and B and has a significant level.
We compared our groups in the previous graphic using the p-value. By setting the map signif_level option to TRUE, we can display significant stars instead.
ggp_box +
geom_signif(comparisons = list(c("A", "B")),
map_signif_level = TRUE)
Additionally, it is possible to compare numerous groups at once.
In order to accomplish this, we must expand our comparison list, and in order to prevent visual overlap, we must additionally define the points on the y-axis where we want to display the significance brackets.
ggp_box +
geom_signif(comparisons = list(c("A", "B"),
c("A", "C")),
map_signif_level = TRUE,
y_position = c(7.5, 9))
The significance levels’ layout can also be changed by the user using the geom_signif function.
As demonstrated here, we may alter the font size, line spacing, and color.
ggp_box +
geom_signif(comparisons = list(c("A", "B"),
c("A", "C")),
map_signif_level = TRUE,
y_position = c(7.5, 9.5),
col = 2,
size = 2,
textsize = 5) +
ylim(- 8, 12)
|
{"url":"https://datasciencetut.com/add-significance-level-and-stars-to-plot-in-r/","timestamp":"2024-11-06T20:00:33Z","content_type":"text/html","content_length":"111512","record_id":"<urn:uuid:d2332493-374f-435d-a016-015fd934f164>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00433.warc.gz"}
|
Probably Approximately Correct – The Dan MacKinlay stable of variably-well-consider’d enterprises
Probably Approximately Correct
November 25, 2014 — October 27, 2021
machine learning
A class of risk bounds, related in some way to concentration inequalities of statistical learning.
🏗 What is that way precisely?
Some historical notes: PAC learning was invented by Leslie Valiant in 1984, and it birthed a new subfield of computer science called computational learning theory and won Valiant some of computer
science’s highest awards. Since then there have been numerous modifications of PAC learning, and also models that are entirely different from PAC learning. One other goal of learning theorists
(as with computational complexity researchers) is to compare the power of different learning models.
1 PAC-Bayes
Alquier (2021):
Aggregated predictors are obtained by making a set of basic predictors vote according to some weights, that is, to some probability distribution. Randomized predictors are obtained by sampling in
a set of basic predictors, according to some prescribed probability distribution. Thus, aggregated and randomized predictors have in common that they are not defined by a minimization problem,
but by a probability distribution on the set of predictors. In statistical learning theory, there is a set of tools designed to understand the generalization ability of such procedures:
PAC-Bayesian or PAC-Bayes bounds. Since the original PAC-Bayes bounds of McAllester, these tools have been considerably improved in many directions (we will for example describe a simplified
version of the localization technique of Catoni that was missed by the community, and later rediscovered as “mutual information bounds”). Very recently, PAC-Bayes bounds received considerable
attention: for example, there was a workshop on PAC-Bayes at NIPS 2017, “(Almost) 50 Shades of Bayesian Learning: PAC-Bayesian trends and insights”, organized by B. Guedj, F. Bach and P. Germain.
One reason for this recent success is the successful application of these bounds to neural networks by Dziugaite and Roy. An elementary introduction to PAC-Bayes theory is still missing. This is
an attempt to provide such an introduction.
|
{"url":"https://danmackinlay.name/notebook/probably_approximately_correct.html","timestamp":"2024-11-10T21:04:26Z","content_type":"application/xhtml+xml","content_length":"31749","record_id":"<urn:uuid:08533af1-38f2-4039-be7a-425b4117046d>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00421.warc.gz"}
|
more from Keith Hossack
Single Idea 23626
[catalogued under 6. Mathematics / A. Nature of Mathematics / 5. The Infinite / h. Ordinal infinity]
Full Idea
The transfinite ordinal numbers are important in the theory of proofs, and essential in the theory of recursive functions and computability. Mathematics would be incomplete without them.
Gist of Idea
Transfinite ordinals are needed in proof theory, and for recursive functions and computability
Keith Hossack (Knowledge and the Philosophy of Number [2020], 10.1)
Book Reference
Hossack, Keith: 'Knowledge and the Philosophy of Number' [Routledge 2021], p.152
A Reaction
Hossack offers this as proof that the numbers are not human conceptual creations, but must exist beyond the range of our intellects. Hm.
Related Idea
Idea 23622 We can only mentally construct potential infinities, but maths needs actual infinities [Hossack]
|
{"url":"http://www.philosophyideas.com/search/response_philosopher_detail.asp?era_no=M&era=New%20millenium%20(2001-%20)&id=23626&PN=3605&order=chron&from=theme&no_ideas=29","timestamp":"2024-11-05T06:15:48Z","content_type":"application/xhtml+xml","content_length":"3838","record_id":"<urn:uuid:54371698-75d5-4eb9-a024-e5727ebe1e0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00416.warc.gz"}
|
2X X50. What one number can replace X? - Answers
How do you perform 2x plus 2y and raise the whole thing to the negative one?
The answer to your question is (2x + 2y)^-1 = 1/(2x + 2y)^1. When you raise a number to a negative power, you can rewrite it by dividing one by the original number with the negative sign dropped from
the exponent. Because the power here is 1, you can rewrite the answer again to 1/(2x + 2y) since any number raised to the power of 1 is simply the number itself. You can't add 2x and 2y because they
are two different variables.
|
{"url":"https://math.answers.com/math-and-arithmetic/2X_X50._What_one_number_can_replace_X","timestamp":"2024-11-10T08:41:53Z","content_type":"text/html","content_length":"161827","record_id":"<urn:uuid:583c1db3-b136-40f9-b24a-3b3edd0085dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00098.warc.gz"}
|
NCERT Solutions for Class 8 Maths Chapter 13 Direct and Indirect Proportions Ex 13.2
NCERT Solutions for Class 8 Maths Chapter 13 Direct and Indirect Proportions Ex 13.2 are part of NCERT Solutions for Class 8 Maths. Here we have given NCERT Solutions for Class 8 Maths Chapter 13
Direct and Indirect Proportions Ex 13.2.
Board CBSE
Textbook NCERT
Class Class 8
Subject Maths
Chapter Chapter 13
Chapter Name Direct and Indirect Proportions
Exercise Ex 13.2
Number of Questions Solved 11
Category NCERT Solutions
NCERT Solutions for Class 8 Maths Chapter 13 Direct and Indirect Proportions Ex 13.2
Question 1.
Which of the following are in inverse proportion?
(i) The number of workers on a job and the time to complete the job.
(ii) The time is taken for a journey and the distance traveled in a uniform speed.
(iii) Area of cultivated land and the crop harvested.
(iv) The time is taken for a fixed journey and the speed of the vehicle.
(v) The population of a country and the area of land per person.
(i) The number of workers on jobs and the time to complete the job are in inverse proportion.
(ii) The time is taken for a journey and the distance traveled in a uniform speed are not in inverse proportion.
(iii) Area of cultivated land and the crop harvested are not in inverse proportion.
(iv) The time taken for a fixed journey and the speed of the vehicle are in inverse proportion.
(v) The population of a country and the area of land per person are in inverse proportion.
Question 2.
In a Television game show, the prize money of 1,00,000 is to be divided equally amongst the winners. Complete the following table and find whether the prize money given to an individual winner is
directly or inversely proportional to the number of winners.
Question 3.
Rehman is making a wheel using spokes. He wants to fix equal spokes in such a way that the angles between any pair of consecutive spokes are equal. Help him by completing the following table.
(i) Are the number of spokes and the angles formed between the pairs of consecutive spokes in verse proportion?
(ii) Calculate the angle between a pair of consecutive spokes on a wheel with 15 spokes.
(iii) How many spokes would be needed, if the angle between a pair of consecutive spokes is 40°?
Let the angle (in degree) between a pair of consecutive spokes be \({ y }_{ 3 }\), \({ y }_{ 4 }\) and \({ y }_{ 5 }\) respectively. Then,
Question 4.
If a box of sweets is divided among 24 children, they will get 5 sweets each. How many would each get, if the number of the children is reduced by 4?
Suppose that each would get \({ y }_{ 2 }\) sweets.
Thus, we have the following table.
Question 5.
A farmer has enough food to feed 20 animals in his cattle for 6 days. How long would the food last if there were 10 more animals in his cattle?
Suppose that the food would last for \({ y }_{ 2 }\) days. We have the following table:
Question 6.
A contractor estimates that 3 persons could rewire Jasminder’s house in 4 days. If, he uses 4 persons instead of three, how long should they take to complete the job?
Suppose that they take \({ y }_{ 2 }\) days to complete the job. We have the following table
Question 7.
A batch of bottles was packed in 25 boxes with 12 bottles in each box. If the same batch is packed using 20 bottles in each box, how many boxes would be filled?
Suppose that \({ y }_{ 2 }\) boxes would be filled. We have the following table:
Question 8.
A factory requires 42 machines to produce a given number of articles in 63 days. How many machines would be required to produce the same number of articles in 54 days?
Suppose that \({ x }_{ 2 }\) machines would be required. We have the following table:
Question 9.
A car takes 2 hours to reach a destination by traveling at a speed of 60 km/h. How long will it take when the car travels at the speed of 80 km/h?
Let it take \({ y }_{ 2 }\) hours. We have the following table:
Question 10.
Two persons could fit new windows in a house in 3 days.
(i) One of the persons fell ill before the work started. How long would the job take now?
(ii) How many persons would be needed to fit the windows in one day?
(i) Let the job would take \({ y }_{ 2 }\) days. We have the following table:
Clearly, more the number of persons, lesser would be the number of days to do the job. So, the number of persons and number of days vary in inverse proportion.
So, 2 x 3 = 1 x \({ y }_{ 2 }\)
⇒ \({ y }_{ 2 }\) = 6
Thus, the job would now take 6 days.
(ii) Let \({ y }_{ 2 }\) persons be needed. We have the following table:
Clearly, more the number of persons, lesser would be the number of days to do the job. So, the number of persons and number of days vary in inverse proportion.
So, 3 x 2 = 1 x \({ y }_{ 2 }\)
⇒ \({ y }_{ 2 }\) = 6
Thus, 6 persons would be needed.
Question 11.
A school has 8 periods a day, each of 45 minutes duration. How long would each period be, if the school has 9 periods a day, assuming the number of school hours to be the same?
Let each period be \({ y }_{ 2 }\) minutes long.
We have the following table:
We note that more the number of periods, lesser would be the length of each period. Therefore, this is a case of inverse proportion.
So, 8 x 45 = 9 x \({ y }_{ 2 }\)
⇒ \({ y }_{ 2 }=\frac { 8\times 45 }{ 9 } \)
⇒ \({ y }_{ 2 }\) = 40
Hence, each period would be 40 minutes long.
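Each of the exercises above follows the same inverse-proportion pattern: the product of the two quantities stays constant. A quick sketch (the helper name is mine, not NCERT's) confirms the answers:

```python
# Inverse proportion: x1 * y1 = x2 * y2, so y2 = x1 * y1 / x2.
def inverse_proportion(x1, y1, x2):
    return x1 * y1 / x2

print(inverse_proportion(3, 4, 4))     # Q.6: 3.0 days
print(inverse_proportion(25, 12, 20))  # Q.7: 15.0 boxes
print(inverse_proportion(42, 63, 54))  # Q.8: 49.0 machines
print(inverse_proportion(60, 2, 80))   # Q.9: 1.5 hours
print(inverse_proportion(8, 45, 9))    # Q.11: 40.0 minutes
```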
|
{"url":"https://www.learninsta.com/ncert-solutions-for-class-8-maths-chapter-13-ex-13-2/","timestamp":"2024-11-04T15:05:15Z","content_type":"text/html","content_length":"72655","record_id":"<urn:uuid:27574103-36ed-4536-a227-9b4c1cc9d3fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00325.warc.gz"}
|
Index limitations and workarounds - Knowledge Base
In this article we discuss index providers and what limitations and workarounds there are.
There are two index types in Neo4j, btree and full-text. This article targets btree indexes, which until 4.0 were called schema indexes. This is the normal index you get when you create an index or an index-backed constraint through Cypher.
CREATE INDEX `My index` FOR (p:Person) ON (p.name)
All indexes are backed by an index provider. Full-text indexes are backed by fulltext-1.0 and btree indexes are backed by native-btree-1.0 (default) or lucene+native-3.0.
The table below lists the available btree index providers and their support for native indexing:
Index provider Value types supported for native indexing
native-btree-1.0 Native for all types
lucene+native-3.0 Lucene for single-property strings, native for the rest
Key size limit
The native-btree-1.0 index provider has a key size limit of 8167 bytes. This limit manifests itself in different ways depending on whether the key holds a single string, a single array, or multiple values (i.e. whether the key belongs to a composite index).
If a transaction reaches the key size limit for one or more of its changes, that transaction will fail before committing any changes. If the limit is reached during index population, the resulting index will be in a failed state and thus not usable for any queries.
See below for details on how to calculate key sizes for native indexes.
If keys do not fit within this limit, a full-text index is most likely a better fit for the use case; if that is not the case, the lucene+native-3.0 provider has a key size limit of 32766 bytes.
Contains and ends with queries
The native-btree-1.0 index provider has limited support for ENDS WITH and CONTAINS queries. These queries will not be able to do an optimized search the way queries that use STARTS WITH, = and <> do. Instead, the index result will be a stream from an index scan with filtering.
For single-property strings, lucene+native-3.0 can be used instead, which has full support for both ENDS WITH and CONTAINS.
To create an index with a different provider than the default, the easiest way is to use the db.createIndex, db.createUniquePropertyConstraint or db.createNodeKey procedures, to which you can provide the index provider name. Another option is to configure the default index provider using dbms.index.default_schema_provider. Note that a restart is necessary for this config option to take effect.
Key size calculation
This part describes how to calculate key sizes for native indexes.
As described in the section about key size there are limitations to how large the key size can be when using the native-btree-1.0 index provider. This appendix describes in detail how the sizes can be calculated.
Element size calculations
It is useful to know how to calculate the size of a single value when calculating the total size of the resulting key. In some cases those entry sizes are different based on whether the entry is in an array or not.
Table 1. Element sizes
Type elementSize[ifSingle] * elementSize[ifInArray] **
Byte 3 1
Short 4 2
Int 6 4
Long 10 8
Float 6 4
Double 10 8
Boolean 2 1
Date 9 8
Time 13 12
LocalTime 9 8
DateTime 17 16
LocalDateTime 13 12
Duration 29 28
Period 29 28
Point (Cartesian) 28 24
Point (Cartesian 3D) 36 32
Point (WGS-84) 28 24
Point (WGS-84 3D) 36 32
String 3 + utf8StringSize *** 2 + utf8StringSize ***
Array † Nested arrays are not supported
* elementSize[ifSingle] denotes the size of an element if it is a single entry.
** elementSize[ifInArray] denotes the size of an element if it is part of an array.
*** utf8StringSize is the size of the String in bytes when encoded with UTF8.
† elementSize[Array] is the size of an array element, and is calculated using the following formulas:
• If the data type of the array is a numeric data type:
elementSize[Array] = 4 + ( arrayLength * elementSize[ifInArray] )
• If the data type of the array is a geometry data type:
elementSize[Array] = 6 + ( arrayLength * elementSize[ifInArray] )
• If the data type of the array is non-numeric:
elementSize[Array] = 3 + ( arrayLength * elementSize[ifInArray] )
String encoding with UTF8
It is worth noting that common characters, such as letters, digits and some symbols, translate into one byte per character. Non-Latin characters may occupy more than one byte per character.
Therefore, for example, a string that contains 100 characters or less may be longer than 100 bytes if it contains multi-byte characters.
More specifically, the relevant length in bytes of a string is when encoded with UTF8.
Example 1. Calculate the size of a string when used in an index
Consider the string His name was Måns Lööv.
This string has 19 characters that each occupies 1 byte. Additionally, there are 3 characters that each occupy 2 bytes per character, which add 6 to the total. Therefore, the size of the String in
bytes when encoded with UTF8, utf8StringSize, is 25 bytes.
If this string is part of a native index, we get:
elementSize = 2 + utf8StringSize = 2 + 25 = 27 bytes
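As a quick check, the same numbers can be reproduced in Python (the variable name mirrors the utf8StringSize term; the single-value size from Table 1 is printed alongside for comparison):

```python
# The three non-Latin characters (å, ö, ö) take 2 bytes each in UTF-8,
# the remaining 19 characters take 1 byte each.
s = "His name was Måns Lööv"
utf8_string_size = len(s.encode("utf-8"))

print(utf8_string_size)       # 25 bytes
print(2 + utf8_string_size)   # 27, the elementSize used in the example above
print(3 + utf8_string_size)   # 28, elementSize for a single (non-array) string per Table 1
```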
Example 2. Calculate the size of an array when used in an index
Consider the array [19, 84, 20, 11, 54, 9, 59, 76, 82, 27, 9, 35, 56, 80, 65, 95, 16, 91, 61, 11].
This array has 20 elements of the type Int. Since they are in an array, we need to use elementSize[ifInArray], which is 4 for Int.
Applying the formula for arrays of numeric data types, we get:
elementSize[Array] = 4 + ( arrayLength * elementSize[ifInArray] ) = 4 + ( 20 * 4 ) = 84 bytes
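The array formula lends itself to a small sketch as well; the dictionary below copies the elementSize[ifInArray] column of Table 1 for a few numeric types (the names are mine):

```python
# elementSize[ifInArray] per Table 1, numeric types only.
ELEMENT_SIZE_IN_ARRAY = {"Byte": 1, "Short": 2, "Int": 4, "Long": 8, "Float": 4, "Double": 8}

def numeric_array_element_size(array_length, dtype):
    # elementSize[Array] = 4 + arrayLength * elementSize[ifInArray] for numeric types
    return 4 + array_length * ELEMENT_SIZE_IN_ARRAY[dtype]

print(numeric_array_element_size(20, "Int"))                 # 84, matching Example 2
print(1 + numeric_array_element_size(2040, "Int") <= 8167)   # True: 2040 Ints fit
print(1 + numeric_array_element_size(2041, "Int") <= 8167)   # False: 2041 do not
```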
Non-composite indexes
The only way that a non-composite index can violate the size limit is if the value is a long string or a large array.
Strings in non-composite indexes have a key size limit of 8164 bytes.
The following formula is used for arrays in non-composite indexes:
1 + elementSize[Array] <= 8167
Here elementSize[Array] is the number calculated from Element sizes.
If we count backwards, we can get the exact array length limit for each data type:
• maxArrayLength = FLOOR( ( 8167 - 4 ) / elementSize[ifInArray] ) for numeric types.
• maxArrayLength = FLOOR( ( 8167 - 6 ) / elementSize[ifInArray] ) for geometry types.
• maxArrayLength = FLOOR( ( 8167 - 3 ) / elementSize[ifInArray] ) for non-numeric types.
These calculations result in the table below:
Table 2. Maximum array length, per data type
Data type maxArrayLength
Byte 8163
Short 4081
Int 2040
Long 1020
Float 2040
Double 1020
Boolean 8164
String See Maximum array length, examples for strings
Date 1020
Time 680
LocalTime 1020
DateTime 510
LocalDateTime 680
Duration 291
Period 291
Point (Cartesian) 340
Point (Cartesian 3D) 255
Point (WGS-84) 340
Point (WGS-84 3D) 255
Note that in most cases, Cypher will use Long or Double when working with numbers.
Properties with the type of String are a special case because they are dynamically sized. The table below shows the maximum number of array elements in an array, based on certain string sizes:
Table 3. Maximum array length, examples for strings
String size maxArrayLength
The table can be used as a reference point. For example: if we know that all the strings in an array occupy 100 bytes or less, then arrays of length 80 or lower will definitely fit.
Composite indexes
This limitation only applies if one or more of the following criteria is met:
• Composite index contains strings
• Composite index contains arrays
• Composite index targets many different properties (>50)
We denote a targeted property of a composite index a slot. For example, an index on :Person(firstName, surName, age) has three properties and thus three slots.
In the index, each slot is filled by an element. In order to calculate the size of the index, we must have the size of each element in the index, i.e. the elementSize, as calculated in the previous sections.
The following equation can be used to verify that a specific composite index entry is within bounds:
sum( elementSize ) <= 8167
Example 3. The size of a composite index containing strings
Consider a composite index of five strings that each can occupy the maximum of 500 bytes.
Using the equation above we get:
sum( elementSize ) = 5 * ( 3 + 500 ) = 2515 < 8167
We are well within bounds for our composite index.
Example 4. The size of an index containing arrays
Consider a composite index of 10 arrays of type Float that each have a length of 250.
First we calculate the size of each array element:
elementSize[Array] = 4 + ( arrayLength * elementSize[ifInArray] ) = 4 + ( 250 * 4 ) = 1004
Then we calculate the size of the composite index:
sum( elementSize[Array] ) = 10 * 1004 = 10040 > 8167
This index key will exceed the key size limit for native indexes.
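Both composite examples can be checked against the bound in a couple of lines (the constant name is mine):

```python
KEY_SIZE_LIMIT = 8167

# Example 3: five single-value strings of 500 UTF-8 bytes each (3 + size per string)
composite_strings = 5 * (3 + 500)
# Example 4: ten Float arrays of length 250 (elementSize[ifInArray] = 4 for Float)
composite_arrays = 10 * (4 + 250 * 4)

print(composite_strings, composite_strings <= KEY_SIZE_LIMIT)  # 2515 True
print(composite_arrays, composite_arrays <= KEY_SIZE_LIMIT)    # 10040 False
```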
|
{"url":"https://neo4j.com/developer/kb/index-limitations-and-workaround/","timestamp":"2024-11-05T21:50:13Z","content_type":"text/html","content_length":"57754","record_id":"<urn:uuid:deb6d371-7bd3-4272-9e71-bd2591f1174f>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00423.warc.gz"}
|
Maths Archives - Page 4 of 33 - Double Helix
Difficulty: Fun
Jane has some Australian silver coins in her pocket. When buying an apple, she realises she has more than $1 in silver coins, but can’t make exactly $1 with her coins. How is this possible? (Silver Australian coins come in 5c, 10c, 20c, and 50c.)
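For readers who prefer to let the computer do the searching, a brute-force sketch over small pockets of coins follows; note that running it spoils the puzzle by printing an answer:

```python
# Search small multisets of silver coins worth more than 100c where no
# sub-collection of the coins sums to exactly 100c.
from itertools import chain, combinations, combinations_with_replacement

COINS = (5, 10, 20, 50)

def subset_sums(coins):
    # every sum obtainable from any sub-collection of the coins (including none)
    return {sum(c) for c in chain.from_iterable(
        combinations(coins, r) for r in range(len(coins) + 1))}

solutions = [pocket
             for n in range(1, 5)
             for pocket in combinations_with_replacement(COINS, n)
             if sum(pocket) > 100 and 100 not in subset_sums(pocket)]

print(solutions[0])  # the smallest qualifying pocket of coins
```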
|
{"url":"https://blog.doublehelix.csiro.au/category/maths/page/4/","timestamp":"2024-11-14T08:36:08Z","content_type":"text/html","content_length":"81757","record_id":"<urn:uuid:c7dae55b-1c0c-49d2-b9fa-e99042f4b19f>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00172.warc.gz"}
|
SPPU Finite Element Analysis - December 2016 Exam Question Paper | Stupidsid
Total marks: --
Total time: --
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
Solve any one question from Q.1(a,b)& Q.2(a,b)
1(a) Explain the step by step procedure for FEA and comment on convergence based on element size.
6 M
1(b) Explain concept of Plane Stress with appropriate example
4 M
2(a) Write down the difference between Weighted Residual Method and Weak Formulations.
6 M
2(b) Explain LST (Linear Strain Triangle Element) Element.
4 M
Solve any one question from Q.3 & Q.4(a,b)
3 Determine the forces in the members of the truss shown in Fig. Take E= 200GPa. A= 2000mm^2.
10 M
4(a) Determine the nodal displacement, element stresses and support reactions of the axially loaded bar as shown in Fig. Take E = 200GPa and P =30kN
6 M
4(b) Write a note on Lagrange interpolation functions used in FEA formulations.
4 M
Solve any one question from Q.5(a,b,c,d) & Q.6(a,b)
5(a) Write a note on isoparametric formulations and how the geometric as well as field variable variation is taken into account?
6 M
5(b) Determine the Cartesian coordinates of the point P(ζ = 0.5, η = 0.6).
4 M
5(c) Write short notes on
i) Uniqueness of mapping of isoparametric elements.
ii) Jacobian matrix.
4 M
5(d) State and explain the three basic laws on which isoparametric concept is developed.
4 M
6(a) Write short notes on
i) Uniqueness of mapping of isoparametric elements.
ii) Jacobian matrix.
8 M
6(b) For the element shown in Fig., assemble the Jacobian matrix and strain displacement matrix for the Gaussian point (0.7, 0.5).
10 M
Solve any one question from Q.7(a,b) & Q.8(a,b)
7(a) Write down the governing equation of steady state heat transfer and also write down the elemental stiffness matrix and compare with the Bar element.
6 M
7(b) Consider a brick wall of thickness 0.6 m, k = 0.75 W/m°K. The inner surface is at 15°C and the outer surface is exposed to cold air at -15°C. The heat transfer coefficient associated with the outside surface is 40 W/m²°K. Determine the steady state temperature distribution within the wall and also the heat flux through the wall. Use two elements and obtain the solution.
10 M
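As a sanity check, Q.7(b) above can be solved in a few lines with two equal linear elements; the sketch below is not part of the question paper, and the node numbering, variable names, and use of Cramer's rule for the reduced 2x2 system are my own choices.

```python
# 1-D steady-state conduction: fixed temperature at node 1 (inner surface),
# convection boundary at node 3 (outer surface), two 0.3 m linear elements.
k, L, h = 0.75, 0.6, 40.0          # W/m K, m, W/m^2 K
T_in, T_air = 15.0, -15.0          # deg C

ke = k / (L / 2)                   # conductance of each element, k/L_e

# Reduced 2x2 system in the unknowns (T2, T3) after applying T1 = T_in:
#   [ 2*ke    -ke   ] [T2]   [ ke*T_in ]
#   [ -ke   ke + h  ] [T3] = [ h*T_air ]
a11, a12, b1 = 2 * ke, -ke, ke * T_in
a21, a22, b2 = -ke, ke + h, h * T_air

det = a11 * a22 - a12 * a21        # Cramer's rule for the 2x2 system
T2 = (b1 * a22 - a12 * b2) / det
T3 = (a11 * b2 - a21 * b1) / det

q = ke * (T_in - T2)               # heat flux through the wall, W/m^2
print(round(T2, 3), round(T3, 3), round(q, 2))  # 0.455 -14.091 36.36
```

The linear-element solution is exact here, matching the analytic resistance network (R = 0.6/0.75 + 1/40 = 0.825 m²K/W, q = 30/0.825 ≈ 36.36 W/m²).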
8(a) Heat is generated in a large plate (K=0.5W/m°C) at the rate of 2000W/m^3. The plate is 10cm thick. Outside surface of the plate is exposed to ambient air at 30°C with a convective heat transfer
coefficient of 40W/m^2C. Determine the temperature distribution in the wall.
10 M
8(b) Derive FEA stiffness matrix for Pin Fin Heat Transfer probelm.
6 M
Solve any one question from Q.9(a,b) & Q.10(a,b)
9(a) Write down Consistent Mass and Lumped Mass Matrix for
i) Bar Element
ii) Plane Stress Element
6 M
9(b) Find the natural frequencies of longitudinal vibrations of the same stepped shaft of areas A = 1200 mm^2 and 2A 2500 mm^2 and of equal lengths (L=1m), when it is constrained at one end, as shown
10 M
10(a) Explain the difference between consistent and lumped mass matrix techniques for modal analysis of a structure.
6 M
10(b) Find natural frequencies of longitudinal vibrations of the unconstrained stepped shaft of areas A and 2A and of equal lengths (L), as shown below.
10 M
More question papers from Finite Element Analysis
|
{"url":"https://stupidsid.com/previous-question-papers/download/finite-element-analysis-15718","timestamp":"2024-11-14T03:54:42Z","content_type":"text/html","content_length":"63554","record_id":"<urn:uuid:b301bf03-7c55-4284-80d4-8ff2f0f06f29>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00819.warc.gz"}
|
In a triangle, the lengths of two larger sides are 10 cm and 9 ... | Filo
In a triangle, the lengths of the two larger sides are 10 cm and 9 cm. If the angles of the triangle are in A.P., then the length of the third side is
Since the angles A, B, C of the triangle are in A.P., 2B = A + C. With A + B + C = 180°, this gives B = 60°. As 10 cm and 9 cm are the two larger sides, the side of length 9 cm is opposite the middle angle B. By the cosine rule, 9² = 10² + c² - 2·10·c·cos 60°, i.e. c² - 10c + 19 = 0, so the third side is c = 5 ± √6 cm.
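The two roots can be verified numerically; the check below (mine, not part of Filo's solution) confirms that for c = 5 ± √6 the angle opposite the 9 cm side is 60°, as the A.P. condition requires:

```python
# For both roots of c^2 - 10c + 19 = 0, the cosine rule gives the angle
# opposite the 9 cm side as exactly 60 degrees.
import math

angles = []
for c in (5 + math.sqrt(6), 5 - math.sqrt(6)):
    cos_B = (10**2 + c**2 - 9**2) / (2 * 10 * c)  # cosine rule, B opposite 9 cm
    angles.append(round(math.degrees(math.acos(cos_B))))

print(angles)  # [60, 60]
```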
Topic Trigonometric Functions
Subject Mathematics
Class Class 11
|
{"url":"https://askfilo.com/math-question-answers/in-a-triangle-the-lengths-of-two-larger-sides-are-10-mathrmcm-and-9-mathrm~cm-if","timestamp":"2024-11-05T10:49:07Z","content_type":"text/html","content_length":"370254","record_id":"<urn:uuid:f54d9243-0ff1-49cb-b49d-b54a896b79c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00820.warc.gz"}
|
Elements of Geometry
Elements of Geometry: Containing the First Six Books of Euclid, with a Supplement on the Quadrature of the Circle and the Geometry of Solids; to which are Added, Elements of Plane and Spherical
Popular passages
If two triangles have one angle of the one equal to one angle of the other and the sides about these equal angles proportional, the triangles are similar.
THE straight lines which join the extremities of two equal and parallel straight lines, towards the same parts, are also themselves equal and parallel. Let AB, CD be equal and parallel straight
lines, and joined towards the same parts by the straight lines AC, BD ; AC, BD are also equal and parallel.
Parallelograms upon the same base and between the same parallels, are equal to one another.
BG; and things that are equal to the same are equal to one another; therefore the straight line AL is equal to BC. Wherefore from the given point A a straight line AL has been drawn equal to the
given straight line BC.
If two triangles which have two sides of the one proportional to two sides of the other, be joined at one angle, so as to have their homologous sides parallel to one another ; the remaining sides
shall be in a straight line. Let ABC, DCE be two triangles which have the two sides BA, AC proportional to the two CD, DE, viz.
If, from the ends of the side of a triangle, there be drawn two straight lines to a point within the triangle, these shall be less than, the other two sides of the triangle, but shall contain a
greater angle.
FGL, have an angle in one equal to an angle in the other, and their sides about these equal angles proportionals ; the triangle ABE is equiangular (6.
If a straight line be divided into any two parts, the square of the whole line is equal to the squares of the two parts, together with twice the rectangle contained by the parts.
DEF, and be equal to it ; and the other angles of the one shall coincide with the remaining angles of the other and be equal to them, viz. the angle ABC to the angle DEF, and the angle ACB to DFE.
If a straight line be divided into two equal, and also into two unequal parts ; the squares on the two unequal parts are together double of the square on half the line, and of the square on the line
between the points of section.
|
{"url":"https://books.google.mk/books?id=ONo2AAAAMAAJ&dq=editions:LCCN03026786&output=html_text&lr=&as_brr=0","timestamp":"2024-11-10T07:48:07Z","content_type":"text/html","content_length":"50067","record_id":"<urn:uuid:cf17360b-f191-431d-bc4e-47021757d936>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00011.warc.gz"}
|
Advanced Techniques in Data Visualization with Python
Data visualization is an essential tool for understanding and communicating insights from data. Python, with its rich ecosystem of libraries, offers powerful tools for creating detailed, interactive,
and aesthetically pleasing visualizations. While basic charts like bar graphs, scatter plots, and line graphs can be generated easily using libraries like Matplotlib and Seaborn, advanced data
visualization techniques require a deeper understanding of both the tools and the data being represented.
This guide delves into advanced techniques for data visualization using Python, focusing on Matplotlib, Seaborn, and Plotly, and includes tips on creating interactive visualizations, handling large
datasets, and customizing plots for enhanced insight.
Why Use Python for Data Visualization?
Python’s flexibility and ease of use make it a go-to language for data visualization. Its libraries provide:
Extensive Customization: From simple to highly customized plots, Python libraries give you full control over aesthetics and details.
Interactivity: Tools like Plotly allow for interactive, web-based visualizations.
Integration with Data Processing: Python’s data handling libraries (like pandas and NumPy) seamlessly integrate with its visualization tools, making the process smooth and efficient.
Libraries Overview
1. Matplotlib
Matplotlib is the most fundamental plotting library in Python and provides building blocks for creating all kinds of visualizations.
2. Seaborn
Built on top of Matplotlib, Seaborn is a high-level interface for drawing attractive and informative statistical graphics.
3. Plotly
Plotly is used for creating interactive plots and can generate visualizations for web-based applications. It supports a wide variety of charts and is known for its flexibility.
To follow along with this guide, you will need basic knowledge of Python and the following libraries installed:
pip install matplotlib seaborn plotly pandas numpy
We will also use pandas for data manipulation and NumPy for numerical operations.
Advanced Visualization Techniques
1. Customizing Subplots with Matplotlib
When visualizing complex datasets, using multiple plots (subplots) in a single figure can help convey more information. Matplotlib offers great flexibility in managing subplots.
import matplotlib.pyplot as plt
import numpy as np

# Sample data
x = np.linspace(0, 10, 100)
y1 = np.sin(x)
y2 = np.cos(x)

# Create subplots
fig, ax = plt.subplots(2, 1, figsize=(8, 6))

# First subplot
ax[0].plot(x, y1, 'r-', label='sin(x)')
ax[0].set_title('Sine Wave')
ax[0].legend()

# Second subplot
ax[1].plot(x, y2, 'b--', label='cos(x)')
ax[1].set_title('Cosine Wave')
ax[1].legend()

# Adjust layout and show
plt.tight_layout()
plt.show()
Here, we have created a simple two-row subplot layout with sine and cosine waves. By using `plt.subplots()`, we can organize multiple plots in different configurations (e.g., grids or stacked layouts).
2. Pairplots and Heatmaps in Seaborn for Multivariate Data
Seaborn is great for handling multivariate data visualizations. Two of its most powerful features for advanced analysis are pair plots and heatmaps.
A pair plot shows pairwise relationships in a dataset. It’s particularly useful for understanding interactions between variables.
import seaborn as sns
import matplotlib.pyplot as plt

# Load the built-in Iris dataset
iris = sns.load_dataset('iris')

# Create pair plot
sns.pairplot(iris, hue='species')
plt.show()
The pairplot generates scatterplots for all pairs of variables and diagonal histograms for univariate distributions, with different colors representing species. It’s a great way to visualize
relationships in multivariate data.
Heatmaps allow you to visualize data in matrix form, where colors represent the magnitude of values.
# Compute the correlation matrix of the numeric columns
corr_matrix = iris.drop(columns='species').corr()

# Create heatmap
sns.heatmap(corr_matrix, annot=True, cmap='coolwarm', linewidths=0.5)
plt.show()
In this example, we generate a heatmap showing the correlation matrix of the Iris dataset. The `annot=True` argument annotates each cell with its correlation coefficient, making it easy to spot strong correlations at a glance.
3. Interactive Visualization with Plotly
For advanced, interactive visualizations, Plotly provides a powerful interface. Interactive charts are useful when dealing with large datasets or when sharing insights with non-technical audiences.
Interactive Line Plot
import plotly.graph_objs as go
import numpy as np

# Data for plotting
x = np.linspace(0, 10, 100)
y = np.sin(x)

# Create interactive line plot
fig = go.Figure()
fig.add_trace(go.Scatter(x=x, y=y, mode='lines', name='sin(x)'))
fig.update_layout(title='Interactive Sine Wave',
                  xaxis_title='x',
                  yaxis_title='sin(x)')
fig.show()
Here, we use Plotly to generate an interactive line plot. You can hover over data points for details, zoom in, or pan around the plot, making it more engaging and informative.
Interactive 3D Surface Plot
Plotly also supports 3D plots, which can be particularly useful for visualizing three-dimensional data or complex functions.
# Generate data
x = np.linspace(-5, 5, 50)
y = np.linspace(-5, 5, 50)
X, Y = np.meshgrid(x, y)
Z = np.sin(np.sqrt(X**2 + Y**2))

# Create a 3D surface plot
fig = go.Figure(data=[go.Surface(z=Z, x=X, y=Y)])
fig.update_layout(title='Interactive 3D Surface Plot')
fig.show()
In this example, we generate a 3D surface plot representing the function `sin(sqrt(x^2 + y^2))`. Users can rotate the plot and zoom in to explore the surface in detail.
4. Handling Large Datasets Efficiently
When dealing with large datasets, performance can become a bottleneck in data visualization. Python provides several techniques and libraries to handle large datasets efficiently:
Downsampling: Only plot a subset of your data points to reduce the load.
Dask: Use Dask to handle large datasets in parallel and avoid memory issues.
Example of downsampling:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Load a large dataset
large_data = pd.DataFrame({
    'x': np.random.rand(1000000),
    'y': np.random.rand(1000000)
})

# Downsample the data (plot only 1% of it)
downsampled_data = large_data.sample(frac=0.01)

plt.scatter(downsampled_data['x'], downsampled_data['y'], alpha=0.5)
plt.title('Scatter plot with downsampled data')
plt.show()
5. Customization for Better Insights
Advanced data visualizations often require highly customized designs for clarity and impact. Here are a few tips for better customization:
Annotations: Add annotations to highlight specific points or trends in the data.
plt.scatter(x, y1, label='sin(x)')
plt.annotate('Maximum Point', xy=(1.57, 1), xytext=(2, 1.5),
             arrowprops=dict(facecolor='black', shrink=0.05))
plt.show()
Themes: Use Seaborn’s built-in themes to make your plots more visually appealing.
sns.set_theme(style='darkgrid')  # apply a Seaborn theme; the style choice is illustrative
sns.lineplot(x=x, y=y1)
plt.show()
Logarithmic Scales: For datasets with a wide range of values, logarithmic scales can enhance visualization clarity.
plt.plot(x, y1)
plt.xscale('log')  # the scale call was missing from the snippet
plt.show()
6. Creating Dashboards
For professional use, data visualizations are often part of larger dashboards that allow users to filter data and generate reports dynamically. Plotly’s `Dash` is a library designed for building
web-based interactive dashboards.
Advanced data visualization with Python unlocks new ways to analyze, interpret, and present data. By mastering tools like Matplotlib, Seaborn, and Plotly, you can create complex, customized, and
interactive visualizations that offer deep insights into your data. Whether working with large datasets or crafting detailed reports, these techniques will enhance your ability to communicate
findings effectively and engage your audience.
|
{"url":"https://techdigitalminds.com/advanced-python-data-visualization-techniques/","timestamp":"2024-11-04T16:41:25Z","content_type":"text/html","content_length":"249656","record_id":"<urn:uuid:d07b248c-5061-4196-9814-b665e3eb6d31>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00391.warc.gz"}
|
Neural Variational Inference: Blackbox Mode – B.log
In the previous post we covered Stochastic VI: an efficient and scalable variational inference method for exponential family models. However, there're many more distributions than those belonging to
the exponential family. Inference in these cases requires a significant amount of model analysis. In this post we consider Black Box Variational Inference by Ranganath et al. This work, just as the previous one, comes from David Blei's lab — one of the leading groups in VI research. And, just for dessert, we'll touch upon another paper, which will finally introduce some neural networks into VI.
Blackbox Variational Inference
As we have learned so far, the goal of VI is to maximize the ELBO $\mathcal{L}(\Theta, \Lambda)$. When we maximize it by $\Lambda$, we decrease the gap between the marginal likelihood of the model
considered $\log p(x \mid \Theta)$, and when we maximize it by $\Theta$ we acltually fit the model. So let's concentrate on optimizing this objective:
$$ \mathcal{L}(\Theta, \Lambda) = \mathbb{E}_{q(z \mid x, \Lambda)} \left[\log p(x, z \mid \Theta) - \log q(z \mid x, \Lambda) \right] $$
Let's find gradients of this objective:
$$ \begin{aligned} \nabla_{\Lambda} \mathcal{L}(\Theta, \Lambda) &= \nabla_{\Lambda} \int q(z \mid x, \Lambda) \left[\log p(x, z \mid \Theta) - \log q(z \mid x, \Lambda) \right] dz \\ &= \int \nabla_{\Lambda} q(z \mid x, \Lambda) \left[\log p(x, z \mid \Theta) - \log q(z \mid x, \Lambda) \right] dz - \int q(z \mid x, \Lambda) \nabla_{\Lambda} \log q(z \mid x, \Lambda) dz \\ &= \mathbb{E}_{q} \left[\frac{\nabla_{\Lambda} q(z \mid x, \Lambda)}{q(z \mid x, \Lambda)} \log \frac{p(x, z \mid \Theta)}{q(z \mid x, \Lambda)} \right] - \int q(z \mid x, \Lambda) \frac{\nabla_{\Lambda} q(z \mid x, \Lambda)}{q(z \mid x, \Lambda)} dz \\ &= \mathbb{E}_{q} \left[\nabla_{\Lambda} \log q(z \mid x, \Lambda) \log \frac{p(x, z \mid \Theta)}{q(z \mid x, \Lambda)} \right] - \int \nabla_{\Lambda} q(z \mid x, \Lambda) dz \\ &= \mathbb{E}_{q} \left[\nabla_{\Lambda} \log q(z \mid x, \Lambda) \log \frac{p(x, z \mid \Theta)}{q(z \mid x, \Lambda)} \right] - \nabla_{\Lambda} \overbrace{\int q(z \mid x, \Lambda) dz}^{=1} \\ &= \mathbb{E}_{q} \left[\nabla_{\Lambda} \log q(z \mid x, \Lambda) \log \frac{p(x, z \mid \Theta)}{q(z \mid x, \Lambda)} \right] \end{aligned} $$
In statistics $\nabla_\Lambda \log q(z \mid x, \Lambda)$ is known as the score function. For more on this "trick" see a blogpost by Shakir Mohamed. In many cases of practical interest $\log p(x, z \mid \Theta)$ is too complicated to compute this expectation in closed form. Recall that we have already used stochastic optimization successfully, so we can settle for just an estimate of the true gradient. We
get one by approximating the expectation using Monte-Carlo estimates using $L$ samples $z^{(l)} \sim q(z \mid x, \Lambda)$ (in practice we sometimes use just $L=1$ sample. We expect correct averaging
to happen automagically due to use of minibatches):
$$ \nabla_{\Lambda} \mathcal{L}(\Theta, \Lambda) \approx \frac{1}{L} \sum_{l=1}^L \nabla_{\Lambda} \log q(z^{(l)} \mid x, \Lambda) \log \frac{p(x, z^{(l)} \mid \Theta)}{q(z^{(l)} \mid x, \Lambda)} $$
For model parameters $\Theta$ gradients look even simpler, as we don't need to differentiate w.r.t. expectation distribution's parameters:
$$ \nabla_{\Theta} \mathcal{L}(\Theta, \Lambda) = \mathbb{E}_{q} \nabla_{\Theta} \log p(x, z \mid \Theta) \approx \frac{1}{L} \sum_{l=1}^L \nabla_{\Theta} \log p(x, z^{(l)} \mid \Theta) $$
We can even "naturalize" these gradients by premultiplying by the inverse Fisher Information Matrix $\mathcal{I}(\Lambda)^{-1}$. And that's it! Much simpler than before, right? Of course, there's no
free lunch, so there must be a catch... And there is: performance of stochastic optimization methods crucially depends on the variance of gradient estimators. It makes perfect sense: the higher the
variance — the less information about the step direction we get. And unfortunately, in practice the aforementioned estimator based on the score function has impractically high variance. Luckily, in
the Monte Carlo community many variance reduction techniques are known; we now describe some of them.
The first technique we'll describe is Rao-Blackwellization. The idea is simple: if it's possible to compute the expectation w.r.t. some of random variables, you should do it. If you think of it, it's
an obvious advice as you essentially reduce amount of randomness in your Monte Carlo estimates. But let's put it more formally: we use chain rule to rewrite joint expectation as marginal expectation
of conditional one:
$$ \mathbb{E}_{X, Y} f(X, Y) = \mathbb{E}_X \left[ \mathbb{E}_{Y \mid X} f(X, Y) \right] $$
Let's see what happens with variance (in scalar case) when we estimate expectation of $\mathbb{E}_{Y \mid X} f(X, Y)$ instead of expectation of $f(X, Y)$:
$$ \begin{aligned} \text{Var}_X(\mathbb{E}_{Y \mid X} f(X, Y)) &= \mathbb{E}_X (\mathbb{E}_{Y \mid X} f(X, Y))^2 - (\mathbb{E}_{X, Y} f(X, Y))^2 \\ &= \text{Var}_{X,Y}(f(X, Y)) - \mathbb{E}_X \left(\mathbb{E}_{Y \mid X} f(X, Y)^2 - (\mathbb{E}_{Y \mid X} f(X, Y))^2 \right) \\ &= \text{Var}_{X,Y}(f(X, Y)) - \mathbb{E}_X \text{Var}_{Y\mid X} (f(X, Y)) \end{aligned} $$
This formula says that Rao-Blackwellizing an estimator reduces its variance by $\mathbb{E}_X \text{Var}_{Y\mid X} (f(X, Y))$. Indeed, you can think of this term as a measure of how much information $Y$ contains about $X$ that's relevant to computing $f(X, Y)$. Suppose $Y = X$: then you have $\mathbb{E}_X f(X, X)$, and taking expectation w.r.t. $Y$ does not reduce the amount of randomness in the estimator. And this is what the formula tells us, as $\text{Var}_{Y \mid X} f(X, Y)$ would be 0 in this case. Here's another example: suppose $f$ does not use $X$ at all: then only randomness in $Y$ affects the estimate, and after Rao-Blackwellization we expect the variance to drop to 0. And the formula agrees with our expectations, as $\mathbb{E}_X \text{Var}_{Y \mid X} f(X, Y)
= \text{Var}_Y f(X, Y)$ for any $X$ since $f(X, Y)$ does not depend on $X$.
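The variance reduction from Rao-Blackwellization is easy to check numerically. The following sketch (my own toy example, not from the post) uses $X \sim N(0,1)$, $Y \mid X \sim N(X,1)$ and $f(X,Y) = Y^2$, for which the conditional expectation $\mathbb{E}[Y^2 \mid X] = X^2 + 1$ is available in closed form:

```python
import random

random.seed(0)
N = 100_000

# X ~ N(0,1), Y | X ~ N(X,1); estimate E[f(X,Y)] with f(X,Y) = Y^2.
# The Rao-Blackwellized version integrates Y out analytically:
# E[Y^2 | X] = X^2 + 1, so we average g(X) = X^2 + 1 instead.
plain, rb = [], []
for _ in range(N):
    x = random.gauss(0, 1)
    y = random.gauss(x, 1)
    plain.append(y * y)      # naive estimator samples f(X, Y)
    rb.append(x * x + 1)     # Rao-Blackwellized samples E[f | X]

def mean(v): return sum(v) / len(v)
def var(v):
    m = mean(v)
    return sum((u - m) ** 2 for u in v) / len(v)

# Both target E[Y^2] = Var(Y) = 2, but the conditional-expectation
# estimator has strictly smaller variance (8 vs. 2 in closed form).
print(mean(plain), mean(rb))
print(var(plain), var(rb))
```

Both means land near 2, while the sample variance drops from about 8 to about 2, matching the formula's prediction.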
The next technique is Control Variates, which is slightly less intuitive. The idea is that we can subtract a zero-mean function $h(X)$ that preserves the expectation but reduces the variance. Again, for the scalar case:
$$ \text{Var}(f(X) - \alpha h(X)) = \text{Var}(f(X)) - 2 \alpha \text{Cov}(f(X), h(X)) + \alpha^2 \text{Var}(h(X)) $$
Minimizing over $\alpha$ gives the optimal $\alpha^* = \frac{\text{Cov}(f(X), h(X))}{\text{Var}(h(X))}$. This formula reflects an obvious fact: if we want to reduce the variance, $h(X)$ must be correlated with $f(X)$. The sign of the correlation does not matter, as $\alpha^*$ will adjust. By the way, in reinforcement learning the subtracted term is called a baseline.
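Here is a minimal numeric illustration of a control variate (again a toy example of my own): estimating $\mathbb{E}[e^X]$ for $X \sim \text{Uniform}(0,1)$ with the zero-mean control $h(X) = X - 1/2$, using a sample-based estimate of $\alpha^*$:

```python
import math
import random

random.seed(0)
N = 100_000
xs = [random.random() for _ in range(N)]   # X ~ Uniform(0, 1)
fs = [math.exp(x) for x in xs]             # target: E[e^X] = e - 1
hs = [x - 0.5 for x in xs]                 # control variate, E[h(X)] = 0

def mean(v): return sum(v) / len(v)
def var(v):
    m = mean(v)
    return sum((u - m) ** 2 for u in v) / len(v)
def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

alpha = cov(fs, hs) / var(hs)              # sample estimate of alpha*
gs = [fi - alpha * hi for fi, hi in zip(fs, hs)]

print(mean(fs), mean(gs))   # both near e - 1 = 1.71828...
print(var(fs), var(gs))     # the controlled estimator has lower variance
```

Because $e^X$ and $X$ are strongly correlated on $(0,1)$, the variance drops by more than an order of magnitude while the mean is unchanged.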
As we have already learned, $\mathbb{E}_{q(z \mid x, \Lambda)} \nabla_\Lambda \log q(z \mid x, \Lambda) = 0$, so the score function is a good candidate for $h(X)$. Therefore our estimates become
$$ \nabla_{\Lambda} \mathcal{L}(\Theta, \Lambda) \approx \frac{1}{L} \sum_{l=1}^L \nabla_{\Lambda} \log q(z^{(l)} \mid x, \Lambda) \circ \left(\log \frac{p(x, z^{(l)} \mid \Theta)}{q(z^{(l)} \mid x, \Lambda)} - \alpha^* \right) $$
where $\circ$ is pointwise multiplication and $\alpha^*$ is a vector of $|\Lambda|$ components, with $\alpha^*_i$ being a baseline for the variational parameter $\Lambda_i$:
$$ \alpha^*_i = \frac{\text{Cov}\left(\nabla_{\Lambda_i} \log q(z \mid x, \Lambda)\left( \log p(x, z \mid \Theta) - \log q(z \mid x, \Lambda) \right),\ \nabla_{\Lambda_i} \log q(z \mid x, \Lambda)\right)}{\text{Var}\left(\nabla_{\Lambda_i} \log q(z \mid x, \Lambda)\right)} $$
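A toy sketch of a score-function gradient estimator with a score-based baseline (an illustrative setup of my own, not a model from the post): take $q = N(\mu, 1)$ and $f(z) = z^2$, so the true gradient is $\frac{d}{d\mu}\mathbb{E}[z^2] = 2\mu$, and the score $z - \mu$ serves as the control variate:

```python
import random

random.seed(0)
mu, N = 1.0, 100_000

# Score-function (REINFORCE) estimate of d/dmu E_{z ~ N(mu,1)}[z^2],
# whose true value is 2*mu. The score is grad_mu log q(z) = z - mu.
zs = [random.gauss(mu, 1.0) for _ in range(N)]
score = [z - mu for z in zs]
g = [s * z * z for s, z in zip(score, zs)]   # naive estimator samples

def mean(v): return sum(v) / len(v)
def var(v):
    m = mean(v)
    return sum((u - m) ** 2 for u in v) / len(v)
def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# E[score] = 0, so the score itself is a valid control variate; use the
# sample-optimal coefficient alpha* = Cov(g, score) / Var(score).
alpha = cov(g, score) / var(score)
g_cv = [gi - alpha * si for gi, si in zip(g, score)]

print(mean(g), mean(g_cv))   # both near 2*mu = 2.0
print(var(g), var(g_cv))     # the baseline shrinks the variance
```

In this setup the baseline roughly halves the estimator variance (30 vs. 14 in closed form at $\mu = 1$) without biasing the gradient.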
Neural Variational Inference and Learning
Hoooray, neural networks! In this section I'll briefly describe a variance reduction technique introduced by A. Mnih and K. Gregor in Neural Variational Inference and Learning in Belief Networks. The idea is surprisingly simple: why not learn a baseline $\alpha(x)$ using a neural network?
$$ \nabla_{\Lambda} \mathcal{L}(\Theta, \Lambda) \approx \frac{1}{L} \sum_{l=1}^L \nabla_{\Lambda} \log q(z^{(l)} \mid x, \Lambda) \circ \left(\log \frac{p(x, z^{(l)} \mid \Theta)}{q(z^{(l)} \mid x, \Lambda)} - \alpha^* - \alpha(x) \right) $$
where $\alpha(x)$ is a neural network trained to minimize
$$ \mathbb{E}_{q(z \mid x, \Lambda)} \left( \log \frac{p(x, z \mid \Theta)}{q(z \mid x, \Lambda)} - \alpha^* - \alpha(x) \right)^2 $$
What's the motivation for this objective? The gradient step along $\nabla_\Lambda \mathcal{L}(\Theta, \Lambda)$ can be seen as pushing $q(z\mid x, \Lambda)$ towards $p(x, z \mid \Theta)$. Since $q$ has to be normalized like any other proper distribution, it's actually pushed towards the true posterior $p(z \mid x, \Theta)$. We can rewrite the gradient $\nabla_\Lambda \mathcal{L}(\Theta, \Lambda)$ as
$$ \begin{aligned} \nabla_{\Lambda} \mathcal{L}(\Theta, \Lambda) &= \mathbb{E}_{q} \left[\nabla_{\Lambda} \log q(z \mid x, \Lambda) \left(\log p(x, z \mid \Theta) - \log q(z \mid x, \Lambda) \right) \right] \\ &= \mathbb{E}_{q} \left[\nabla_{\Lambda} \log q(z \mid x, \Lambda) \left(\log p(z \mid x, \Theta) - \log q(z \mid x, \Lambda) + \log p(x \mid \Theta) \right) \right] \end{aligned} $$
While this additional $\log p(x \mid \Theta)$ term does not contribute to the expectation, it affects the variance of the estimator. Therefore, $\alpha(x)$ is supposed to estimate the marginal
log-likelihood $\log p(x \mid \Theta)$.
The paper also lists several other variance reduction techniques that can be used in combination with the neural network-based baseline:
• Constant baseline — analogue of Control Variates; uses a running average of $\log p(x, z \mid \Theta) - \log q(z \mid x, \Lambda)$ as a baseline
• Variance normalization — normalizes the learning signal to unit variance, equivalent to an adaptive learning rate
• Local learning signals — fall outside the scope of this post, as they require model-specific analysis and alterations and can't be used in a black-box regime
Formatting cell data in spreadsheet part III: decimal digits and thousand separators
Posted by Greten on 17 May 2020 under Tools
A decimal separator separates a number between whole number digits and decimal digits. A thousand separator appears after every third digit to the left of the decimal point, thus indicating thousand,
million, and so on.
In most English-speaking countries, the thousand separator is a comma and the decimal separator is a period, although the hair space is gaining popularity as a thousand separator. In some countries, such as the Netherlands and Belgium, it's the opposite: a comma for the decimal separator and a period for the thousand separator.
This entry is the part III of the post series: Formatting cell data in spreadsheet software programs. The features and functions discussed in this entry are based on Calc 6.4.3, Excel 365, and the
version of Google Sheets as of May 2020.
Enable the thousand separator
In several spreadsheet applications, such as LibreOffice Calc, Microsoft Excel, and Google Sheets, the default is that there is no thousand separator. To see the thousand separator, you need to
format the cell to show it.
Enable the thousand separator in Calc
To enable the thousand separator in a Calc spreadsheet:
1. Select the cells where you want the numbers to show a thousand separator.
2. Select the Format as Number icon or press Ctrl+Shift+1; this step automatically adds a thousand separator and two decimal places.
3. Modify the number of decimal places using the Add Decimal Place and Delete Decimal Place icons. For example, if you want to display whole numbers only, click Delete Decimal Place twice or more until you no longer see a decimal digit.
An alternative method involves the use of the Format Cells window:
1. Select the cells where you want the numbers to show thousand separators.
2. Press Ctrl+1. You can also right-click on the cell or one of the selected cells and select Format Cells. The Format Cells window opens.
3. Under Options of the Numbers tab, enable the Thousands separator.
4. Enter the number of Decimal places you want this number to display.
5. Select OK.
Enable the thousand separator in Excel
To enable the thousand separator in an Excel spreadsheet:
1. Select the cells where you want the numbers to show thousand separators.
2. Open the Format Cells window. Right-click on the cell or one of the selected cells and select Format Cells.
3. In the Numbers tab, select Number under Category.
4. Under Options, enable the Use 1000 Separator checkbox.
5. Enter the number of Decimal places you want to display.
6. Select OK.
You can also use the ribbon's Number group:
1. Select the cells where you want the numbers to show thousand separators.
2. In the ribbon's Number group, select Comma Style. This step adds a thousand separator and configures the number to two decimal places.
3. Use the Increase Decimal and Decrease Decimal icons to set the number of decimal places to display.
However, the second method configures the number as Accounting even though it does not display any currency symbol. Then the format changes to Custom when you work on step 3. I cannot determine how this would work as compared to the first method. I suggest you use the first method (the one that opens the Format Cells window) if formatting the value as a regular number is essential to you.
Enable the thousand separator in Sheets
To enable the thousand separator in a Google Sheets spreadsheet:
1. Select the cells where you want the numbers to show thousand separators.
2. Select Format » Number » Number; this automatically adds a thousand separator and two decimal places.
3. Use the Increase decimal places and Decrease decimal places buttons to set the number of decimal places that you want your selected numbers to display.
Display decimal places as needed: use Format Code
In the previous section, once you added a thousand separator, you needed to decide how many decimal digits to display. If the number has more decimal digits than the number you configured to display, it is rounded off. If the number has fewer decimal digits, trailing zeroes are added. For example, if you set the number of decimal digits to three, the number 5.503 displays as 5.503, 6.7826 displays as 6.783, and 8.9 displays as 8.900.
What if you want to display thousand separators, but you also want decimal digits to appear only as needed?
This is where the hash in the Format Code comes in. A hash represents a digit that will appear only when needed.
Display decimal places as needed in Calc
In Calc, to display thousand separators while allowing decimal digits only to appear as needed:
1. Select the cells where you want the numbers to show thousand separators.
2. Press Ctrl+1. You can also right-click on the cell or one of the selected cells and select Format Cells.
3. Under Options of the Numbers tab, in the Decimal places field, enter the maximum number of decimal places you want to display.
4. Check the Thousands separator.
5. The Format code field has something like #,##0.0000 in it. The number of zeroes to the right of the decimal separator depends on the number you entered in step 3. Replace these zeroes with hashes.
6. Select OK.
Display decimal places as needed in Excel
In Excel, to display thousand separators while allowing decimal digits only to appear as needed:
1. Select the cells where you want the numbers to show thousand separators.
2. Open the Format Cells window. Right-click on the cell or one of the selected cells and select Format Cells.
3. In the Numbers tab, select Number under Category.
4. Under Options, enable the Use 1000 Separator checkbox.
5. Enter the maximum number of decimal places you want to display in the Decimal places field.
6. Select Custom under Category.
7. You will see something like #,##0.0000 in the Type field. The number of zeroes on the right side of the decimal separator depends on the number you entered in step 5. Replace these zeroes with hashes.
8. Select OK.
Display decimal places as needed in Sheets
In Sheets, to display thousand separators while allowing decimal digits to appear only as needed:
1. Select the cells where you want the numbers to show thousand separators.
2. Select Format » Number » More Formats » Custom number format.
3. Select the code #,##0 or #,##0.00.
4. On the text field on top, edit the code so that the number of hashes on the right of the decimal separator is the maximum number of decimal places you want to appear. For example, you want the
cell to display up to five decimal digits; you can encode #,##0.#####.
5. Select Apply.
The hash on the decimal places works like this. For example, you format your cell as #,##0.###. It has three hashes on the right of decimal separator, and thus, can display up to three decimal places
without trailing zeroes. Enter 4.4, and it displays 4.4; enter 4.45, and it displays 4.45; and enter 4.456, and it displays 4.456. However, if you enter 4.4567, a number with more than three decimal
places, it will round off to 4.457.
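Spreadsheet format codes aren't available in Python, but the behavior of a code like #,##0.### can be mimicked with Python's format specification; this small helper (its name and the trailing-zero stripping are my own illustration, not part of any spreadsheet API) reproduces the examples above:

```python
def fmt(x, max_dec=3):
    """Mimic the spreadsheet format code '#,##0.###': thousand
    separators plus up to max_dec decimals, with no trailing zeroes."""
    s = f"{x:,.{max_dec}f}"            # e.g. 4.4 -> '4.400'
    if "." in s:
        s = s.rstrip("0").rstrip(".")  # drop trailing zeroes / bare point
    return s

print(fmt(4.4))       # 4.4
print(fmt(4.45))      # 4.45
print(fmt(4.4567))    # 4.457 (rounded, as in the spreadsheet)
print(fmt(1234567))   # 1,234,567
```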
Maximum in the number of decimal places
In all three spreadsheet applications, one of the steps involves deciding the maximum number of digits you want to display. You're probably wondering why there should be a maximum number of decimal
digits. Is there any way to not set any maximum, just like in General (Excel and Calc) and Automatic (Sheets)?
Actually, even these default number formats have a limit. Calc, Excel, and Sheets can display a maximum of 15 digits in a cell, including both whole numbers and decimal places.
If you encode more than 15 digits, their behavior varies:
• Calc rounds off the rightmost and smallest decimal places. If the digits are mostly whole number digits, it changes to scientific notation.
• Excel truncates (rather than rounds off) the rightmost and smallest decimal places. If the digits are mostly whole-number digits, it changes to scientific notation. Moreover, you can see all remaining 15 digits only in the Formula Bar; in a cell, only seven or ten digits can be displayed, depending on whether or not the number is converted to scientific notation.
• Sheets truncates the rightmost and smallest decimal places. If the digits are mostly whole-number digits but have at least one decimal place, it changes to scientific notation. If the digits are all whole numbers, Sheets seems to convert the number to text (it is automatically aligned to the left), but mathematical calculations can still be done with it.
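The 15-digit ceiling comes from the fact that these applications store numbers as IEEE 754 double-precision floats, which carry roughly 15 significant decimal digits. A Python float is the same type, so the limit can be demonstrated directly (a rough analogy, since each spreadsheet adds its own display rules on top):

```python
# Calc, Excel, and Sheets store numbers as IEEE 754 double-precision
# floats (the same type as a Python float), which hold about 15
# significant decimal digits.
ok = 123_456_789_012_345              # 15 digits: stored exactly
too_long = 1_234_567_890_123_456_789  # 19 digits: rounded on storage

print(float(ok) == ok)        # True: no precision lost
print(int(float(too_long)))   # the trailing digits are no longer the original
```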
Now, why are we discussing the limitation on the number of digits? Well, this is to demonstrate that spreadsheet programs have a limit on how many decimal digits they can display, so you cannot rely
on the General and Automatic formats to display decimal digits as needed. You need to decide on how many decimal digits you want to display if, for example, you are going to build problem generators
or use spreadsheets to keep class records.
Changing the decimal separator; you can't
You cannot change the decimal separator by using only the format function of the spreadsheet application. You cannot change it unless you change the language or locale settings of your spreadsheet, which is beyond the scope of this entry.
You cannot change the decimal separator by simply replacing it in the Format code, Type, or Custom number format field. For example, if your language or locale setting is configured to the Philippines, where the decimal separator is a period, the system recognizes the period as the decimal separator. If you alter the code and use the period as, say, a thousand separator, and the comma as the decimal separator, it might ruin the code, and the resulting format will not be what you expect.
Changing the thousand separator
As mentioned earlier, even though the comma is the traditional thousand separator in several English-speaking countries, hair space is becoming acceptable. Unlike the decimal separator, you can
replace the thousand separator with different characters. However, it's not a simple direct substitution.
Revisit the second section, Display decimal places as needed: use Format Code. The steps to open and alter the code for a custom format are similar, except that now you will alter the left side of the decimal separator; this applies to Calc, Excel, and Sheets. If you already enabled the thousand separator, you will find something like #,##0 in it. Even though the code only goes up to the thousands place, the system understands it as applying to every third whole-number place. For example, it will display one million as 1,000,000.
For the same reason as decimal digits, you need to decide the maximum number of digits you want to show with thousand separators. The character will only apply if it has equivalent places in the
code. If you code # ### ##0 and your number is 1,000,000,000, it displays as 1000 000 000.
The spreadsheet applications do not accept just any character as a thousand separator. For one, they will not accept the character that is already used as the decimal separator. Calc, Excel, and Sheets accept the space, hair space, and hyphen as thousand separators. Other characters may be accepted by one spreadsheet but not the others.
In most English-speaking countries, the comma is used as the thousand separator, and period as the decimal separator. You cannot change the decimal separator without changing your local or language
settings. You can change the thousand separator by replacing the comma (or the default thousand separator in your locality) with certain characters like hair space, space, and hyphen, but you have to
increase the hashes on the whole number parts of the code.
The code, known as Format code in Calc, Type in Excel, and Custom number format in Sheets, is useful for configuring decimal places and thousand separators. It is made of hashes and zeroes; zeroes keep trailing and leading zeroes, while hashes display digits only as needed.
Last updated on 25 May 2020.
By Definition
Most non-mainstream groups that go against the status quo have some phrase they say that instills a cringe feeling all over my body. Creationists have the question "If we evolved from monkeys, then why are there still monkeys?". Flat Earthers have "Water always finds its level!"… and New Atheists™ have "by definition". Whenever I hear a New Atheist™ say those two concatenated words, a reflexive, almost visceral reaction occurs inside me, one I feel deep in my bones, and I brace myself for the almost inevitable onslaught of linguistic misunderstandings, followed by assertions of an improper use of a descriptive dictionary.
More specifically, the phrase that New Atheists™ frequently say to me is "Atheism is, by definition…". When uttering this phrase, regardless of what follows, they are making the fundamental mistake of trying to coax a prescription out of a description. It is well established, contrary to assertions made by fiat by groups such as American Atheists, that words like "atheism" are polysemous and do not have a singular definition that is both necessary and sufficient for a prescribed usage. Prescribed definitions are typically found in fields like math and logic. For example, the relationship 0! = 1 is true because there is a specific definition of the word "factorial", with both necessary and sufficient conditions, that establishes that relationship as true.
In this case, it is first established by the definition of a “factorial” from which we can expand upon our definition of factorial for a special case, and say 0! = 1, by definition:
(from Wolfram MathWorld):
n! ≡ n(n – 1) ⋅⋅⋅ 2 ⋅ 1, where n is a positive integer, i.e. n ≥ 1 with n ∈ ℕ
which can be expanded more generally to n! = n ⋅ (n – 1) ⋅ (n – 2) ⋅ (n – 3) ⋅⋅⋅ 3 ⋅ 2 ⋅ 1, a general form showing that for any n you can form a descent from n down to 1 and multiply the numbers together to get n!.
Notice that the definition uses a triple bar (≡) to represent a relationship stronger than mere equality; as such, it is giving a prescriptive definition. A prescriptive definition tells you that in a math proof you could literally justify replacing the expansion with n!, by definition.
Examples of factorials using the definition*:
5! = 5(5 – 1) ⋅ (5 – 2) ⋅ 2 ⋅ 1 = 5 ⋅ 4 ⋅ 3 ⋅ 2 ⋅ 1 = 120
4! = 4(4 – 1) ⋅ (4 – 2) ⋅ 1 = 4 ⋅ 3 ⋅ 2 ⋅ 1 = 24
3! = 3(3 – 1) ⋅ 1 = 3 ⋅ 2 ⋅ 1 = 6
2! = 2(2 – 1) = 2
1! = 1
We can also express the definition of factorial a bit more mathematically as:
n! = Π i=1 to n i
where "Π" is capital pi, denoting a product of terms, analogous to how Σ denotes a sum, running from 1 to n (the index) with the argument i.
So, if n = 5 then:
5! = Π i=1 to 5 i = 120
This also gives up a nice recurrence relation of:
n!= n(n – 1)!
Examples using recurrence relation:
10! = 10 ⋅ 9!
42! = 42 ⋅ 41!
99! = 99 ⋅ 98!
If you have n = 1, then given that n! = n(n – 1)!:
1! = 1 ⋅ 0!
For that to work, 0! must clearly equal 1 to adhere to our initial definition, as 1! = 1 ⋅ 0! = 1.
However, notice that by the definition of a factorial, n is only a positive number. 0 is not a positive number, so we have to specially define 0! to make it work with our definition of factorials; even with our capital pi notation, the index has to start at 1, which is the first positive number.
We can then expand upon our definition of factorial for a special case and say 0! = 1, by definition, as the product of an empty set is 1 by the "empty product rule". This means that if you take the product of a set with no elements (the empty set, {∅}), it equals 1 by the multiplicative identity, which is the product equivalent of the additive identity of zero in addition. If you add up no numbers, like adding the elements of an empty set {∅}, the result is equal to the additive identity, 0, by definition. If you multiply "the elements" of an empty set (one that contains no elements), the result is equal to the multiplicative identity, 1, by definition.
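Python's standard library makes the empty-product convention concrete: `math.prod` of an empty collection returns 1, matching the convention behind 0! = 1.

```python
import math

# The "empty product rule": multiplying the elements of an empty
# collection yields the multiplicative identity, 1. This is the
# convention that makes 0! = 1 consistent.
print(math.prod([]))            # 1: the empty product
print(math.prod(range(1, 1)))   # 1: "the product from i = 1 to 0"
print(math.factorial(0))        # 1

# The recurrence n! = n * (n - 1)! then holds all the way down:
assert math.factorial(1) == 1 * math.factorial(0)
```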
There are other mathematical examples of "by definition" which I won't go into here, but suffice it to say that they are based upon ring theory. It should be intuitive enough to see that you can't actually add elements which are not there, so we have to find a different way to say that the addition of zero elements is zero (the additive identity). We do the same for products by saying that the multiplication of zero elements, the product of an empty set {∅}, is, by definition, 1 (the multiplicative identity).
This also comes from combinatorics as to how many ways can we arrange (permutations) a set that contains zero elements, or more intuitively, how many ways can we do nothing. One way.
So we can say the definition of 0! is 1 and when we see 0! for example in a Taylor series:
e^x = Σ n=0 to ∞ x^n / n!
This expands out to:
e^x = Σ n=0 to ∞ x^n / n! = x^0 / 0! + x^1 / 1! + x^2 / 2! + x^3 / 3! + …
For the first term in the expansion, x^0 / 0!, the expansion only holds if 0! = 1, giving us x^0 / 0! = 1 / 1 = 1. This is why you often see the Taylor expansion written simply as 1 + x + x^2 / 2! + x^3 / 3! + …, since we defined 0! to be 1 and, by our definition of factorials, 1! = 1. If you don't believe me, just try doing a Taylor series where 0! ≠ 1 and let me know how well that works for you.
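As a quick sanity check, here is the Taylor series for e^x computed with the standard-library factorial; the n = 0 term contributes x^0 / 0! = 1 only because 0! = 1:

```python
import math

def exp_taylor(x, terms=20):
    # e^x = sum over n of x^n / n!; the n = 0 term is x^0 / 0! = 1,
    # which works only because 0! = 1.
    return sum(x ** n / math.factorial(n) for n in range(terms))

print(exp_taylor(1.0))   # ~2.718281828..., matching math.exp(1.0)
```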
You may be wondering what all this has to do with New Atheists™ saying "Atheism, by definition…", but the above is an example of a prescriptive definition given by the symbol "0!", so we can, by definition, replace it with 1 in our mathematical computations. More importantly, as far as the conceptual takeaway goes, 0! is not "describing" the number 1; it is, by definition, equal to the number 1.
A dictionary, however, does not prescribe usages; it merely describes the synchronic usage of modern language in the population. It is usually in the form of:
<object> = general description of the object
Contrary to a descriptive definitional form, the form for a prescriptive definition is more akin to:
<object> ≡ substitutional equivalence from necessary and sufficient conditions to establish the relationship.
A popular reference for New Atheist™ to cite as a definition of an atheist is:
Atheist = “a person who disbelieves or lacks belief in the existence of God or gods.”
This merely gives a description of an atheist; it is not prescribing that anyone who lacks a belief is "by definition" an atheist. Merely that: 1) "lack of belief" describes a person who is an atheist; and 2) some atheists use "lack of belief" as their definition of atheist, as to what they believe constitutes being an atheist.
A mathematical example would be equivalent to:
Square = A four sided object.
This is a descriptive definition. It describes, albeit not very well, what a square is…it is very general, but not untrue.
We can have other more specific definitions such as:
Square = A two dimensional plane object consisting of four congruent sides and four right 90° angles
That certainly is a more precise description of a square and sets it up as a special case of a parallelogram, kite, quadrilateral, rectangle, rhombus, and trapezoid… but it is still merely describing what attributes the object we call a square has. It meets both necessary and sufficient conditions, so that it does not just describe a square: if you had an object that met all those conditions (being two-dimensional, in a plane, having four sides of equal length, and four 90° angles), you could call it a square. This is unlike our first definition of a square, which was just a general definition containing a necessary condition to be a square (having four sides) but was obviously quite insufficient to be a prescriptive definition.
So when a New Atheist™ says “an atheist, by definition, is a person who lacks a belief”, they are erroneously attempting to make a description that is not prescriptive, into a prescriptive definition
either by ignorance or by deceit. All atheists lack a belief God exists, and the definition is a true “description” of an atheist. It, however, is not prescribing that all who lack a belief *are*
atheists. I am going to coin this fallacy as: Argumentum ad prescriptiorum.
Equivalently, a descriptive definition for theist would be:
Theist= “a person who disbelieves or lacks belief in the non-existence of God or gods”
That definition accurately describes all theists, as all theists disbelieve in the non-existence of God, which entails lacking a belief in the non-existence of God. The reason dictionaries don't have that specific definition is that theists don't use "lack of belief in the non-existence of God or gods" in their usage of the word theist; otherwise, the dictionary would reflect that particular usage.
If you therefore allow "a person who disbelieves or lacks belief in the existence of God or gods" to be a prescriptive definition of "atheist", derived from a descriptive one, then you must also allow "theist" to be prescriptively defined as "a person who disbelieves or lacks belief in the non-existence of God or gods", or else you're guilty of special pleading. (See my WASP argument.)
Which additionally means anyone who lacks a belief in both the existence and the non-existence of God could be both an "atheist" and a "theist" at the same time, due to how the definitions are prescribed.
(See my Atheist Semantic Collapse argument).
See how these things I discuss in my blog and on my channel all tie in together, and just start to fall apart when imprecise usages of terms are prescribed? So next time a New Atheist™ says "atheism, by definition, is…", see if they understand the difference between a descriptive definition and a prescriptive one. I personally can't recall ever asking one who was able to tell me the difference, but if you ever do find one… I would love to know about it!
Argumentum ad prescriptiorum: The fallacy of attempting to derive a prescriptive usage from a descriptive source.
* Each time we have a decrease in n! we have to remove one of the quantities being multiplied, so 5! has 5 things being multiplied, 4! has 4 things being multiplied, 3! has 3 things being multiplied,
2! has 2 things being multiplied, 1! has only 1 element so nothing is being multiplied.
Uncategorized Archives - MATHS GLOW
CA foundation maths solutions, Chapter 5: Basic Concepts of Permutations and Combinations, Exercise 5B. You should study the textbook lesson Basic Concepts of Permutations and Combinations very well, and practice all the example problems and solutions given in the textbook.
NCERT maths class 11 solutions, Chapter 4: Complex Numbers and Quadratic Equations, Exercise 4.1. You should study the textbook lesson Complex Numbers and Quadratic Equations very well, and practice the example problems and solutions given in the textbook.
Papers with Code - Diederik P. Kingma
2 code implementations • CVPR 2023 • Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik P. Kingma, Stefano Ermon, Jonathan Ho, Tim Salimans
For standard diffusion models trained on the pixel-space, our approach is able to generate images visually comparable to that of the original model using as few as 4 sampling steps on ImageNet 64x64
and CIFAR-10, achieving FID/IS scores comparable to that of the original model while being up to 256 times faster to sample from.
hydraulic motor efficiency calculator
Hydraulic Motor Efficiency Calculator
Hydraulic motors are integral components in many industrial applications, converting hydraulic energy into mechanical energy. Calculating the efficiency of these motors is crucial for ensuring
optimal performance and energy conservation. This article introduces a hydraulic motor efficiency calculator, explains how to use it, details the formula involved, provides an example, answers
frequently asked questions, and concludes with some final thoughts.
How to Use
To use the hydraulic motor efficiency calculator, simply input the required values for pressure, flow rate, and output power. Press the “Calculate” button to obtain the efficiency result.
The efficiency of a hydraulic motor is calculated using the following formula:
Efficiency (%) = (Output Power ÷ Input Power) × 100
• Output Power (kW): The mechanical power output of the motor.
• Input Power (kW): The hydraulic power input to the motor, calculated as:
Input Power (kW) = Pressure (Pa) × Flow Rate (m³/s) ÷ 1000
Example Solve
Assume a hydraulic motor with an output power of 75 kW, operating under a pressure of 200,000 Pa with a flow rate of 0.5 m³/s. The efficiency can be calculated as follows:
1. Calculate the input power: Input Power = 200,000 Pa × 0.5 m³/s = 100,000 W = 100 kW
2. Calculate the efficiency: Efficiency = (75 kW ÷ 100 kW) × 100 = 75%
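The calculation can be expressed as a small function (a sketch of such a calculator; the function name and unit conventions are my own):

```python
def hydraulic_motor_efficiency(output_kw, pressure_pa, flow_m3s):
    """Efficiency (%) = output power / hydraulic input power * 100,
    where input power (kW) = pressure (Pa) * flow (m^3/s) / 1000."""
    input_kw = pressure_pa * flow_m3s / 1000.0
    return output_kw / input_kw * 100.0

# The article's example: 75 kW out, 200,000 Pa, 0.5 m^3/s in.
print(hydraulic_motor_efficiency(75, 200_000, 0.5))  # 75.0 (%)
```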
Frequently Asked Questions
What is hydraulic motor efficiency?
Hydraulic motor efficiency is a measure of how effectively a hydraulic motor converts hydraulic energy into mechanical energy.
Why is calculating hydraulic motor efficiency important?
Calculating efficiency helps in identifying energy losses and optimizing the performance of hydraulic systems.
What factors affect hydraulic motor efficiency?
Factors include fluid viscosity, pressure losses, mechanical friction, and internal leakage.
Can efficiency be improved?
Yes, efficiency can be improved through regular maintenance, using high-quality hydraulic fluids, and optimizing operating conditions.
Understanding and calculating the efficiency of hydraulic motors is essential for maintaining effective and energy-efficient hydraulic systems. Using a hydraulic motor efficiency calculator
simplifies this task, providing quick and accurate results.
|
{"url":"https://calculatordoc.com/hydraulic-motor-efficiency-calculator/","timestamp":"2024-11-02T12:22:23Z","content_type":"text/html","content_length":"93375","record_id":"<urn:uuid:15afaf87-35ac-4d98-8d62-6720cf349abf>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00661.warc.gz"}
|
15. The Stanford-Binet Test Was Used During World War 1 To Identify Individuals For Officer Status. True / False
The Stanford-Binet test is the Stanford revision, made by Lewis Terman in 1916, of the intelligence scale developed by Alfred Binet and Theodore Simon in 1905. During World War I it was used to help identify individuals for officer status in the United States Army. The test has been revised several times since and is still used today as an intelligence test for children and adults.
Two resistors are connected in parallel. If R₁ and R₂ represent the resistance in Ohms (Ω) of each resistor, then the total resistance R is given by [tex]\mathbf{\dfrac{1}{R}=\dfrac{1}{R_1}+\dfrac{1}{R_2}}[/tex]. Given that R₁ is increasing at 0.4 Ω/min and R₂ at 0.6 Ω/min, the rate at which R changes when R₁ = 117 Ω and R₂ = 112 Ω is 0.25 Ω/min.
For resistors connected in parallel:
Making R the subject of the formula:
[tex]\mathbf{R = \dfrac{R_1R_2}{R_1+R_2}}[/tex]
Given that:
[tex]\mathbf{R_1 = 117,}[/tex] [tex]\mathbf{R_2 = 112 }[/tex]
Now, substituting the values into the equation above, we have:
[tex]\mathbf{R = \dfrac{13104}{229}}[/tex]
Differentiating R with respect to time t will give us the rate at which R is changing when R₁ = 117 Ω and R₂ = 112 Ω.
So, differentiating the parallel-resistor equation with respect to time t,
[tex]\mathbf{\dfrac{1}{R}=\dfrac{1}{R_1}+\dfrac{1}{R_2}}[/tex], we have:
[tex]\mathbf{(\dfrac{dR}{dt})=R^2 \Bigg[ \dfrac{1}{R_1^2}(\dfrac{dR_1}{dt})+\dfrac{1}{R_2^2}(\dfrac{dR_2}{dt})\Bigg]}[/tex]
[tex]\mathbf{\dfrac{dR}{dt}=(\dfrac{13104}{229})^2 \Bigg[ \dfrac{0.4}{117^2}+\dfrac{0.6}{112^2}\Bigg]}[/tex]
[tex]\mathbf{\dfrac{dR}{dt}=3274.44 \Bigg[ (7.7052 \times 10^{-5} )\Bigg]}[/tex]
[tex]\mathbf{\dfrac{dR}{dt}=0.25\ \Omega /min}[/tex]
Therefore, we can conclude that the rate at which R is changing when R₁ = 117 Ω and R₂ = 112 Ω is 0.25 Ω/min.
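The arithmetic above is easy to double-check numerically. A short Python sketch (the variable names are mine) that evaluates dR/dt from the differentiated formula:

```python
# Rates of change of the two parallel resistances (ohms per minute)
dR1, dR2 = 0.4, 0.6
R1, R2 = 117, 112

# Total parallel resistance: R = R1*R2 / (R1 + R2)
R = R1 * R2 / (R1 + R2)

# From differentiating 1/R = 1/R1 + 1/R2 with respect to t:
# dR/dt = R^2 * (dR1/dt / R1^2 + dR2/dt / R2^2)
dR = R**2 * (dR1 / R1**2 + dR2 / R2**2)

print(round(dR, 2))  # 0.25
```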
|
{"url":"https://www.cairokee.com/homework-solutions/15-the-stanford-binet-test-was-used-during-world-war-1-to-id-qkip","timestamp":"2024-11-07T09:30:50Z","content_type":"text/html","content_length":"81842","record_id":"<urn:uuid:b42658a8-e22a-4f70-98d0-e526f945f300>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00262.warc.gz"}
|
This problem from Programming Praxis came about in the comments to my last post and intrigued me. So today, we are trying to sum the first one billion primes. Summing the first hundred, thousand,
even million primes isn’t actually that bad. But it takes a bit more effort when you scale it up to a billion. And why’s that?
Before I get started, if you’d like to download today’s source code and follow along, you can do so here: billion primes source
Now that that’s out of the way, the first problem is time. A naive approach would be to go through all of the numbers from 2 to a billion and test if each is prime. To do that, test each number up to
your current number and see if the latter divides the former. Simple enough:
; test if n divides m
(define (divides? m n)
(= 0 (remainder m n)))
; test if n is prime by trial division
(define (prime? n)
(and (not (divides? n 2))
(for/and ([i (in-range 3 (+ 1 (ceiling (sqrt n))) 2)])
(not (divides? n i)))))
; sum the first n primes directly
(define (sum-primes-direct n)
  (let loop ([i 3] [count 1] [sum 2])
    (cond
      [(= count n) sum]
      [(prime? i)
       (loop (+ i 2) (+ count 1) (+ sum i))]
      [else (loop (+ i 2) count sum)])))
Not too bad, we can sum the first hundred thousand primes pretty easily:
> (time (sum-primes-direct 100000))
cpu time: 3068 real time: 3063 gc time: 79
If we waited a bit longer, we could even get the first billion that way. Still, that's 3 seconds for only 1/10,000th of the problem. I think that we can do better. What's the next idea? Rather than dividing by all of the numbers from 2 up to the number we're dealing with, why don't we just divide by the previous primes:
; sum the first n primes by keeping a list of primes to divide by
(define (sum-primes-list n)
  (let loop ([i 3] [count 1] [sum 2] [primes '()])
    (cond
      [(= count n) sum]
      [(andmap (lambda (prime) (not (divides? i prime))) primes)
       (loop (+ i 2) (+ count 1) (+ sum i) (cons i primes))]
      [else (loop (+ i 2) count sum primes)])))
Simple enough. And theoretically it should be faster, yes? After all, we’re doing far fewer divisions for each number. But no. It turns out that it’s not actually faster at all. If you cut it down to
the first 10,000 primes, the direct solution only takes 91 ms but this solution takes a whopping 9 seconds. That’s two whole orders of magnitude. Ouch!
> (time (sum-primes-direct 10000))
cpu time: 91 real time: 90 gc time: 0
> (time (sum-primes-list 10000))
cpu time: 8995 real time: 8987 gc time: 0
At first, you might think that that doesn’t make the least bit of sense. After all, we’re doing essentially the same thing, we’re just performing fewer divisions. So why isn’t it faster?
Basically, it all comes down to memory access. In the first direct case, we basically aren’t using the system’s RAM. Everything (or just about) can be done in registers directly on the CPU. In the
second case though, there’s constant swapping as the list grows too large to hold in registers alone. And memory access is orders of magnitude slower than any single instruction on the CPU. Really,
this is a perfect example of both this phenomenon and the cost of premature optimization. Just because something should be faster according to runtime alone, that’s not the entire story.
Still, we’re not quite done. I know we can do better than the direct method. So this time, let’s use a more intricate method, specifically the Sieve of Eratosthenes. The basic idea is to start with a
list of all of the numbers you are interested in. Then repeatedly take the first number as prime and cross out all of its multiples. There’s a pretty nice graphic on the aforelinked Wikipedia page.
And if we just go with a loop, the code is rather straightforward:
; sum the first n primes using the Sieve of Eratosthenes
; algorithm source: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
(define (sum-primes-sieve-eratosthenes-list n)
  (define-values (lo hi) (guess-nth-prime n))
  (let loop ([ls (range 2 hi)]
             [count 0]
             [sum 0])
    (cond
      [(= count n) sum]
      [else
       (loop (filter
              (lambda (i) (not (divides? i (car ls))))
              (cdr ls))
             (+ count 1)
             (+ (car ls) sum))])))
There’s one interesting bit–the guess-nth-prime function:
; estimate the nth prime, return lower and upper bounds
; source: http://en.wikipedia.org/wiki/Prime_number_theorem
(define (guess-nth-prime n)
  (values (inexact->exact
           (floor (* n (log n))))
          (inexact->exact
           (ceiling
            (if (<= n 1)
                3 ; small-n fallback (reconstructed); (log (log n)) is undefined here
                (+ (* n (log n)) (* n (log (log n)))))))))
By default, the Sieve of Eratosthenes generates all of the primes from 1 to some number n. But that’s not what we want. Instead, we want the first n primes. After a bit of searching though, I found
the Wikipedia page on the Prime number theorem. That discusses the prime-counting function pi(n), the number of primes less than or equal to n. Invert that approximation and you find that the value of
the nth prime p[n] falls in the range:
n * ln(n) < p_n < n * (ln(n) + ln(ln(n)))    (for n >= 6)
That upper bound is the one that lets us generate enough primes with the Sieve of Eratosthenes so that we can sum the first n.
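Those bounds are easy to sanity check numerically. Here is a standalone Python sketch (separate from the Racket code in this post) that verifies the inequality for n from 6 to 1000, where the upper bound is known to hold:

```python
import math

def nth_primes(count):
    """Return the first `count` primes via simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < count:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

primes = nth_primes(1000)
# The bound n*ln(n) < p_n < n*(ln(n) + ln(ln(n))) holds for n >= 6
for n in range(6, 1001):
    p_n = primes[n - 1]
    assert n * math.log(n) < p_n < n * (math.log(n) + math.log(math.log(n)))
```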
The best part is that it turns out that it’s at least faster than the list based method:
> (time (sum-primes-sieve-eratosthenes-list 10000))
cpu time: 4347 real time: 4344 gc time: 776
Still. That’s not good enough. The problem here is much the same as with the list based method. We’re passing along and constantly rebuilding a list that would eventually have billions of elements in it. Not something that’s particularly easy to deal with. So instead of a list, why don’t we use a vector of #t/#f?
; sum the first n primes using the Sieve of Eratosthenes with a vector
; algorithm source: http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
(define (sum-primes-sieve-eratosthenes-vector n)
(define-values (lo hi) (guess-nth-prime n))
(define v (make-vector hi #t))
(vector-set! v 0 #f)
(vector-set! v 1 #f)
(for* ([i (in-range 2 hi)]
#:when (vector-ref v i)
[j (in-range (* i i) hi i)])
(vector-set! v j #f))
  (let loop ([i 3] [count 1] [sum 2])
    (cond
      [(= count n) sum]
      [(vector-ref v i)
       (loop (+ i 2) (+ count 1) (+ sum i))]
      [else (loop (+ i 2) count sum)])))
So how does it perform?
> (time (sum-primes-sieve-eratosthenes-vector 10000))
cpu time: 6 real time: 6 gc time: 0
Dang that’s nice. 😄 Let’s scale to a million:
> (time (sum-primes-sieve-eratosthenes-vector 1000000))
cpu time: 892 real time: 889 gc time: 2
Less than a second isn’t too shabby. It’s slower than I’d like, but I could wait a thousand seconds (a bit over 16 minutes) if I had to.
> (time (sum-primes-sieve-eratosthenes-vector 1000000000))
out of memory
Oops. It turns out that calling make-vector to make a billion element vector doesn’t actually work so well on my machine… We’re going to have to get a little sneakier.
Perhaps if we used the bitvectors from Monday’s post? (And now you know why I made that library 😄). All we have to do is swap out each instance of make-vector, vector-ref, or vector-set! for
make-bitvector, bitvector-ref, or bitvector-set!.
> (time (sum-primes-sieve-eratosthenes-bitvector 1000000))
cpu time: 5174 real time: 5170 gc time: 0
So it runs about five times slower than the simple vector based method (which makes sense if you think about it; twiddling bits doesn’t come for free). Still, we’re using a fair bit less memory. Let’s
see if it can handle a billion:
> (time (sum-primes-sieve-eratosthenes-bitvector 1000000000))
cpu time: 9724165 real time: 9713671 gc time: 5119
Dang. Nice. 2 hrs 41 minutes is more than twice as long as I was expecting based on the 16 minute estimate from the one million run of the vector version and the 5x slowdown between the vector and
bitvector versions. Still, it worked. And that’s a pretty good base all by itself. Still, I think we can do better.
Upon some searching, it turns out that you actually can store one billion entries in vectors; you just can’t have them all in the same vector. So instead, I created another datatype: the
multivector. Essentially, the idea is to create several smaller vectors and abstract the ref and set! methods to minimize the changes to the Sieve of Eratosthenes code.
(define-struct multivector (size chunks default data)
  #:constructor-name make-multivector-struct)

(define (make-multivector size [chunks 1] [default #f])
  (define per-chunk (inexact->exact (ceiling (/ size chunks))))
  (make-multivector-struct
   size chunks default
   (for/vector ([i (in-range chunks)])
     (if (= i (- chunks 1))
         (make-vector (- size (* (- chunks 1) per-chunk)) default)
         (make-vector per-chunk default)))))
(define (multivector-ref mv i)
  ; chunk index and offset come from the per-chunk size, not the chunk count
  (define per-chunk
    (inexact->exact (ceiling (/ (multivector-size mv) (multivector-chunks mv)))))
  (vector-ref (vector-ref (multivector-data mv) (quotient i per-chunk))
              (remainder i per-chunk)))

(define (multivector-set! mv i v)
  (define per-chunk
    (inexact->exact (ceiling (/ (multivector-size mv) (multivector-chunks mv)))))
  (vector-set! (vector-ref (multivector-data mv) (quotient i per-chunk))
               (remainder i per-chunk)
               v))
You can test it if you’d like, but it does work. So let’s try it in the Sieve of Eratosthenes. The same as before, just swap out make-vector, vector-ref, or vector-set! for make-multivector,
multivector-ref, or multivector-set!.
So how does the performance compare?
> (time (sum-primes-sieve-eratosthenes-multivector 1000000))
cpu time: 6635 real time: 6625 gc time: 435
Hmm. Well, it doesn’t actually run any faster than the bitvector, but it also doesn’t run out of memory.
I think we may have a winner, but before we wind down, there are two other sieves linked to from the Sieve of Eratosthenes page: the Sieve of Atkin and the Sieve of Sundaram. The algorithms are a bit
more complicated than the Sieve of Eratosthenes, but still entirely doable. It is interesting just how they work though. The Sieve of Eratosthenes is intuitive. These two? A bit less so.
First, we have the Sieve of Atkin:
; sum the first n primes using the Sieve of Atkin
; algorithm source: http://en.wikipedia.org/wiki/Sieve_of_Atkin
(define (sum-primes-sieve-atkin n)
(define-values (lo hi) (guess-nth-prime n))
(define v (make-vector hi #f))
; add candidate primes
(for* ([x (in-range 1 (+ 1 (sqrt hi)))]
[y (in-range 1 (+ 1 (sqrt hi)))])
(define x2 (* x x))
(define y2 (* y y))
(let ([i (+ (* 4 x2) y2)])
(when (and (< i hi) (or (= 1 (remainder i 12))
(= 5 (remainder i 12))))
(vector-set! v i (not (vector-ref v i)))))
(let ([i (+ (* 3 x2) y2)])
(when (and (< i hi) (= 7 (remainder i 12)))
(vector-set! v i (not (vector-ref v i)))))
(let ([i (- (* 3 x2) y2)])
(when (and (> x y) (< i hi) (= 11 (remainder i 12)))
(vector-set! v i (not (vector-ref v i))))))
; remove composites
(for ([i (in-range 5 (+ 1 (sqrt hi)))])
(when (vector-ref v i)
(for ([k (in-range (* i i) hi (* i i))])
(vector-set! v k #f))))
; report
  (let loop ([i 5] [count 2] [sum 5])
    (cond
      [(= count n) sum]
      [(vector-ref v i)
       (loop (+ i 2) (+ count 1) (+ sum i))]
      [else (loop (+ i 2) count sum)])))
It’s pretty much a direct translation of the code on the Wikipedia page. Since it uses a vector, it won’t be able to calculate the sum of the first billion, but you could pretty easily swap the vector
out for a bitvector or multivector. Still, I’m mostly interested in the implementation and performance to start with. Speaking of which:
> (time (sum-primes-sieve-atkin 1000000))
cpu time: 2421 real time: 2421 gc time: 415
So this particular version is about three times slower than the vector version of the Sieve of Eratosthenes. The Wikipedia page mentions that there are a number of optimizations that you could do to
speed this up, which I may try some day, but not today. What’s interesting though is that if you do swap in a bitvector, it’s actually faster than the Eratosthenes bitvector version:
> (time (sum-primes-sieve-atkin-bitvector 1000000))
cpu time: 3059 real time: 3058 gc time: 0
If that proportion follows through to the billion element run, we should be able to finish in just an hour and a half. Let’s try it out.
> (time (sum-primes-sieve-atkin-bitvector 1000000000))
cpu time: 5304855 real time: 5300800 gc time: 1237
An hour and a half, spot on. None too shabby if I do say so myself. (Although I bet we could get even faster. I’ll leave that as an exercise for another day though.)
Finally, the Sieve of Sundaram. This one differs even more from the previous ones. Rather than removing multiples of primes, it removes every index of the form i + j + 2ij ≤ n; each index k that
survives corresponds to the odd prime 2k + 1:
; sum the first n primes using the Sieve of Sundaram
; algorithm source: http://en.wikipedia.org/wiki/Sieve_of_Sundaram
(define (sum-primes-sieve-sundaram n)
(define-values (lo hi) (guess-nth-prime n))
(define dn (quotient hi 2))
(define v (make-vector dn #t))
(for* ([j (in-range 1 dn)]
[i (in-range 1 (+ j 1))]
#:when (< (+ i j (* 2 i j)) dn))
(vector-set! v (+ i j (* 2 i j)) #f))
  (let loop ([i 1] [count 1] [sum 2])
    (cond
      [(= count n) sum]
      [(vector-ref v i)
       (loop (+ i 1) (+ count 1) (+ sum (+ 1 (* 2 i))))]
      [else (loop (+ i 1) count sum)])))
Very straightforward code; how does it perform?
> (time (sum-primes-sieve-sundaram 10000))
cpu time: 32066 real time: 32055 gc time: 0
Eesh. Note that that’s only on 10,000 and it still took 30 seconds. I think I’ll skip running this one even out to a million.
Well, that’s enough for today I think. Here’s a nice timing summary for the methods:
Algorithm Ten thousand One million One billion
Direct 91 ms 86.0 s —
Previous primes 9.0 s — —
Eratosthenes (list) 4.3 s — —
Eratosthenes (vector) 6 ms 0.9 s —
Eratosthenes (bitvector) 31 ms 5.2 s 2 hr 42 min
Eratosthenes (multivector) 34 ms 6.6 s —
Atkin (vector) 12 ms 2.4 s —
Atkin (bitvector) 20 ms 3.1 s 1 hr 28 min
Atkin (multivector) 23 ms 4.4 s —
Sundaram 32.1 s — —
Segmented Sieve 7 ms 0.9 s 25 min 12 sec
And the actual values:
Ten thousand 496165411
One million 7472966967499
One billion 11138479445180240497
If you’d like to download the source code for today’s post, you can do so here:
Edit: After Will’s comments, I actually got around to writing a segmented version. It’s pretty amazing the difference it made too. It runs about 3x faster than even the Sieve of Atkin. Sometimes
optimization is awesome. You can find that post here and the source code here.
|
{"url":"https://blog.jverkamp.com/2012/11/01/the-sum-of-the-first-billion-primes/","timestamp":"2024-11-07T02:42:12Z","content_type":"text/html","content_length":"51005","record_id":"<urn:uuid:08994522-e181-4a57-a25e-bc1be5ab38be>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00635.warc.gz"}
|
=RSQ formula | Calculate the square of the Pearson product moment correlation coefficient.
Formulas / RSQ
Calculate the square of the Pearson product moment correlation coefficient. RSQ(known_y's, known_x's)
• known_y's - required, an array or range of dependent data points
• known_x's - required, an array or range of independent data points
• =RSQ(A3:A9, B3:B9)
The RSQ function can be used to calculate the square of the Pearson product moment correlation coefficient between two sets of data points. For example, if you have data points in cells A3:A9 and
B3:B9, you can use the formula above to calculate the coefficient.
• =RSQ(A3:A9, B3:B9)
The RSQ function is used to measure the strength of the linear relationship between two sets of data points. For example, if you have data points in cells A3:A9 and B3:B9, you can use the RSQ
function to measure the strength of the linear relationship between them. You can do this by entering the formula above into a cell.
• =RSQ(A3:A9, B3:B9)
The RSQ function can be used to determine whether two sets of data points are related. If the returned result is close to 1, then the two sets of data points are strongly related. On the other
hand, if the result is close to 0, then the two sets of data points are weakly related. For example, if you want to determine the relationship between data points in A3:A9 and B3:B9, you can use
the RSQ function by entering the example formula into a cell.
The RSQ function calculates the square of the Pearson product moment correlation coefficient, which is the proportion of the variance in y that is attributable to the variance in x. It requires two
arguments that must be numbers, arrays, or references.
• The RSQ function calculates the square of the Pearson product moment correlation coefficient, measuring the proportion of variance in y that is attributable to the variance in x.
• The RSQ function accepts numerical, array, and reference arguments, but ignores logical values, text, and empty cells.
• If known_y's and known_x's are empty or have a different number of data points, the RSQ function throws errors.
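The spreadsheet formula has a direct analogue in code. A minimal pure-Python sketch of the same computation (the sample data is made up for illustration; RSQ is the squared Pearson correlation, i.e. (Σ(x−x̄)(y−ȳ))² divided by Σ(x−x̄)² Σ(y−ȳ)²):

```python
# Two small made-up data series, standing in for ranges A3:A9 and B3:B9
known_y = [2.0, 4.1, 5.9, 8.2, 10.1, 12.0, 13.8]
known_x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]

n = len(known_x)
mean_x = sum(known_x) / n
mean_y = sum(known_y) / n

# Pearson r = cov(x, y) / (sd(x) * sd(y)); RSQ is r squared
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(known_x, known_y))
var_x = sum((x - mean_x) ** 2 for x in known_x)
var_y = sum((y - mean_y) ** 2 for y in known_y)
r_squared = cov ** 2 / (var_x * var_y)

print(round(r_squared, 4))  # close to 1: the series are strongly linearly related
```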
Frequently Asked Questions
What is the RSQ function?
The RSQ function calculates the square of the Pearson product moment correlation coefficient. This is a measure of how well a linear equation can describe the relationship between two sets of data.
What does r-squared tell me?
The r-squared value tells you the proportion of the variance in the dependent variable that is explained by the independent variable. A higher r-squared value indicates a better fit.
What are the arguments for the RSQ function?
The RSQ function takes two arguments. These can be numbers, names, arrays, or references containing numbers.
Does the RSQ function ignore certain values?
Yes, the RSQ function ignores logical values, text, and empty cells in arguments.
What errors does the RSQ function throw?
• The RSQ function throws errors if known_y's and known_x's are empty or have different numbers of data points.
• The RSQ function throws an error if known_y's and known_x's contain only one data point.
|
{"url":"https://sourcetable.com/formula/rsq","timestamp":"2024-11-02T00:04:12Z","content_type":"text/html","content_length":"60583","record_id":"<urn:uuid:d2bd01a6-39ab-478c-9cd6-704838f86af3>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00506.warc.gz"}
|
Algebra 2, Part 2
Price: $125 | Credits: One Semester | Dept: Math | Course ID# 222-2
This course is the second semester of Algebra 2 and builds upon the skills and knowledge gained from the first semester. The course includes the topics: Exponents, Roots, Radicals, Exponential and
Logarithmic Functions, Conic Sections- Solving, Graphing, Trigonometry, Permutation & Combinations, Sequence and Series, and Complex Numbers. Algebra 2 is approved by the University of California A-G
as mathematics (category C).
Upon completion of this course, the student is awarded 5 credits. Each credit corresponds to 15 hours of study. Of course, some students work more quickly than others, and some can devote more hours
to study, so some students are able to complete the course in an accelerated rate.
• In this module, students gain a comprehension of the following:
• To simplify and be able to analyze patterns from positive and negative powers.
• To understand the inverse relationship between roots and exponents.
• How to solve radical equations and perform operations with radical expressions.
• How to simplify radicals in fractions and how to rationalize a denominator.
• How to simplify fractions in the exponent and how to switch to root form.
• How to write repeating decimals as fractions.
• What exponential functions look like and how they behave.
• To solve exponential equations.
• To simplify exponents with base 10, and with base e.
• How to use scientific notation to represent very large or very small numbers.
• How to solve for inverse functions.
• How to solve and graph logarithmic equations, including the translation of exponential and logarithmic graphs from their parent graphs.
• The equations and graphs for circles, ellipses, hyperbolas, and parabolas.
• How to solve for and utilize the six trigonometric functions; sine, cosine, tangent, secant, cosecant, and cotangent.
• How to convert between radians and degrees.
• To use trigonometric ratios to solve special triangles.
• How to interpret and use the unit circle.
• How to use the law of sines and the law of cosines to solve triangles.
• To identify and distinguish between permutations and combinations, and when to apply which to a real life scenario.
• To solve basic probability problems.
• How to use the binomial theorem to expand polynomials raised to exponential powers.
• How to identify and distinguish between arithmetic and geometric sequences.
• To solve for the nth term of an arithmetic sequence, as well as the sum of the first n terms of that sequence.
• To solve for the nth term of a geometric sequence, as well as the sum of the first n terms of that sequence.
• How to determine if an infinite geometric sequence is converging or diverging and to solve for the sum of that sequence if it exists.
• To perform basic operations with complex numbers, including addition, subtraction, multiplication, and division.
• How to add and subtract functions.
• How to multiply and divide functions.
• Finding compositions of functions.
• This course covers the following topics:
• Positive and Negative Powers
• Roots and Exponents
• Radical Equations and Operations
• Radicals in Fractions
• Fractions in the Exponent
• Writing Repeating Decimals as Fractions
• Exponential Functions
• Exponential Equations
• Exponents with Base 10
• Base e
• Scientific Notation
• Inverse Functions
• Properties of Log Functions
• Logarithmic Equations
• Translation of Exponential and Logarithmic Graphs
• Circles: Equations & Graphs of
• Ellipses: Equations & Graphs of
• Hyperbolas: Equations & Graphs of
• Parabolas: Equations & Graphs of
• Sin, Cos, Tan, Cosec, Sec, Cot
• Converting Between Radians & Degrees
• Trig Ratios of Special Angles
• The Unit Circle
• The Law of Sines
• The Law of Cosines
• Permutations
• Combinations
• Basic Probability
• Binomial Theorem
• Arithmetic Sequence ( nth Term)
• Arithmetic Series (Sum Of)
• Geometric Sequence (nth Term)
• Geometric Series (Sum Of)
• Sum of Infinite Series
• Basic Operations With Complex Numbers
• Multiplying and Dividing Complex Numbers
• Adding & Subtracting Functions
• Multiplying & Dividing Functions
• Composition of Functions
• One Semester Credit: $125
• Second Semester of Algebra 2
|
{"url":"https://svhs.co/product/algebra-2-part-2-id-322/","timestamp":"2024-11-14T23:41:59Z","content_type":"text/html","content_length":"465012","record_id":"<urn:uuid:42c70df8-da02-4dc8-b4d0-f6f04a79e682>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00732.warc.gz"}
|
How does NodeMethod work?
I have a very large MIP problem. I have a question about turning off crossover for barrier method for MIP.
When I choose the parameters as follows,
Expansion_Model = Model(optimizer_with_attributes(Gurobi.Optimizer, "MIPGap" => 0.01, "TimeLimit" => 108000, "Method" => 2))
it gives me the optimal solution in 25 hours. This parameter setting runs only the barrier method for the root relaxation, does crossover, creates a basic solution as the root relaxation solution, and starts B&B tree exploration. What I understand is that the algorithm runs barrier and simplex concurrently for the rest of the B&B nodes because I did not turn off crossover completely with NodeMethod = 2.
As suggested by some scholars, I decided to turn off crossover completely with the following setting:
Expansion_Model = Model(optimizer_with_attributes(Gurobi.Optimizer, "MIPGap" => 0.01, "TimeLimit" => 108000, "Method" => 2, "Crossover" => 0, "NodeMethod" => 2))
I was expecting that this solution process would give me the optimal solution faster because I eliminated the crossover procedure from the B&B tree. The reality is the reverse: it could not solve the
model in 30 hours. Below is the final part of the log:
Can you please explain what I am doing wrong?
Kind regards
• Hi again,
I have just checked the definition of NodeMethod, and it seems that only one of dual simplex, primal simplex, or barrier is applied in B&B nodes. When I do not have NodeMethod = 2 and Crossover =
0, it chooses one of them automatically, and apparently this was much faster. When I added NodeMethod = 2 and Crossover = 0, I enforced that the algorithm should use only barrier in B&B nodes,
which is too slow in my case.
Can somebody confirm that I am right? If I am right, then what does Gurobi choose among primal simplex, dual simplex, and barrier when the setting is automatic?
Thank you
• Hi Fikri,
You are correct. When setting NodeMethod=2, Gurobi will use the Barrier method in every node of the B&B tree. This is in most cases way slower than using one of the simplex methods because
Barrier cannot utilize warm start information. For the default setting, Gurobi will choose one of the algorithms, most often the dual Simplex.
In order to avoid Crossover after the root node it is required to also force NodeMethod=2. This is because Gurobi will need a valid basis anyway when starting the B&B algorithm in order to
benefit from warm starts. Thus, Crossover is performed.
It seems like your model may suffer from numerical issues. The very large objective values, the sub-optimal termination of the root node Barrier, together with the many postponed nodes when using
NodeMethod=2 are all indicators of numerical trouble. Did you have the chance to look at our Guidelines for Numerical Issues? Proper scaling often makes the difference between a solvable and an
unsolvable model.
Best regards,
• Hi Jaromil,
Thank you so much for your answer. It is getting clearer, but I need more guidance with the following questions. I am sorry if they are repetitive of the previous ones.
1) NodeMethod only affects the nodes after the root node, right? It has no effect on the root node.
2) If NodeMethod = 2, Gurobi relaxes MIP in a relevant leaf node, solves it with barrier, and then does crossover in order to get a basic solution. If NodeMethod = -1, it chooses an algorithm
automatically (generally dual simplex as you said), and then directly solves with dual simplex to get a basic solution. Is this correct? If correct, then why is dual simplex so much faster than
barrier? Generally, barrier is a faster solution method.
3) So far, my questions have been about leaf nodes after the root node. As for the root node, can I just solve with barrier without any crossover? You are saying above that "Gurobi will need a
valid basis anyway when starting the B&B algorithm in order to benefit from warm starts.". Does it mean that I can not avoid crossover for the root relaxation? If I can not avoid, then what does
Crossover = 0 do?
What I am eventually trying to do is as follows:
□ Use only barrier without crossover for the root relaxation
□ Start B&B tree
□ Use automatic setting (dual simplex) for the relaxation of leaf nodes
What parameter setting should I use for this one? I once tried Method = 2 along with Crossover = 0 in order to avoid crossover for the root relaxation, but it still did crossover.
Could you please clarify these points for me?
Kind regards
• Hi Fikri,
1) NodeMethod only affects the nodes after the root node, right? It has no effect on the root node.
2) If NodeMethod = 2, Gurobi relaxes MIP in a relevant leaf node, solves it with barrier, and then does crossover in order to get a basic solution. If NodeMethod = -1, it chooses an algorithm
automatically (generally dual simplex as you said), and then directly solves with dual simplex to get a basic solution. Is this correct? If correct, then why is dual simplex so much faster
than barrier? Generally, barrier is a faster solution method.
If NodeMethod is set to 2, then Gurobi solves each B&B node with Barrier and Crossover unless Crossover=0.
Correct, if NodeMethod=-1, then Gurobi usually picks dual Simplex.
Using a Simplex algorithm in the B&B tree is usually much faster than Barrier, because of the ability to use warm start. In a warm start, the Simplex algorithm can re-use old Basis information to
skip phase 1 and directly proceed with phase 2. Often enough, it is the case that phase 2 needs only a few iterations, because the previous solution point is not far away from the new one such
that the Simplex algorithm only has to wander across a few edges of the polyhedron. In contrast, there is currently no methodology to make use of warm start for the Barrier algorithm. This
results in Barrier being forced to always compute a valid starting point first. Moreover, Simplex algorithms are often faster than Barrier for small to medium and sometimes even large models.
3) So far, my questions have been about leaf nodes after the root node. As for the root node, can I just solve with barrier without any crossover? You are saying above that "Gurobi will need
a valid basis anyway when starting the B&B algorithm in order to benefit from warm starts.". Does it mean that I can not avoid crossover for the root relaxation? If I can not avoid, then what
does Crossover = 0 do?
You can only solve the root node without Crossover if you also set NodeMethod=2. This is because Gurobi would like to take advantage of warm start for the B&B nodes and requires a basic solution
and thus, will use Crossover anyway to get one if NodeMethod=-1. The only way to avoid Crossover in the root node is to additionally set NodeMethod=2. However, this may then hurt the overall
performance because one usually does not want to run Barrier in leaf nodes.
What I am eventually trying to do is as follows:
☆ Use only barrier without crossover for the root relaxation
☆ Start B&B tree
☆ Use automatic setting (dual simplex) for the relaxation of leaf nodes
What parameter setting should I use for this one? I once tried Method = 2 along with Crossover = 0 in order to avoid crossover for the root relaxation, but it still did crossover.
Could you please clarify these points for me?
Concluding, if you want to use Barrier for the root node and Simplex for leaf nodes then there is no way to avoid Crossover.
Best regards,
• Hi Jaromil,
I forgot to thank you. Everything is very clear now. I have some issues in making the optimization faster though. I will open another issue for that.
Kind regards
Please sign in to leave a comment.
|
{"url":"https://support.gurobi.com/hc/en-us/community/posts/4413676791441-How-does-NodeMethod-work","timestamp":"2024-11-07T13:49:23Z","content_type":"text/html","content_length":"65151","record_id":"<urn:uuid:59240067-1444-4430-b0b3-2b7ee5738bc7>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00772.warc.gz"}
|
voltage Archives - Quirky Science
Heat is so expensive! Especially is this so for electric heat. Why? Electricity flowing through a conductor obeys the simple mathematical relationship E = IR. That equation reads: electromotive force (E) equals current (I) times resistance (R). Since the power consumed (P) equals the current times the voltage, P = EI = (IR) × I = I²R. This equation informs us that the power consumed by a device is equal to the square of the current (that is, the current times itself, I × I) times the resistance to current flow of the device. If an electrical conductor is very good—for example, a thick copper wire—the power consumed is quite small. This is because the resistance to electrical flow is small. If current is measured in amperes…
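The Joule-heating relationship P = I²R is easy to check numerically. A minimal sketch, with purely illustrative values (the resistances below are not from the article):

```python
def power_dissipated(current_amps: float, resistance_ohms: float) -> float:
    """Joule heating: P = I**2 * R, in watts."""
    return current_amps ** 2 * resistance_ohms

# Illustrative values: the same 10 A current wastes little power in a
# low-resistance conductor but turns into substantial heat in a
# high-resistance device.
wire_loss = power_dissipated(10.0, 0.1)      # thick copper wire, ~10 W
element_heat = power_dissipated(10.0, 10.0)  # heating element, ~1000 W
```

This makes the article's point concrete: for a fixed current, power dissipation scales linearly with resistance, so a good conductor wastes very little energy as heat.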
|
{"url":"https://www.quirkyscience.com/tag/voltage/","timestamp":"2024-11-04T14:51:27Z","content_type":"text/html","content_length":"117158","record_id":"<urn:uuid:216cf1e2-dc98-483c-a013-ca2be5a0749e>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00824.warc.gz"}
|
Wrong, But Useful: Episode 3
It’s the awkward third episode! In which…
• @sr_cav (who is Cav in real life) is nice about us on his blog
• Dave meets Art Benjamin
• Colin invites you to MathsJam - Yarnfield Park, Stone, November 2/3rd 2013
• Dave does a stats exam and lectures Colin about what counts as an exam
• Colin and the maths police investigate whether Nakamoto and Mochizuki are the same person:
• Dave stamps down on simultaneous word equations
• Colin applies maths in the garden: \(V = \frac{\pi}{3}\left(R^2 + Rr + r^2\right)h\)
• Dave lowers the tone and Colin tries to rescue it with a Fibonacci half-marathon
• Then literature! Feynman Point Pilish Poetry ((Correction: Feynman would have been 95 this month, not 99 as Colin says.)) The Feynman point is at digit 762 of $\pi$. Thanks to @giftedmaths, who
is Richard Mankiewicz in real life.
• Dave listens to the Aperiodicast ((Oops)) All Squared and wonders why some times are more attractive than others.
• @notonlyahatrack (who is Will Davies in real life) answered last month’s puzzle and had it featured on @haggismaths’s What’s on my blackboard blog.
• This month’s puzzle: An equilateral triangle and a regular hexagon have the same perimeter. The triangle has an area of 2; what is the area of the hexagon?
• This month’s confusion: what’s the difference between a chart, a graph, a figure and a picture?
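The garden formula above is the volume of a conical frustum. A quick sanity check in Python (the limiting cases fall out of the formula: r = 0 gives a cone, r = R gives a cylinder):

```python
import math

def frustum_volume(R: float, r: float, h: float) -> float:
    """Volume of a conical frustum: V = (pi/3) * (R**2 + R*r + r**2) * h."""
    return math.pi / 3 * (R**2 + R * r + r**2) * h

# r = 0 reduces to a cone of radius R; r = R reduces to a cylinder.
assert math.isclose(frustum_volume(2, 0, 3), math.pi * 2**2 * 3 / 3)
assert math.isclose(frustum_volume(2, 2, 3), math.pi * 2**2 * 3)
```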
A selection of other posts
subscribe via RSS
|
{"url":"https://www.flyingcoloursmaths.co.uk/wrong-but-useful-episode-3/","timestamp":"2024-11-10T01:58:21Z","content_type":"text/html","content_length":"14410","record_id":"<urn:uuid:94784256-e14d-4dad-94bf-a3f87cedcc24>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00138.warc.gz"}
|
Permutation and combination basics
Yahoo visitors came to this page today by using these keywords:
• quadratic formula sample calculations
• free worksheets for practicing the distribtive law
• beginner math work sheet
• model of adding fractions
• free root solver
• Equation funtions
• Newton second order equation + fortran
• ti-83 calculator emulator
• PreAlgerbra
• Louisiana GED practice free printouts
• question about quadratic equations mathematicsa
• combination and permutation- sample problem
• 8th grade algebra review worksheets
• Free Absolute Value Worksheets
• games for the t1-84 plus calculator
• elementary algebra step by step instructions/free
• MATHS WORK SHEET FOR 3RD CLASS
• free algebra lessons
• mathematical poem
• ti-84 emulator
• decimal to fraction in simplest form calculator
• probability tree worksheet
• solving nonlinear simultaneous equations more than 2 variables
• formula for doubling an investment in excel
• basic 6 maths exam question papers
• algebra solver freeware
• solving systems of equation with fraction in worksheet
• MATLAB Programs - Quadratic equation
• 3rd grade algebra
• what is 5.7 translated to a square root
• 9th grade holt school book
• substitution.java key program
• examples of math trivia elementary
• boolean algebra calculator
• area de una parabola
• free online math problem solver
• free downloadable pdf books on student achievement
• math 8th grade algebra problem solving
• Free IGCSE Maths Pastpapers
• year seven math help online
• Simplify Algebra Calculator
• convert real to fraction
• holt algebra 2 answers
• graph fractional quadratic equation
• answers to math problems for free
• on-line math equation games for 5th grade
• work sheet-Maths,gread 1
• 9th grade long division games
• 6th grade review math worksheets
• nonlinear algebraic solver matlab
• what is squares and square roots
• fourth fraction books
• greatest common factor variable
• one step equations 5th grade math
• Best calculator for college alegebra
• dividing decimals review quiz
• algebra 5th grade homework help
• jokes comics scripts about intermediate algebra
• exponents formulas for high school
• online conic graphing calculator
• answers to mcdougal littell math
• multiplying radical fractions
• "free download math book"
• solving the function of a real variable
• adding square roots solver
• free online graphing calculator with stat function
• modal questions of some chapters maths of 11th standard
• sample problems on finding a vector in physics
• first grade inequalities worksheets
• adding integer practice worksheets
• solving cubic equations on TI-84
• excel+pH calculation
• CAT aptitude papers
• adding and subtracting frations games
• Math Trivia Questions
• 9th grade work
• basic algebra year6
• adding and subtracting numbers until 100
• free cost accounting book
• solve non-linear equations with excel
• dividing worksheet
• adding and subtracting thousandths to decimals
• word problems simultaneous equations
• McDougal Littell workbook answers
• graphics calculator iteration
• kumon maths and reading worksheet for sale
• algebra facts and trivias
• free algebra worksheets for beginners
• college algebra review exercises problems -buy -sale -sell
• Financial Accounting Prentice Hall. 5th edition free to download
• formula for common denominator algebra
• mathematic for kids+pdf
• gretest common factor and least common denominator problem
• least to greatest calculator
• line graphs worksheets
• "cubed polynomials" factor
• how to find x value on graphing calculator
• free printable third grade math
• chemistry answers addison
• glencoe algebra 2 honors
• pre algebra 9th grade
• free downloads of aptitude books
• pictures of algerba
• free mathematical conversions for triangular
• fluid mechanics 6th edition
• free Sample 8th grade math worksheet
• solve simultaneous equations online
• square root of a fraction
• foerster algebra syllabus
• Multiplying positives and negatives integers drill sheet
• solving multiple table functions
• hardest math promblem
• algebra factoring quiz
• rationalizing denominators gcse
• free multiplying with negative and positive integers worksheets
• symbolic solve system of linear equations matlab
• algebraic expression worksheet combining like terms
• KS2 basic maths tests level 2
• adding subtracting and multiplying integers
• what do you learn in conceptual physics honors?
• minus square roots
• learn to do algebra
• algebra solved problems for grade 9
• Worksheets on factors and multiples for grade 5
• solve greatest common divisor problem
• basic algebra formula sheet
• Alegbra 1
• prealgebra with pizazz
• how to find the square root of a problem
• free online test for 5th graders
• freee ppt presentations for students
• factoring variables
• convert mixed number to decimal
• mixed numbers as a percent
• printable Singapore school test papers for science for primary 3
• polar equations problems and solutions
• MIT free sample math papers
• 8th grade algebra worksheets
• algebra slove
• free 9th grade math worksheets
• free pre college algebra WORKSHEETS
• Intermediate Algebra help
• Trivias on Mathematics
• Substitution problem of GMAT
• advanced solving equations printable worksheets
• solving radicals
• surds of 9th std
• factor tree worksheet
• 8th grade Fraction Worksheets
• basic exponent free printable worksheets
• algebra ii depreciation
• free algebra 1 softward
• "Quadratic Programing" tutorial
• free worksheets factor trees
• downloads for TI-84 Plus
• Elementary algebra 4 th edition - teachers answer key by Tussy and Gustafsons
• ebooks + aptitude book
• "beginning and intermediate algebra" ebook
• algebrator functions
• calculate fast exponent code
• math help software
• free geometry problem solver online
• Math Trivias
• trig.pdf
• evaluating expressions answer
• real life problems on graphing linear equation
• linear equations including variables
• easy math word problems with solutions
• flash polynomial solver
• aptitude questions pdf
• algabra 1a worksheet
• quadratic form calculator
• how to solve for a variable using casio calculator
• pre algebra and graphing test
• algebra property calculator
• factoring quadratic calculator
• aptitude test papers with answers
• multiplication of two numbers with exponents
• multiplying by fraction roots
• foerster algebra ii book
• how to solve quotient logarithms
• ratio maths converter
• simplifying radicals with variables and exponents
• math expressions
• solving cubic equation in matlab
• third order polynomial calculator
• free Algebra solver
• convert fraction to a decimal
• solving algebra fraction
• free ged printable worksheets
• factoring help in math
• foil method cubed
• Aptitude questions.pdf
• intermedia algebra 2nd by bello
• fraction square root of 72
• free math problem solver
• free worksheets on factor tree
• Exponent-Exercises to solve
• Math Trivia Answer
• exponential expression
• holt learning algebra 1
• square root three times square root three simplify
• write 55% as a fraction
• aptitude ebooks download
• printable algebra worksheets
• advance maths revision notes
• 100 lineal metres conversion
• FACTOR aX +b Y
• help with statistic homework
• Free calculator to check Multiplication of real numbers in Algebra
• painless chemistry
• master product algebra
• adding and subtracting decimals project
• cpm algebra 2 book problems
• ti-89 foil
• world's hardest math problem
• algebra solver free
• the best algebra textbook
• math investigatory projectt
• factor equations online
• free online algebraic calculator
• cubed roots ppt
• downloads of the algebrator
• pre-Algebra worksheets
• ti-92 graphic calculator ebook
• multiple variable polynomials
• algebra sign number lesson plans
• square difference
• teachers answer sheet to intermediate algebra II
• square root of negative 7
• "free ebook download","The Basic Practice of Statistics"
• "power engineering practice exams"
• Linear equalities
• Instruction's solutions manual for trigonometry online
• how to calculate greatest common factor
• prentice hall mathematics algebra 2 answers
• online free worksheets algebra 1 review
• algrebra equations
• formula of the square root
• systems of equations matlab
• algegrator free
• download free aptitude questions
• exponents in fraction function simplify
• roots 6th order polynomial
• management aptitude test sample papers+free download
• linear algebra beginers
• calculating slope from rise & percent graph
• software help 6th grade math
• power point solve systems of equations
• Maths Trivia Kids
• sample problems in vector
• how to do algebra step by step
• difference between liner & non-liner differential equation
• alegbra 2
• solving equations with multiple variables
• pre algebra math games 4 6th graders
• free maths paper for primary
• teacher's edition beginning algebra prentice hall
• age mixture, investment and motion problem by using elimination method
• binomial function
• study intermediate algebra
• algebra 2 + grade ten
• solver download algebra calculator
• Algebra puzzles for 5th and 6th Graders
• 2 order differential equations
• Program to solve my properties of exponent
• absolute value solver online
• excel add in for system of non linear equations
• algebra teaching software
• formula of finding a parabola
• polynom solution radical galois pdf
• how do you solve a system using graphing calculator
• Pre-algebra worksheet Plotting ordered pairs
• 2-step algebraic equations
• SCX-4300 MFC 4300
• third grade work
• binary conversion for ti 84 plus
• advanced algebra printable
• Aptitude test papers free
• what's the easiest way to teach your child how to solve linear models
• vti calculator emulator roms
• algebra (cube of binomial)
• worksheets for recursive patterns, grade 6
• rules for adding multiplying dividing
• 1st grade math problems to do online
• cube root of y^15
• parabola in grade 10
• free algebra 1 books
• skills that children need for pre algebra
• Algebra 1 & Holt & Textbooks
• soft math
• 9th grade math study sheets
• integration by parts solver
• square root of 0.5 is greater than 0.5
• self help grade ten mathematics
• rationalize radical equations with denominator
• Linear combination method
• ti-84 algebra programs
• impact math course 2 cube roots activities
• go adding integers
• aptitude questions
• glencoe algebra 1 teacher's edition 1998
• algerba factoring
• how to solve radical problems
• factoring cubed functions
• example of least common mutiple
• grade 7 math revision sheet
• glencoe mathematics course 3 answers
• ratios in practical situations
• converting a radical to a decimal
• pearson prentice hall mathematics geometry solution key
• california mathematics homework workbook answers
• Algebra Math Prolbems of 8th graders
• teaching algebra solving equation
• Algebra homework sheets
• reducing fraction expressions
• complex numbers practice problems
• samples of problem solving involving three variables
• beginners algebra quizzes
• calculator that will factor a polynimial and find the gcf
• algebraic simplification exercises
• algebraic trivias
• Free Algebra Equation Solver
• Kids Printable Math Worksheets with pizzazz
• easy way to learn algebra
• sample questions on factoring - algebra
• application algebra
• mulitplying and dividing fractions teaching pages
• high school algebraic simplification exercises
• WORKSHEET ON ONE STEP EQUATIONS
• o level maths worksheets
• hardest maths puzzles
• Cat 6 test 5th grade english
• converting number to decimal 3 places
• graphing and adding and subtracting integers
• exponents worksheet with answers
• time history analysis+matlab m-file
• practice elementary algebra for free
• "Answers to McDougal Littell Worksheets"
• Free printable worksheets for geography ks3
• Interesting Math Trivia
• free 2nd grade online tutoring
• Algebra high school entrance exam
• how to find domain range of equation
• math trivia for high school
• how to simplify basic radical problems
• alg II worksheets
• simple ways to understand algebra
• math trivia
• square root property calculator
• SIX GRADE MATH - ROUNDING DECIMALS
• math problem solver free
• online year 9 math
• how to add and subtract negative and positive integers worksheets
• C++ code solving nonlinear equation systems
• free 7th grade worksheet maker
• the partial-sums addition method
• hyperbola solved problems
• matlab solving nonlinear equations
• solving distance simultaneous equations
• 6th grade free learning software
• factoring equations calculator
• Algebrator $35.00 download
• negatives maths worksheets
• how to use a ti 83 calulator for logs
• ti-84+ downloads
• written fourth grade math qames texas
• linear regression gnuplot
• advanced algebra rational expression
• subtract fractions with exponents
• quadratic simultaneous equations
• features of addition fractions
• solutions algebraic division problems including fractions radical signed form
• algebraic mixture problem
• graphic of ellipise and hyperbola in three dimention
• Chemistry grade 12 vancouver past papers
• homework help for college students in algebra
• "1st Grade Home Work Sheet"
• algebra readiness assessment worksheet
• multiple choice ordering integer worksheets
• problems in real life involving trigonometry
• solving word problems using linear equation
• how to simplify on the ti calculators
• operations with negative square roots
• free sample eog questions
• mixed numbers as decimal numbers
• linear algebra done right book download
• convert fraction decimal 1/26
• algebra test and exercise
• Factoring Quadratic Equations Practice
• how to calculate log base 2
• Prentice_hall Pre Algebra workbook
• square equation
• ks3 worksheets science
• free ratio and proportion worksheets
• Mcdougal littell middle school math course 3 answers workbook
• prime factored form
• ti-84 software downloads
• variable to the power of a fraction
• answers to algebra 1
• linear and nonlinear relationships on grid sixth grade lesson
• Algebrator
• Multiplying variable expressions
• online math help for dummies
• writing pre-algebra expressions
• free math exercises for 1st grade
• machine burden calc
• Basic Math & Pre-Algebra For Dummies Ebook download
• factoring a difference of two square
• free printable algebra pretest
• Online Accelerated Math Algebra 2 Solutions
• maths grade nine
• free pages 6th grade division, multiplication
• mcqaccounting books
• free download ebook Computer Programming Logic Using Flowcharts
• trivia in linear equation in one variable
• math trivia's
• commutative worksheet third grade
• algebra clep examples
• aptitude questions of C language
• cubic factoring calculator
• is 9th grade algebra hard in 8th grade
• square root of fractions
• basic college math & aleks pkg
• download kumon worksheets
• Algebra 2 images
• matlab nonlinear fitting program
• TI-84 How to graph in Standard Form
• maths exam questions for 11 years old
• free basic algebra substitution problems
• College Algebra etextbook(Beecher)
• College Algebra Worksheets
• sheets on solving quadratic equations
• free worksheets on compatible numbers and front-end-estimation
• rewrite mixed number as a percent
• grade 6 order of operation fun worksheets
• algebra 2 convert decimals
• discriminant calculator
• signed numbers including fractions worksheet
• java fractions
• ratio and proportion questions involving chemical mixtures
• math trivia with answers algebra
• how do you subtract uneven fractions
• binomial online calculator
• "algebraic poems"
• easy way to calculate celsius
• mcdougal algebra 2 teachers
• permutations and combinations worksheets
• aptitued question paper
• download line of algebra anton
• pre algebra 7th grade work sheet
• power of (algebra)
• application of algebra
• questions on algebra
• simplifying under square root
• multiplying and dividing integers worksheet
• algebra 2 notes mcdougal online book
• free software to solve algebra word problems
• 6th grade math puzzle
• polynomial in daily life
• rational numbers printable worksheets
• vector worksheets
• PROBLEM SOLVING OF gcf "Greatest Common Factor"
• free printable number value placement chart
• factor tree worksheets
• Printable brain teaser 6th grade
• 2 cubed sixth grade math answers
• relations algebra 1 worksheets
• tenth grade reading worksheets
• GMAT slope line y-intercept questions
• math pre algebra quiz games
• java program to find sum of a number series
• mix number into decimals calculator
• solve coupled differential equations with matlab
• o level physics solved worksheets
• mcdougall littell generator free
• Form 1 Maths test paper
• how to solve a fracion algebraic expression
• printable works
• online calculator for graphing quadratic f(x)
• online scientific calculator t1-83
• formula for getting the percentage of a number
• how to factor ax^3+b
• Advanced-Mathematical-Concepts torrent OR download "Precalculus with Applications "
• algebra 1 structure and method transform equations
• algebra meaning
• multiplying radical expressions calculator
• kumon answers
• importance of series in algebra to our life
• examples of math tricks and trivia
• basic rules of algebra tutorials
• check algebra homework
• saxon algebra 1 answers
• mathecians of rational numbers
• decimal worksheets for 6thgrader
• hard algebra quiz complex fraction, linear equation, factoring
• algebra 2 an incremental development third edition tutorial
• change mixed number to a percent
• ratios proportions inverse worksheets
• how to solve for variables with number(variable)exponent
• 8th grade algebra practice worksheets
• factoring algebretic equations
• paper math tests
• Solving for X addition problems
• polynomial two variable
• intelligence test sample activities for 3rd grade
• algebra connections volume one
• "PPT in List of Maths Formula"
• algebra fast answers
• algebric connection homework
• high common factors and low common multiples
• year 10 transforming equations worksheets
• year 11 general maths yearly exam paper
• algibra
• solver accounting uses
• mcdougal littell algebra 2 answer book
• Algebrator $35.00
• Free Algebra Practice Sheets variable
• square root simplification calculator
• Logic and Aptitude questions and answers
• polynomial long division
• chart on Greatest Common Factor
• negative fraction numbers least to greatest algebra 1
• free printable business worksheets for third graders
• trivias about function and relations
• calculator which can solve algebra solutions
• simplify by factoring
• common factor 76, 34
• learning algebra 1
• formulas de elipse
• free beginner algebra test samples college
• algebra readiness worksheets
• probability aptitude problems
• extracting the sqaure root
• one step at a time worksheet
• mcdougal littell geometry answer
• 5 math trivia
• Alberta exam bank username grade 11
• forms of linear function
• everything i need to know about algebra kid friendly
• lcm calculator and evaluate
• algebra quizes for 7th grade
• college algebra help
• Balancing Chemical Equations w/ Fractions
• college algebra 9th edition problems
• sample linear function word problems with answers
• algebra II square roots
• basic algebraic conventions
• fourier series on the ti-89
• first grade math homework sheets
• how to find the scale
• algebrator.com
• multiplying radical expressions
• learn mathmatics
• maths homework sheet fractions
• simplifying math expressions calculator
• LCM Answers
• special products math
• free beginners algebra test
• algebra 1 saxon even answers
• solving logarithm problems with a TI-83 calculator
• pre algebra simplify worksheets
• Adding and subtracting negatives rules
• college algebra problems
• free 3rd grade algebra worksheets
• translation of worded problem to algebraic equation games
• plural no.?
• t1-84 factor 9 memory error
• algebra 1 fomulas
• free maths books class 7
• order of operation quiz worksheet
• matlab solving ecuations
• simplify the negative under the radical
• calculator for simplifying radical expressions
• calculate square root polynomial
• aptitude books+download
• trigonometric answer
• softwarer for algebra
• solve algebra
• free download Daily Second Grade Math and Critical Thinking Practice
• math tutor in lancaster calif
• 6th grade math worksheets to print
• an free online tutor for the fourth grade
• 10th grade algebra printable worksheets
• simplifying equations by applying k-map
• clock algebra prolems
• how to solve equations for restrictions
• +alegbra 2n + 5 > 1
• multiplying fractions grade 12
• prentice hall algebra 2 with trigonometry workbook answers
• algebraic problem solver free software
• glenco 9th grade math
• math arrays printable worksheets
• Glencoe answers
• college elementary algebra worksheets
• physics mathematical equestions with answers
• math exercise for 5 years old
• algebraic expressions worksheets
• pre-algebra 7th grade worksheets
• simple equations of mathematics for seventh class
• free printable math worksheets for 7th graders
• convert an exponent to a fraction
• learn factoring algebraic formulas
• standard form equations calculator
• solving homogeneous differential equations
• Aptittude questions of C lang
• Geometry homework answers
• decimal to mixed number
• maths formula to find cube root
• 7 grade fractions
• Free Computer math games for 9th graders
• free 8th grade algebra 1 worksheets
• basic algerbra
• find the roots by factoring
• 6th grade fractions practice
• GED cheats
• how to write a decimal as a mixed fraction
• "photo of wolf's bone with 57 notches"
• how to solve a problem with x^3
• high school algebra solve the equation if possible
• kumon answer book download i
• how to solve an equation graphically
• "free ebook download","Practice of Statistics"
• grade 5 line graph worksheet
• www.mathmatics problems.com
• kumon sample papers
• houghton mifflin Math fourth grade Unit 1 Pretest
• math 6 program : free download
• apptitude question and answer for c
• basic concept of algebra
• graphic of inequalities ti-84 plus
• www.technics of factoring(algebra)
• Practice Adding and Subtracting Mixed Fractions
• general maths tests free
• 6 grade convert fractions to dec
• quad formula program ti-84 download
• equation solve for algebra software
• quadratic equation Visual Basic code in graph
• adding integers practice test
• practice problems on radical algebraic expressions
• solving linear equasion models algebra 2
• 5th grade math-factors
• cubed root 2/5
• decimals to mixed fractions
• some of the properties associated with the solution could not be read slow
• calculation process of square root
• exponents and square roots
• math trivia with answers mathematics
• H.S. Algebra I Pre-Tests
• sum of numbers java
• sample of age word problem with fraction
• Free Algebra Symbols
• Fluid Mechanic 6th solution
• how to change from standard form to vertex form
• mathematical aptitude questions
• difference between permutation and combination
• roots and radical expressions explained
• simplify square root expressions
• order of operations worksheets algebra 2
• 3rd grade homework printable
• Adding, Subtracting, Multiplying, and division worksheets
• multiply cube root
• algebra 2 mcdougal littell answers
• online examination on software testing
• Dividing monomials automatic solver
• two kinds of algebraic expression
• solve under root equation of maths
• PRINTABLE MATH SHEETS FOR SITH GRADE
• chapter one problems prentice hall mathmatics textbook
• fractions subtracting formula
• rationalizing complex numbers higher order
• maths printable worksheets on division grade 3
• +"test of Genius" +"creative publications" +pizzazz
• p c s pre examination general science papers
• what are mixing numbers
• solving order of operations for Java
• graph autonomous equations TI 89
• ELEMENTARY ALGEBRA TEST WITH STEP BY STEP
• crossword puzzles math for 5th thru 7th grade
• Free Trinomial Factoring Program
• accounting standards book+free download
• mcdougal littell world history reading study guide answers
• middle school cheat sheet fractions
• Pre algebra and Introductory Algebra by bittinger 2nd edition
• worksheet US geography for 6th graders
• what is it if I move the decimal point three spaces to the left
• compatible numbers worksheet
• How is doing operations (adding, subtracting, multiplying, and dividing) with rational expressions similar to or different from doing operations with fractions?
• "online "foil calculator"
• using matlab to fit multiple equations
• solve a nonlinear ordinary differential equations
• Free College Algebra Calculator
• algebra connections volume one answers
• saxton math algebra 2
• answers to the holt physics study guide
• "root of matrice" matlab code
• Algebra 1 Prentice Hall
• "coordinate plane" AND "printable"
• math investigatory
• online math practice for ninth grade probability and odds
• tutorial and solution boolean algebra simplify
• sample lesson plan on multiplying rational expressions
• percentage problem solving games yr10
• finding the "slope of a quadratic equation"
• algebra beginners exercises
• calculator emulator for TI-84
• pre algebra operation with fraction problems
• Free Aptitude Test Tutorials
• free algebra problem solver calculator
• what is the decimal equivalent to radical 6
• liear equation
• how to find the inverse of a quadratic equation in standard form
• free online TI 83 calculator
• really hard grade 6 maths questions
• free online fraction calculator
• easier ways to solve fraction problems
• Algebra II Answers
• exponents hands-on
• How is doing operations (adding, subtracting, multiplying, and
• how do u divide an integer
• permutations+quiz
• Elementary Intermediate Algebra College Students
• free printable the coordinate plane review study guides in pdf format
• Adding and Subtracting Integer Problems
• beginner algebra for kids
• Help with solving seventh grade math problems
• calculate the scale factor
• saxon algebra 2 problem set 1 answers with work
• calc gcd
• factoring trinomials diamond
• Example Of Math Trivia Questions
• least to greatest
• how to calculate rational equations
• mathematical investigatory project
• mathematics trivia
• ellipse in graphing calculator
• maths exam paper grade 11
• simplify the radical solver
• what is formula to divide decimals?
• factor trinomials, online calculator
• lowest common denominator chart
• How To Simplify Expressions
• solve multiple equations with multiple variables
• basic algebra question
• mix fractions
• use proportions to solve percent problems power points
• Free sat maths test papers for 10 year olds
• teaching yourself algebra
• to find polynomial matrix any equation using javascript program example code
• aptitude questions on linear and quadratic equations
• multiplication of rational expressions
• Free problems for pre-algebra for 6 grade
• Prime number math exercises
• reducing fractions variables exponents
• printable college algebra sheets
• where is the left slanting slash on a TI-89 Calculator?
• algebra greatest least common multiple
• graders mathematical trivia
• prime fraction of 108
• Advanced Algebra terms
• find system of equation using the graph method
• dividing fraction caculator
• simplifying radical expressions with fractions
• free Ca. math assessment for primary grades
• powerpoint lesson plans using algebra in chemistry
• simplified radical form calculator
• root formula
• math differential-table integral
• linear equations solving algorithim
• Online Equation Solver
• dividing integers worksheet
• Prentice Hall Mathematics Pre-Algebra
• equation factorer
• free worksheet and fraction and addition and different denominators
• multiplying percents
• strategies for problem solving workbook teacher edition
• math worksheets to print for third graders
• solving linear equalities
• free worksheets for beginning algebra
• riddle answers to pre-algebra with pizzazz
• square root equations calculator
• FACTOR CUBED POLYNOMIALS
• emulator TI 84 Plus
• ohio grade 10 math papers
• lesson plan for 7th grade math, muliplying fractions
• adding subtracting multiplying dividing fractions work sheets
• advance math, mcgraw hill 6th grade
• subtracting algebraic equations
• sample algebra questions
• algebra 2 Online
• mathamatics grade 10 model papers
• matlab second order ode solver matrix
• adding like terms worksheet
• ias exam pattern sample paper
• objective multiple type maths problems free download
• solving cubed equations
• calculating algebra exponents
• Worksheet for Translating Algebraic Expression
• algebra 1 manipulatives
• free worksheets for ninth grade
• hardest algebra question
• How Do U Divide Decimals
• canadian grade 9 math sheets
• chemical equation of salt and sugar
• algebra 2 solver free
• radical expression terms
• simple way to add/subtract/multiply/divide fractions
• factoring trinomial calculator
• Square Root of 3
• free online algebra equation solver
• square root formulas for grade 7
• problems and solution of homogeneous equation
• algebra with pizzazz! worksheet 212
• concepts of Algebra 2
• adding,subtracting, dividing and multiplying decimals
• solve college algebra problems
• differential+pair+calculator+download
• solving incomplete quadratic equations
• what's a good calculator for beginning algebra
• printable GED math questions
• sample of beginner algebra
• simplify the expression calculator
• Math ppt class rules for highschool
• simplifying square root equations
• free worksheet on subtraction of integers
• Casio+Emulator+crack
• example of quadratic worded problem
• square root algebra
• Factoring Trinomials Amazing Method
• work sheets algebraic expressions with one independent variable
• general aptitude questions
• non homogeneous second order differential equation
• how to solve mathematical word problems freeware
• simplify 3 square root 56
• algebra 1 2 step equations
• first order differential equations using MATLAB
• c language aptitude questions
• 3rd grade math online worksheet adding subtracting
• online ninth grade sat practice
• how to do radicals on TI-83
• Free Multivariable Graphing Software
• third order equation coefficient
• free 9th grade algebra
• unit conversion q/a of gr-7 and 8 physics
• square root x y
• free answers for Algebra
• 2nd order homogeneous constant
• Examples of solving Partial differential equations with delta function
• conceptual physics workbook answers
• Free 9th grade math worksheets
• very hard math equations
• simplifying math expressions with grouping symbols
• fraction computation worksheets
• online polynomial calculator
• algebra worksheets for beginners
• SATS practice papers to print off
• cost accounting sheet
• free word problem solver
• glencoe testenglish
• how to calculate log base 2
• verbal problems about coins including the solution(linear equation)
• pre algebra worksheets
• trigonometric special values chart
• free maths paper online sec 2 singapore
• college math for dummies
• algebra trigonometry mymath blitzer 3rd
• Convert and Simplify Expression
• Dividing monomials problem solver
• pre-algebra pretest
• online graphing calculator for college
• multiple fraction calculator
• pre algebra barcharts free downloads
• algebra pie
• algebra equation finder
• how to find the vertex of an absolute value function
• free algebra solver and shows how
• ratio and proportion equation worksheets
• cost accounting book for free
• free printable pre algebra tests
• c aptitude questions
• Fraction practice for 7th Grader
• algrabrahelp
• free divisibility worksheets
• free holt answers
• algebra 1 + SOL + puzzle
• adding and subtracting integers worksheet
• problems solutions abstract algebra
• free 9th grade algebra problems
• simplify the quotient calculator
• measurement problems for 7th grade math taks
• free-online accounting books
• absolute value of 10
• exponents lesson junior high
• Glencoe Biology 9th grade answers
• pre algebra in 7th grade in ga
• adding subtracting multiplying and dividing integers
• what are the math investigatory problem
• 6TH GRADE MATH
• adding and subtracting integers test
• worksheets on equations for primary level
• permutation and combination.pdf
• factorising expressions x2-8x+1 gcse
• "flash calculation" maple
• how to solve comparison method
• 10th trigonometry formulas
• Fraction equation
• Basic formulas in Algebra 7th class
• dummit foote
• algebra trivia questions
• simplifying fractions radicals
• 7th grade algebra exercises
• quadratics games
• ti 84 plus games downloads
• mathematician trivias
• albert calculator how to calculate log2
• decimal ascending
• equations 5th grade scale
• Simplifying Variables with a number
• 11plus exercises
• write in simplified radical form solver
• factor algebra problems free show work
• free ged preparation worksheets for ohio
• free ks2 worksheets
• practice different kind of graphs for 5th graders
• fun games for ti 84
• year 9 solving equations worksheets
• factoring two-variable equations
• 5th or 6th grade math test online help
• math worksheet add subtract positive negative
• how to evaluate exponential square roots
• easily solving algebraic equations
• Ratio and proportion, Indices, Logarithms
• fractional order nonlinear matlab
• Convert Decimal Fraction
• ninth grade math drills
• permutation combination courses
• grade 8 math algebra sample problems
• algebra equations ks2
• solving radicals with variables and exponents
• how to factor cubed root expressions
• algebra functions simple easy step begin tutorial example
• solve work problem algebra
• maths grade8 solving equations worksheets
• 6th grade daily math problems
• maths worksheets 11+
• negative number raised to a decimal matlab
• freeware of learning math for kids
• Learning Pre Algebra
• hrw practice reflections worksheets
• 3 simultaneous equation solver
• non downloadable algebra 1 problem solver
• foil worksheets
• algebra trivia
• math trivia that important
• college preparatory math program worksheets
• Conic solver
• advanced algebra worksheets
• ti-83 linear expressions
• algebra 1 holt
Search Engine visitors found our website yesterday by using these algebra terms:
│Glencoe Practice Workbook Answers │solving simple equations worksheets free │physics formula parabola │
│algebra question sheet │casio algebra fx 2.0+ partial derivatives │free algebra tests online │
│solving quadratic equations in 1st variable │pythagoras calculator │free printable algebra graph paper │
│free adding and subtracting multiple integers worksheet │Fractional Exponents │multiplying expressions calculator │
│holt algebra │polynomial 4th Grades │the steps for slope in algebra 1 │
│algebra solver show work free │algebra equations.com │free college algebra math lessons │
│class 7th maths sample paper │example of trivia question in math │free 8th grade work sheets │
│algebra a combined approach 3rd edition │algebra pre-tests │write mixed fraction as decimal │
│free year 5 decimal help │prentice hall mathematics pre-algebra set one problems │lcm conversion chart\ │
│solving trinomial │math trivia with answers for kids │Linear Algebra Done Right │
│Difference in computing discount multiplying or dividing │Connected mathematics project investigations in subtracting fractions │subtracting square root of unknowns │
│simpligy negative fraction with 0 exponent │I need a answer key for Glencoe Accounting Fourth Edition │mcdougal littell pre algebra practice workbook │
│free simple lessons on pre-algebra │free rational expression calculator fractions │"manually program" programs TI 84 │
│LEAST COMMON DENOMINATOR CALCULATOR │write the exponential expression as a product without exponents (-a)5 │java code for converting negative number from hex to decimal │
│6th grade math tutoring online │Reduce expressions to lowest terms calculator │factor activities for 6th graders │
│binary addition and subtracting multiplication for dummies │Binomial theorem + GCSE │simplifying systems of inequalities │
│online calculator for adding and subtracting degrees, minutes, seconds │solve for a variable matlab │solve for x calculator │
│powerpoint lesson on proability │Trigonometry poems │algebra II online problem solver │
│prentice hall mathematics algebra 2 free answers │trig answers │free polynomial divider │
│sample aptitude tests of NCA │elementary algebra tutor │Free Pictographs For Math │
│free solve simultaneous algebraic equations online │recursive formula factorial matlab code │Maths Algebra "Problem solving questions" -"Algebra lessons" │
│Rules for dividing and multiplying fractions │1st grade printable homework │online math solvers │
│holt chem file: problem-solving workbook answers │java arrays fraction program │c aptitude questions │
│pearson Prentice hall Mathematics, pre algebra, interact question │self help pre algebra │pearson Prentice hall, pre algebra interact question │
│online chemistry test for 9th standard │practice tests and 8th grade and printable and math and pretest and pre-algebra │spray on bed liner │
│level 3 physics formula sheet │grade 3 - numeracy worksheets │examples of work problem in college algebra │
│what algebraic expression makes up a difference? │lowest common multiple calculator │ti84.rom │
│Convert a Fraction to a Decimal Point │polynomial square root problems │refresh in algebra online for free │
│solving sequence logic pictures │how to calculate a cube root on a TI-89 Titanium Calculator │algebra software gives answers │
│application lines equation and line qualities in everyday life fractions │free math worksheets for grade three on algebra geometric patterns │TI-83 plus calculator sin │
│algebra solver │least common multiple word problems │subtracting integers free worksheets │
│square roots classroom activities │how to solve quadratics through square roots │algebra solving with degree calculator │
│Algebra and Trig factor out the expression │real life algebraic equation │online graphing calculator ti 89 │
│program algebra 8th grade high school usa │decimal worksheets for 6th grader │bourbaki ebook │
│coordinate plane worksheets │exponents of roots │math exercises equation square │
│Adding and Subtracting Rational Expressions calculator │GRAPHING 8TH GRADE STUDY GUIDE │ti-92 manual ebook │
│hard system of equations │algebra online solver with explanation │maths work sheets for 6 years old │
│equations │writing linear equations │download free aptitude test to print │
│rational exponents worksheet │simplifying mathematical product exponential │classes of algebra 2 │
│solving systems of linear equations with 3 variables │holt algebra 1 │least common multiple of the two expressions │
│free online math worksheets dilations │calculate p value │9th grade quiz │
│evaluate square roots with variable │convolution solve ti 89 │rational expression calculator │
│what are the three types of integers negative positive and │differential equations with inequality constraints matlab │add subtract scientific notation │
│model test paper for architechture apptitude test │general nonlinear regression newton matlab │squaring quadratics │
│how to convert decimals into degree formula │quadratic equation in MATLAB │simplifying cubes │
│prentice hall prealgebra problems section 1-4 │basic polynomial worksheet │how do you multiply negative exponents on calculator │
│ │question bank in college math │mcdougal littell ALGEBRA 2 answers EVEN │
│easy way to learn equations │free exam papers for sec 1 │how do you type variables into a graphing calculator │
│solve the PDE, heat equation │java limit decimal numbers of calculation │free online mathematics trivia 3rd grade │
│fractions for dummies │free pre algebra worksheets equations │Factorised quadratic equations worksheet │
│free home work for year1 │MATHS cheats │fractions worksheets for 8th grade │
│explaining Algebra worksheets │free math problems for 6th graders │how to simplify multiple radical expressions │
│square root to decimal │Factorization of algebraic equations rules │how do you calculate linear feet? │
│adding mixed fractions │4. degree +solver │history of quadratic equation │
│polynominal │Solving inequality problems-word problems │Simplifying a higher radical: Problem type 2 │
│LCM calculator for 3 │math tricks evaluating expressions │free accounting books pdf │
│factor tree sheets │free algebra 1 answer │Texas instrument calculator how to change number from decimal to degrees │
│mathematics answers to 9th grade algebra 1 │convert lineal metres to metres │how to cheat on solver │
│how to do fractions in an equation │9th grade worksheets │6th grade math printouts │
│solved word problems of linear equation with solution │cube root with negative power │methods of getting least common multiple │
│algebra tutor │how to do a third radical on a clacualtor │solving linear programming equations online │
│algebra beginner │slope calculator in visual basic │role of quadratic equation in factorization │
│algebra 1 help ( simplify absolute value decimals │solving radical in simplest form │menstral calculator │
│holt algebra l 2007 used │A Level Mathematics free downloadable worksheets │pre-algebra-worksheets │
│Glencoe Algebra 1 practice workbook │intermediate algebra cheats │ellipses slover │
│5th grade final math and english practice test │graphing hyperbola Calculator │plato interactive mathematics elementary algebra cheat sheet │
│simplifying compound inequalities │grouping calculator │worksheet solving equations │
│free algebra word problem solvers │algebra 2 step equations with fractions │simplyfy math expression.com │
│graphing "composite function" ti-84 │tecniques of turning age problems into symbols algebra │divide variables with addition │
│algebra pdf │Solved Algebra sums │"standard form" + "6th grade" + ppt │
│free indian apptitude tricks │rules of square roots and exponents │how to use excel to solve algebraic equations │
│trick math problem about 17 hourses │solving equations with fractions in the denominator and a variable │simplifying numerical equations integers │
│worksheets on solving equation with one unknown │Polynomial solve │physics worksheet answers cheats │
│dividing fractions with variables calculator │sample word problems test prep sixth grade math │square root of difference of squares │
│glencoe mcgraw hill math workbook answers │Excelmath answer book │programming "fractional exponent" │
│examples of math trivias │printable precal cheat sheet │residual for linear steps on T1-83 calculator │
│solver excel │Beginners 7 grade Math │Solve a polynomial using MATLAB │
│tic tac toe in math worksheet │2 step equations Algebra │used high school textbook store in san antonio, texas │
│algebra 2 by mcdougall littell online edition │puzzles with solution in algebra │factoring the lowest common denominator │
│free maths homework sheets │parabola calculator │First Grade Math Sheets │
│simplifying a fraction │grade schools math trivia addition │linear equation with fractional exponent │
│PHYSICS EQUATION LIST FORMULAS │Algebra Poems │calculator multiply simplify │
│trigonometric chart │"printable sample" "number line" showing integers 1 0 │how to put equations in a ti-83 calculator │
│basic aptitude math questions │two step division worksheets │ti-84 algebra programs downloads │
│boolean algebra simplification software │calculator for algebra give answers and work out │Binary Numbers CONVERSION FOR TI 84 │
│where can i find a place to answer my math homework problems │how do u insert labels into Graphics Calculator │math sheets third grade │
│agebra worksheet .com │solving equations with radicals cubed │basic college algebra + expansion worksheet │
│math powerpoint 6th grade │Least common factor of 18 and 17 │rate of change formula │
│What Is Simplified Radical Form │calculator value for rational expression │maths equation to convert currency │
│graphing logs on ti-83 │8th grade word problems worksheet │Beginner Algebra │
│softmath │solvers for algebra two │mathematica integration intermediate step │
│3 steps in balincing equations │java with programs online examination │TI-89 Worksheets │
│algebra 2 problems and answers │INTEGER ONLINE WORKSHEET │college algebra tutor │
│basic algebra graph │factoring math problems with answers │free download aptitude questions │
│combining algebraic terms worksheet │Distributive property college │free solving nonlinear algebraic equation with VBA" │
│free rational expression solver │college precalculus algebra online help │algebra formula using pi │
│non homogeneous 2nd order linear differential equations complementary functions │dimensional analysis worksheets middle school │worksheet math print inverse function │
│Algebraic simplification examples least common denominators │free pre-algebra test │+pre-algebra +"free worksheets" +"bar graph" │
│math trivia trigonometry │hard algebra equations │Examples of Math Trivia │
│free vector problems and solutions │matlab help solve bisection method │Algebra 1 worksheets and answers │
│Math for Dummies online │printable physics formulas │solving quadratic equations graphically │
│solve the following by factoring and making appropriate signs charts │second order linear equation in two variables, chain rule │what does multiplication have in common with division? │
│free ti 83 emulator │sample of mixture problem │matlab square root waste │
│equations with variable in the denominator │find the decimal form of numbers on your calculator │mcgraw-hill lesson plans first grade science │
│fourth square root of 16 │free homework sheets │online aptitude question │
│free books for cost accounting │how do you add subtract and divide negative and positive fractions mixed numbers and decimals │where to buy skill tests for glencoe mathematics course 3 │
│aleks algebra 1 │factoring cubed │Y6 online maths print out problems │
│Tennessee Prentice Hall Mathematics Algebra 1 answers │third grade math sheets │6th grade math practice sheet │
│look solution for trinomial equations │free algebra test maker │square root on ti 89 │
│lattice method in college algebra │long algebra problems │free answers to elementary algebra questions │
│line graph worksheets for 5th graders │algebra software │javascript calculate Fraction │
│greens function first order equation │algebra worksheet for beginners │what is the definition of a rectangle for first grade │
│Aptitude question and answer papers │algebra equations grouping │sign of the square root in the calculator │
│ordering decimals from least to greatest │indiana algebra 1 pg 10 answers │ti 84 games download │
│equivalent algebraic expressions worksheets │Combining like Terms Worksheet │calculator for Algebra │
│algebra trivias │how to graph basic algebra │square convert linear calculator │
Search Engine users found our website yesterday by entering these algebra terms:
• solving algebra equations for 6th grade
• fast subtracting technique
• powerpoint using algebra in chemistry
• do hard equations
• graphing parabolas online
• download algebrasolver
• Solve Algebra Expression
• free math worksheets
• solving second order differential equations with ode45
• 5 problems in real life that can be solve by trigonometry
• MATH TUTER.COM FOR THE GED
• free worksheets greatest common factors
• free exponent worksheet
• Free printable gcse maths paper
• algebra quiz
• Free Advanced Algebra Help
• writing a program in TI 83 emulator
• how to solve fractions
• FREE ONLINE WORK FOR NEW SIXTH GRADERS
• online logarithmic calculator
• Glencoe McGraw Hill 6th grade math
• kumon answer sheets
• Can I solve cat sample papers through online
• using special numbers such as square roots like variables
• x^3+63x=316
• solve elimination method calculator
• Algebra 1 + pretest
• 2 variables in a quadratic formula
• free download aptitude questions and answers
• calculus
• algebraic expressions exercises with answers
• PRE-ALGEBRA WITH PIZZAZZ
• calculate largest common factor
• algebra resolve equation exponent
• Prentice hall Intermediate algebra for college students tutorial
• math reviews for 9th graders
• online website for solving parabolas
• glencoe guys
• story problems for adding and subtracting positive and negative numbers
• factoring sum of two cubes
• interactive game adding positive and negative rational numbers
• glencoe algebra I pretest
• Explain Algebra
• balancing equations online
• third roots calculator
• algebra programs
• mathematical equations
• free math printable worksheets primary 5
• least common denominator worksheet
• how to solve a slope-intercept form of linear equation word problem
• how do i convert exponential value to non exponential
• trigonometry chart
• variables as exponents
• graphing inequalities
• "Line symmetry worksheets"
• How to calculate log base 2
• help finding answer to a division problems
• Percent of a number worksheet
• ged math practice fractions free printable
• classroom activity of subtracting integers
• solving algebraic problems
• free glencoe math workbook answers
• college algebra dugopolski teachers editions
• free algebra training
• Saxon Math Intermediate 4
• free algebra 2 online classes
• reducing fractions online 4th grade
• free online ti 86 calculator
• decimals to mixed numbers worksheets worksheets
• algebra worksheets lines graphing writing equations
• aptitude test sample grade 10 math
• Aptitude questions pdf
• adding negative fractions
• 10th grade worksheets
• physics equation sheet
• permutations grade 7
• free pre-algebra study guide
• solve roots of equations using calculators
• beginners algebra quiz
• decimal fraction in simplest form
• what is function notation equation
• free math games to print
• "Line symmetry lesson plans"
• slope intercept form on the ti-84 plus
• code eight queens solution in C#
• Worksheet in Addition Property of Equality
• square root property
• what is the difference between pre algrebra and 7th grade math
• multiplying and dividing whole numbers worksheets
• Evaluate the expression x4- 5x2+ 6x + 6 for x = 1
• how to factor a trinomial on your ti-89 calculator
• quadratic binomial
• t.i. real phone number
• the steps on how to DO LCM problems
• Free Algebra Problem Solver
• questions and answers the hardest math problem
• Free Aptitude Questions & Answers
• mathematical trivia (elementary)
• free algebra tests and quizzes
• algebra II homework help free
• free algebra tutoring
• trigonometry 8th edition teachers edition pearson
• tensolve f90
• matlab quadratic
• solving simple equations worksheets
• sample aptitude question paper
• algebra problems with answers
• complex numbers tutorial 9th grade
• linear equation powerpoints
• practicing dividing with decimals
• ti 84 emulator
• learning algebra
• trivias about math
• free 8th grade algebra tests
• math games for 6th grade for free with no downloading
• interactive algebra worksheets
• algebra 1 pretest
• free pdf books of accounting
• integer worksheet
• algebrasolver free software
• solutions for maths problems of class 9th
• system of linear equation - test with answer
• ratio formula
• Algebra II Combining like terms
• example program ti-84 plus
• adding integers art projects
• ti-83 degree button
• sample problem with answer about vectors
• negative integers worksheet
• quadratic word problems with solutions
• middle school math with pizzazz book b
• free accounting books
• matlab non-linear equation solver
• decimal fraction percentages free worksheets year 4
• square root calculator step by step
• SAT test practice of ninth graders
• algebra print offs
• Geometry Mcdougal littell 2007
• free online parabola equation solver
• 7th grade math word problems worksheets
• solving distributive property
• Step by Step introductions to pre algebra for beginners
• factoring trinomials calculator
• fraction negative denominator
• algebra problems online
• division of radicals with different indices CALCULATOR
• sample pre algebra final exam
• algebra quizzes
• free ti-84 emulator
• elementary algebra review tutorial
• cube root computer
• math with pizazz
• glencoe algebra 2 solutions manual
• solving 2 unknowns simultaneous equation solver
• how to use a calculator to solve trigonometry
• algebra function problems with solutions
• specific solution of second order differential equations
• how to use a graphing calculator to find slope
• holt algebra 1 section 2A answers
• algebra worksheet beginners
• "kumon answers"
• answers to 9th grade math homework indiana
• show me some algebra worksheets
• sample problems in trigonometry
• gcf polynomial calculator
• 10 algabra
• equations with square root
• questions and answers the hardest math problem algebra
• kids math trivia
• ti rom
• 6th grade Math aptitude multiple choice
• MATH PROBLEMS
• tests on algebra
• math standard 7 test papers from India
• mathematical trivia for elementary
• free math apptitude test
• decimals calculator
• online radical simplifier
• rules for adding subtracting integers
• graph quadratic equation
• online math practice for ninth grade algebra and odds
• free books in cost accounting
• McDougal Littell Algebra 2 Online answer key
• algebra 1
• squaring mix fractions
• Powerpoint Simplify Radical Expressions
• difference of square
• solve algebra equations
• define numerical question 6th grade math
• Oklahoma Algebra 1 textbook online prentice hall
• order of operations solver
• simplifying expressions with integers
• online math problem solver
• free calculator rational expression
• free adding and subtracting integers worksheet
• Printable Math Sheets for High Schoolers
• rules for adding, subtracting,multiplying.dividing negative numbers
• printable math worksheets for 7th graders
• hardest maths equations
• easy orders of operation equation solver
• pythagoras solver
• Graphing online
• percentage equations
• Writing Algebraic Expressions Power points
• algebraic simplification powerpoint templates
• Mcdougal littell modern world history test items
• reviewing adding, subtracting, multiplying, and dividing decimals
• multiplying and dividing positive and negative integers online calculator
• Math Helper-Algebra
• college math symmetry and circles pdf
• Free Algebra Problem Solvers
• prentice hall algebra 1 online review
• sample advanced algebra problems
• free download aptitude test
• 3rd square root
• How to Calculate a Scale Factor
• Elementary Algebra for dummies
• structure and method book 2 answers
• grade 5 math multiples and factors worksheet
• completing the square practice questions
• instructions for finding slope on TI 83
• graphing worksheet free grade 2
• 4th grade worksheet in area of a house
• mathematical trivia
• scale factor year 9
• aptitude test question with anwers
• kumon answers
• solve equation with square and variable
• high school formula charts
• Trinomial Factoring solver
• educo math free fraction
• exponent games online
• online TI calculators
• plot quadrics online
• how to graph a slope ti-83 plus
• simplifying radicals maple
• secant of hyperbola
• prentice hall high school mathematics
• solve cubed equations
• free online aptitude sample papers
• how to solve simultaneous equations using multiplication
• multiple solve equation excel
• Algebra 2 Practice workbook answers
• Rational Expressions Online Solver
• solving equations by multiplying decimals
• free 4th grade algebra worksheets
• printable math combination
• Algebra Artin
• free downloadable aptitude test
• multiplying and dividing by factor of 10
• Math homework problem sheets
• free TI emulators
• glencoe grade 6 & 7 math algebra tests,quizzes and practice sheets
• probability worksheets for 3rd grade
• free online t-83 calculator
• free online negative dividing calculator
• workbook "pre algebra" pdf
• ti 86 linear approximations
• grade 10 parabola worksheets
• Solving Trinomials
• FREE WORKSHEET ON AREA OF SQUARE
• worksheets positive and negative numbers
• everyday mathematics vocab
• special products and factoring
• dividing matrices on TI-83 plus
• mathematical programming winston ebooks
• math formula list
• maths sheets for 8-9 year olds
• difference quotient logarithm
• pie in maths & its evolution
• answer my math problems they are algebra combine like terms
• easiest rules for adding and subtracting integers
• holt biology dynamics of life chapter quiz
• free algebra lesson for beginners
• use casio calculator online
• simultaneous equation calculator
• free online maths book for sixth standard
• college introduction to algebra
• "Integration by Substitution" steps simple
• Free Math Worksheets Printouts
• polynomial factor calculator
• ti89 negative numbers
• graphing system of equations with fractions
• How do you find the domain of radical expressions
• converting decimals to fractions calculator
• algebra holt book answers
• mixed numbers as decimals
• converting from decimal to fraction on calculator radicals
• math worksheets adding subtracting negative
• solving basic equation worksheets
• algebra maths machine formulas
• complex linear system solver ti 89
• integer equations subtract add
• www.varables in math.com
• Conversão de Base.ppt
• "activities for pre algebra"
• free downloads of aptitude test question banks
• parabolas kids
• Saxon Algebra 1 answers
• multiplying square roots with exponents
• online math solution finder
• polynomial source code for ti 83
• teach me algebra
• mixed numbers decimal conversion practice
• changing fractions to higher terms worksheet
• cramers rule three variable online script
• linear and quadradic equation math ebooks
• free glencoe algebra one answers
• trigonometric identity solver
• formulas for percentage
• Answers fast for Algebra 2
• simultaneous Nonlinear equations two variables
• square roots
• how to do algebra
• fourth order runge kutta method matlab ode45 second order function
• maths online workbook grade 2 free
• formula in solving patterns pre algebra
• permutations notes
• cube root calculator<ti-83
• identify integer values worksheets
• "permutation and combination"
• math prealgebra projects samples
• algebra with lcd
• free algebra solver software
• "College Physics 8th edition"
• "math solver + ti-83"
• how to solve radicals expressions
• examples of lagrange multipliers with fractions
• 7th grade lcm printable quiz
• free ks2 sats Papers
• math worksheets for pre-algebra for 6th grade
• MATHS WORKSHEETS FOR 8 YEAR OLDS
• factor polynomials online
• quadratic root calculator
• online prealgebra calculator
• calculator for solving systems by the method of Substitution
• circle graph worksheets
• 3rd level quadratic formula
• answers to problems in Advanced Mathematical Concepts Merrill
• algebra 1 worksheet and answers
• multiplying roots calculator
• gcse math worksheets algebra
• different log bases on TI
• formula sheet for CAT exam
• substitution equations math homework
• 6th grade math venn diagram worksheet
• algebra homework help
• solving systems of equations using ti-83
• "Standard Form" "Vertex Form" (b/2)
• finding variables worksheets
• "Solving Equations" + Addition and Subtraction + Practice Sheets
• printable lcd worksheets
• prentice hall mathematics algebra 1 examples
• Merrill Glencoe Alg. 2
• ti-84 applications
• free printable worksheets 9th grade
• "algebra chapter 6"+practice+prentice-hall+algebra 1
• How to turn minutes into fractions
• How do you solve step by step mixed types for prentice hall algebra 1 textbook?
• prealgebra definitions and meaning
• radical square roots
• algebra equations and answers
• Mathematics-working out percentages
• step by step instructions on multiplying monomials
• math answers to homework
• Using TI-84 plus scientific calculator square root
• matlab square(t)
• free worksheets graphing coordinate points
• trig addition solver cos
• trinomial calculator
• McDougal, Littell, online book english vocab
• piecewise function on ti-interactive
• printable TI-83 - logarithims
• online factor equation
• SOLVING INEQUALITIES, TI89
• math problem solver algebra two
• online balance equations
• free downloadable algebra 1books
• real life slope applications
• TI-84 Plus emulator
• ti 89 multiple equation solver
• graphing calculator t183 online
• glencoe algebra 2 answers
• completing a square using quadratic calculator
• how to answer math problems.com
• Math Cheats
• solving quadratic application problem using graphing calculator
• maths year 7 "binomial expansion"
• lattice multiplication sheets
• california algebra 1 concepts and skills cheat
• java divisible
• using the TI 89
• math elimination problem solver
• modern algebra+help
• seventh grade transition mathematics answers to explorations problems
• solving second order nonlinear equations
• simplify rational expressions with polynomials
• unit 11 spelling worksheet for 6th grade
• when finding the slope is impossible
• polynomials worksheet 9th grade
• solving basic equations and variables calculators
• coordinate plane printable worksheets
• College Algebra software
• ratio in mathematics powerpoint presentation for elementary
• fraction problems worksheet for children
• math homework example least common multiple monomials
• how to solve algebraic comparative statistics
• impact mathematics online textbook pre algebra
• Pre-Calc problem solver
• aptitude question with answer
• solving equations worksheet fun riddle
• what is the difference between independent and dependant variables in an algebraic equations
• permutation combination worksheet
• mathematical aptitude sample tests
• simplify rational expression calculator
• free help with quadratic equations
• maths worksheets online for grade fourth factors
• online prentice hall math textbook answers
• multiple derivative calculator
• math flowchart work backwards worksheet
• liner programing for idiots
• finding slope of function matlab
• mixture problems worksheets
• t-89 texas instruments logarithms
• examples of compound and simple machines for 9th grade science projects
• convert mixed numbers to decimal
• merrill algebra one solutions
• presentations on maths ppts on quadrilaterals for class 9th
• basic +algebracic expressions
• converting mixed numbers to decimals
• least to greatest fractions
• How To Program A Graphing Calculator step by step
• partial fraction calculation software
• 8th and 9th grade printable worksheets
• how to find the cube root on TI 89 calculator
• algebra 2 calculators
• adding, subtracting, multiplying, dividing mixed numbers and fractions
• AJmain
• free math work games sheets
• 3rd grade math trivia
• permutations and combinations ppt
• simplifying radicals in ti-84
• solution nonlinear differential equations
• simplifying square roots answers
• complex hyperbola problem
• two step algebra equations worksheet
• rational algebraic expression calculator
• free MATH ppt for KS3
• algrebra learning
• kumon examples
• free chrisrtmas division worksheet
• solving radicals
• "non-algebraic variable in expression"
• math trivia for elementary
• online KS2 SATS papers
• KS3 basic algebra
• Free Online algebra Calculators
• algebra 1 answers holt
• solve algebra equations
• Using a ti-89 to Find a Cube Root
• answers to mcdougal littell algebra2
• operating manual for TI-83 plus calculator
• rational expression answers
• simplify sums and differences of radicals
• 9th grade algebra mathematics calculator
• surd applications in everyday life
• McGraw Hill Algebra permutations
• order of operations with decimals worksheets]
• Holt Algebra book 1
• properties of algebra worksheet
• expressions involving rational exponents
• 'aptitude related questions & answers'
• logarithm worksheet
• worksheet proportions
• Math games Tax and Discount
• algebra second order equation
• answers to algebra 2 questions
• online prentice hall algebra 1 textbook
• year 10 algebra activities
• basis math formulas
• solving exponents
• easy to learn algebra
• find the least common multiple of 32 and 45
• conceptual physics answer key
• modern chemistry book worksheet answers
• free download mathematics eqation
• Glencoe Mcgraw Algebra 1 Answers
• solving radical integrals
• PreAlgrebra
• equation approach worksheets
• calculator triangle slope midpoint
• Algebra1 help with proportions
• mcdougal littell biology work book
• algebra solver shows work online
• algebra1 answers
• TI-89 ROM Downloads
• simplifying exponents with variables
• polynomials cubed
• free algebra problem solver
• hyperbola calculator
• combinations and permutations for 8th grade students
• already made free online printable math
• step by step radical calculator
• Finding the Value of N in Fractions
• grade 6 & 7 math algebra tests,quizzes and practice sheets
• How do you do the cubed root on a teax instrument TI-83 silver edition
• basic 9th grade algebra
• in math what do the roots of a parabla mean?
• trig challenge problems and answers
• permutations worksheets
• discovering advanced algebra worksheet
• holt algebra 1 texas homework and practice workbook
• simplify radical equations calculator
• factoring calculators
• sqaure roots with exponents
• Factor by grouping problem solver
• quadratics project math graph
• convert deciaml to fraction measurements
• Glencoe Geometry Teacher Edition cheat
• equation into slope intercept form worksheet
• why cant we mix percentages, fractions, decimals
• tricks for finding LCM
• solving third-order polynomial
• square root with exponent
• pre-algebra definitions
• adding and subtracting positive and negative fractions
• beginning algebra calculator
• 2-step- algebraic equation that use decimals
• math +trivias with answers
• dividing integers worksheet
• prentice hall mathematics algebra 1 answers
• why can't square roots be in the denominator
• square roots simplified
• free worksheets on graphing coordinates
• maths worksheet for third standard kids
• Quadratic application problems help
• Free Math Worksheets for 8th graders
• free "integral calculator" step-by-step solutions
• convert the square root of 1,723,969
• Review Exam Papers Grade 11
• squares under a radical polynomials
• 3 variable equation calculator
• 7th grade algebra lesson plan
• add subtract multiply and divide fractions worksheet
• Algebra 2 online tutor
• chemical equation movie
• quadratic equation simplify square root
• Real number
• Prentice hall conceptual physics the high school physics program awnsers
• algebraic rational expression calculator online
• algebra1 inequality star puzzle
• factor identity exponent
• solving quadratic formula for ti-84 plus
• printables Maths calculator colour sheets
• algebra 2 textbook answer sheet
• printable homework sheets
• lesson plan for law of exponents
• aptitudes question answers
• basic radical expressions
• hard math equations
• finding fractions answer keys
• applications for TI84 quadratic formula
• algebra 1 concepts and skills even answers
• ti 89 log base 2
• calculator radicands
• Saxon Advanced Algebra answers
• sats papers online to print out
• surds quiz for yr8
• answers to linear equations in two variables grade 9
• answer sheet to Glencoe Mathematics Algebra 1
• ti-83 multivariable
• math help scale factors
• subtracting mixed numbers 7th grade
• printable primary worksheets free ordinal numbers
• sum of n numbers java example
• square root to regular number calculator
• ti-83 rom image
• in text "ap statistics quiz"
• Ti-83 manual for logarithims
• Maths and english homework sheet primary
• factoring polynomials problem solvers
• Factor Algebra
• square root to decimal calculator
• ladder method for prime factorization
• Algebra Solver
• programing calculator+quadratic equation
• Lars Frederiksen ti 89
• ti 84 quadratic equation
• problem solving questions year 9
• simplify radical worksheet
• sample pre algebra equations
• free math sheets on data
• free balancing equation
• mc graw hill beginning algebra seventh edition online lessons
• dividing square roots calculator
• free online fraction caluclator
• worksheet converting fraction decimal percent
• Advanced Algebra level of Prentice Hall Mathematics: Tools for a Changing World.
• 4th roots list
• percentage formulas for
• calculator that turns decimals into fractions
• java tutorials; number guess game
• change mix number to decimal
• Free Simultaneous Equation Solver
• word problems for subtracting negative numbers
• polar solver TI 84
• Variable algebra in matlab
• McDougal Littell Algebra 2 ebook
• teach yourself intermediate algebra
• fractions solve for two variables
• how to calculate variance on ti 83
• intercept method calculator
• Printable GED Practice Tests
• math work sheets equations
• elimination method calculator
• simple ratio formula
• solving linear functions with a TI-83 plus calculator
• gcse logarithm
• T1-83 Calculator online
• LARSON LAB BOOKS FOR 6TH GRADE
• holt algebra 1
• books online; holt algebra with trigonometry
• beginning algebra worksheets
• radical math solver
• synthetic division calculator
• algebra 2 math problems on quadratic problems
• hardest math problem for class x
• find slope calculator
• free online algebra class for 8th graders
• absolute value worksheets printable
• 72375114481892
• negative numbers testing ks3
• solve for x calculator
• graphing ellipses ti-89
• multiplying scientific notation worksheet
• Online Simplifying Fractions Calculator
• converting decimals into fractions calculator
• solving equations with variables worksheets
• variable problem calculator
• factor label method solver
• Trigonometric formulaes
• calculator that helps me reduce fraction online
• Learn the basics on programming TI-83 plus
• free Algebra math worksheet 6 grade
• quadratic equations for dummies
• how do you change a mixed number into a decimal
• basic addition and subtraction problem solving using a graph worksheets
• mcdougal littell algebra 2 answers
• how to solve quadratic equations by factoring diamond method
• quadratic formula plug in site
• how to convert a decimal into a fraction ( plus method)
• Scale Factor of a Triangle
• rules of basic algebra equations
• glencoe mcgraw hill online textbook algebra 1 slope lines
• multiplying consecutively worksheets
• least common denominator variable
• dividing multiplying subtracting integers
• Intermediate Algebra Martin-Gay test prep examples
• +MATH-METHOD OF SUBSTITUTION
• ti89 laplace
• quadratic equation calculator ti 83
• free algebra worksheet generator software
• Cumulative Exam Chapters 1-2
• free maths worksheets prime factors
• who invented linear equations
• factor the difference of two squares calculator solver
• "invented the parabola"
• free online algebra worksheets
• Prentice Hall PDF Math Pre-Algebra Book
• quadratic equation for ti
• free practice for grade 10 algebra
• sample of algebra solved problems
• holt practice book algebra 1 answers
• GCSE maths grade 10
• Free Online Math Tutor
• trig identity solver
• TI-83 calculator factoring
• online factoring calculator
• Finding Lowest Common Denominator finder
• quadratic equation combining
• convert square root to exponent
• power point for elementary statistics with ti 83
• Least Common Denominator fractions Calculator
• converting bases in java
• the value of powers by multiplying the factors
• long division in algebra tutorial
• free online math for 7th graders
• fundamental accounting principle free book 8th
• complete factoring calculator
• LCD Worksheets
• dividing mix numbers
• algebra powers
• multiply divide fractions worksheet
• how to solve arithmetic and geometric equations on a ti-83 plus calculator
• free prime factorization sheets
• ged math practice sheets
• parabola algebra sample
• fractions calculator multiple
• how to "put equations" into a scientific calculator
• factoring trinomials worksheets + doc
• free fun integer worksheets
• ti 89 interpolation
• cube root on graphing calculator TI-83 plus
• equation basics
• Adding sign numbers worksheets
• matlab+solving equation
• solve equations fun worksheet
• third root
• prentice algebra 2 cheats
• Glencoe Pre-algebra
• coordinate graph pictures
• ti89 boolean product
• Steps To Balancing Equations
• ti-84 factoring
• find the degree of a polynomial on calculator
• free worksheets on properties of addition
• algebra worksheets quadratic equations factoring
• calculation + greatest common divisor
• algebra 9th grade problems
• mixed number to decimal
• math trivia for elementary
• rational expressions worksheet
• finding an equation of joint variation
• compound probability worksheet
• why was algebra invented
• how to cheat on the compass math exam
• subtract fraction worksheet
• non-linear, ordinary differential equation in matlab
• worksheet pre-algebra with pizzazz
• essentials of accounting problem solving answer sheet
• pre algebra answer key grade 9
• prentice hall inc intro to geometry answer sheets
• glencoe mcgraw-hill answers for algebra 2
• pizzazz jokes
• answers for texas algebra 2 glencoe mathematics
• expanding cubed
• algebra 1 solver online
• vb6 exponential function -.net
• problem sums of mathmatics
• adding integers math worksheets
• printable worksheets on writing equations in standard form
• prime factoring radicals
• lattice multiplication printable worksheets
• solving equations by square root property
• Percentage Formulas
• nonlinear differential Matlab
• McDougal Littell Algebra 1 real answers
• matlab convert fraction to decimal
• worksheet algebra with pizzazz 160
• a calculator converting decimals to fractions
• boolean algebra free simplifier
• least common denominator variables
• third order quadratic equations
• simplify the square root of 4y squared
• Dividing matrix in TI-84
• maths cubed brackets
• ti 83+ programs equations
• aptitude sample questions which can be solve
• ks3 downloadable worksheets
• TI-83 Factor
• graphing linear equation basics
• tutoring for struggling algebra student
• log function with base number - ti 83
• BBC simplifying expressions in mathematics for children
• prentice hall math conversions
• solving non-linear equations for MATLAB
• balancing chemical equations animation
• quadratic factoring calc
• lcm and gcf practice problems,fifth grade
• techniques in computing greatest common factor
• teacher edition answer book on glencoe/mcgraw-hill geometry
• maths problems and answers for ks3
• aptitude test download
• constrained minimization of hyperbola
• simplifying square roots with variables calculator
• alegebra calculator
• finding gcd online cal
• basic steps in balancing an equation
• online square root calculator
• negative positive integers worksheet
• contemporary abstract algebra solutions chapter 20
• adding and subtracting work sheets
• ppt math logarithm senior high school
• printable worksheet for graphing lines with ti 84's
• compound inequalities representing the four quadrants of the Cartesian coordinate system
• real life applications of graphing functions
• how to convert fraction to decimal
• TI-85 definite integral
• converting square roots into decimals using scientific calculator
• 122:20:1982196736206171961::NO:::
• +"8th grade" +"pre-algebra" +"print worksheets"
• ti rom image download
• how to find the cubic root on a TI-83 plus
• MCDougal Littell Algebra I- quizzes and tests
• solving cubic equations in matlab
• Algebra Tutor program
• square root method
• math equations formulas
• calculator graphic T184 plus where to buy in australia
• polynomial long division worksheet
• lowest common multiple in math
• free homework answers
• ti 83 rom image
• price applications of quadratic relations + grade 10 math + worksheet
• ALGEBRA SUMS
• polynomial factorer
• factoring worksheets
• how is algebra used in daily life
• college algebra book website
• ks3 maths area worksheets
• +"8th grade" +algebra +worksheets
• algebra 5th grade samples
• multiple step equation worksheet
• Order Least to Greatest Fractions
• online caculater with mixed numbers
• simple way to calculate lcm
• solving quadratic equations matlab
• examples of solving differential equations with diagonalization
• square root method in java
• common denominators worksheet
• prime factorization worksheet free printable
• ti89 multivariable equation
• Elementary Algebra 2 factoring polinomials quiz
• turing a fraction into percents with big denominator
• yr 9 maths online
• free online past GCSE papers
• 8 & 9 subtractions for free
• negative exponents and fractions expressions
• ti-84 factoring polynomials
• Exams for kids between 7-8 years old in maths in uk
• math worksheet 9th grade
• using algebra to solve business math worksheets
• write equation in standard form using intergers
• equations of circles solver
• gcf ladder diagram
• 6th grade venn diagram worksheets
• solved maths project for class tenth on statistics
• forth order quadratic equations
• vertex form problems
• how to find consecutive integers using algebra
• square root of 520 radical
• free downloads of aptitude test study guides
• adding radical expressions
• linear equations in two variable
• scale factor games
• radical expression
• multiplying monomials game
• glencoe algebra 1 pages online
• middle school math with pizzazz answers
• Worksheets on Permutations
• solve rational expressions ti84
• second order nonhomogeneous
• math resolve problem worksheet
• middle school math scale and proportion printouts
• free printableworksheets functional words lesson plan
• logarithm equation solver
• Free Basic Math Quiz and answer sheet
• Java Programming complete concepts and techniques cheatbook
• mcqs papers for engineers
• hyperbola real life examples
• identity school algebra formula
• translations math ks3
• multivariable word problem
• how to solve radicals on T.I calculator
• solving nonhomogeneous partial differential equations
• worksheet on graphing an equation using the slope and y intercept
• fraleigh abstract algebra solutions manual online
• Transformation Worksheets
• order fractions from least to greatest calculator
• free two step equations worksheets
• answers to mcdougal littell middle school math book
• find log on TI-83 calculator
• inequalities worksheet easy
• 'algebrator
• 8th grade worksheets
• Worksheet on adding and subtracting integers
• "solve function" grade 8
• free printable math sheets + third grade
• free online math equation writer
• multi step equation calculator
• printable word problems for third graders
• simplify exponential expressions
• radical calculator
• solving math formulas
• kumon answer sheet
• simultaneous equation solver
• exercise of algebraic expressions for 6th grade
• online TI-83 calculator for homework
• sequences worksheets pre algebra
• exercise of algebraic variables for 6th grade
• do ks2 science questions online
• calculator for Greatest Common Factor with the set of numbers
• radical solvers
• decimal to fraction converter with square roots
• free online integer calculator
• bungee jump equations animation
• equations with two variables worksheet
• algebra calculator elimination method
• free polynomials solver
• root calculator radical
• Radical Expressions Calculator
• explanation adding and subtracting integers
• elementary graphing lesson pre-test
• trigonomic ratios
• solve two-step algebra equations
• lowest common multiple calculator
• algebra--solving for two variable
• TRIGONOMIC VALUES CHART
• square root (a+ b)
• solve laplace with ti-89
• write each expression in radical form
• Algebra Aptitude Test sample tests
• algebra 1 work and answers
• free aptitude books with answers
• Functions problems math 9th grade
• "solving non-linear difference equations"
• free geometry triangle measurements and formulas
• trig addition solver
• factor scale
• free math tutor online algebra 8th grade
• conceptual physical science explorations cheat sheet
• permutations and combinations second grade
• abstract algebra online help
• Printable Algebra 2 Worksheets Glencoe
• literal equations worksheet
• mass-spring system oscillatory motion initial conditions homogeneous system
• simultaneous equation calculator
• what models are used to teach equivalent fractions
• SCALE FACTOR worksheets
• ratio formula
• calculator example for ks2
• scott foresman 4th grade math key answer
• Discrete Distribution "Free EBook Download"
• cost accounting tutorial
• algebra dilation
• adding decimal fraction
• free iq test for 6th graders
• homework radical expressions
• how to solve second grade equations
• what is the formula for finding percent
• fraction formulas
• pre algebra answers
• converse farenheit to celsius
• ks3 maths work sheets
• coefficient math worksheets
• java + program to find whether the entered number is prime or not
• balancing chemical equations calculator
• algebra questions and answers for ninth grade student
• online polar graphing calculator
• evaluating expressions worksheet exponents
• how to program basic games step by step for ti-83+
• fraction problems ks2
• AP Statistics Quiz A Chapter 11 modeling the world
• dividing decimals by integerS PROBLEMS
• free solution manual for introductory algebra third edition
• calculus online problems solutions schaum's free
• plato algebra 1 codes
• change a fraction to a dcimal powerpoint
• texas third grade taks test worksheets
• t-83 calculator games
• algebra with pizzazz to multiply binomials mentally
• make second order differential equation into system of first order
• ti 84 quadratic
• solver simultaneous 6
• integral exponents worksheet
• What is the sqare root of 73?
• math practice printouts
• ti 83 act cheat
• elementary algebra worksheet
• What is a scale factor+elementary school
• Dummit chapter 13 solution
• printable worksheets for solving equations using multiplication and division
• holt physics worksheet answers
• free math answer
• prentice hall pre algebra grade 9 answers
• "rational expressions" "challenge questions"
• christmas maths
• multi step equations worksheets
• algebra with Pizzazz! work sheet
• adding, subtracting, multiplying, dividing fractions
• Mathematics for fourth grade/charts
• order of operations grade 6 free worksheets
• geometric worksheets 6th grade
• summation for 4th grade
• basic algebra question sheet
• algebra applications
• solve by factoring
• maple decimal to fraction
• factor and solve two variable polynomials
• simultaneous linear equation calculator online
• c*- algebras notes & Books,problems for free dowonload
• algebra in daily life
• free addition and subtraction of polynomial worksheet
• Free Linear Equation Worksheets
• convert decimal bacteria exponential
• quadratic equation complex coefficients online calculator
• quadratic factorization parabola
• where is the LOG key on the ti-89
• factorial notation worksheets
• algebra power square root
• Finding the vertex of a hyperbola calculator
• online t1 83 calculator
• free printable algebra work
• Holt Geometry workbook ANSWERS
• high school accounting test and key
• worksheets with practice solving one and two step equations
• statistics about online learning for fifth graders
• free pre-algebra course
• ti-84 convert base two binary to base ten decimal
• 6th grade math cheats
• decimal problem solving worksheets
• how to input numbers in java
• Learning Basic Algebra
• division worksheet, explanation
• printable adding integer classroom game
• prentice mathematics Algebra 1 answers
• prentice hall mathematics algebra 1
• free online solving addition and subtraction equations math calculator
• A level Physics worksheets vector
• sample paper for aptitude test
• percent of change worksheet
• standard form in intermediate algebra
• lowest common multiple of 46 and 30
• Algebra problem solving machine FOIL
• prentice hall mathematics algebra 2 teachers answer book
• simple chemical equation balancer for ti-83 calculator
• how to find the scale factor
• mcdougal littell geometry book answers
• answers for chapter 8 test form 1 for algebra 2
• rationalize numerator
• nonlinear equations systems matlab
• math proplems
• USABLE ONLINE TI 89 calCulater
• Lattice multiplication sheets
• calculator
• ti-85 calculator rom
• math solving softwares
• graphing calculator online showing table
• automatic quadratic solver
• 3x3 matrice and java programming
• sideways quadratic graph
• 9th grade work
• liner equation for midpoint
• year 8 exam papers, maths
• printable first grade math sheets
• How to find factorizations for a monomial?
• how is the code area for telephone numbers similar to the concet of area in mathematics?
• free online ti-83 calculator
• equations worksheets combining terms
• elementary linear algebra anton ninth edition answer key
• algebra and trigonometry answers
• online TI-83
• common physics equations for the s.a.t. II physics exam
• Glencoe Mathematics algebra 1 answers
|
{"url":"https://softmath.com/math-com-calculator/graphing-inequalities/permutation-and-combination.html","timestamp":"2024-11-04T14:42:14Z","content_type":"text/html","content_length":"169543","record_id":"<urn:uuid:b09307c3-75f1-4843-9d6e-534c63d0582b>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00798.warc.gz"}
|
Complex number
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i^2 = −1.^[1] In this expression, a is the
real part and b is the imaginary part of the complex number.
Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part.
The complex number a + bi can be identified with the point (a,b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose
imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers, in order to solve problems that cannot be solved with real numbers alone.
As well as their use within mathematics, complex numbers have practical applications in many fields, including physics, chemistry, biology, economics, electrical engineering, and statistics. The
Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers. He called them "fictitious" during his attempts to find solutions to cubic equations in the 16th century.
Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation

(x + 1)^2 = −9

has no real solution, since the square of a real number cannot be negative. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i where i^2 = −1, so that solutions to equations like the preceding one can be found. In this case the solutions are −1 + 3i and −1 − 3i, as can be verified using the fact that i^2 = −1:

((−1 + 3i) + 1)^2 = (3i)^2 = 9i^2 = −9,
((−1 − 3i) + 1)^2 = (−3i)^2 = 9i^2 = −9.
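As a quick numerical check, Python's built-in complex type (where the imaginary unit is written j) confirms that both values satisfy the equivalent polynomial equation x^2 + 2x + 10 = 0:

```python
# Verify that -1 + 3i and -1 - 3i solve (x + 1)^2 = -9,
# i.e. x^2 + 2x + 10 = 0, using Python's built-in complex type.
def p(x):
    return x**2 + 2*x + 10

for x in (complex(-1, 3), complex(-1, -3)):
    assert p(x) == 0  # both residuals are exactly zero for these inputs
```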
According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers.
A complex number is a number of the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying i^2 = −1. For example, −3.5 + 2i is a complex number.
The real number a is called the real part of the complex number a + bi; the real number b is called the imaginary part of a + bi. By this convention the imaginary part does not include the imaginary
unit: hence b, not bi, is the imaginary part.^[3]^[4] The real part of a complex number z is denoted by Re(z) or ℜ(z); the imaginary part of a complex number z is denoted by Im(z) or ℑ(z). For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2.
Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z)·i. This expression is sometimes known as the Cartesian form of z.
A real number a can be regarded as a complex number a + 0i whose imaginary part is 0. A purely imaginary number bi is a complex number 0 + bi whose real part is zero. It is common to write a for a +
0i and bi for 0 + bi. Moreover, when the imaginary part is negative, it is common to write a − bi with b > 0 instead of a + (−b)i, for example 3 − 4i instead of 3 + (−4)i.
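In Python these Cartesian parts are exposed directly as attributes of the complex type, which makes the decomposition z = Re(z) + Im(z)·i easy to illustrate:

```python
z = complex(-3.5, 2)          # the number -3.5 + 2i
re, im = z.real, z.imag
assert re == -3.5 and im == 2.0

# Rebuilding z from its Cartesian parts: z = Re(z) + Im(z)*i
assert complex(re, 0) + complex(im, 0) * 1j == z
```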
The set of all complex numbers is denoted by ℂ, or sometimes by a boldface C.
Some authors^[5] write a + ib instead of a + bi, particularly when b is a radical. In some disciplines, in particular electromagnetism and electrical engineering, j is used instead of i,^[6] since i
is frequently used for electric current. In these cases complex numbers are written as a + bj or a + jb.
Complex plane
A complex number can be viewed as a point or position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram (see Pedoe 1988 and Solomentsev 2001), named
after Jean-Robert Argand. The numbers are conventionally plotted using the real part as the horizontal component, and imaginary part as vertical (see Figure 1). These two values used to identify a
given complex number are therefore called its Cartesian, rectangular, or algebraic form.
A position vector may also be defined in terms of its magnitude and direction relative to the origin. These are emphasized in a complex number's polar form. Using the polar form of the complex number
in calculations may lead to a more intuitive interpretation of mathematical results. Notably, the operations of addition and multiplication take on a very natural geometric character when complex
numbers are viewed as position vectors: addition corresponds to vector addition while multiplication corresponds to multiplying their magnitudes and adding their arguments (i.e. the angles they make
with the x axis). Viewed in this way, the multiplication of a complex number by i corresponds to rotating the position vector counterclockwise by a quarter turn (90°) about the origin: (a + bi)i = ai + bi^2 = −b + ai.
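The quarter-turn interpretation is easy to check numerically; multiplying by i swaps the coordinates (with a sign) and increases the argument by π/2:

```python
import cmath

z = complex(3, 4)                  # the point (3, 4)
rotated = z * 1j                   # multiply by i
assert rotated == complex(-4, 3)   # (a + bi)i = -b + ai

# The argument (angle) increases by pi/2, i.e. a 90-degree turn.
assert abs(cmath.phase(rotated) - cmath.phase(z) - cmath.pi / 2) < 1e-12
```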
History in brief
Main section: History
The solution in radicals (without trigonometric functions) of a general cubic equation contains the square roots of negative numbers when all three roots are real numbers, a situation that cannot be
rectified by factoring aided by the rational root test if the cubic is irreducible (the so-called casus irreducibilis). This conundrum led Italian mathematician Gerolamo Cardano to conceive of
complex numbers in around 1545, though his understanding was rudimentary.
Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or
higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root.
Many mathematicians contributed to the full development of complex numbers. The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian
mathematician Rafael Bombelli.^[7] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the
theory of quaternions.
Two complex numbers are equal if and only if both their real and imaginary parts are equal. In symbols:

z₁ = z₂ if and only if Re(z₁) = Re(z₂) and Im(z₁) = Im(z₂).
Because complex numbers are naturally thought of as existing on a two-dimensional plane, there is no natural linear ordering on the set of complex numbers.^[8]
There is no linear ordering on the complex numbers that is compatible with addition and multiplication. Formally, we say that the complex numbers cannot have the structure of an ordered field. This
is because any square in an ordered field is at least 0, but i^2 = −1.
Elementary operations
The complex conjugate of the complex number z = x + yi is defined to be x − yi. It is denoted by either z̄ or z*.

Formally, for any complex number z:

z̄ = Re(z) − Im(z)·i.

Geometrically, z̄ is the "reflection" of z about the real axis. Conjugating twice gives the original complex number: the conjugate of z̄ is z itself.

The real and imaginary parts of a complex number z can be extracted using the conjugate:

Re(z) = (z + z̄)/2,  Im(z) = (z − z̄)/(2i).

Moreover, a complex number is real if and only if it equals its conjugate.

Conjugation distributes over the standard arithmetic operations:

(z + w)* = z* + w*,  (z − w)* = z* − w*,  (zw)* = z*·w*,  (z/w)* = z*/w*  (for w ≠ 0).
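These distributive properties, and the extraction of real and imaginary parts via the conjugate, can be spot-checked on sample values (the product and sum identities hold exactly in floating point; the quotient identity is checked to within rounding error):

```python
z, w = complex(2, 3), complex(-1, 4)

assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z - w).conjugate() == z.conjugate() - w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()
assert abs((z / w).conjugate() - z.conjugate() / w.conjugate()) < 1e-12

# Extracting real and imaginary parts via the conjugate:
assert (z + z.conjugate()) / 2 == z.real
assert (z - z.conjugate()) / (2j) == z.imag
```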
Addition and subtraction
Complex numbers are added by separately adding the real and imaginary parts of the summands. That is to say:

(a + bi) + (c + di) = (a + c) + (b + d)i.

Similarly, subtraction is defined by

(a + bi) − (c + di) = (a − c) + (b − d)i.
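A minimal sketch of this componentwise definition, compared against Python's built-in operators (the helper names add and sub are illustrative, not standard library functions):

```python
# Componentwise addition and subtraction of complex numbers.
def add(z, w):
    return complex(z.real + w.real, z.imag + w.imag)

def sub(z, w):
    return complex(z.real - w.real, z.imag - w.imag)

z, w = complex(1, 2), complex(3, -5)
assert add(z, w) == z + w == complex(4, -3)
assert sub(z, w) == z - w == complex(-2, 7)
```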
Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram, three of whose vertices are O, A and B. Equivalently, X is the point such that the triangles with vertices O, A, B, and X, B, A, are congruent.
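The componentwise rules above, and the parallelogram picture, can be sketched with Python's built-in complex type (sample values are arbitrary):

```python
a, b = 2 + 3j, 4 - 1j

# Addition and subtraction act separately on real and imaginary parts.
assert a + b == (2 + 4) + (3 - 1) * 1j
assert a - b == (2 - 4) + (3 + 1) * 1j

# Geometrically, the sum a + b is the fourth vertex X of the
# parallelogram with vertices O (the origin), a and b.
x = a + b
assert x - a == b and x - b == a
```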
Multiplication and division
The multiplication of two complex numbers is defined by the following formula: (a + bi)(c + di) = (ac − bd) + (ad + bc)i.
In particular, the square of the imaginary unit is −1: i^2 = −1.
The preceding definition of multiplication of general complex numbers follows naturally from this fundamental property of the imaginary unit. Indeed, if i is treated as a number so that di means d
times i, the above multiplication rule is identical to the usual rule for multiplying two sums of two terms.
(a + bi)(c + di) = ac + adi + bci + bidi (distributive property)
= ac + bidi + adi + bci (commutative property of addition—the order of the summands can be changed)
= ac + bdi^2 + (ad + bc)i (commutative and distributive properties)
= (ac − bd) + (ad + bc)i (fundamental property of the imaginary unit).
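The expansion above can be packaged as a small function and compared against Python's built-in complex multiplication (a sketch; the helper name `mul` is my own):

```python
def mul(a, b, c, d):
    """Multiply (a + bi)(c + di) using the expansion derived above."""
    return (a * c - b * d, a * d + b * c)

# Compare with Python's built-in complex multiplication.
assert complex(*mul(1, 2, 3, 4)) == (1 + 2j) * (3 + 4j)

# The fundamental property of the imaginary unit: i * i = -1.
assert complex(*mul(0, 1, 0, 1)) == -1
```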
The division of two complex numbers is defined in terms of complex multiplication, which is described above, and real division. When at least one of c and d is non-zero, we have (a + bi)/(c + di) = ((ac + bd) + (bc − ad)i)/(c^2 + d^2).
Division can be defined in this way because of the following observation: (a + bi)/(c + di) = ((a + bi)(c − di))/((c + di)(c − di)) = ((ac + bd) + (bc − ad)i)/(c^2 + d^2).
As shown earlier, c − di is the complex conjugate of the denominator c + di. At least one of the real part c and the imaginary part d of the denominator must be nonzero for division to be defined.
This is called "rationalization" of the denominator (although the denominator in the final expression might be an irrational real number).
The reciprocal of a nonzero complex number z = x + yi is given by 1/z = z̄/(x^2 + y^2) = x/(x^2 + y^2) − (y/(x^2 + y^2))i.
This formula can be used to compute the multiplicative inverse of a complex number if it is given in rectangular coordinates. Inversive geometry, a branch of geometry studying reflections more
general than ones about a line, can also be expressed in terms of complex numbers. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance
when the maximum power transfer theorem is used.
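Division by the conjugate trick, and the reciprocal formula, can be sketched in Python (the helper name `divide` is my own; `|w|^2` is computed exactly as w times its conjugate):

```python
def divide(z, w):
    """Divide z by w by multiplying both by the conjugate of w."""
    if w == 0:
        raise ZeroDivisionError("division by the zero complex number")
    num = z * w.conjugate()
    den = (w * w.conjugate()).real  # c**2 + d**2, a real number
    return complex(num.real / den, num.imag / den)

z, w = 3 + 2j, 1 - 1j
assert divide(z, w) == z / w

# The reciprocal of z is conj(z) / |z|**2.
assert divide(1 + 0j, z) == z.conjugate() / (z * z.conjugate()).real
```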
Square root
The square roots of a + bi (with b ≠ 0) are ±(γ + δi), where
γ = √((a + √(a^2 + b^2))/2) and δ = sgn(b)·√((−a + √(a^2 + b^2))/2),
where sgn is the signum function. This can be seen by squaring ±(γ + δi) to obtain a + bi.^[9]^[10] Here √(a^2 + b^2) is called the modulus of a + bi, and the square root sign indicates the square root with non-negative real part, called the principal square root.^[11]
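The closed form with γ, δ and sgn can be checked against `cmath.sqrt`, which returns the principal square root (a sketch; the function name `principal_sqrt` is my own):

```python
import cmath
import math

def principal_sqrt(a, b):
    """Principal square root of a + bi (b != 0) via the closed form above."""
    r = math.hypot(a, b)                              # the modulus sqrt(a**2 + b**2)
    gamma = math.sqrt((r + a) / 2)                    # non-negative real part
    delta = math.copysign(math.sqrt((r - a) / 2), b)  # the sgn(b) factor
    return complex(gamma, delta)

z = principal_sqrt(3, 4)
assert cmath.isclose(z, cmath.sqrt(3 + 4j))
assert cmath.isclose(z * z, 3 + 4j)
```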
Polar form
Absolute value and argument
An alternative way of defining a point P in the complex plane, other than using the x- and y-coordinates, is to use the distance of the point from O, the point whose coordinates are (0,0) (the
origin), together with the angle subtended between the positive real axis and the line segment OP in a counterclockwise direction. This idea leads to the polar form of complex numbers.
The absolute value (or modulus or magnitude) of a complex number z = x + yi is r = |z| = √(x^2 + y^2).
If z is a real number (i.e., y = 0), then r = |x|. In general, by Pythagoras' theorem, r is the distance of the point P representing the complex number z to the origin. The square of the absolute value is |z|^2 = z·z̄ = x^2 + y^2,
where z̄ is the complex conjugate of z.
The argument of z (in many applications referred to as the "phase") is the angle of the radius OP with the positive real axis, and is written as arg z. As with the modulus, the argument can be found from the rectangular form x + yi:^[12]
φ = arg z = arctan(y/x) if x > 0; arctan(y/x) + π if x < 0 and y ≥ 0; arctan(y/x) − π if x < 0 and y < 0; π/2 if x = 0 and y > 0; −π/2 if x = 0 and y < 0; indeterminate if x = 0 and y = 0.
Normally, as given above, the principal value in the interval (−π,π] is chosen. Values in the range [0,2π) are obtained by adding 2π if the value is negative. The value of φ is expressed in radians
in this article. It can increase by any integer multiple of 2π and still give the same angle. Hence, the arg function is sometimes considered as multivalued. The polar angle for the complex number 0
is indeterminate, but arbitrary choice of the angle 0 is common.
The value of φ equals the result of atan2: φ = atan2(y, x).
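Modulus and argument are directly available in Python: `abs` gives |z| and `cmath.phase` implements atan2(y, x), returning the principal value in (−π, π]:

```python
import cmath
import math

z = 1 + 1j

# Modulus: the distance from the origin, sqrt(x**2 + y**2).
assert math.isclose(abs(z), math.sqrt(2))

# Argument: the angle with the positive real axis, computed by atan2.
assert math.isclose(cmath.phase(z), math.atan2(1, 1))
assert math.isclose(cmath.phase(z), math.pi / 4)

# The principal value lies in (-pi, pi].
assert cmath.phase(-1 + 0j) == math.pi
```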
Together, r and φ give another way of representing complex numbers, the polar form, as the combination of modulus and argument fully specify the position of a point on the plane. Recovering the original rectangular coordinates from the polar form is done by the formula called trigonometric form
z = r(cos φ + i sin φ).
Using Euler's formula this can be written as z = re^iφ.
Using the cis function, this is sometimes abbreviated to z = r cis φ.
In angle notation, often used in electronics to represent a phasor with amplitude r and phase φ, it is written as^[13] z = r∠φ.
Multiplication and division in polar form
Formulas for multiplication, division and exponentiation are simpler in polar form than the corresponding formulas in Cartesian coordinates. Given two complex numbers z[1] = r[1](cos φ[1] + i sin φ[1]) and z[2] = r[2](cos φ[2] + i sin φ[2]), because of the well-known trigonometric identities
cos a cos b − sin a sin b = cos(a + b) and cos a sin b + sin a cos b = sin(a + b),
we may derive
z[1]z[2] = r[1]r[2](cos(φ[1] + φ[2]) + i sin(φ[1] + φ[2])).
In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, multiplying by i corresponds to a quarter-turn counter-clockwise, which gives back i^2 = −1. The picture at the right illustrates the multiplication of
(2 + i)(3 + i) = 5 + 5i.
Since the real and imaginary part of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radian). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula
π/4 = arctan(1/2) + arctan(1/3)
holds. As the arctan function can be approximated highly efficiently, formulas like this—known as Machin-like formulas—are used for high-precision approximations of π.
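Both the product and the resulting Machin-like identity can be verified numerically:

```python
import math

# Multiplying (2 + i)(3 + i) = 5 + 5i adds the two arguments,
# which yields the Machin-like identity pi/4 = arctan(1/2) + arctan(1/3).
assert (2 + 1j) * (3 + 1j) == 5 + 5j
assert math.isclose(math.pi / 4, math.atan(1 / 2) + math.atan(1 / 3))
```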
Similarly, division is given by z[1]/z[2] = (r[1]/r[2])(cos(φ[1] − φ[2]) + i sin(φ[1] − φ[2])).
Euler's formula
Euler's formula states that, for any real number x,
e^ix = cos x + i sin x,
where e is the base of the natural logarithm. This can be proved through induction by observing that
i^0 = 1, i^1 = i, i^2 = −1, i^3 = −i, i^4 = 1, i^5 = i,
and so on, and by considering the Taylor series expansions of e^ix, cos x and sin x:
e^ix = 1 + ix + (ix)^2/2! + (ix)^3/3! + ⋯ = (1 − x^2/2! + x^4/4! − ⋯) + i(x − x^3/3! + x^5/5! − ⋯) = cos x + i sin x.
The rearrangement of terms is justified because each series is absolutely convergent.
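Euler's formula can be spot-checked numerically with `cmath.exp` against the cosine and sine definitions:

```python
import cmath
import math

# Euler's formula: e**(ix) = cos(x) + i*sin(x), checked at several angles.
for x in (0.0, 1.0, math.pi / 3, math.pi):
    lhs = cmath.exp(1j * x)
    rhs = complex(math.cos(x), math.sin(x))
    assert cmath.isclose(lhs, rhs)
```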
Natural logarithm
Euler's formula allows us to observe that, for any complex number z = r(cos φ + i sin φ),
where r is a non-negative real number, one possible value for z's natural logarithm is ln z = ln r + iφ.
Because cos and sin are periodic functions, the natural logarithm may be considered a multi-valued function, with: ln z = ln r + i(φ + 2πk), for any integer k.
Integer and fractional exponents
We may use the identity
e^ln z = z
to define complex exponentiation, which is likewise multi-valued:
z^ω = e^(ω log z) = e^(ω(ln r + i(φ + 2kπ))), for any integer k.
When n is an integer, this simplifies to de Moivre's formula:
z^n = (r(cos φ + i sin φ))^n = r^n(cos nφ + i sin nφ).
The nth roots of z are given by
z^(1/n) = ^n√r (cos((φ + 2kπ)/n) + i sin((φ + 2kπ)/n))
for any integer k satisfying 0 ≤ k ≤ n − 1. Here ^n√r is the usual (positive) nth root of the positive real number r. While the nth root of a positive real number r is chosen to be the positive real number c satisfying c^n = r, there is no natural way of distinguishing one particular complex nth root of a complex number. Therefore, the nth root of z is considered as a multivalued function (in z), as opposed to a usual function f, for which f(z) is a uniquely defined number. Formulas such as
^n√(zw) = ^n√z · ^n√w
(which holds for positive real numbers) do not, in general, hold for complex numbers.
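The nth-root formula can be sketched directly from the polar form with `cmath.polar` and `cmath.rect` (the function name `nth_roots` is my own):

```python
import cmath
import math

def nth_roots(z, n):
    """All n complex nth roots of z, for z != 0, via the polar-form formula."""
    r, phi = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (phi + 2 * math.pi * k) / n)
            for k in range(n)]

# The four fourth roots of unity: 1, i, -1, -i (up to rounding).
roots = nth_roots(1 + 0j, 4)
assert len(roots) == 4
for w in roots:
    assert cmath.isclose(w ** 4, 1, abs_tol=1e-12)
```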
Field structure
The set C of complex numbers is a field. Briefly, this means that the following facts hold: first, any two complex numbers can be added and multiplied to yield another complex number. Second, for any
complex number z, its additive inverse −z is also a complex number; and third, every nonzero complex number has a reciprocal complex number. Moreover, these operations satisfy a number of laws, for
example the law of commutativity of addition and multiplication for any two complex numbers z[1] and z[2]:
z[1] + z[2] = z[2] + z[1] and z[1]z[2] = z[2]z[1].
These two laws and the other requirements on a field can be proven by the formulas given above, using the fact that the real numbers themselves form a field.
Unlike the reals, C is not an ordered field, that is to say, it is not possible to define a relation z[1] < z[2] that is compatible with the addition and multiplication. In fact, in any ordered
field, the square of any element is necessarily positive, so i^2 = −1 precludes the existence of an ordering on C.
When the underlying field for a mathematical topic or construct is the field of complex numbers, the topic's name is usually modified to reflect that fact. For example: complex analysis, complex
matrix, complex polynomial, and complex Lie algebra.
Solutions of polynomial equations
Given any complex numbers (called coefficients) a[0],…,a[n], the equation
a[n]z^n + ⋯ + a[1]z + a[0] = 0
has at least one complex solution z, provided that at least one of the higher coefficients a[1],…,a[n] is nonzero. This is the statement of the fundamental theorem of algebra. Because of this fact,
C is called an algebraically closed field. This property does not hold for the field of rational numbers Q (the polynomial x^2 − 2 does not have a rational root, since √2 is not a rational number)
nor the real numbers R (the polynomial x^2 + a does not have a real root for a > 0, since the square of x is positive for any real number x).
There are various proofs of this theorem, either by analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that
any real polynomial of odd degree has at least one real root.
Because of this fact, theorems that hold for any algebraically closed field, apply to C. For example, any non-empty complex square matrix has at least one (complex) eigenvalue.
Algebraic characterization
The field C has the following three properties: first, it has characteristic 0. This means that 1 + 1 + ⋯ + 1 ≠ 0 for any number of summands (all of which equal one). Second, its transcendence degree
over Q, the prime field of C, is the cardinality of the continuum. Third, it is algebraically closed (see above). It can be shown that any field having these properties is isomorphic (as a field) to
C. For example, the algebraic closure of Q[p] also satisfies these three properties, so these two fields are isomorphic. Also, C is isomorphic to the field of complex Puiseux series. However,
specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that C contains many proper subfields that are isomorphic to C.
Characterization as a topological field
The preceding characterization of C describes only the algebraic aspects of C. That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not
dealt with. The following description of C as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological
properties. C contains a subset P (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions:
• P is closed under addition, multiplication and taking inverses.
• If x and y are distinct elements of P, then either x − y or y − x is in P.
• If S is any nonempty subset of P, then S + P = x + P for some x in C.
Moreover, C has a nontrivial involutive automorphism x ↦ x* (namely the complex conjugation), such that xx* is in P for any nonzero x in C.
Any field F with these properties can be endowed with a topology by taking the sets B(x,p) = {y | p − (y − x)(y − x)* ∈ P} as a base, where x ranges over the field and p ranges over P. With this
topology F is isomorphic as a topological field to C.
The only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R because the nonzero complex
numbers are connected, while the nonzero real numbers are not.
Formal construction
Formal development
Above, complex numbers have been defined by introducing i, the imaginary unit, as a symbol. More rigorously, the set C of complex numbers can be defined as the set R^2 of ordered pairs (a,b) of real
numbers. In this notation, the above formulas for addition and multiplication read
(a, b) + (c, d) = (a + c, b + d) and (a, b)·(c, d) = (ac − bd, ad + bc).
It is then just a matter of notation to express (a, b) as a + bi.
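The ordered-pair construction can be sketched directly; the class name `Cx` and the restriction to the two operations shown are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cx:
    """A complex number as an ordered pair (a, b) of reals, as in the formal construction."""
    a: float
    b: float

    def __add__(self, other):
        # (a, b) + (c, d) = (a + c, b + d)
        return Cx(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a, b) * (c, d) = (ac - bd, ad + bc)
        return Cx(self.a * other.a - self.b * other.b,
                  self.a * other.b + self.b * other.a)

i = Cx(0, 1)
assert i * i == Cx(-1, 0)                 # the pair playing the role of -1
assert Cx(1, 2) * Cx(3, 4) == Cx(-5, 10)  # matches (1 + 2i)(3 + 4i)
```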
Though this low-level construction does accurately describe the structure of the complex numbers, the following equivalent definition reveals the algebraic nature of C more immediately. This
characterization relies on the notion of fields and polynomials. A field is a set endowed with addition, subtraction, multiplication and division operations that behave as is familiar from, say,
rational numbers. For example, the distributive law
(x + y)z = xz + yz
must hold for any three elements x, y and z of a field. The set R of real numbers does form a field. A polynomial p(X) with real coefficients is an expression of the form
a[n]X^n + ⋯ + a[1]X + a[0],
where the a[0], ..., a[n] are real numbers. The usual addition and multiplication of polynomials endows the set R[X] of all such polynomials with a ring structure. This ring is called the polynomial ring over the real numbers.
The quotient ring R[X]/(X^2 + 1) can be shown to be a field. This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of R[X]/(X^2 + 1) as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the
extension field can be written as ordered pairs (a,b) of real numbers. Moreover, the above formulas for addition etc. correspond to the ones yielded by this abstract algebraic approach—the two
definitions of the field C are said to be isomorphic (as fields). Together with the above-mentioned fact that C is algebraically closed, this also shows that C is an algebraic closure of R.
Matrix representation of complex numbers
Complex numbers a + bi can also be represented by 2×2 matrices that have the following form:
( a  −b )
( b   a )
Here the entries a and b are real numbers. The sum and product of two such matrices is again of this form, and the sum and product of complex numbers corresponds to the sum and product of such
matrices. The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such
matrices. Moreover, the square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix: |z|^2 = a^2 + b^2.
The conjugate z̄ = a − bi corresponds to the transpose of the matrix.
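The correspondence can be sketched with plain nested lists (the helper names `to_matrix` and `mat_mul` are my own):

```python
def to_matrix(z):
    """Represent a + bi as the 2x2 matrix [[a, -b], [b, a]]."""
    return [[z.real, -z.imag], [z.imag, z.real]]

def mat_mul(m, n):
    """Ordinary 2x2 matrix multiplication."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j

# The product of the matrices is the matrix of the product.
assert mat_mul(to_matrix(z), to_matrix(w)) == to_matrix(z * w)

# det [[a, -b], [b, a]] = a**2 + b**2 = |z|**2.
m = to_matrix(z)
assert m[0][0] * m[1][1] - m[0][1] * m[1][0] == (z * z.conjugate()).real
```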
Though this representation of complex numbers with matrices is the most common, many other representations arise from matrices other than
( 0  −1 )
( 1   0 )
that square to the negative of the identity matrix. See the article on 2 × 2 real matrices for other representations of complex numbers.
Complex analysis
The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural
proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented
as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the
complex function's dynamic transformation of the complex plane.
Complex exponential and related functions
The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and
imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view, C,
endowed with the metric
d(z[1], z[2]) = |z[1] − z[2]|,
is a complete metric space, which notably includes the triangle inequality
|z[1] + z[2]| ≤ |z[1]| + |z[2]|
for any two complex numbers z[1] and z[2].
Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function exp(z), also written e^z, is defined as the infinite series
exp(z) = 1 + z + z^2/2! + z^3/3! + ⋯
and the series defining the real trigonometric functions sine and cosine, as well as hyperbolic functions such as sinh also carry over to complex arguments without change. Euler's identity states: e^iφ = cos φ + i sin φ
for any real number φ, in particular e^iπ = −1.
Unlike in the situation of real numbers, there is an infinitude of complex solutions z of the equation
e^z = w
for any complex number w ≠ 0. It can be shown that any such solution z—called complex logarithm of w—satisfies
log w = ln|w| + i arg w,
where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π,π].
Complex exponentiation z^ω is defined as
z^ω = e^(ω log z).
Consequently, it is in general multi-valued. For ω = 1/n, for some natural number n, this recovers the non-uniqueness of nth roots mentioned above.
Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and
logarithm identities. For example, they do not satisfy (a^b)^c = a^(bc).
Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.
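A classic instance of such a failure, here stated with the principal square root rather than the general power, can be checked with `cmath` (a sketch; the identity √z·√w = √(zw) holds for positive reals but not in general):

```python
import cmath

# sqrt(z) * sqrt(w) = sqrt(z * w) fails once principal branches
# of the complex square root are involved.
lhs = cmath.sqrt(-1) * cmath.sqrt(-1)   # i * i = -1
rhs = cmath.sqrt((-1) * (-1))           # sqrt(1) = 1
assert cmath.isclose(lhs, -1)
assert cmath.isclose(rhs, 1)
assert lhs != rhs
```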
Holomorphic functions
A function f: C → C is called holomorphic if it satisfies the Cauchy–Riemann equations. For example, any R-linear map C → C can be written in the form
f(z) = az + bz̄
with complex coefficients a and b. This map is holomorphic if and only if b = 0. The second summand z ↦ bz̄ is real-differentiable, but does not satisfy the Cauchy–Riemann equations.
Complex analysis shows some features not apparent in real analysis. For example, any two holomorphic functions f and g that agree on an arbitrarily small open subset of C necessarily agree
everywhere. Meromorphic functions, functions that can locally be written as f(z)/(z − z[0])^n with a holomorphic function f, still share some of the features of holomorphic functions. Other functions
have essential singularities, such as sin(1/z) at z = 0.
Applications
Complex numbers have essential concrete applications in a variety of scientific and related areas such as signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics,
cartography, and vibration analysis. Some applications of complex numbers are:
Control theory
In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's poles and zeros are then analyzed in the complex plane. The
root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane.
In the root locus method, it is especially important whether the poles and zeros are in the left or right half planes, i.e. have real part greater than or less than zero. If a linear, time-invariant
(LTI) system has poles that are
• in the right half plane, it will be unstable,
• all in the left half plane, it will be stable,
• on the imaginary axis, it will have marginal stability.
If a system has zeros in the right half plane, it is a nonminimum phase system.
Improper integrals
In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.
Fluid dynamics
In fluid dynamics, complex functions are used to describe potential flow in two dimensions.
Dynamic equations
In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in
terms of base functions of the form f(t) = e^rt. Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used, to attempt to solve
the system in terms of base functions of the form f(t) = r^t.
Electromagnetism and electrical engineering
In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary,
frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus.
In electrical engineering, the imaginary unit is denoted by j, to avoid confusion with I, which is generally in use to denote electric current, or, more particularly, i, which is generally in use to
denote instantaneous electric current.
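A minimal sketch of phasor calculus: the impedances of a resistor, inductor and capacitor at angular frequency ω are R, jωL and 1/(jωC), and in series they simply add. All component values below are illustrative assumptions:

```python
import math

# Illustrative series RLC circuit; the component values are assumptions.
R = 100.0                  # resistance, ohms
L = 0.1                    # inductance, henries
C = 1e-6                   # capacitance, farads
omega = 2 * math.pi * 50   # 50 Hz supply, in rad/s

j = 1j  # electrical-engineering notation for the imaginary unit
Z = R + j * omega * L + 1 / (j * omega * C)  # series impedance

magnitude = abs(Z)                   # opposition to current flow
phase = math.atan2(Z.imag, Z.real)   # voltage-current phase shift

# Only the resistor contributes to the real part of the impedance.
assert Z.real == R
assert magnitude >= abs(Z.imag)
```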
Since the voltage in an AC circuit is oscillating, it can be represented as
V(t) = V[0]e^jωt = V[0](cos ωt + j sin ωt).
To obtain the measurable quantity, the real part is taken:
v(t) = Re(V) = V[0]cos ωt.
The complex-valued signal V(t) is called the analytic representation of the real-valued, measurable signal v(t).^[14]
Signal analysis
Complex numbers are used in signal analysis and other fields for a convenient description for periodically varying signals. For given real functions representing actual physical quantities, often in
terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value |z| of the
corresponding z is the amplitude and the argument arg(z) is the phase.
If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex valued functions of the form
z(t) = Ae^iωt,
where ω represents the angular frequency and the complex number A encodes the phase and amplitude as explained above.
This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and
otherwise process digital audio signals, still images, and video signals.
Another example, relevant to the two side bands of amplitude modulation of AM radio, is:
cos((ω + α)t) + cos((ω − α)t) = 2cos(αt)cos(ωt).
Quantum mechanics
The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps
most standard. The original foundation formulas of quantum mechanics—the Schrödinger equation and Heisenberg's matrix mechanics—make use of complex numbers.
In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer
standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.
Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and Julia sets.
Every triangle has a unique Steiner inellipse—an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found
as follows, according to Marden's theorem:^[15]^[16] Denote the triangle's vertices in the complex plane as a = x[A] + y[A]i, b = x[B] + y[B]i, and c = x[C] + y[C]i. Write the cubic equation (x − a)(x − b)(x − c) = 0, take
its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse.
Algebraic number theory
As mentioned above, any nonconstant polynomial equation (in complex coefficients) has a solution in C. A fortiori, the same is true if the equation has rational coefficients. The roots of such
equations are called algebraic numbers – they are a principal object of study in algebraic number theory. Compared to the algebraic closure of Q, which also contains all algebraic numbers, C has
the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically
applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge – a
purely geometric problem.
Another example are Gaussian integers, that is, numbers of the form x + iy, where x and y are integers, which can be used to classify sums of squares.
Analytic number theory
Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done
by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta function ζ(s) is related to the distribution of prime numbers.
The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his
Stereometrica he considers, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term √(81 − 144) in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Heron merely replaced it by its positive (√(144 − 81) = √63).^[17]
The impetus to study complex numbers proper first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see
Niccolò Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of
negative numbers. As an example, Tartaglia's formula for a cubic equation of the form x^3 = px + q^[18] gives the solution to the equation x^3 = x as
(1/√3)((√−1)^1/3 + 1/(√−1)^1/3).
At first glance this looks like nonsense. However formal calculations with complex numbers show that the equation z^3 = i has solutions −i, (√3 + i)/2 and (−√3 + i)/2. Substituting these in turn for (√−1)^1/3 in Tartaglia's cubic formula and simplifying, one gets 0, 1 and −1 as the solutions of x^3 − x = 0. Of course this particular equation can be solved at sight but it does illustrate that when general formulas are used to
solve cubic equations with real roots then, as later mathematicians showed rigorously, the use of complex numbers is unavoidable. Rafael Bombelli was the first to explicitly address these seemingly
paradoxical solutions of cubic equations and developed the rules for complex arithmetic trying to resolve these issues.
The term "imaginary" for these quantities was coined by René Descartes in 1637, although he was at pains to stress their imaginary nature^[19]
[...] sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine.
([...] quelquefois seulement imaginaires c’est-à-dire que l’on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu’il n’y a quelquefois aucune quantité qui corresponde à
celle qu’on imagine.)
A further source of confusion was that the equation √−1·√−1 = −1 seemed to be capriciously inconsistent with the algebraic identity √a·√b = √(ab), which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity 1/√a = √(1/a)) in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of √−1 to guard against this mistake. Even so, Euler considered it natural to introduce
students to complex numbers much earlier than we do today. In his elementary algebra text book, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout.
In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For
instance, in 1730 Abraham de Moivre noted that the complicated identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could
be simply re-expressed by the following well-known formula which bears his name, de Moivre's formula:
(cos θ + i sin θ)^n = cos nθ + i sin nθ.
In 1748 Leonhard Euler went further and obtained Euler's formula of complex analysis:
e^iθ = cos θ + i sin θ
by formally manipulating complex power series, and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities.
The idea of a complex number as a point in the complex plane (above) was first described by Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's De Algebra tractatus.
Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous
proof of the fundamental theorem of algebra. Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of
the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology.
In the beginning of 19th century, other mathematicians discovered independently the geometrical representation of the complex numbers: Buée, Mourey, Warren, Français and his brother, Bellavitis.^[20]
The English mathematician G. H. Hardy remarked that Gauss was the first mathematician to use complex numbers in 'a really confident and scientific way' although mathematicians such as Niels Henrik
Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.^[21] Augustin Louis Cauchy and Bernhard Riemann together brought the fundamental
ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.
The common terms used in the theory are chiefly due to the founders. Argand called cos φ + i sin φ the direction factor, and r = √(a^2 + b^2) the modulus; Cauchy (1828) called cos φ + i sin φ the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used i for √−1, introduced the term complex number for a + bi, and called a^2 + b^2 the norm. The expression direction coefficient, often used for cos φ + i sin φ, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass.
Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others.
Generalizations and related notions
The process of extending the field R of reals to C is known as Cayley–Dickson construction. It can be carried further to higher dimensions, yielding the quaternions H and octonions O which (as a real
vector space) are of dimension 4 and 8, respectively. In this context the complex numbers have been called the binarions.^[22]
However, just as applying the construction to the reals loses the property of ordering, more properties familiar from real and complex numbers vanish with increasing dimension. The quaternions form only a skew field, i.e. x·y ≠ y·x for some quaternions x, y, and the multiplication of octonions, in addition to not being commutative, fails to be associative: (x·y)·z ≠ x·(y·z) for some octonions x, y, z.
Reals, complex numbers, quaternions and octonions are all normed division algebras over R. However, by Hurwitz's theorem they are the only ones. The next step in the Cayley–Dickson construction, the
sedenions, in fact fails to have this structure.
The Cayley–Dickson construction is closely related to the regular representation of C, thought of as an R-algebra (an R-vector space with a multiplication), with respect to the basis (1, i). This means the following: the R-linear map

    C → C, z ↦ wz

for some fixed complex number w = a + bi can be represented by a 2×2 matrix (once a basis has been chosen). With respect to the basis (1, i), this matrix is

    [ a  −b ]
    [ b   a ]

i.e., the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of C in the 2×2 real matrices, it is not the only one. Any matrix

    J = [ p  q ]
        [ r  −p ]    with p^2 + qr = −1

has the property that its square is the negative of the identity matrix: J^2 = −I. Then the set {a·I + b·J : a, b ∈ R}
is also isomorphic to the field C, and gives an alternative complex structure on R^2. This is generalized by the notion of a linear complex structure.
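This correspondence is easy to check numerically. In the sketch below (an illustration added here, not part of the original article), w = a + bi is represented by the matrix [[a, −b], [b, a]], and a different matrix J with J^2 = −I shows an alternative complex structure:

```python
def to_matrix(w):
    # w = a + bi in the basis (1, i) becomes [[a, -b], [b, a]]
    a, b = w.real, w.imag
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    # plain 2x2 matrix product
    return [[sum(m[r][k] * n[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

w, z = 2 + 3j, 1 - 4j
print(to_matrix(w * z) == mat_mul(to_matrix(w), to_matrix(z)))  # True: a ring homomorphism

# An alternative complex structure: any J = [[p, q], [r, -p]] with
# p*p + q*r = -1 satisfies J^2 = -I; here p = 1, q = 1, r = -2.
J = [[1, 1], [-2, -1]]
print(mat_mul(J, J))  # [[-1, 0], [0, -1]]
```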
Hypercomplex numbers also generalize R, C, H, and O. For example, this notion contains the split-complex numbers, which are elements of the ring R[x]/(x^2 − 1) (as opposed to R[x]/(x^2 + 1)). In this
ring, the equation a^2 = 1 has four solutions.
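A quick illustrative check (added here) that the equation a^2 = 1 has four solutions among the split-complex numbers, representing u + v·j with j^2 = +1 as a pair (u, v):

```python
def sc_mul(x, y):
    """Split-complex product: (u + v*j)(p + q*j) with j*j = +1."""
    u, v = x
    p, q = y
    return (u * p + v * q, u * q + v * p)

one = (1.0, 0.0)
candidates = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]  # ±1 and ±j
roots = [c for c in candidates if sc_mul(c, c) == one]
print(len(roots))  # 4: all of ±1 and ±j square to 1
```

In C, by contrast, a^2 = 1 has only the two solutions ±1; the extra square roots of unity reflect the zero divisors of R[x]/(x^2 − 1).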
The field R is the completion of Q, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on Q lead to the fields Q[p] of p-adic numbers (for any
prime number p), which are thereby analogous to R. There are no other nontrivial ways of completing Q than R and Q[p], by Ostrowski's theorem. The algebraic closures of Q[p] still carry a norm, but
(unlike C) are not complete with respect to it. The completion of this algebraic closure turns out to be algebraically closed. This field is called p-adic complex numbers by analogy.
The fields R and Q[p] and their finite field extensions, including C, are local fields.
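To make the p-adic metric concrete, here is a small illustrative helper (added here, not from the article) computing |x|_p = p^(−v), where p^v exactly divides the rational x, together with the ultrametric inequality that distinguishes these absolute values from the usual one:

```python
from fractions import Fraction

def p_adic_abs(x, p):
    """p-adic absolute value of a rational: p**(-v) where p**v exactly divides x."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(1, p) ** v

print(p_adic_abs(50, 5))               # 1/25, since 50 = 2 * 5**2
print(p_adic_abs(Fraction(3, 10), 5))  # 5, since 5 divides the denominator
# ultrametric inequality: |x + y|_p <= max(|x|_p, |y|_p)
x, y = Fraction(50), Fraction(75)
print(p_adic_abs(x + y, 5) <= max(p_adic_abs(x, 5), p_adic_abs(y, 5)))  # True
```

Completing Q with respect to this metric instead of the usual absolute value is what produces Q[p] in place of R.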
See also
Wikimedia Commons has media related to Complex numbers.
1. ↑ Charles P. McKeague (2011), Elementary Algebra, Brooks/Cole, p. 524, ISBN 978-0-8400-6421-9
2. ↑ Burton (1995, p. 294)
3. ↑ Complex Variables (2nd Edition), M.R. Spiegel, S. Lipschutz, J.J. Schiller, D. Spellman, Schaum's Outline Series, McGraw-Hill (USA), ISBN 978-0-07-161569-3
4. ↑ Aufmann, Richard N.; Barker, Vernon C.; Nation, Richard D. (2007), "Chapter P", College Algebra and Trigonometry (6 ed.), Cengage Learning, p. 66, ISBN 0-618-82515-0
5. ↑ For example Ahlfors (1979).
6. ↑ Brown, James Ward; Churchill, Ruel V. (1996), Complex variables and applications (6th ed.), New York: McGraw-Hill, p. 2, ISBN 0-07-912147-0, “In electrical engineering, the letter j is used
instead of i.”
7. ↑ Katz (2004, §9.1.4)
8. ↑ Abramowitz, Milton; Stegun, Irene A. (1964), Handbook of mathematical functions with formulas, graphs, and mathematical tables, Courier Dover Publications, p. 17, ISBN 0-486-61272-4, Section
3.7.26, p. 17
9. ↑ Cooke, Roger (2008), Classical algebra: its nature, origins, and uses, John Wiley and Sons, p. 59, ISBN 0-470-25952-3, Extract: page 59
10. ↑ Ahlfors (1979, p. 3)
11. ↑ Kasana, H.S. (2005), "Chapter 1", Complex Variables: Theory And Applications (2nd ed.), PHI Learning Pvt. Ltd, p. 14, ISBN 81-203-2641-5
12. ↑ Nilsson, James William; Riedel, Susan A. (2008), "Chapter 9", Electric circuits (8th ed.), Prentice Hall, p. 338, ISBN 0-13-198925-1
13. ↑ Electromagnetism (2nd edition), I.S. Grant, W.R. Phillips, Manchester Physics Series, 2008 ISBN 0-471-92712-0
14. ↑ Kalman, Dan (2008a), "An Elementary Proof of Marden's Theorem", The American Mathematical Monthly, 115: 330–38, ISSN 0002-9890
15. ↑ Kalman, Dan (2008b), "The Most Marvelous Theorem in Mathematics", Journal of Online Mathematics and its Applications
16. ↑ Nahin, Paul J. (2007), An Imaginary Tale: The Story of √−1, Princeton University Press, ISBN 978-0-691-12798-9, retrieved 20 April 2011
17. ↑ In modern notation, Tartaglia's solution is based on expanding the cube of the sum of two cube roots: (∛u + ∛v)^3 = 3∛(uv)(∛u + ∛v) + u + v. With x = ∛u + ∛v, p = 3∛(uv) and q = u + v, u and v can be expressed in terms of p and q as u = q/2 + √(q^2/4 − p^3/27) and v = q/2 − √(q^2/4 − p^3/27), respectively. Therefore, x = ∛(q/2 + √(q^2/4 − p^3/27)) + ∛(q/2 − √(q^2/4 − p^3/27)). When q^2/4 − p^3/27 is negative (casus irreducibilis), the second cube root should be regarded as the complex conjugate of the first one.
18. ↑ Descartes, René (1954) [1637], La Géométrie | The Geometry of René Descartes with a facsimile of the first edition, Dover Publications, ISBN 0-486-60068-8, retrieved 20 April 2011
19. ↑ Caparrini, Sandro (2000), "On the Common Origin of Some of the Works on the Geometrical Interpretation of Complex Numbers", in Kim Williams (ed.), Two Cultures, Birkhäuser, p. 139, ISBN
3-7643-7186-2 Extract of page 139
20. ↑ Hardy, G. H.; Wright, E. M. (2000) [1938], An Introduction to the Theory of Numbers, OUP Oxford, p. 189 (fourth edition), ISBN 0-19-921986-9
21. ↑ Kevin McCrimmon (2004) A Taste of Jordan Algebras, pp 64, Universitext, Springer ISBN 0-387-95447-3 MR 2014924
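The recipe in footnote 17 can be verified numerically. The sketch below (added here) uses Bombelli's classic example x^3 = 15x + 4, whose discriminant q^2/4 − p^3/27 is negative (casus irreducibilis), and recovers the real root 4 from two conjugate complex cube roots:

```python
import cmath

# x^3 = p*x + q with p = 15, q = 4 (Bombelli's example; real root x = 4)
p, q = 15.0, 4.0
disc = (q / 2) ** 2 - (p / 3) ** 3        # q^2/4 - p^3/27 = -121 < 0
u = q / 2 + cmath.sqrt(disc)              # 2 + 11i
root_u = u ** (1 / 3)                     # principal complex cube root: 2 + i
root_v = root_u.conjugate()               # the conjugate cube root: 2 - i
x = root_u + root_v                       # imaginary parts cancel
print(abs(x - 4) < 1e-9)  # True
```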
Mathematical references
Historical references
Further reading
• The Road to Reality: A Complete Guide to the Laws of the Universe, by Roger Penrose; Alfred A. Knopf, 2005; ISBN 0-679-45443-8. Chapters 4–7 in particular deal extensively (and enthusiastically)
with complex numbers.
• Unknown Quantity: A Real and Imaginary History of Algebra, by John Derbyshire; Joseph Henry Press; ISBN 0-309-09657-X (hardcover 2006). A very readable history with emphasis on solving polynomial
equations and the structures of modern algebra.
• Visual Complex Analysis, by Tristan Needham; Clarendon Press; ISBN 0-19-853447-7 (hardcover, 1997). History of complex numbers and complex analysis with compelling and useful visual
• Conway, John B., Functions of One Complex Variable I (Graduate Texts in Mathematics), Springer; 2 edition (12 September 2005). ISBN 0-387-90328-3.
External links
Wikiversity has learning materials about Complex Numbers
Wikibooks has a book on the topic of: Calculus/Complex numbers
This article is issued from the 11/20/2016 version. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
Our users:
This is the best software I have come across in the education field.
Stephen J., NY
The Algebrator was very helpful, it helped me get back on track and bring back my skills for my next school season. The program shows step by step solutions which made learning easier. I think this
would be very helpful to anyone just starting to learn algebra, or even if they already know it, it would sharpen their skills.
Colin Bers, NY
You've been extremely patient and helpful. I'm a "late bloomer" in the college scene, and attempting math classes online are quite challenging to say the least! Thank you!
Maria Lopez, CA
Excellent software, explains not only which rule to use, but how to use it.
Billy Hafren, TX
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2014-08-13:
• british old sats questions for ks2 yr 6
• sum calculator
• solving math problems by completing the square
• integers worksheets
• solution of GCSE "O level" " Physics" "Paper 2" pdf
• evaluate algebraic expressions worksheets
• free prealgebra test
• artin algebra solutions
• equation for least common multiple
• hard algebra problem
• simplifying expressions before solving equations, powerpoint
• Least Common Multiple Calculator
• florida prentice hall mathematics pre algebra workbook teachers edition
• how to solve logarithms+calculator
• how to learn algebra fast free online
• ti 89 complex expand
• help with 6th grade math probability
• exponent expression simplify calculator
• maths sats problem questions downloads
• TI-83 calculator emulator for mac os x
• orleans hanna algebra test
• solving hands on equations calculator
• trig answers
• 5th graders statistic about exercise
• maths probems
• ti-89 and laplace transform
• Sixth grade Combinations and Permutations
• factor 3rd order polynomials
• middle school formula sheet for volumes
• "fun math worksheets" Middle School
• free algebra calculator step by step
• factoring quadratic equations worksheet
• simplifying quadratic equations
• PROBIBILITY MATH
• cost accounting chapter 10 homework
• fifth grade word problems
• factoring third order polynomials
• least common denominator of 14 and 52
• "greatest common factor" 1144
• systems of equations in real life problems
• trinomial calculators
• answer algebra book
• professional algabra
• checker/math
• phytagoras formula
• mcdougal Littell Algebra II workbook Final Exam review
• how to do polynomials for 9th grade for free
• rules for binomials adding multiplying dividing subtracting
• test & answers- 6th grade science
• solving simultaneous equations online
• mathmatical signs
• logarithm games
• math poems using math terms
• how to find square root of polynomial
• solved aptitude questions
• freegcse maths practice papers
• free algebra solver (trinomials)
• T183 calculators online
• solving equations with radicals and negative exponents
• free accounting textbooks to dowload
• Accountancy subject online problem solver
• 6th grades erb released test
• FREE KUMON
• algebra integer equations worksheets
• glencoe algebra 2 practice workbook answers
• Math Tutor Programs
• download "Teach yourself calculus"
• 6th grade online math taks practice test
• Maths for Dummies
• McGraw Hill Mathematics Daily Homework Practice Grade 4
• Quadratic Inequalities in Additional Mathematics
• algebra study problems
• free inequality worksheet
• prentice hall workbook algebra 2 answer key
• common graphs of function
• nonlinear slope calculator online
• begining algebra free problem solver
• basic math linear equations tutorials
• solving difference quotients
• algebra tests for kids
• solving quadratic equations using square roots
• algebra 2 tutoring
• Formula to Convert Decimal to Fraction
• mathimatical programs
Microsoft Excel Functions Knowledge Test! Trivia Quiz
Questions and Answers
• 1.
The ___ function adds all of the numbers in a range of cells
The SUM function is used to add all of the numbers in a range of cells. It is a built-in function in spreadsheet software that allows for quick and efficient calculation of the sum of multiple
values. By specifying the range of cells, the SUM function adds up all the numbers within that range and returns the total sum.
• 2.
Number values are often called labels.
Correct Answer
B. False
Number values are not often called labels. Labels are typically used to represent categorical or qualitative data, while number values are used to represent quantitative or numerical data.
Therefore, the statement is false.
• 3.
Cell contents are either values or formulas.
Correct Answer
B. False
Cell contents are not limited to values and formulas: a cell can also hold text labels, dates, or other kinds of entries, which is why the statement is marked false here.
• 4.
The values that an Excel function uses to perform operations are called
Correct Answer
In Excel, arguments are the values that are provided to a function in order to perform calculations or operations. These arguments can be numbers, text, cell references, or other functions. The
function uses these arguments to carry out the desired operation and return a result.
• 5.
An excel document is called a
Correct Answer
An excel document is called a workbook because it refers to a file that contains multiple worksheets or spreadsheets. A workbook acts as a container for organizing and managing data in Excel. It
allows users to create, edit, and store various sheets within a single file. Each sheet within the workbook can be used to input and analyze data independently, making it a convenient and
efficient way to handle complex data sets. Therefore, the term "workbook" accurately describes the nature and functionality of an excel document.
• 6.
A ______is formatted as a pattern of uniformly spaced horizontal rows and vertical columns
Correct Answer
A worksheet is formatted as a pattern of uniformly spaced horizontal rows and vertical columns. This format allows for organizing and presenting data in a structured manner, making it easier to
input, manipulate, and analyze information. Worksheets are commonly used in spreadsheet software like Microsoft Excel and Google Sheets, where users can enter data into cells and perform
calculations or create charts based on the data. The uniform spacing of rows and columns ensures consistency and facilitates efficient data management.
• 7.
An excel formula begins with which symbol?
Correct Answer
equal sign =
An excel formula begins with the equal sign (=) because it is used to indicate that the following characters are part of a formula and not just regular text. This symbol tells Excel to perform a
calculation or function using the values or references that come after it. Without the equal sign, Excel would interpret the characters as text rather than a formula, resulting in incorrect
calculations or errors.
• 8.
The____allows the user to enter or edit he value or formula contained in the active cell.
Correct Answer
formula bar
The formula bar is a feature in spreadsheet software that allows the user to enter or edit the value or formula contained in the active cell. It is located at the top of the spreadsheet interface
and provides a convenient and accessible way for users to input or modify data in a cell. By typing directly into the formula bar, users can easily enter numerical values, text, or complex
formulas, making it an essential tool for data manipulation and analysis.
• 9.
Two or more cells that Excel treats as a single unit is called a
Correct Answer
In Excel, a range refers to a group of two or more cells that are selected together. When cells are grouped into a range, they can be manipulated or formatted as a single unit. This allows users
to perform calculations or apply formatting to multiple cells simultaneously, saving time and effort. By selecting a range, users can easily perform operations such as summing the values,
applying formulas, or changing the formatting across the entire range of cells.
• 10.
The ____is outlined in black and is ready to accept data.
Correct Answer
active cell
The active cell refers to the currently selected cell in a spreadsheet or table. It is highlighted or outlined in black to indicate that it is ready to accept data or perform actions such as
editing, formatting, or applying formulas. The active cell is where any input or changes made will be applied to, making it an important element in data manipulation and analysis.
• 11.
Anything a user types into a cell is known as a
Correct Answer
cell content
In the context of a spreadsheet or worksheet, a cell is a rectangular box where data can be entered. The data entered into a cell is referred to as the cell content. This can include numbers,
text, formulas, or any other type of information that the user inputs. The answer "cell content" accurately describes anything that a user types into a cell, as it encompasses all types of data
that can be entered into a cell.
• 12.
The small black square in the lower right corner of a selected cell is called a
Correct Answer
fill handle
The small black square in the lower right corner of a selected cell is called a fill handle. The fill handle is used to quickly copy and fill data into adjacent cells. By dragging the fill
handle, the content of the selected cell can be automatically filled into the neighboring cells, either by copying the value or by filling a series of numbers, dates, or other patterns. The fill
handle is a convenient tool for quickly populating a range of cells with similar data.
• 13.
Values are compared using a ___ operator
Correct Answer
The given answer is correct because values are compared using a comparison operator. Comparison operators are used to compare two values and determine their relationship, such as whether they are
equal, not equal, greater than, less than, etc. These operators are essential in programming to make decisions based on the comparison results.
• 14.
The _____ function counts the number of cells within a range that meets a certain criteria.
Correct Answer
The COUNTIF function is used to count the number of cells within a range that meet a specific criteria. It allows you to specify the range of cells to be counted and the criteria that the cells
must meet. This function is useful for analyzing data and determining the number of cells that satisfy a particular condition.
• 15.
Excel’s ____ function retrieves the date and time from the computer’s calendar and clock.
Correct Answer
The NOW function in Excel retrieves the current date and time from the computer's calendar and clock.
• 16.
To ___ a table means to arrange all of the data in a specific order.
Correct Answer
To sort a table means to arrange all of the data in a specific order. This process involves organizing the information in a structured manner, such as alphabetically, numerically, or
chronologically. Sorting allows for easier analysis and retrieval of data, as it brings together similar or related items. It can be done in ascending or descending order, depending on the
desired arrangement.
• 17.
A gallery of text styles that can be used to apply decorative effects to text is called a
Correct Answer
Word Art
A gallery of text styles that can be used to apply decorative effects to text is called Word Art. Word Art is a feature in various software applications, including Microsoft Word, that allows
users to enhance their text with various artistic effects such as shadows, gradients, and 3D effects. It provides a range of pre-designed styles and effects that can be applied to text to make it
visually appealing and eye-catching. Word Art is commonly used in graphic design, presentations, and other visual projects to add creativity and visual interest to text elements.
• 18.
An image that appears to have length, depth and width is said to be
Correct Answer
The term "3D" refers to a three-dimensional image that appears to have length, depth, and width. This means that the image is not flat or two-dimensional, but rather it has a sense of depth and
can be viewed from different angles. In contrast, a two-dimensional image only has length and width, like a photograph or a painting. Therefore, when an image is described as "3D," it means that
it has an added dimension and appears more realistic and immersive.
• 19.
The Excel operator for greater than or equal to is said to be
Correct Answer
The correct answer is >=. This operator is used in Excel to compare two values and determine if the first value is greater than or equal to the second value. It returns a logical value of TRUE if
the condition is met and FALSE if it is not.
• 20.
Each pie slice displayed on a pie chart is an example of a
Correct Answer
data marker
Each pie slice displayed on a pie chart represents a specific portion or category of data. These slices act as data markers, visually representing the data points or values they represent. By
using different colors or patterns for each slice, the chart effectively communicates the distribution or composition of the data set. Therefore, the correct answer is "data marker."
• 21.
A ___ _______ shows the relationship of each part of the data to the whole.
Correct Answer
Pie Chart
A pie chart is a type of visual representation that shows the relationship of each part of the data to the whole. It is commonly used to display data in percentages or proportions, where each
category is represented by a slice of the pie. The size of each slice corresponds to the proportionate value it represents in relation to the whole. This type of chart is particularly useful for
illustrating data that can be divided into distinct categories and comparing the relative sizes or shares of each category.
• 22.
You would ____ a table if you wanted to display only data that matched specific criteria.
Correct Answer
To display only data that matches specific criteria, you would use a filter. A filter allows you to sort through a large amount of data and display only the information that meets certain
conditions or criteria. By applying a filter, you can easily narrow down the data to focus on what is relevant and eliminate any unnecessary information.
• 23.
By including worksheets in a _____ you can enter or edit data on them simultaneously.
Correct Answer
By including worksheets in a group, you can enter or edit data on them simultaneously. This means that any changes made to one worksheet within the group will automatically be applied to all the
other worksheets in the group. This can be useful when working on a project that requires input or edits across multiple worksheets, as it allows for efficient and synchronized data entry.
• 24.
A predefined formula is called a
Correct Answer
A predefined formula is called a function because a function is a set of instructions or operations that can be performed on one or more input values to produce a desired output. In programming
or mathematics, functions are used to encapsulate a specific task or calculation that can be reused multiple times. They are predefined to provide a standardized and efficient way of performing
common operations, making the code more modular and easier to maintain. Therefore, the term "function" accurately describes a predefined formula that can be used to calculate specific results.
• 25.
On a line chart, the y-axis is known as
Correct Answer
value axis
The y-axis on a line chart is known as the value axis because it represents the numerical values or measurements of the data being plotted. It is used to show the dependent variable or the
variable being measured on the vertical axis. The value axis helps in understanding the relationship between the x-axis (which represents the independent variable) and the corresponding values or
trends of the data points on the line chart.
• 26.
The spacing between the tick marks is determined by the
Correct Answer
Major units value
The spacing between the tick marks on a graph or chart is determined by the major units value. This value represents the interval or distance between each tick mark on the axis. For example, if
the major units value is set to 5, then there will be a tick mark every 5 units on the axis. This helps to visually represent the data in a clear and organized manner, making it easier for
viewers to interpret the information being presented.
• 27.
If you want to chart trends over time use a
Correct Answer
line chart/graph
A line chart/graph is the best option for charting trends over time because it allows for the visualization of data points connected by lines, showing the progression and direction of the trend.
This type of chart is ideal for displaying continuous data, such as stock prices, population growth, or temperature changes, over a specific period. The line chart/graph provides a clear and
concise representation of how the data changes over time, making it easier to identify patterns, fluctuations, and overall trends.
• 28.
On a line chart, the x-axis is known as the
Correct Answer
category axis
The x-axis on a line chart is known as the category axis because it represents the different categories or groups being compared. It is typically used to display qualitative data or discrete
variables, such as time periods, names, or labels. The category axis helps to organize and differentiate the data points along the horizontal axis, allowing for easy comparison and analysis of
the values associated with each category.
• 29.
Goal Seek is an example of a
Correct Answer
What if analysis
Goal Seek is an example of a "What if analysis" because it allows users to determine the input value needed to achieve a desired result. It helps in understanding how changes in one variable can
impact the outcome of a formula or calculation. By specifying a target value and adjusting the input value, Goal Seek calculates the necessary value to achieve the desired result. This type of
analysis is useful for making informed decisions and understanding the sensitivity of a model to different variables.
• 30.
Changing a value in a cell to see what effect it has on values in other cells that are calculated using the value is said to be doing
Correct Answer
what if analysis
What if analysis refers to the process of changing a value in a cell to observe the impact it has on other cells that are calculated using that value. It allows users to explore different
scenarios and understand how changes in one variable can affect the overall outcome. By performing what if analysis, users can make informed decisions and evaluate the sensitivity of their
calculations to different input values.
• 31.
An Excel worksheet is made up of one or more workbooks.
Correct Answer
B. False
An Excel worksheet is not made up of workbooks, but rather a workbook is made up of one or more worksheets. A workbook is the main file in Excel that can contain multiple worksheets, which are
individual sheets where data can be entered and manipulated. Therefore, the statement that an Excel worksheet is made up of one or more workbooks is incorrect.
• 32.
You can select an element of a chart to format by using the chart’s element box.
Correct Answer
A. True
The statement is true because the element box allows you to select and format specific elements within a chart. This feature is commonly found in chart editing tools and allows users to customize
the appearance and formatting of individual chart elements such as data points, axis labels, titles, and legends. By selecting an element in the chart's element box, users can apply various
formatting options such as changing colors, fonts, sizes, and styles to enhance the visual presentation of the chart.
• 33.
Excel does not have a spell checker.
Correct Answer
B. False
Excel does have a spell checker. This feature allows users to check the spelling of words in their spreadsheet and make corrections if necessary. The spell checker in Excel helps to ensure that
the text entered is accurate and free from spelling errors. Users can access the spell checker through the Review tab in the Excel ribbon, where they can choose to check the spelling of the
entire worksheet or just a selected range of cells.
• 34.
Arithmetic operators are used to perform basic mathematical operations in Excel.
Correct Answer
A. True
Arithmetic operators are indeed used in Excel to perform basic mathematical operations such as addition, subtraction, multiplication, and division. These operators allow users to manipulate
numerical data and perform calculations within Excel spreadsheets. Therefore, the statement "Arithmetic operators are used to perform basic mathematical operations in Excel" is true.
• 35.
If a value is changed in in a cell, Excel recalculates all formulas that reference that cell.
Correct Answer
A. True
When a value is changed in a cell, Excel automatically updates all formulas that reference that cell. This ensures that any dependent formulas are recalculated accurately based on the new value.
This feature is useful in maintaining data integrity and ensuring that all calculations are up to date. Therefore, the given statement is true.
• 36.
When the user changes a value in a cell, Excel automatically recalculates.
Correct Answer
A. True
When a user changes a value in a cell, Excel automatically recalculates because it has built-in formulas and functions that depend on the values of other cells. This automatic recalculation
ensures that all the formulas and functions in the spreadsheet are updated and reflect the latest changes made by the user. This feature saves time and effort for the user, as they don't have to
manually recalculate the entire spreadsheet every time a value is changed.
• 37.
A pie chart should be limited to no more than seven categories.
Correct Answer
A. True
A pie chart should be limited to no more than seven categories because it becomes difficult to interpret and compare the proportions accurately when there are too many categories. With more than
seven categories, the slices of the pie become smaller and harder to distinguish, leading to confusion and potential misinterpretation of the data. Limiting the number of categories ensures that
the chart remains clear and easy to understand for the audience.
• 38.
A menu that is context sensitive displays options related to the current task.
Correct Answer
A. True
A context-sensitive menu displays options that are relevant and related to the current task or context. This means that the menu dynamically changes based on the specific situation or object
being interacted with. It provides users with a more efficient and streamlined experience by only showing options that are applicable to their current needs. Therefore, the statement "A menu that
is context sensitive displays options related to the current task." is true.
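Several behaviours quizzed above (SUM over a range, COUNTIF with a criteria string, and Goal Seek as a what-if search) can be sketched outside Excel. The Python below is an illustrative approximation added here, not Excel's actual implementation; the revenue function in the demo is invented for the example:

```python
import operator
import re

def excel_sum(cells):
    """Like =SUM(range): add numeric entries, ignoring text, blanks and logicals."""
    return sum(v for v in cells
               if isinstance(v, (int, float)) and not isinstance(v, bool))

def countif(cells, criteria):
    """Like =COUNTIF(range, criteria): supports '>n', '<=n', '<>n', ... or plain equality."""
    ops = {'>=': operator.ge, '<=': operator.le, '<>': operator.ne,
           '>': operator.gt, '<': operator.lt, '=': operator.eq}
    m = re.match(r'(>=|<=|<>|>|<|=)(.+)$', str(criteria))
    if m:
        op, val = ops[m.group(1)], float(m.group(2))
        return sum(1 for c in cells
                   if isinstance(c, (int, float)) and op(c, val))
    return sum(1 for c in cells if c == criteria)

def goal_seek(f, target, lo, hi, tol=1e-9):
    """Goal Seek as a bisection search: find x with f(x) close to target.
    Assumes f is monotone on [lo, hi] and target lies between f(lo) and f(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(mid) - target) * (f(lo) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(excel_sum([10, 2.5, "", "label", 7]))   # 19.5
print(countif([3, 7, 10, 2], ">5"))           # 2
revenue = lambda price: (2000 - 10 * price) * price
price = goal_seek(revenue, 10000, 0, 100)     # what price gives revenue 10000?
print(abs(revenue(price) - 10000) < 1e-3)     # True
```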
Tag: dotnet
• You have 10 red and 10 black marbles. In how many ways can you put the 20 marbles in a row so that no more than two marbles of the same colour lie next to each other? An example of a valid
sequence would be: Can you reason your way out of this one or…
Forex Adaptive Indicators: new, difficult, profitable, but not for all
The authors of the AT&CF system merely wanted to draw the attention of programmers and technical analysts to the application of digital methods, and did not expect how convenient adaptive Forex indicators would prove for the analysis of financial markets. We will try to understand how they can be used in real trading.
From a mathematical point of view, the dynamics of exchange prices is described by a non-analytic function: its graph can be smooth or broken, continuous or with gaps. Fourier theory is the basis of the spectral analysis of the market and of its time series. According to Fourier, any such function on a finite time interval can be represented as an infinite sum of sinusoidal functions, which form its frequency spectrum.
It is assumed that any dynamic curve can be approximated by sinusoids to a sufficient degree of accuracy. The adaptive Forex indicators developed on this basis decompose the flow of price data into sinusoids with different cycle parameters, which in effect reflect cyclic fundamental and technical dependencies in the financial market.
In other words, the spectral analysis of the financial and trading markets is a software application of signal theory to digital trading systems. These tools are considered to have no direct analogues, but the methods of interpreting their results are similar to those of standard Forex indicators, and practitioners of technical analysis have long applied them in real trading.
Basic concepts
We won't burden the reader with excess mathematical detail; we will only note that the search for the technologies behind adaptive Forex indicators was driven by the fact that all technical tools use historical data for their calculations, and, as a rule, only after repeated averaging, volume correction and other «improvements» that catastrophically reduce the accuracy of trade signals.
Nevertheless, price quotations form an ordinary statistical array, and if the price signal is passed through a Fourier transform, it becomes possible to estimate the «frequency» of the market and the duration of its fluctuations. From the probability that the current fluctuations belong to a given cycle, one can estimate the cycle phase (the market type: bull, bear or flat) and make a trading decision.
Of course, the price moment calculated by means of this method never coincides real «up to one minute», but the moment of the termination of the current tendency can be defined unambiguously.
How can it be put into practice?
From the point of view of Fourier theory, the price spectrum on daily charts can be divided into several bands:
• low-frequency: in Fourier terms, the range of frequencies from 0 to 4 – in price terms, long-term trends of more than 60 days;
• mid-frequency: the range of frequencies from 5 to 40 – medium-term and short-term cycles (impulse/correction) of 10–60 days;
• high-frequency: the range of frequencies from 41 to 130 – in market terms, the noise that occupies most of the trading range, with a cycle period of less than 6 days.
The theoretical length of the Fourier frequency range is 130 periods; for the foreign exchange market the value 120, or 24 trading weeks, is usually taken. In that case the band of mid frequencies is of greatest interest: a period of 16 days, 16 cycles a year, and a peak of 32 cyclic sinusoids with an 8-day period. Everything else is treated as market noise.
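Using the band boundaries quoted above (bins 0–4, 5–40, 41 and up over a 120-period window), such a split can be sketched by masking an FFT and transforming back. The masking approach here is an illustrative stand-in, not the actual AT&CF filters:

```python
import numpy as np

def split_bands(prices, low_cut=4, mid_cut=40):
    """Split a price window into the low/mid/high-frequency
    components described above by zeroing FFT bins outside
    each band and inverse-transforming."""
    prices = np.asarray(prices, dtype=float)
    spectrum = np.fft.rfft(prices)
    k = np.arange(len(spectrum))
    masks = {
        "low": k <= low_cut,                    # long-term trend (plus the mean)
        "mid": (k > low_cut) & (k <= mid_cut),  # tradable cycles
        "high": k > mid_cut,                    # market noise
    }
    return {name: np.fft.irfft(np.where(m, spectrum, 0), n=len(prices))
            for name, m in masks.items()}

# A 120-period random-walk window; the three bands sum back to it.
window = 100 + np.cumsum(np.random.default_rng(1).normal(size=120))
parts = split_bands(window)
recon = parts["low"] + parts["mid"] + parts["high"]
print(bool(np.allclose(recon, window)))  # → True: the split is lossless
```

Because the three masks partition the spectrum, the decomposition loses nothing; trading logic then simply ignores the "high" component as noise.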
Numerous practical tests have shown that spectral-analysis methods give excellent results in the technical analysis of all types of markets. For instance, frequency analysis simplifies the selection of a set of moving averages or of oscillator parameters such as RSI, MACD and Stochastic: it allows the «low-frequency sections» of the rate, uninteresting for trading, to be filtered out.
AT&CF system
The only trading system based on adaptive indicators that is available to a broad audience is the trend- and market-cycle-following method AT&CF. The authors implemented the basic system as an MTC that generates a set of trade signals, but in general all of its tools are quite usable for manual trading. The developers ran a set of tests on various assets which, overall, showed brilliant results: profitability around 500–690% and a profit factor around 15–17. Note at once, though, that all the available test results are rather old (2000–2007); the markets have since become more speculative, and fundamental influences can no longer be ignored entirely.
The trading system consists of technical indicators for MT4(5) that are attached to the terminal in the usual way. The tools considered below are freely available and are used here to illustrate the logic of the main trade signals.
Usually only the number of bars used for calculation and the colour scheme are available in the settings; for finer tuning, the curious can dig into the program code. The basic package includes:
Trend indicators:
• two reference trend lines: RFTL (Reference Fast Trend Line) – «fast», and RSTL (Reference Slow Trend Line) – «slow»;
• FATL (Fast Adaptive Trend Line): an analog of a fast moving average – the «fast» adaptive trend line, calculated with a low-frequency digital filter;
• SATL (Slow Adaptive Trend Line): an analog of a long moving average – the «slow» adaptive trend line, calculated with a second-order low-frequency digital filter.
The FATL and SATL lines are the easiest to read: they are statistical estimates of the short-term and long-term trend lines which, unlike traditional moving averages, have no phase lag relative to current prices:
• the value FATL(k) is the mathematical expectation of the price Close(k), where k is the number of the trading period;
• the value SATL(k) is the mathematical expectation of FATL(k) for any k on the given time interval T.
• RBCI (Range Bound Channel Index): a channel index calculated with a band-pass filter. The band-pass filter removes both the market «noise» formed by the low-frequency (trend) components of the spectrum and the speculative «noise» formed by its high-frequency components;
• PCCI (Perfect Commodity Channel Index): an optimized commodity channel index – a normalized component of exchange-rate fluctuations. This analog of the CCI reflects the trading situation better because, instead of a moving average, it uses the difference between the period's closing price and its mathematical expectation;
• FTLM (Fast Trend Line Momentum) and STLM (Slow Trend Line Momentum): the rates of change of the adaptive FATL and SATL lines respectively. These analogs of the classical Momentum show the rate of change (fall/growth) of FATL and SATL and are calculated by the same scheme, but from trend values smoothed by filtering rather than from Close prices. As a result their lines come out smoother (without spikes and dips) and more «regular», which improves forecast accuracy.
As a rule, the set is supplemented with the adaptive REI indicator – an oscillator of market overbought/oversold conditions.
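The actual FATL/SATL filter coefficients are fixed, unpublished FIR weights, so the sketch below substitutes a generic windowed-sinc low-pass. The cutoffs, tap count, and the random-walk test series are all hypothetical choices, meant only to show how a «fast» (wider passband) and a «slow» (narrower passband) trend line are produced:

```python
import numpy as np

def lowpass_fir(prices, cutoff, taps=39):
    """Windowed-sinc FIR low-pass over closing prices; a stand-in
    for the (unpublished) FATL/SATL digital filters.
    cutoff is the normalized cutoff frequency, 0 < cutoff < 0.5."""
    k = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * cutoff * k) * np.hamming(taps)
    h /= h.sum()  # unity gain at DC so the line tracks price levels
    return np.convolve(prices, h, mode="same")

closes = 100 + np.cumsum(np.random.default_rng(2).normal(size=300))
fatl = lowpass_fir(closes, cutoff=0.10)  # "fast" line: wider passband
satl = lowpass_fir(closes, cutoff=0.02)  # "slow" line: narrower passband

# Away from the window edges the slow line is visibly smoother.
core = slice(40, -40)
print(np.std(np.diff(satl[core])) < np.std(np.diff(fatl[core])))
```

The narrower passband passes fewer high-frequency components, so the «slow» line's increments vary less: that is the sense in which SATL tracks the long-term trend while FATL follows short-term swings.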
Basic trade principles
Using adaptive indicators, and building an MTC on their basis, assumes the following:
• trading is conducted only in the direction of the basic tendency determined by the «slow» adaptive SATL line;
• the dynamics of the market is assessed from the FTLM and STLM indicators – the «fast» and «slow» trend characteristics;
• the phase of the market is assessed from the area (neutral zone, local extrema) in which the RBCI market-cycle index is located;
• oscillator signals are considered secondary if the trend indicators show a strong, clearly expressed tendency;
• oscillator signals are considered primary when the trend indicators signal the absence of a pronounced tendency;
• the RBCI and PCCI indexes and the volatility of the «fast» adaptive line are used to place the StopLoss.
Main interpretations of adaptive indicators:
The direction of the trend is defined by the SATL line: if it grows, there is a bull trend in the market; if it falls, a bear trend; a horizontal line means the market is neutral.
For detecting a reversal, the following scheme is used.
Interpretation of the leading STLM indicator:
• a positive value means a bull trend, a negative value a bear trend;
• a local extremum of STLM always precedes a similar max/min on the SATL indicator;
• the appearance of a max/min on the STLM line is a necessary condition for a top or bottom to form on the SATL line.
For an ascending trend:
• if STLM and SATL grow together, the bull trend is accelerating;
• a horizontal but positive STLM line with a growing SATL line means a stable ascending trend of average speed;
• the larger the absolute value of the STLM indicator, the bigger the potential of the bull trend.
For a bear trend the reasoning is similar:
• if STLM and SATL fall together, the fall of the price is accelerating;
• a horizontal but negative STLM line with a falling SATL line means a stable descending trend;
• the larger the absolute value of STLM, the stronger the bear trend.
Coordinating signals with the FATL indicator:
• the FATL (fast) and SATL (slow) trend lines grow together – a strong bull trend; both decrease – a bear trend;
• the FATL line grows while the SATL line falls – a bullish correction on a descending trend, or a transition to a flat;
• the FATL line falls while the SATL line grows – a bearish correction on an ascending trend, or consolidation;
• the simultaneous start of active one-way movement of the FATL and SATL lines – we expect a reversal (or the completion of the correction), after which the price should move in the direction of the SATL line.
Interpretation of the signals of the adaptive PCCI indicator, which shows the degree of the price's deviation from its mathematical expectation:
• if it is above its upper bound, we expect a move down (or at least a correction) – provided, of course, the signal is confirmed by other indicators;
• if the value of the indicator is below its lower bound, we expect a similar move up.
In any case, a deviation from zero should draw the trader's attention.
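The SATL/STLM reading rules above can be collected into a small function. The flat_eps tolerance for treating a line as «horizontal» is my own illustrative parameter; the published system does not specify one:

```python
def classify_trend(satl_slope, stlm_now, stlm_prev, flat_eps=1e-4):
    """Apply the SATL/STLM interpretation rules described above."""
    if abs(satl_slope) < flat_eps:
        return "neutral market"
    if satl_slope > 0:                       # SATL grows: bull trend
        if stlm_now - stlm_prev > flat_eps:  # STLM grows too
            return "bull trend accelerating"
        if abs(stlm_now - stlm_prev) <= flat_eps and stlm_now > 0:
            return "stable bull trend, average speed"
        return "bull trend"
    if stlm_now - stlm_prev < -flat_eps:     # SATL falls: bear trend
        return "bear trend accelerating"
    if abs(stlm_now - stlm_prev) <= flat_eps and stlm_now < 0:
        return "stable bear trend"
    return "bear trend"

print(classify_trend(0.5, stlm_now=1.2, stlm_prev=0.8))  # → bull trend accelerating
```

In live use the slope and STLM readings would come from the indicator buffers; here they are plain numbers so the rule table itself is the whole logic.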
Main signal schemes
The most widespread situations are shown below, with the authors' labels. Conservative medium-term trading with a low risk level is assumed.
Signals in the initial phase of a trend (S6 and L6)
The beginning of a bear trend – conditions for a sell trade:
• the SATL indicator line is already decreasing;
• the STLM indicator line is still above zero;
• the RBCI indicator is in the overbought zone (above 0.01).
We open the trade at the closing price of the period preceding the signal, or higher.
The beginning of a bull trend – conditions for a buy trade:
• the SATL indicator line is already growing;
• the STLM indicator line is still below zero;
• the RBCI indicator is in the oversold zone (below -0.01).
We open the trade at the closing price of the period preceding the signal, or lower.
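The S6/L6 conditions just listed translate directly into a predicate. The ±0.01 RBCI bounds are from the text; reading «already decreases/grows» as the sign of the SATL slope is my own simplification:

```python
def initial_phase_signal(satl_slope, stlm, rbci):
    """Check the S6 (sell) / L6 (buy) entry conditions above."""
    if satl_slope < 0 and stlm > 0 and rbci > 0.01:
        return "S6"   # sell: SATL falls, STLM still positive, RBCI overbought
    if satl_slope > 0 and stlm < 0 and rbci < -0.01:
        return "L6"   # buy: SATL grows, STLM still negative, RBCI oversold
    return None       # no initial-phase signal

print(initial_phase_signal(satl_slope=-0.3, stlm=0.4, rbci=0.02))  # → S6
```

The other signal schemes below follow the same pattern: each is a conjunction of line directions, crossings and zone tests, so encoding them mechanically is straightforward.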
Signals on continuation of a trend after correction (S2A,S2B and L2A, LS2B)
For sales on a trend: S2A and S2B – signals of a continuation of a bear tendency after intermediate bull correction:
• the descending trend was created and gathered force (intersection from top to down of SATL and RSTL);
• synchronization of the movement of FATL, FTLM, RBCI indicators at the time of a signal;
• for the diagram S2A, the STLM indicator shall either fall or move horizontally;
• for the diagram S2B, the STLM indicator can grow, that specifies convergence of the SATL and RSTL lines.
• the absence of the resold PCCI market is necessary;
• the transaction for sale opens at the price of the of the next period or above.
For buys on a trend: signals of the continuation of a bull trend after a bearish correction:
• the ascending trend has formed and gathered strength (the SATL line crosses the RSTL line from bottom to top);
• the FATL, FTLM and RBCI indicators move in sync at the moment of the signal;
• for scheme L2A, the STLM indicator should either grow or move horizontally;
• for scheme L2B, the STLM indicator should fall (a convergence of the SATL and RSTL lines);
• the PCCI must not show an overbought market;
• the buy trade is opened at the price of the next period, or lower.
Strong trend-continuation signals (S3 and L3)
For a sell trade on a descending trend:
• the STLM line moves below zero;
• the FATL line has turned down;
• the trade opens at the price of the previous period, or higher.
The trade signal is generated if, on an (at least!) medium-term descending trend or flat – with the SATL line horizontal or pointing down – PCCI forms a local maximum in the overbought zone.
For a buy trade on an ascending trend:
• the STLM line moves above zero;
• the FATL line has turned up;
• the trade opens at the price of the previous period, or lower.
The trade signal is formed if, on a medium-term bull trend or during consolidation – with the SATL line horizontal or pointing up – PCCI forms a local minimum in the oversold zone.
Trend-acceleration signals (S8 and L8)
Conditions for a sell trade:
• the FATL indicator line simultaneously breaks downward through both trend lines: the «slow» SATL and the «fast» RFTL.
The trade opens at the Open price of the closed period, or higher.
Conditions for a buy trade:
• a similar breakout in the upward direction.
The trade opens at the Open price of the closed period, or lower.
Trend-completion signals (S5 and L5)
These are signals of the trend's last thrust after a correction, after which the trend reverses.
For sells:
• the SATL line goes down (a long-term bear trend), and the SATL line has crossed the RSTL line from top to bottom;
• the RBCI and PCCI indicators are in the overbought zone.
The trade opens at the Close price of the previous period, or higher.
For buys:
• the SATL line goes up (a long-term bull trend) after the SATL line crosses the RSTL line from bottom to top;
• the RBCI and PCCI indicators are in the oversold zone.
The trade opens at the Close price of the previous period, or lower.
Standard reversal signals (S1 and L1)
Conditions for a buy trade:
• the first bearish correction on an ascending trend;
• the price grows while STLM falls;
• the SATL and RSTL lines converge;
• we open the trade when the FTLM and RBCI indicators turn up;
• StopLoss – below the last local minimum.
A temporary correction is possible – we watch the PCCI dynamics and, when the opportunity arises, add volume.
Conditions for a sell trade:
• the first bullish correction on a descending trend must precede the signal;
• the price falls while STLM grows;
• the SATL and RSTL lines converge;
• we open the trade when the FTLM and RBCI indicators turn down;
• StopLoss – above the last local maximum.
Signals of double divergence of indicators (S4 and L4)
This situation is possible when the values of the FTLM indicator are close to zero. The leading signal visually looks like a weak turn of the FTLM line toward the FATL line (from bottom up for sells, from top down for buys).
For sell trades:
• there must be a slow ascending trend with divergent movement of the RBCI (down) and FTLM (up) lines;
• the signal is the formation of a minimum on FTLM, provided that FATL and RBCI keep their directions.
The trade opens at the Open price, or higher.
For buys:
• there must be a slow bear trend with divergent movement of the RBCI (up) and FTLM (down) lines;
• the signal is the formation of a maximum on FTLM, provided that FATL and RBCI keep their directions.
The trade opens at the Open price, or lower.
Strong reversal signals (S7 and L7)
For an entry against the current bull trend (sell):
• a strong breakout of the SATL line by the FATL line from top to bottom;
• we sell at a local maximum of the FATL indicator, after the first technical upward correction of the descending trend.
We open the trade at the opening price, or higher.
For an entry against the current bear trend (buy):
• a strong breakout of the SATL line by the FATL line from bottom to top;
• we buy at a local minimum of the FATL indicator, after the first technical downward correction of the ascending trend.
We open the trade at the opening price, or lower.
Several practical remarks
This system of indicators does not care what trading asset you give it: it perceives everything as a digital array and does not depend on the news background, which is why fundamental analysis is not used in the system. News spikes in the price are considered to be accounted for automatically in the current calculation of indicator values.
If anyone has tried to convince you not to use adaptive indicators together with classical ones – don't believe them. The symbiosis of the two approaches yields rather stable results.
Timeframes below H1 are not recommended at all; for currency pairs and cross rates it is best to use H4–D1 and above.
If the STLM and FTLM indicators move together in one direction, we do not open trades against them.
If the STLM indicator moves clearly up (or down), we open trades only in its direction.
On a turn of the «fast» FTLM and RBCI indicators, open positions can be added to from the extreme points.
These are not all the signals and rules – there are many more; they should be tested on a concrete trading asset before use, and some indicators (most often REI) may be excluded.
The system makes no recommendations about money management; it is just a set of technical tools.
And in conclusion …
Adaptive indicators are a fundamentally new approach to market analysis, actively used in the trading technologies of large players, for whom at least the medium-term forecast is important. The system adapts to the current market in an optimal way, does not lag at all, and does not depend on the broker or trading conditions. Truth be told, it has no fine-tuning parameters, but even with the standard recommendations the set of indicators shows a positive result on all the main pairs (EUR/USD gives the best results) on timeframes of H1 and above; otherwise we receive a mass of «false» signals.
A profitable result is what matters to any investor; understanding the method in detail is optional. Ordinary traders not versed in mathematics and digital filtering are advised not to worry about the calculation procedure and to pay attention not to the formulas but to their trading interpretation and practical results. Individual adaptive indicators can be used in place of standard ones, but it is most reasonable to use them as an automated analytical system that can pick out the most probable signals from the mass of possible ones.
The AT&CF system is of interest to any category of investor ready to commit to a medium-term trading strategy.
ball mill efficiency calculations
WEBDec 1, 2013 · The effect of ball size on the particle size reduction has been investigated first for varying rotation speed of the container. Percent passing and size distributions of the milled Al2O3 powder are shown in Fig. 1 and Fig. 2, respectively, as a function of particle size for varying ball size. The average particle sizes (d50) of the milled Al2O3 powder are .
WhatsApp: +86 18838072829
WEBOct 15, 2015 · Mill Critical Speed Calculation. In this experiment the overall motion of the assembly of 62 balls of two different sizes was studied. The mill was rotated at 50, 62, 75 and 90%
of the critical speed. Six lifter bars of rectangular cross-section were used at equal spacing. The overall motion of the balls at the end of five revolutions is shown ...
WEBApr 30, 2023 · Ball mill is a type of grinding equipment that uses a rotary cylinder to bring the grinding medium and materials to a certain height and make them squeeze, impact, grind and peel each other so as to grind materials. The ball mill is the key equipment for grinding materials after they are crushed. It is widely used in cement, silicate products, new ...
WEBJan 1, 2021 · According to the calculation of the ball mill power at home and abroad, ... as to improve the production efficiency. Therefore, the design of ball mill liner plays an important
role .
WEBJan 1, 2009 · The Bond ball mill work index is an expression of the material's resistance to grinding and a measure of the grinding efficiency. The test is a standardized methodology that ends when a circulating load of 250% is obtained. In this paper, a new method based on the Population Balance Model (PBM) is proposed and validated to estimate the .
WEBFalse air calculations, heat loss calculations, LSF, Silica modulus, alumina modulus, calorific value, minimum combustion air, alkali by sulfur ratio ... Critical Speed (nc) Mill Speed (n)
Degree of Filling (%DF) Maximum ball size (MBS) Arm of gravity (a) Net Power Consumption (Pn) ... Separator efficiency (SE) Separator efficiency (fine ...
WEBApr 1, 2013 · Highlights Circulating load and classification efficiency effect on ball mill capacity revisited. Relative capacity model introduced and validated. Relationship between circulating load and classification efficiency verified by industrial data. Existing fine screening technology could increase ball mill circuit capacity 15–25%.
WEBMar 15, 2024 · To tackle this problem, we made a clay material by mixing bentonite with NH4Cl (NH4Cl-bentonite) in a ball mill. NH4Cl-bentonite increased N-use efficiency by %, boosted crop yield by times, and reduced the Pb and Cd levels in water spinach shoots by % and %, respectively. This work suggests a simple and effective .
WEBKeep operation in a good efficiency. Conventional grinding system. Main Machine. 1. Feeding system 2. Tube mill 3. Dynamic separator 4. Dedusting (BF/EP) 5. Transport equip. ... – After work inside the mill – Calculation of quantity of ball charge and filling degree – Sample sieve analysis. 1st compartment Sieve : 16, 10, 6, 2, , , ...
WEBJan 1, 2021 · Ball mills have been used as the main grinding tool for cement production for over 100 years. Although easy to operate and competitive compared to other technologies, the poor
efficiency of ball mills has been one of the main reasons for research and development of more efficient grinding processes in recent years.
WEBSep 11, 2017 · In this context, the ball mill and the air classifier were modelled by applying perfect mixing and efficiency curve approaches, respectively. The studies implied that the shape
of the efficiency ...
WEBMay 1, 2020 · The main aim of this study is to improve the processing capacity of the large-scale ball mill. Taking a Φ × m ball mill as the research object, the reason for the low processing capacity of the ball mill was explored via process mineralogy, physicochemical analysis, workshop process investigation, and the power consumption .
WEBJul 1, 2016 · The grinding circuit investigated in the current study is a two-stage grinding circuit (Fig. 2) in which two overflow type mills are arranged in series and operated under wet conditions. The run-of-mine ore was crushed to 10 mm, and it was then fed to the first ball mill, which was closed by a rake classifier. The second mill operated in a combined .
WEBFeb 19, 2021 · The ball mill process parameters discussed in this study are ball to powder weight ratio, ball mill working capacity and ball mill speed. As Taguchi array, also known as
orthogonal array design, adds a new dimension to conventional experimental design, therefore, Orthogonal array (L9) was carefully chosen for experimental design to .
WEBApr 23, 2023 · The energy consumption for spherical balls was Kw after grinding for 420 minutes, producing a 45 µm residue of %, whilst that of cylpebs was after grinding for 295 minutes producing ...
WEBDec 17, 2018 · Ball Mill Optimization. This document summarizes a study on optimizing ball mills for clinker grinding in cement plants. It presents empirical equations relating particle size
reduction to specific energy requirements. Data from plant operations and lab experiments on grinding various materials to the superfine and nanoscale are used. .
WEBLoveday (2010) reported on batch tests in a laboratory ballmill (300 mm diameter), to investigate the replacement of a portion of S. Nkwanyana, B. Loveday / Minerals Engineering 103–104 (2017)
72–77 Table 1 Example of design power and costs for a SAG/ballmill circuit in Peru, with a million t/a capacity (Vanderbeek et al., 2006).
WEBJun 16, 2023 · The Ball Mill Grinding Media Calculation Process. To calculate the optimal amount of grinding media balls, follow these steps: Determine the Target Grind Size: Identify the particle size distribution you aim to achieve. This will depend on the specific application and desired end product. Understanding your target grind size is vital for ...
WEBDec 1, 2023 · The maximum adsorption efficiency was % under the optimal conditions (40℃, pH 8, reaction time = 90 min, dosing amount = mg), and the adsorption efficiency could be improved by
WEBAG/SAG/Ball Mill. Hydrocyclone. Autogenous Scrubber. Thickener Calculator. Mass Balance. Vibrating Feeder. Belt Conveyor. Rotary Screen. Screening Area. RosinRammler. This online calculator is
a free tool. The range of mineral processing calculators are built by a collaboration of engineers from various countries. If you have any additions or ...
WEBJun 1, 2018 · In this article, alternative forms of optimizing the milling efficiency of a laboratory scale ball mill by varying the grinding media size distribution and the feed material
particle size distribution were investigated. Silica ore was used as the test material. The experimental parameters that were kept constant in this investigation was the ...
WEBDec 12, 2023 · One of the factors affecting the efficiency of the ball mill is the loading of the balls. ... the optimal trend for choosing cable cross-sections for different durations of the calculation period ...
WEBApr 1, 2013 · For a closed circuit ball mill flowsheet as represented in Figure 2, a simplified relationship (Equation 1) for relative capacity at different circulating load and classification efficiencies was ...
WEBNov 30, 2022 · A ball mill, also known as a pebble mill or tumbling mill, is a milling machine that consists of a hollow cylinder containing balls, mounted on a metallic frame such that it can be rotated along its longitudinal axis. The balls, which can be of different diameters, occupy 30–50% of the mill volume, and their size depends on the feed and mill size.
WEBFrom this experiment we got the following result: MWh/t. Now using the standard formula we can calculate the grinding media wear rate: Wear Rate (g/kWh) = * H * D^ * η, where H = hardness on the Mohs scale, D = diameter of ball (mm), η = efficiency factor (decimal). Factors affecting wear rate of grinding media.
WEBJan 15, 2023 · Therefore, to increase the milling efficiency, it is better to run the mill at shorter time with suitable ball size and rotational speeds. Download : Download highres image
(229KB) Download : Download fullsize image; Fig. 5. Milling efficiency vs. milling time at 1000 rpm at different ball sizes. Download : Download highres image (196KB)
WEBObjectives. Upon completion of this lesson students should be able to: Differentiate between different type metallurgical accounting methods and the benefit of metallurgical accounting. Recall the definitions of grade, recovery, yield and other characteristics used in mass balancing. Recognize the application of the two-product formula for balancing.
John Craig
CRAIG, JOHN, an eminent mathematician, flourished at the end of the 17th and the beginning of the 18th centuries. The only circumstance known respecting his life is, that he was vicar of Gillingham in Dorsetshire. The following list of his writings is given in Watt's Bibliotheca Britannica – "Methodus figurarum, lineis rectis et curvis comprehensarum: quadraturas determinandi. London, 1685, 4to. – Tractatus Mathematicus, de figurarum curvilinearum, &c. et locis geometricis. London, 1692, 1693, 4to. – Theologiae Christianae Principia Mathematica. London, 1699, 4to. Reprinted, Leipsic, 1755. – De Calculo fluentium, lib. ii. et de optica analytica, lib. ii. London, 1718, 4to. – The quantity of the Logarithmic Curve; translated from the Latin, Phil. Trans. Abr. iv. 318, 1698. – Quantity of Figures geometrically irrational. Ib. 202, 1697. – Letter containing solutions of two Problems: 1, on the Solid of Least Resistance; 2, the Curve of Quickest Descent. Ib. 542, 1701. – Specimen of determining the Quadrature of Figures. Ib. v. 24, 1703. – Solution of Bernouilli's Problem. Ib. 90, 1704. – Of the length of Curve Lines. Ib. 406, 1708. – Method of making Logarithms. Ib. 609, 1710. – Description of the head of a monstrous Calf. Ib. 668, 1712."
Friday Fun: Solving a Rubik's Cube with LEGO
Solving a Rubik's Cube with LEGO and popsicle sticks
The user enters the cube's state into a Python GUI, which then solves the cube using the beginner's layer-by-layer method. It then sends the solution to the Arduino via (USB) serial, which then solves it.
Neural Cryptography- Stop me before I SIGKILL again
A plea for help with the only thing that transformers can't seem to do.
The Problem
The year is 2020 and the proliferation of Big Old Language Models is still a faraway dream. I am staring at code from a client (code has been changed to protect the innocent):
# gotta make a token and send it to the client!
very_random_number = get_insecure_random_number()
# e.g. (.0324191942[...]) => "ABXDC"
two_factor_token = convert_representation(very_random_number)
send_email("Your two factor authentication token is: " + two_factor_token)
I have seen this code and many variations of it. There are many services in your life which strongly resemble it. Is it secure?
There are good reasons to say "no, obviously not" to the above question:
• "You're calling a function that says it's insecure!" Yeah, fair enough. Some random number generators (RNGs) are 'secure', meaning that there is good reason to believe that seeing previous
outputs cannot be used to predict future outputsTo the best of my knowledge, this is not because they have been proven secure, but rather that they have withstood tons of cryptanalysis.. Others
are not: Like this one, they do not make these guarantees in exchange for speed or simplicity of implementation.
• "This is just security by obscurity!" Absolutely. Knowing the underlying RNG and the algorithm used to perform the transform allows closed-form solutions for several non-secure RNGs, typically
using SAT solvers. Several cryptocurrency casinos have met their end in this exact way: the Node based runtime allowed an attacker to recover the state of the RNG and predict future output.
But there is one big problem: What is the POC? If I have no POC, must I not GTFO? There are theoretical weaknesses, but actually predicting tokens based on previous tokens has not been done. It might be possible with a bespoke SAT solver solution, but that's certainly beyond my ability. And that still depends on an attacker having access to the source code. Should this be fixed immediately? Or merely noted as a low-severity finding (an industry term that means 'never fixed')?
This underlying conundrum was the subject of my last article on this subject. It attacks XORSHIFT128 with the goal of eventually attacking XORSHIFT128+. That would be this algorithm, which is used in
Chrome and thus probably the most common RNG:
MAXSIZE = (1 << 64) - 1  # mask results to 64 bits, matching the two state words

def xorshift128plus(x, y):
    s0, s1 = y, x
    s1 ^= (s1 << 23) & MAXSIZE
    s1 ^= (s1 >> 17)
    s1 ^= s0
    s1 ^= (s0 >> 26)
    x = y
    y = s1
    generated = (x + y) & MAXSIZE
    return generated, x, y
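As a sanity check, the generator can be run from an arbitrary seed; the function is restated here, with the 64-bit mask made explicit, so the snippet stands alone. The seed values are arbitrary:

```python
MAXSIZE = (1 << 64) - 1  # 64-bit word mask, matching the two 64-bit state halves

def xorshift128plus(x, y):
    s0, s1 = y, x
    s1 ^= (s1 << 23) & MAXSIZE
    s1 ^= (s1 >> 17)
    s1 ^= s0
    s1 ^= (s0 >> 26)
    return (y + s1) & MAXSIZE, y, s1  # (output, new x, new y)

# Arbitrary nonzero 128-bit seed, split into two 64-bit words.
x, y = 0x0123456789ABCDEF, 0xFEDCBA9876543210
outputs = []
for _ in range(9):
    out, x, y = xorshift128plus(x, y)
    outputs.append(out)

# The learning task described in this article: map the first 8 outputs to the 9th.
history, target = outputs[:-1], outputs[-1]
print(len(history), 0 <= target <= MAXSIZE)
```

Generating (history, target) pairs like this is trivial, which is exactly what makes the failure interesting: unlimited perfect training data, and still no fit.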
The techniques from that article are primitive in the cold light of 2024; LSTMs were already being replaced by Transformers, and at this point they have been eclipsed entirely. But now I'm wiser, I
have a faster computer, and more importantly there have been approximately ten million new advancements in machine learning. How much harder could it be?
Much harder, as it turns out.
The graveyard of failed technique
Several years of tinkering in my free time have resulted in no success in this field of problems. The original problem is this:
Given the previous N outputs of xorshift128+, predict the next output of xorshift128+
It is a straightforward problem; given that it is solvable with a SAT solver, it should be possible enough with a neural network, our beloved universal function approximator. (It probably is possible to approximate this function, but it would also be convenient if it were possible to learn this representation.) But no amount of compute or hyperparameter tweaking succeeded in any way — not running at home on my beefy M1, not on rented GPU time from AWS. There is a wealth of literature about making your network stop overfitting, but seemingly none about what to do when your network doesn't fit at all.
With this in mind, I tried to narrow down my dream somewhat:
• Can a single bit of subsequent output be predicted from the previous N outputs, for any reasonable output of N? No. Same problem as the above.
• Can the state of the RNG be predicted given its previous outputs? No. This is building a network to reverse the operations of the above algorithm, so it's not really clear that this is easier.
But then, surely...
• Given 128 bits of state, can I calculate the output of the above function? Also no. Even though this is just _running_ the function? Really?
• Given 128 bits of state, can I calculate one bit of output? Generally, no. For some bits, it works, but some of the bits in this are subject to fewer shifts and XORs than others.
I have learned much about the innards of neural network design in the interim, but none of it has actually solved this problem. I have tried every kind of transformer stack that fits on my machine.
Given that the input is ideally 128 bits and certainly no larger than 256 bits, you can get a pretty large network into an M1 with 64 GB of RAM. But to emphasize: no combination of architectures, regularization techniques, or other tricks seems to get the job done, up to and including networks that hit the O(N²) memory bottleneck for transformers. (It's not particularly difficult to generate millions of test cases, so overfitting is the least of my problems.) All of them dutifully run for hours or days, stuck at a mean error no better than chance, before I terminate them.
So, in the face of all reason, three and a half years of tinkering has produced more or less nothing. It is difficult to say why, but there are several compelling reasons:
• XOR — The wretched XOR operation is not linearly separable. If you entertain yourself by drawing a graph of the function on the unit square, a line between (0,1) and (1,0) will necessarily intersect the space between (0,0) and (1,1). This isn't itself a disqualification for the technique, but enough has been written about "the XOR problem" that it stands out as a type of function that single layers struggle with. Learning a series of XOR operations seems more challenging still, and since the essential idea behind this network involves predicting state from output, it would require reversing these operations.
• Input data entropy — Another problem seems to be the entropy of the input data. In image synthesis and identification, all bits are correlated. There could theoretically be any one of 256^3 colors
for any given pixel, but in practice blue pixels are next to other similarly blue pixels, and so on. Similarly, text could contain any number of different tokens; in practice, the English
language is contained to perhaps ten bits of entropy a word, and thus maybe half that for any given token.
On the other hand, RNGs are by definition trying to make as little correlation between input and output as possible; this usually means that each bit of state is near-completely uncorrelated with
its neighbors. Though the input size is much smaller, it is seemingly impossible to cohere input bits into more easily processable chunks. Every input bit matters and every output bit matters.
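This lack of structure is easy to observe empirically. As a rough sketch (the xorshift128+ definition is restated inline so the snippet is self-contained; the seed values are arbitrary choices of mine), each output bit position is very close to unbiased:

```python
MAXSIZE = (1 << 64) - 1

def xorshift128plus(x, y):
    s0, s1 = y, x
    s1 ^= (s1 << 23) & MAXSIZE
    s1 ^= s1 >> 17
    s1 ^= s0
    s1 ^= s0 >> 26
    return (y + s1) & MAXSIZE, y, s1

def bit_biases(samples=20000):
    # Fraction of ones observed in each of the 64 output bit positions.
    x, y = 123456789, 987654321
    counts = [0] * 64
    for _ in range(samples):
        out, x, y = xorshift128plus(x, y)
        for b in range(64):
            counts[b] += (out >> b) & 1
    return [c / samples for c in counts]
```

Every position hovers near 0.5 — there is no cheap per-bit signal for a network to latch onto, unlike the heavy redundancy in images or text.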
Having exhausted the range of general neural networks, I decided to commit the unforgivable sin of designing network architecture specifically for the problem at hand. If I hand-designed a network to perform the forward pass from state to output, could I use it to run backwards, from output to state? It would not be a complete solution, but it would be progress.
Gradient Descent Into Madness
Conventional machine learning follows a tried-and-true pattern. We define a function which is itself defined in terms of its parameters: each node in a densely connected network has a large pile of weights that determine its output values. Combined together, such a network can approximate a surprising number of functions. It can also approximate a lot of useless functions, so training is needed: we calculate the function's error given some weights and then change the weights slightly to reduce that error across the training examples.
In this case, however, I would like to calculate the inputs given the outputs. Thus, my idea was to perform gradient descent on the inputs, given the correct answer and fixed weights. The plan:
• Design a network that, given the state for XORSHIFT128+ at any point, correctly calculates the output.
• Given these fixed network weights and a given output, start from a random input.
• Then, calculate gradient descent on this input. We can't do this for normal networks, because the entire domain of input is usually not meaningful — it is quite feasible to produce a pile of
noise which a given network identifies as a cat or a dog, but for us any given input of bits is a meaningful answer.
• Keep going until the answer is correct? HOPEFULLY???
I would like to do this in a way that is as general as possible, while acknowledging that the current state of the art is too general. XORSHIFT128+ is built from the following components, which constitute the majority of RNG operations (there are just not that many bitwise operations that don't leak entropy, so the list has to be pretty short here):
• XOR
• Bit shifting
• Add-with-carry
However, we can't just use the built-in PyTorch functions like xor and roll for these, as they are not smoothly differentiable. Performing gradient descent is only possible if every operation has a gradient! If we want to perform gradient descent on random input, it also needs to have a meaningful gradient across the entire domain, from [0,1). So it is written, so it shall be done.
XOR's inputs for a given bit can have one of two values, which (when extended to real input) means anything between 0 and 1. To make the math easier we will actually map these to -1 and 1 inside of nodes. (This makes it possible to do the matrix multiplication trick below.) Let's start by trying to "push" these values towards the edges. I accomplish this with the 'certainty' parameter, which indicates how strongly values should be flattened into one of the two poles. There are many suitable real-valued functions, but I use:
xx = (x - 0.5) * 2
inputs = torch.tanh(xx * self.certainty)
which takes an input vector x and turns it into something much closer to one of the two poles. Higher certainty means more 'pushing' into these two values, but also makes the function's gradients much more jagged. As we'll come to see, this matters a lot: too much certainty and the gradients are impossible to calculate; not enough certainty and error propagates through the operations and befouls the answer.
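The effect is easy to see numerically. A pure-Python sketch of the same squashing step (no torch; a certainty of 6 is the kind of value used later in the article):

```python
import math

def push(x, certainty):
    # Map [0, 1] to [-1, 1], then squash towards the poles.
    return math.tanh((x - 0.5) * 2 * certainty)

# A mildly confident input becomes a near-certain one:
# push(0.9, 6) is about 0.9999 and push(0.1, 6) about -0.9999,
# while an undecided 0.5 stays at exactly 0.
```

Raising certainty sharpens the plateau at ±1, which is exactly what makes the downstream lookups accurate and the gradients jagged at the same time.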
Because each bit can have one of two values, there are four possible input patterns that have meaning: [[-1,-1],[-1,1],[1,-1],[1,1]]. Calculating the similarity between a given input and these four patterns is quite simple: we multiply the above 4x2 matrix by a 2x1 matrix with the input and receive four numbers, each of which will be higher when its pattern matches more closely. Our old friend softmax comes to the rescue to convert this mix of positive and negative numbers into percentages of certainty. I also use certainty here to make the calculations favor one position over the others:
target = torch.tensor([[-1, -1], [-1, 1], [1, -1], [1, 1]]).float().T
prod = torch.matmul(inputs, target)
best_name = torch.softmax(prod * self.certainty, dim=2)
The resulting 4x1 matrix then needs to be converted into the correct output, which I have defined as a 1x4 lookup table. For XOR this table is [0, 1, 1, 0] (transposed); multiplying the softmax output by it produces a value very close to 1 whenever the two input bits differ. There are some reasons to believe that this isn't the best solution for the problem at hand, but it works and is simple for each bit. We then simply perform this operation column-wise on a set of input vectors, letting us perform half of the operations like x ^= x << 23; that are ubiquitous in all cryptography (with a little static single assignment to avoid the in-place shifts).
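The whole per-bit pipeline can be sketched in pure Python (no torch; the certainty value of 6 and the corner/lookup tables follow the text above):

```python
import math

CERTAINTY = 6
TARGETS = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
XOR_TABLE = [0, 1, 1, 0]  # XOR output for each corner pattern

def soft_xor(a, b):
    # Push [0,1] inputs towards the -1/1 poles.
    u = math.tanh((a - 0.5) * 2 * CERTAINTY)
    v = math.tanh((b - 0.5) * 2 * CERTAINTY)
    # Similarity of (u, v) to each of the four corner patterns.
    prods = [u * ta + v * tb for ta, tb in TARGETS]
    # Softmax sharpened by certainty: nearly one-hot on the best match.
    exps = [math.exp(p * CERTAINTY) for p in prods]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Look up the XOR output for the winning pattern.
    return sum(w * o for w, o in zip(weights, XOR_TABLE))
```

soft_xor(1, 0) and soft_xor(0, 1) come out within about 10⁻⁴ of 1, while soft_xor(0, 0) and soft_xor(1, 1) land near 0 — and every step along the way is differentiable.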
Bit Shifting
The other half of these involve bit shifting, which is itself a pretty simple operation. Given a list of bits [0,1,1,0...], a bit width, and an integer N, we want to shift them to the left or the right. So the 8-bit integer 00001111 shifted 2 to the left becomes 00111100, or shifted 2 to the right becomes 00000011. Any positions not filled previously become 0; we don't wrap them around. Torch provides torch.roll, which mimics this functionality with wrapping: we would get 11000011 on the shift right, which is not what we want. The fix for this is pretty trivial, but it is not smoothly differentiable, so it is dead to me. I need a smooth shift for my smooth brain.
Given a vector of length N, we can build an identity matrix by putting 1s along the top-left to bottom-right diagonal; multiplying this by the vector changes nothing. But we can subtly alter this matrix to perform more exciting and dynamic operations: moving the 1 in a given row to the column you'd like that bit to end up in produces a matrix that permutes the input vector's entries. Thus, shifting each row's 1 to the left or the right and zeroing out the entries that would wrap around lets us create a matrix that shifts by the appropriate amount. How then to turn this into something differentiable? I define a tensor with all the possible useful shifts: for a bit width of N, we can shift between -(N-1) (all the way to the right) and N-1 (all the way to the left):
def bit_shift_matricies(bits):
    # One matrix per possible shift, from -(bits-1) to (bits-1).
    eyes = torch.stack(
        [torch.roll(torch.eye(bits), i - (bits - 1), 0) for i in range(2 * bits - 1)]
    )
    for i in range(2 * bits - 1):
        b = i - (bits - 1)  # the signed shift this matrix represents
        if b < 0:
            eyes[i, b:, :] = 0.0  # zero out the rows that wrapped around
        elif b > 0:
            eyes[i, :b, :] = 0.0
    return eyes
Given this, a parameter between -(N-1) and N-1 can be used to return a weighted average of these matrices. That is, we extend the shift function so that a shift partway between N-1 and N is defined by taking the weighted average of the two integer shifts' outputs, at the bitwise level.
def differentiable_shift(x, n, eyes, certainty, bits):
    # Weight each shift matrix by how close its shift is to n.
    # (Arguments to the helper are assumed; the original call is truncated.)
    shift = weighed_smooth_vector(n, certainty, bits)
    mults = torch.sum(shift[:, None, None] * eyes, 0)
    return torch.matmul(x, mults)
The shift vector needs to be a probability distribution that assigns the highest value to n, the current shift amount. A Gaussian distribution, normalized over the legal values, is used here, and each shift matrix's elements are multiplied by its probability to produce the right combination. With a reasonably high value for certainty (~6) this becomes 1 for the given value and 0 for all others while remaining differentiable. (But not very differentiable; the gradients seem to be very small.)
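The matrix construction itself can be sketched without torch. Here is a minimal pure-Python version of the hard, non-differentiable endpoint the weighted average smooths between (conventions are mine for illustration: bit 0 is printed leftmost, and a negative n moves bits toward it, matching the 00001111 example above):

```python
def shift_matrix(bits, n):
    # Identity matrix with each row's 1 moved n columns over;
    # entries that would wrap around are simply left as 0.
    m = [[0] * bits for _ in range(bits)]
    for i in range(bits):
        j = i + n
        if 0 <= j < bits:
            m[i][j] = 1
    return m

def apply_shift(vec, n):
    # Vector-matrix product: moves bit i to position i + n, no wrap.
    m = shift_matrix(len(vec), n)
    size = len(vec)
    return [sum(vec[i] * m[i][j] for i in range(size)) for j in range(size)]
```

Applying n = -2 to [0,0,0,0,1,1,1,1] reproduces the 00111100 left shift from the text, and n = 2 gives 00000011, with no 11000011-style wraparound.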
Add-with-carry
Adding with carry is the last required operation: given a pair of numbers expressed bitwise, we would like to produce the bitwise output in a smoothly differentiable way. We use a similar technique as the XOR gates: each combination of bitwise input is matched via matrix multiplication to one of the eight possible input combinations, which then have a fixed table of outputs. (At the hardware level, add-with-carry is carried out with logic gates, so it shouldn't be too shocking to see them here.)
def forward(self, x):
    x = 2 * (x - 0.5)  # moves 0,1 to the -1,+1 domain
    # multiplies a 1x3 matrix by a 3x8 matrix to produce a 1x8 matrix
    x = torch.matmul(x, self.truthTableInputs)
    # turns this 1x8 matrix into a probability distribution
    x = torch.softmax(x * self.certainty, dim=1)
    # then multiplies it by the correct outputs for each combination
    # to produce an output and the carry bit for the next column
    x = torch.matmul(x, self.truthTableOutputs)
    return x
Moving from left to right, this component of the network produces a row of outputs, and a row of carry bits, which are used to perform bitwise arithmetic in the same way that a computer does. The
associated network simply performs these gates in the right order.
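Discretely, the gate layout the network mirrors is just a ripple-carry full adder. A plain-Python sketch (the LSB-first bit ordering is my own convention here):

```python
def ripple_add(a_bits, b_bits):
    # a_bits/b_bits: least-significant bit first; result truncated to width,
    # just like the & MAXSIZE in the generator itself.
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        total = a + b + carry
        out.append(total & 1)  # the sum bit for this column
        carry = total >> 1     # the carry fed into the next column
    return out

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))
```

The soft version in forward() computes exactly this column-by-column walk, only with each gate replaced by the differentiable truth-table lookup.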
To come so far, only to fail
With these three pieces in hand, we can calculate the correct output quite easily, by chaining them together and verifying them. The repository's provided verify_model_works.py randomly selects
several thousand inputs and outputs and verifies that the output of the torch version matches the RNG itself, both for the output and the state. Varying the certainty parameters through the model can
make it more differentiable, but shows that there is eventually error introduced into the calculations — the values I've chosen were the smallest whole numbers I could get to work.
Unfortunately, this doesn't work for the proposed task. The associated perform_descent.py code is currently configured to perform this gradient descent. There are several
interesting questions which I invite you to play with: if almost all of the bits are already correct, will the model converge? Will the model converge if all the inputs are set randomly, or all very
close to 0.5? I was surprised to discover that even in the case where all but one bit in the input is set to its correct value, the model does not always converge to the correct answer.
Thus, what I need is help. If you are purely curious about the topic, that's good enough for me, but there are practical reasons as well. Many applications I have seen rely on these types of insecure RNGs, shielded only by the obfuscation of their source code. I also don't see any theoretical reason why a CSPRNG would withstand attacks of this kind, though obviously I would really like to find one. (Easy to say after not discovering anything that works, but the individual operations are identical.)
I have tried a lot of things that have not worked, but there are several reasonable ideas:
• Alternative network designs — is there a way to design this network so that it is more smoothly differentiable across more parameters? In particular, the certainty parameter in each node works
well to make the calculations less fuzzy, but it also makes gradients much sharper which harms learning. Would sharing this parameter universally across the network help? Is there a way to design
the network that allows gradients to be fuzzier while still converging to the right answer? I didn't really cover it here, but having this parameter also makes it theoretically possible to
perform normal gradient descent on bitwise operations. It runs into the same problems, as far as I can tell.
• Alternative gradient descent — Unlike normal instances of neural networks, only answers that are equal to 0 or 1 are truly essential to its functioning. Is it possible to design a network where
more correct bits in the input always produce a lower loss value? If this condition was always true or even often true it would be possible to skip gradient descent. Because these tools already
implement most of the bitwise functions needed to attack networks, any speedup here or any narrowing that works on a CSPRNG would be very important, no matter how small — getting a faster result
than bruteforce search is still good enough. If you become a crypto billionaire by using this to break SHA256 preimage resistance, please think kindly of me :)
• Something else — Is there some combination of functions here that makes things work better? Notably, all the functions here are no longer used as activation functions in larger networks because of the problems they cause with gradient propagation. Is it possible to use sigmoid in place of tanh, or some other one of the myriad functions that makes gradients propagate more effectively through the network?
Although it is painful to publish a negative result, the alternative is losing my mind. I have many more ideas to try, but I am not sure that they will work and I don't know anyone else working on the topic. There are several papers in the area available online, but almost all of them are... sketchy, with little genuinely interesting output and some purporting to do things that are seemingly impossible. (My favorite by far is this, which somehow purports to learn digits of π and e from supervised learning, claiming statistical significance through an incredible abuse of statistics.) There is a dearth of good research despite the importance of any successful work in the area and the relative ease of experimentation; all the code here runs easily on my MacBook without really taxing the machine at all. It is a curious gap.
The provided repository has been winnowed down from my previous work on the subject and provides what I hope is a straightforward example of how to attack the problem. If you have any information or suggestions that I might try, I will happily implement them and report back if they seem interesting and I have not already tried them. I am particularly interested in the expertise of anyone who has worked in ML research or cryptography. I am also seeking employment in this area: if you would like me to grind my face against this or another related problem, for money, I would be thrilled to do so.
Please send me a message if you have anything you think would be helpful.
Our users:
I liked the detailed, clearly explained step by step process that Algebrator uses. I'm able to go into my class and follow along with the teacher; it's amazing!
Malcolm D McKinnon, TX
I have tried many other programs that did not deliver on what they promised. I decided to take a chance with the Algebrator. All I can say is WOW! Thank you!
Clara Johnson, ND
My twins needed help with algebra equations, but I did not have the knowledge to help them. Rather than spending a lot of money on a math tutor, I found a program that does the same thing. My twins are no longer struggling with math. Thank you for creating a product that helps so many people.
Jenny Lane, AL
No Problems, this new program is very easy to use and to understand. It is a good program, I wish you all the best. Thanks!
C.P., Massachusetts
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2013-05-06:
• mcdougal littell algebra book 2 page 85 answers
• aptitude ques
• multiplying and dividing decimals multiple choice
• factoring formula program
• free practice on year 11 exams
• parabola+equation
• free one steps equations worksheets
• Easy way to learn Statistics
• algebra 2+practice workbook+answers and solution+free
• node/21
• quadratic formula through points
• square roots and cube roots worksheet
• Combination and Permutation Examples
• google coin linear equations
• how do simplify addition exponent problems
• ti81 value of x in a trinomial
• cambridge gce o level english pass papers free downloads
• grade 10 principle of math ontario lesson plan
• simplifying variable expressions worksheet
• square root method
• clep algebra practice
• types of graphs (direct, inverse, parabolic, exponential)
• how to solve difficult alegbra
• 8% as a decimal
• basic algrebra problem
• solve equation problem software
• inequality elimination method calculator
• a survey of mathematics with applications,expanded (8th addition) (ANGLE)
• free printable college word problems
• beginners Physics for College Students
• +trivias about functions
• subtracting integers worksheets
• third order quadratic formula
• partial sums addition method
• passing algebra easy
• georgia 6th grade math word problem with fraction
• algebra equations for fifth grade
• combinations and permutations basics
• program ti 83 quad
• aptitude test papers for C#
• gmat maths free tests
• creative publications pre algebra with pizzazz
• easy logic table problems 7th
• factoring cubed polynomials
• everyday mathmatics
• cpm geometry book answers
• intermedia algebra pdf
• algebra with pizzazz answers worksheets
• printable 6th grade expression worksheets
• volume math test online
• math trivias about numbers
• ks3 maths worksheets
• solving rational expressions calculator
• multiplying, dividing,and simplifying radical expressions
• dividing radical calculator
• prentice hall algebra 1 workbook
• factoring polynomials cubed
• rationanlizing the denominator worksheet
• free 8th grade ratios worksheets
• basic algibra
• Math Symbol Pie series
• algebraic structures solved exercises
• seventh grade algebra california sample test
• printable sat algebra questions
• linear algebra beginner
• worksheets for multiplying by 6
• holt algebra 1 enrichment
• Physics Formula Sheet
• free college algebra calculators
• c language aptitude question
• AJmain
• java sum values
• erb for 6th grade
• thrid grade trivia
• writing and evaluation expressions worksheets
• equation problems- free printables--high school
• moving rational exponent in an equation
• Algebra Solver
• algebra of boolean to ladderdiagram
• multiply and divide integers worksheet
• solver excel solve equation
• how to store information in a TI 89
• how to solve third order polynomial equation
God Plays Dice
Robert Sedgewick's web site has slides for a talk entitled Impatiemment Attendu, given at a conference in honor of Philippe Flajolet's 60th birthday. The gist of this talk appears to have been something like "…". (The book is Analytic Combinatorics, in case you're wondering.)
I mention it because apparently, at the beginning of this month in Paris,
this post I made in September was projected on a big screen in front of a bunch of important people
. I am of course amused. (The title "impatiemment attendu" is not mine, though; I took it from a paper by
Nicolas Pouyanne
. I suppose the English translation "impatiently awaited" is mine, but this was not a translation that required some huge inspiration.)
At the time, Amazon said that the book would be out on December 31. It's December 31. It's not out yet, as far as I know. I'll be at
ANALCO '09
on Saturday. A slide in the presentation says that the book will be available at SODA (ANALCO takes place the day before); maybe the book will be there?
I have a new e-mail address.
To figure it out, concatenate the first three letters of my first name, my entire last name, and "@gmail.com".
The intrepid reader can figure out my "academic" e-mail address. Once I get things set up those should redirect to the same place anyway. (I'm tired of checking multiple addresses.)
And I apologize for obfuscating the address like this, but it's a new address, and I'd like to keep the spammers at bay for at least a little while.
Happy New Year! (Do I have any readers in Japan, Korea, Australia, or anywhere else where it's 2009 already? And Kate, if you're reading this, I urge you to remember that you're on vacation and you
should get off the Internet.)
While reading a paper (citation omitted to protect the "guilty"), I came across a reference to an "n-dimensional tetrahedron", meaning the subset of R^n given by
x[1], ..., x[n] ≥ 0 and x[1]w[1] + ... + x[n]w[n] ≤ τ
for positive constants w[1], ..., w[n] and τ.
Of course this is an n-simplex. But calling it a "tetrahedron" is etymologically incorrect -- that means "four faces", while an n-simplex has n+1 faces. This probably occurs because most of us tend
to visualize in three dimensions, not in arbitrary high-dimensional spaces.
I'm not saying that "tetrahedron" shouldn't be used here -- I'm just pointing out an interesting linguistic phenomenon.
I recently read What Mad Pursuit: A Personal View of Scientific Discovery
, which is Francis Crick's account of the "classical period" of molecular biology, from the discovery of the double helix structure of DNA to the eventual figuring out of the genetic code. It differs
from the more well-known book by James Watson, The Double Helix: A Personal Account of the Discovery of the Structure of DNA.
Crick was trained as a physicist, and learned some mathematics as well, and every so often this pokes through. For example, back when the nature of the genetic code wasn't known, combinatorial
problems arose to prove that a genetic code of a certain type was or was not possible. One idea, due to Gamow and Ycas was that since there are twenty combinations of four bases taken three at a time
where order doesn't matter, perhaps each one of those corresponded to a different amino acid. This turned out to be false. Another, more interesting problem comes from asking how the cell knows where
to begin reading the code. What is the largest size of a collection of triplets of four bases such that if UVW and XYZ are both in the collection, then neither VWX nor WXY is? The reason for this
constraint is so that the "phase" of a genetic sequence is unambiguous; if we see the sequence UVWXYZ, we know to start reading at the U, not the V or the W. Thus the collection can't contain any
triplet in which all three elements are the same, and it can contain at most one of {XYZ, YZX, ZXY} for any bases X, Y, Z, not necessarily distinct. There are sixty triplets where not all three
elements are the same, thus at most twenty amino acids can be encoded in such a code. There are solutions that achieve twenty; see the paper of Crick, Griffith, and Orgel.
Note that the "20" in the two types of code here arises in different ways. If we assume a triplet code with n bases, then the first type of code can encode as many as n(n+1)(n+2)/6 amino acids, the
second (n^3-n)/3.
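Both counts are easy to verify by brute force. A short script checking the n = 4 case discussed above (bases labelled 0-3 rather than U/C/A/G):

```python
from itertools import product
from math import comb

def counts(n):
    # Gamow-Ycas count: codons by unordered composition,
    # i.e. combinations of n bases taken 3 at a time with repetition.
    unordered = comb(n + 2, 3)
    # Comma-free bound: non-constant triplets, one per cyclic class.
    triplets = [t for t in product(range(n), repeat=3) if len(set(t)) > 1]
    classes = set()
    for t in triplets:
        rotations = [t, (t[1], t[2], t[0]), (t[2], t[0], t[1])]
        classes.add(min(rotations))  # canonical representative of the class
    return unordered, len(classes)
```

For n = 4 both formulas give 20, which is the coincidence Crick describes; for n = 3 they diverge (10 versus 8), showing the agreement really is an accident of n = 4.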
Crick says that the more general problem of enumerating the number of codes which imply their own "reading frame" was considered by Golomb and Welch, and separately Freudenthal. Based on the title
and the date, I think the first of these is the paper I point to below -- but our library doesn't have that journal in electronic form, and the physical library is closed this week!
F. H. C. Crick, J. S. Griffith, L. E. Orgel. Codes Without Commas. Proceedings of the National Academy of Sciences of the United States of America, Vol. 43, No. 5 (May 15, 1957), pp. 416-421.
George Gamow, Martynas Ycas. Statistical Correlation of Protein and Ribonucleic Acid Composition. Proceedings of the National Academy of Sciences of the United States of America, Vol. 41, No. 12 (Dec. 15, 1955), pp. 1011-1019.
Golomb, S.W., Gordon, B., and Welch, L.R., "Comma-Free Codes", The Canadian Journal of Mathematics, Vol. 10, 1958. (Citation from this list of Golomb's publications; I haven't read it.)
Terence Tao: Use basic examples to calibrate exponents. This article, for the eventual Tricki, gives many examples of the following basic procedure. In many problems there is a "size" parameter N, and the problem has an "answer" that we believe for some reason behaves like N^k for some constant k. A quick way to find k is to look at "basic examples" (say, random graphs in a graph-theoretic setting).
The interesting thing about this article -- and about the Tricki as a whole, once it finally launches -- is that its organizational principles are not the same as most mathematical exposition. A
typical lecture or section of a textbook gives problems with similar statements but not necessarily with similar proofs; the Tricki will group together problems with similar proofs but not
necessarily with similar statements.
Comfort with meaninglessness the key to good programmers, from Boingboing.
Is this true for mathematics as well as computer programming?
The New York Times reports on bad housing news:
The median price of a home plunged 13 percent from October to November, to $181,300 from $208,000 a year ago. That was the lowest price since February 2004.
They mean that house prices have gone down 13 percent in a year, i. e. from November 2007 to November 2008. That's what the National Association of Realtors press release says.
But one sees this pretty often -- the confusion between monthly declines and annual declines. And sometimes a 1% decline in a month might be reported as a "12% per year" decline -- but then the "per
year" gets dropped, the statement "prices of X dropped 12% this month" is made, and those who aren't familiar with how people who care about the price of X report their numbers get confused.
Don't get me wrong -- a drop of 13% in a year is still a big deal. But a drop of 13% in a month would be a much bigger deal.
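The gap between the two readings is easy to quantify with compounding: a 13% annual drop corresponds to only about 1.2% per month, a 13% monthly drop would wipe out roughly 81% of the value in a year, and even the casual "1% a month is 12% a year" shorthand slightly overstates things (compounding gives about 11.4%). A sketch:

```python
def annualize(monthly_drop):
    # Fraction lost over 12 months if prices fall by monthly_drop each month.
    return 1 - (1 - monthly_drop) ** 12

def monthly_equivalent(annual_drop):
    # Constant monthly drop that compounds to the given annual drop.
    return 1 - (1 - annual_drop) ** (1 / 12)
```

So confusing the monthly figure for the annual one changes the story by nearly an order of magnitude.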
From Daniel Engber at Slate: Wind Chill Blows. Back when nobody read this blog, I wrote about how the heat index doesn't make sense to me, because I know what 95 degrees with "typical" humidity for
my location feels like, and telling me it "feels like" 102 is misleading. (That's Fahrenheit, not Celsius; we're not literally boiling here in the summer.)
Something similar is true for wind chill. Both of these measures take into account only two of the many variables that affect comfort: temperature and either humidity or wind speed. They assume that these are the only two variables which actually vary; clothing, amount of sunlight, weight, etc. are held constant. In reality, comfort is a function of many variables, and it's misleading to create an index that assumes it's just a function of two. People know that they should take more than the temperature into account, but I've seen quantitatively unsophisticated people think of the wind chill as some perfect index of the weather.
But let's face it, a wind chill of zero sounds scarier than a temperature of sixteen. (Those are approximately the numbers I heard reported this morning in Philadelphia.) That means more people watch
the news.
From the Wall Street Journal:
U.S. counterterrorism officials have been on guard for homegrown recruitment by radical groups. Intelligence analysts from the New York Police Department, in a study of radicalization in Western
Muslim communities, warned that "jihadist ideology" is "proliferating in Western democracies at a logarithmic rate."
Maybe it's just me, but logarithmic proliferation doesn't seem all that scary.
Uncyclopedia has a list of methods by proof. My favorite:
PROOF BY CASES
AN ARGUMENT MADE IN CAPITAL LETTERS IS CORRECT. THEREFORE, SIMPLY RESTATE THE PROPOSITION YOU ARE TRYING TO PROVE IN CAPITAL LETTERS, AND IT WILL BE CORRECT.
Of course, you have to be careful which letters you use as variable names in stating your result -- you can only use one of a, α, and A, for example.
And some of them are methods of proof that are actually used:
Proof by Diagram
Reducing problems to diagrams with lots of arrows. Particularly common in category theory.
And here's an interesting point about mathematical writing:
Proof by TeX
The proof is typeset using TeX or LaTeX, preferably using one of the AMS or ACM stylesheets. When laid out so professionally, it can't possibly have any flaws.
The collected papers of Paul Erdös are available online.
I'm glad I found this. There's a theorem of Erdös and Turan that I've been curious about for a while. Namely, let X[n] be the order of a permutation in S[n] selected uniformly at random. Then

P( (log X[n] - (log^2 n)/2) / sqrt((log^3 n)/3) ≤ x ) → Φ(x),

where Φ is the cumulative distribution function of a standard normal random variable. Informally, log X[n] is normally distributed with mean (log^2 n)/2 and variance (log^3 n)/3. Unfortunately the
proof, in On some problems of a statistical group-theory, III, doesn't seem to explain this fact in any "probabilistic" way, so I'm not quite as excited to read the paper as I once was. But I had
believed the proof was in the first paper in the (seven-paper) series, which is in storage at our library, and it's nearly the holidays, so I probably would have had to wait quite a while to get a
copy just to see that it wasn't the one I wanted.
In fact, it was worrying that I had the wrong paper that led me to find this resource in the first place -- seeing the "I" in the citation I had got me curious, so I went to Google. What I expected
to see in the Erdos-Turan paper, and what I actually wanted to see, was a "probabilistic" proof somehow based on the central limit theorem. This exists; it's in the paper of Bovey cited below. Also,
Erdös seems to have not been good at titling papers; titles "On some problems in X", "Problems and results in Y", "Remarks on Z", "A note on W", etc. are typical. I guess he was too busy proving
things to come up with good titles.
Bovey, J. D. (1980) An approximate probability distribution for the order of elements of the symmetric group. Bull. London Math. Soc. 12 41-46.
Erdos, P. and Turan, P. (1967) On some problems of a statistical group theory. III. Acta Math. Acad. Sci. Hungar. 18 309-320.
Ian Ayres asks a question at Freakonomics. Which of the following is the correct answer? 4π, 8π, 16, 16π or 32π square inches?
No, I didn't forget the question. But it's possible to make a reasonable guess by trying to reverse-engineer the question.
(Don't read the comments. They're full of people who didn't get it.)
"I don't know. A proof is a proof. What kind of a proof? It's a proof. A proof is a proof, and when you have a good proof, it's because it's proven." -- Jean Chretien, former prime minister of
Canada. The context appears to be something having to do with Canada's involvement in the Iraq war, but I'm having trouble finding details. It seems that this was a Big Thing in Canada when it
happened, so perhaps I have Canadian readers who can explain?
Specializing in Problems That Only Seem Impossible to Solve , by Bina Venkataraman, in yesterday's New York Times.
This is an article about Jessica Fridrich, a professor at Binghamton University, who at one point held the world record for the fastest solving of the Rubik's Cube. Her research now focuses on information hiding in digital imagery.
A Ballot Buddy System, by Randall Lane, an op-ed in today's New York Times.
As you may remember, there was a presidential election six weeks ago in the United States. But Barack Obama isn't officially elected president until today; today is the day that the electors cast
their votes. This is the first time since 1892 that a state will have electors voting for more than one candidate. Maine and Nebraska both have laws in which two electors go to the winner of the
popular vote in the state and one goes to the winner of each congressional district. Nebraska went for McCain, but the 2nd congressional district (Omaha and some of its inner suburbs) went for Obama.
It's been suggested that all states should apportion their electoral votes in this way, on the assumption that fewer people live in "safe districts" than in "safe states". (I'm not sure if this is the
case, especially with the way some districts are gerrymandered these days.) But the problem with this is that the majority of people (and legislators) in any state would see their party hurt by the
passage of such a law in their state.
Lane's suggestion is that Republican-leaning states and Democratic-leaning states with approximately the same number of electoral votes (say, Texas and New York) could agree to pass these laws together. The problem is that each pairing would need two states roughly equal in size and equally far from the political center; it's easy to state a few plausible pairs, as Lane does, but I'm not sure all the states could be paired off in this way. (The obvious problem: what to do with California?) Furthermore, things probably get weird, in terms of how much "power" each state holds in presidential elections, if some substantial number of states enacted such laws.
Okay, not really. But here's a fake proof that π = 3, which I hadn't seen before.
Tao Xie and Yuan Xie (brothers, in case you're wondering) maintain an advice collection consisting of links to things other people have written about how to succeed in scientific careers -- on
getting a PhD, writing papers, and so on. Those links that seem to be aimed towards people in certain subjects are mostly aimed at computer scientists, but at least some of what I'm finding in their
links seems reasonable.
Of course, one piece of advice they probably should give is "don't spend lots of time reading this sort of advice".
Since I'm talking about brothers, I feel obliged to mention the following paper:
Michalis Faloutsos, Petros Faloutsos and Christos Faloutsos, On Power-Law Relationships of the Internet Topology, SIGCOMM 1999. It has nothing to do with the Xie brothers' advice collection, but I
wanted to mention it anyway because I saw a citation to it and I was amused.
Why are people in Iowa interested in combinatorics? Combinatorics is more popular in Iowa than in any state but Massachusetts.
Google now has a feature called "Google Insights"; you can type in a search term and see where people are searching for it, how frequency of searches varies with time, etc. In states where there's a
lot of volume it's possible to zoom in; in Massachusetts it's possible, for example, and most of the interest is in Cambridge. Given that there is a Big Important University and a liberal arts school
that has a well-known mathematics department in Cambridge, that's not surprising. But I can't zoom in on Iowa.
(It's possible to get results by country, too, but these results seem ridiculously skewed; I suspect that Google may be normalizing by the number of Internet users in a given area, and the user pool
is different in different places.)
Another interesting one: "probability" is popular in Maryland, and among cities in that state it's most popular in College Park and Laurel. College Park is where the University of Maryland is. Laurel
is where the NSA is. You can see similar things in other states; for example, in New York, "probability" is most common in Stony Brook, Troy (RPI), and Ithaca (Cornell). In Pennsylvania, it's
University Park (Penn State), Bethlehem (Lehigh), and State College (Penn State again). The general pattern seems to be first a few college towns, then the big cities -- the places with the fourth
and fifth highest numbers for "probability" in Pennsylvania are Pittsburgh and Philadelphia.
Most mathematical search terms I could think of are highly seasonal -- they're less common in the (Northern Hemisphere) summer, when schools aren't in session. That seems to imply that lots of the
people doing the searching are students. I couldn't find a mathematics-related search term that didn't show this seasonality; I don't know if it can be done, because only search terms that receive a
reasonably large amount of traffic are reported on the site at all, and things which are important enough to get lots of traffic are probably studied in schools.
I'm taking a break from proofreading a paper. I'm reading it out loud, because I find this is the best way to catch mistakes; it forces me to look at every word.
There are inequalities in this paper, so the signs ≤ and ≥ come up a lot. How do you pronounce these? When I was in college I pronounced them "less than or equal to" and "greater than or equal to".
But sometime around the first year of graduate school I seem to have shifted to "at most" and "at least", which have the obvious advantage of being shorter.
Edit (11:15 pm): It appears I've mentioned this before.
The Geometry of 3-Manifolds, a lecture by Curt McMullen. This is one of the Science Center Research Lectures, in which Harvard professors talk about their research to the interested public. The series doesn't appear to have a web page, but here's a list of videos available online in that series; these include Benedict Gross and William Stein on solving cubic equations. There are non-mathematical things too, but this is at least nominally a math blog, so I won't go there.
McMullen apparently also gave this lecture at the 2008 AAAS meeting in Boston, and has a couple of other video lectures available online.
And now I want a do(ugh)nut-shaped globe with just North and South America on it. This is a fanciful example of what an "armchair Magellan" might suspect the world looked like if humans had reached the North and South poles starting from somewhere in the Americas but had never crossed the Atlantic or Pacific; they might suspect that the cold area to the north and the cold area to the south are actually the same. McMullen uses this example to illustrate that tori and spheres are not the same, since loops on the sphere are contractible but loops on the torus are not. The lecture, which leads up to telling the story of the Poincaré conjecture, begins by using this as an example of how topology can distinguish between surfaces.
Finally, here's an interesting story, which may be well-known to some people but wasn't to me: Wolfgang Haken, one of the provers of the Four-Color Theorem, may have intended that (famous,
computer-assisted) proof to be a "trial balloon" for a brute-force proof of the Poincare conjecture.
In honor of my age now being a square, my parents took me out to dinner. (Okay, so it had nothing to do with my age being a square.)
My mother: "I read in the US Air in-flight magazine that... um... that girl from Boy Meets World is writing books about math now."
Me: "You mean the girl from The Wonder Years??"
Yes, this actually happened. The two shows are not all that different, and the male leads in them are brothers in real life and look similar, so it's a natural mistake to make.
Danica McKellar majored in math at UCLA (where Terence Tao was one of her teachers), and has written two books aimed at middle school girls to tell them that math doesn't suck, namely Math Doesn't Suck: How to Survive Middle School Math Without Losing Your Mind or Breaking a Nail and Kiss My Math: Showing Pre-Algebra Who's Boss.
By the way, if you're in Philadelphia and want "modern Cuban cuisine", Alma de Cuba has it. I especially recommend that you go to this restaurant if someone else is paying.
Yesterday, my age was a factorial.
Today, my age is a square.
This raises an interesting problem.
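(24 = 4! and 25 = 5^2; presumably the interesting problem is when a factorial is one less than a square, which is Brocard's problem. The post doesn't spell this out, so take this as my reading of it. A quick brute-force sketch in Python:)

```python
from math import isqrt

def brocard_solutions(limit):
    """Find all n <= limit such that n! + 1 is a perfect square."""
    solutions, factorial = [], 1
    for n in range(1, limit + 1):
        factorial *= n
        root = isqrt(factorial + 1)
        if root * root == factorial + 1:
            solutions.append(n)
    return solutions

print(brocard_solutions(300))  # [4, 5, 7]: 4!+1 = 5^2, 5!+1 = 11^2, 7!+1 = 71^2
```

Only n = 4, 5, 7 are known to work, and the search has been pushed far beyond 300 without finding more.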
At Language Log recent discussion has gone on about how you can translate from one language to another, but you can't translate from one game to another. For example, you can't take a game of chess
and translate it into poker.
I'm reminded of the Subjunc-TV in Douglas Hofstadter's Godel, Escher, Bach: An Eternal Golden Braid
At Language Log, I learned that there are certain "logical games" for which a notion of translation is possible. These are apparently of interest to logicians; you can read more at the Stanford
Encyclopedia of Philosophy.
But in combinatorial game theory, we can associate each position in certain games with a "number"; is it meaningful to say that positions in different games which have the same number are the "same
position"? In this case, translations between games would become possible, except that those numbers are apparently difficult to calculate.
Ken Jennings, the 74-time Jeopardy! champion, has a blog.
Today he wrote about polyominoes, inspired by the question of how many pieces there would be in Tetris if it were played with pieces made of five or six squares instead of the canonical four. He's
not a mathematician, and he finds it surprising that there's no nice formula for the number of polyominoes with n cells. I suppose it is kind of surprising to someone with no mathematical training;
by this point in my life I've gotten used to the fact that things just don't work that way.
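For what it's worth, the small cases are easy to enumerate by brute force even without a formula. A sketch in Python counting free polyominoes (rotations and reflections identified; the seven Tetris pieces are the one-sided tetrominoes, a slightly different convention):

```python
def normalize(cells):
    """Translate a polyomino so its minimum coordinates are (0, 0)."""
    minx = min(x for x, y in cells)
    miny = min(y for x, y in cells)
    return frozenset((x - minx, y - miny) for x, y in cells)

def canonical(cells):
    """Smallest normalized image under the 8 symmetries of the square grid."""
    variants = []
    for _ in range(4):
        cells = frozenset((y, -x) for x, y in cells)  # rotate 90 degrees
        variants.append(normalize(cells))
        variants.append(normalize(frozenset((-x, y) for x, y in cells)))  # reflect
    return min(variants, key=sorted)

def free_polyominoes(n):
    """Count free polyominoes with n cells by growing smaller ones."""
    shapes = {canonical(frozenset([(0, 0)]))}
    for _ in range(n - 1):
        grown = set()
        for poly in shapes:
            for x, y in poly:
                for cell in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if cell not in poly:
                        grown.add(canonical(poly | {cell}))
        shapes = grown
    return len(shapes)

print([free_polyominoes(n) for n in range(1, 7)])  # [1, 1, 2, 5, 12, 35]
```

So "Tetris" with five or six squares per piece would have 12 or 35 free pieces; the counts just come from exhaustive growth and deduplication, with no closed form in sight.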
Via X=Why?, I found an article on how a criminal investigation lab in California is inviting students to come in and showing them that math is useful for solving crimes. (In the CSI way -- figuring
out how blood splatters, say -- not in the Numb3rs way.) This is certainly a good thing to do.
But can you make sense of this?
Craig Ogino, the department's crime lab director, started the event by offering a prize of $10 to the student who could use trigonometry to determine the number in gallons of a mixture used to
make methamphetamine, based on his sketch.
I'm assuming that trigonometry was actually used for something else -- like, say, the aforementioned blood splattering analysis, seen later in the article -- and that the reporter made a mistake. But
I'm not totally sure. Any thoughts?
Shitload of math due Monday, from The Onion:
Making matters worse, students said, was their math textbook, which reportedly doesn't even have any of the freaking answers in the back.
How would these kids feel if they learned that eventually the questions aren't even in the book, but you have to come up with them yourself?
This year's Putnam exam problems, via 360.
I haven't thought about these, but I might as a break from writing things up over the next few days.
The following three formulae are reasonably well known:
1 + 2 + 3 + ... + n = n(n+1)/2
1^2 + 2^2 + 3^2 + ... + n ^2 = n(n+1)(2n+1)/6
1^3 + 2^3 + 3^3 + ... + n ^3 = (n(n+1)/2)^2
(The sums of first and second powers arise pretty often naturally; the sum of cubes is rare, but it's easy to remember because the sum of the first n cubes is the square of the sum of the first n
natural numbers.)
The first member of this series I can't remember is the following:
1^4 + 2^4 + 3^4 + ... + n ^4 = n(n+1)(2n+1)(3n^2+3n-1)/30
and generally, the sum of the first n kth powers is a polynomial of degree k+1.
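These formulas are easy to spot-check numerically; a quick sketch in Python:

```python
def power_sum(n, k):
    """Sum of the first n kth powers, computed directly."""
    return sum(i**k for i in range(1, n + 1))

for n in range(1, 100):
    assert power_sum(n, 1) == n*(n+1)//2
    assert power_sum(n, 2) == n*(n+1)*(2*n+1)//6
    assert power_sum(n, 3) == (n*(n+1)//2)**2
    assert power_sum(n, 4) == n*(n+1)*(2*n+1)*(3*n**2 + 3*n - 1)//30
print("all four formulas verified for n < 100")
```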
I ran into these formulas, which I'd seen plenty of times before, while perusing the book Gamma: Exploring Euler's Constant.
Anyway, back to the main story. Say I wanted to know the sum of the first n fifth powers. Well, there's a general method for finding the formula of the first k powers; it involves the Bernoulli
numbers. But let's say I didn't know that. Let's say somebody hands me the sequence
1, 33, 276, 1300, 4425, 12201, 29008, 61776, 120825, 220825
in which the nth term, s[n], is the sum 1^5 + 2^5 + ... + n^5 -- but doesn't tell me that's where the sequence comes from -- and challenges me to guess a formula for it in "closed form". (Smart-asses
who will say that there are infinitely many formulas are hereby asked to leave.) How would I guess it?
Well, it can't hurt to find factorizations for these numbers. And if you do that you get
1, 3^1 11^1, 2^2 3^1 23^1, 2^2 5^2 13^1, 3^1 5^2 59^1, 3^1 7^2 83^1, 2^4 7^2 37^1, 2^4 3^3 11^1 13^1, 3^3 5^2 179^1, 5^2 11^2 73^1
and this seems interesting; these numbers seem to have lots of small factors. Furthermore, a fair number of them seem to have one largeish prime factor, which I've bolded. (Yes, I realize, 11 times
13 isn't prime, but I actually did think of it as a large factor.) What are the large factors that I observe in these numbers? They are
?, 11, 23, ?, 59, 83, ?, 143, 179, ?
and the nth of these is easily seen to be 2n^2 + 2n - 1. (Some terms don't show up from inspection of the factorizations because they get "lost in the noise", as it were.)
From there the rest is pretty easy. We see that often (but not always), s[n] is divisible by 2n^2 + 2n - 1. You can check that for n = 1, 2, ..., 10, the term s[n] is always divisible by (2n^2 + 2n -
1)/3. So now consider the sequence t[n] = s[n] / ((2n^2 + 2n - 1)/3). The numbers t[1] through t[10] are
1, 9, 36, 100, 225, 441, 784, 1296, 2025, 3025
and I recognized that all of these are squares; in particular t[n] = (n(n+1)/2)^2.
Putting everything together, I get the conjecture that the sum of the first n fifth powers
s[n] = n^2(n+1)^2(2n^2+2n-1)/12
which could be proven by induction, but actually writing out the proof is best left to undergrads.
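The whole guessing procedure, and the final conjecture, are easy to replay in code; a sketch in Python:

```python
# s[n] = 1^5 + 2^5 + ... + n^5, for n = 1, ..., 100
s = [sum(i**5 for i in range(1, n + 1)) for n in range(1, 101)]

for n, sn in enumerate(s, start=1):
    big = 2*n*n + 2*n - 1          # the guessed "large factor"
    assert (3 * sn) % big == 0     # s[n] is divisible by (2n^2 + 2n - 1)/3
    t = 3 * sn // big
    assert t == (n*(n+1)//2)**2    # t[n] is the square of the nth triangular number
    # and the assembled conjecture:
    assert sn == n**2 * (n+1)**2 * (2*n**2 + 2*n - 1) // 12
print("conjecture verified for n = 1, ..., 100")
```

Of course, a hundred cases is not a proof, but checking many more terms than were used to make the guess is at least reassuring.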
The method here is reminiscent of Enumeration of Matchings: Problems and Progress by James Propp. In that article, Propp lists various unsolved problems in the enumeration of tilings, and conjectures
that some of them might have answers which are given by simple product formulas, because actually counting the tilings in question gave numbers with nice prime factorizations.
Edit, 9:13 pm: of course this is not the only method, or even the best method; it's just the method I played around with this morning. See the comments for other methods.
Weak security in our daily lives (in English): basically, you can use a de Bruijn sequence to break into a car with keyless entry in what might be a non-ridiculous amount of time. I'm referring to
the sort which have five buttons marked 1/2, 3/4, 5/6, 7/8, and 9/0, and a five-digit PIN that has to be entered. This trick takes advantage of the fact that the circuitry only remembers the last
five buttons pressed, so if you press, say, 157393, then the car will open if the correct code is either 15739 or 57393. It is in fact possible to arrange things so that each key you press, starting
with the fifth, completes a five-digit sequence that hasn't been seen before.
Of course, you shouldn't do this.
Via microsiervos (in Spanish).
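For the curious: a de Bruijn sequence B(k, n) over k symbols contains every length-n word exactly once as a cyclic window, and the standard Lyndon-word construction is only a few lines. A sketch in Python, treating the five buttons as symbols 0 through 4 (the function name and framing are mine, not from the article):

```python
def de_bruijn(k, n):
    """B(k, n) de Bruijn sequence via the standard Lyndon-word construction."""
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

presses = de_bruijn(5, 5)
print(len(presses))  # 5^5 = 3125 button presses
# every five-press window (with wraparound) is a distinct code:
extended = presses + presses[:4]
assert len({tuple(extended[i:i + 5]) for i in range(len(presses))}) == 5**5
```

So 3129 presses (the sequence plus four more to close the last windows) try all 3125 possible codes, versus 15625 presses if each five-digit code were entered separately.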
Gil Kalai mentions a metaphor I hadn't heard of before about the foundations of mathematics:
To borrow notions from computers, mathematical logic can be regarded as the “machine language” for mathematicians who usually use much higher languages and who do not worry about “compilation.”
Of course there would be analogues to the fact that certain computer languages are higher-level than others as well. To take an example dear to me, the theory of generating functions might be at a
higher level than the various ad hoc combinatorial arguments it's often introduced to students as a replacement of. I don't want to press this metaphor too hard because it'll break -- I don't think
there are analogues to particular computer languages. But feel free to disagree!
From Family Guy:
"Chris, everything I say is a lie. Except that. And that. And that. And that. And that. And that. And that. And that."
A Russian teacher in America, by Andrei Toom.
It's what it sounds like. Toom repeats the familiar litany that in America, students learn for grades, and only incidentally for learning; it's an interesting read, mostly because of the perspective
that he's able to bring to it as somebody who didn't grow up within the American system.
Cosmic Variance is thankful for the spin-statistics theorem, because it enables the division between matter and force, which is kind of important.
In this spirit, I am thankful for the second Borel-Cantelli lemma, which states that if countably many events E[1], E[2], E[3], ... are independent and the sum of the probabilities of the E[n]
diverges to infinity, then the probability that infinitely many of them occur is 1. Let these events be "something interesting happens at time n", for a suitable quantization of time; then given
infinite time, infinitely many interesting things will happen. (Of course I'm making an independence assumption here.) I like interesting things.
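A seeded sketch in Python, with the hypothetical choice P(E[n]) = 1/n (so the sum diverges, but barely); the count of interesting events keeps creeping up, roughly like log N:

```python
import random

random.seed(2008)
counts = {}
occurred = 0
for n in range(1, 10**5 + 1):
    if random.random() < 1 / n:   # event E[n], independent, with P(E[n]) = 1/n
        occurred += 1
    if n in (10, 100, 1000, 10**4, 10**5):
        counts[n] = occurred
print(counts)
```

The expected count up to N is the harmonic number H[N] ≈ log N, so in an infinite run the count grows without bound, which is what the lemma promises.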
I am also thankful for whoever it was that took a picture of a monkey at a typewriter.
I haven't really digested it yet (in fact, I'm only a third of the way through it), but Anathem, by Neal Stephenson, is making a strong run at the title of My Favorite Book. Mostly because there's math hidden everywhere inside it. And how could I resist this, which almost feels like something out of The Glass Bead Game:
Three fraas and two suurs sang a five-part motet while twelve others milled around in front of them. Actually they weren't milling; it just looked that way from where we sat. Each one of them
represented an upper or lower index in a theorical equation involving certain tensors and a metric. As they moved to and fro, crossing over one another's paths and exchanging places while
traversing in front of the high table, they were acting out a calculation on the curvature of a four-dimensional manifold, involving various steps of symmetrization, antisymmetrization, and
raising and lowering of indices. Seen from above by someone who didn't know any theorics, it would have looked like a country dance. The music was lovely even if it was interrupted every few
seconds by the warbling of jeejahs.
Please, Internet, deliver me video of this mathematical dancing. Somewhat more seriously, though, moving pictures often float in my mind (and I suppose the minds of others) as I attempt to understand
various mathematical structures.
Stephenson gave an interesting
talk/Q-and-A at Google about the book, if you've got an hour to kill. I think if you liked Cryptonomicon
you'll like this one; on the other hand I was disappointed by the Baroque Cycle, which lots of people seem to have liked. I suspect this has to do with the times in which they're set; the Baroque
Cycle takes place a few centuries ago, Cryptonomicon during World War Two, and Anathem on another planet entirely, but one in which the secular world is roughly comparable to present-day Earth.
(Perhaps a bit too comparable; Earth intellectual history and the intellectual history inside Anathem are essentially the same thing with different names.) Except in Anathem, the mathematicians live
in what are essentially monasteries cut off from the outside world. I don't think I could handle that. I suppose some would argue that universities aren't the Real World, though...
Charlie Siegel, whose office is just down the hall from mine, for a while had the following statement on his blackboard (paraphrased and renotated a bit): Let g(n) be the number of lines on the generic hypersurface of degree 2n-1 in complex projective (n+1)-space. Then

g(n) = [z^n] (1-z) ∏_{j=0}^{2n-1} ((2n-1-j) + jz),

where the notation "[z^n] f(z)" means "the coefficient of z^n in f(z)". For example, g(2) is the coefficient of z^2 in (1-z)(3)(2+z)(1+2z)(3z), which turns out to be 27; this means that the number of lines on the generic hypersurface of degree 3 in complex projective 3-space is 27. Informally, "there are 27 lines on the cubic". Want to know why?
For the cubic case, see this post by Charlie or go to his talk last week.
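As a sanity check, the coefficient extraction is easy to carry out with exact integer arithmetic. A sketch in Python, taking g(n) to be the coefficient of z^n in (1-z) ∏_{j=0}^{2n-1} ((2n-1-j) + jz) -- the pattern behind the (1-z)(3)(2+z)(1+2z)(3z) example:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (index = power of z)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def g(n):
    """Coefficient of z^n in (1 - z) * prod_{j=0}^{2n-1} ((2n-1-j) + j*z)."""
    poly = [1, -1]  # 1 - z
    for j in range(2 * n):
        poly = poly_mul(poly, [2*n - 1 - j, j])
    return poly[n]

print([g(n) for n in range(1, 6)])  # [1, 27, 2875, 698005, 305093061]
```

Reassuringly, g(3) = 2875 is the famous count of lines on the quintic threefold.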
The first few values of g(n) are 1, 27, 2875, 698005, 305093061, 210480374951, 210776836330775, 289139638632755625... -- so this sequence grows pretty quickly. How quickly? I couldn't tell at first, but it kept nagging at me every time I was in Charlie's office. The sequence is A027363 in the Encyclopedia of Integer Sequences. It turns out that the asymptotics are worked out in an appendix of the paper that the Encyclopedia links to (arXiv:math/0610286, specifically its appendix by Zagier), but the derivation there is long and involved, and it was more fun to play with the formula myself. There are a few gaps, but here it is.
We can deal easily with the 1-z factor out front, and we want [z^n] f[n](z) - [z^{n-1}] f[n](z), where

f[n](z) = ∏_{j=0}^{2n-1} ((2n-1-j) + jz).
We can already see it's going to be kind of tricky; the coefficients of f[n](z) are probably going to be pretty big and not that far apart.
Now, we can pull out a 2n-1 from each factor in the expression for f[n](z), and we get

f[n](z) = (2n-1)^{2n} ∏_{j=0}^{2n-1} ((1 - j/(2n-1)) + (j/(2n-1)) z).
Call the product p[n](z). Now this is where, out of nowhere, I start having Fun With Probability. (It's kind of amusing because there is nothing probabilistic about the original question.) The term
corresponding to j in that product is the probability generating function of a random variable which is 1 with probability (j/(2n-1)) and 0 otherwise. The product is thus the p.g.f. of the sum of
such random variables for j = 0, 1, ..., 2n-1.
Now, this sum has mean which is the sum of the means of those random variables, that is,

∑_{j=0}^{2n-1} j/(2n-1) = n,

and variance which is the sum of their variances,

∑_{j=0}^{2n-1} (j/(2n-1))(1 - j/(2n-1)) ≈ n/3.

Furthermore, since the sum is a sum of a bunch of small contributions, it's approximately normal. So p[n](z) is the probability generating function of a random variable which is roughly normal with mean n and variance n/3, but integer-valued, and its kth coefficient is the probability that such a random variable is equal to k.
Therefore [z^k] p[n](z) is approximately

q[n](k) = √(3/(2πn)) exp(-3(k-n)^2/(2n)),
which is the density of such a normal random variable. We want to know [z^n] p[n](z) - [z^{n-1}] p[n](z), and this is approximately q[n](n) - q[n](n-1). You'd think this would be q[n]'(n), but q[n]'(n) is actually zero -- the fact that the coefficients were "close to each other" is even more troublesome than you'd expect. Still, we can make a Taylor series for q[n] at n, and the linear term is zero, so we get

q[n](n) - q[n](n-1) ≈ -q[n]''(n)/2 = √(27/(8π)) n^{-3/2}.

And g(n) = [z^n] f[n](z) - [z^{n-1}] f[n](z) = (2n-1)^{2n} ([z^n] p[n](z) - [z^{n-1}] p[n](z)); using the approximation we get

g(n) ≈ (2n-1)^{2n} √(27/(8π)) n^{-3/2},
and now note that n^{-3/2} ≈ 2^{3/2} (2n-1)^{-3/2}. Therefore

g(n) ≈ √(27/π) (2n-1)^{2n-3/2},

which matches (with a bit of computation) the result given by Zagier in the arXiv, and the terms reported in the OEIS. I wouldn't have trusted it otherwise; that "take the second derivative" step is particularly sketchy, although it can probably be justified, as there are results on the rate of convergence of the central limit theorem. But asymptotic analysis is nice in this way; if it's easy to compute the terms of some sequence, then we can often be confident in results like this even without a fully rigorous proof.
Besides, I'd never seen the trick of using the second derivative to approximate a difference before. At least not that I can remember.
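Numerically, the estimate is easy to compare against exact coefficients. A sketch in Python, taking the estimate to be (2n-1)^{2n} √(27/(8π)) n^{-3/2} -- that is, using mean n and variance n/3 (the sum of the Bernoulli variances) in the normal approximation, which is how I read the derivation:

```python
from math import pi, sqrt

def g(n):
    # exact coefficient of z^n in (1 - z) * prod_{j=0}^{2n-1} ((2n-1-j) + j*z)
    poly = [1, -1]
    for j in range(2 * n):
        a, b = 2*n - 1 - j, j
        new = [0] * (len(poly) + 1)
        for i, c in enumerate(poly):
            new[i] += a * c
            new[i + 1] += b * c
        poly = new
    return poly[n]

def g_estimate(n):
    # (2n-1)^(2n) * sqrt(27/(8 pi)) * n^(-3/2), from the normal approximation
    return (2*n - 1)**(2*n) * sqrt(27 / (8 * pi)) / n**1.5

ratios = {n: g(n) / g_estimate(n) for n in (2, 5, 10, 20, 30)}
print(ratios)  # the ratio creeps up toward 1 as n grows
```

The relative error looks like O(1/n), which is consistent with the central-limit-theorem hand-waving above.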
At mathlog: Jever Bierdeckel-Mathematik und die Brücken von Kaliningrad. That is, "Jever coaster mathematics and the bridges of Konigsberg." (Why this funny translation? Because you haven't heard of
the bridges of Kaliningrad, even though that's what the city is called now.)
It includes a solution to the general problem of identifying graphs with Eulerian tours. In English, in the form of a poem.
And what do coasters have to do with this? Jever is a beer, and the problem of finding an Eulerian tour is on a coaster they distribute (in German).
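Euler's criterion itself -- a connected multigraph has a closed Eulerian tour if and only if every vertex has even degree -- fits in a few lines. A sketch in Python, with the Königsberg (or Kaliningrad) multigraph as the test case:

```python
from collections import defaultdict

def has_eulerian_tour(edges):
    """Euler's criterion: a connected multigraph has a closed Eulerian tour
    iff every vertex has even degree."""
    degree = defaultdict(int)
    neighbors = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        neighbors[u].add(v)
        neighbors[v].add(u)
    # connectivity: depth-first search from an arbitrary vertex
    seen, stack = set(), [next(iter(degree))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(neighbors[u])
    return len(seen) == len(degree) and all(d % 2 == 0 for d in degree.values())

# Konigsberg: two banks (A, B), two islands (C, D), seven bridges
bridges = [("A","C"), ("A","C"), ("B","C"), ("B","C"), ("A","D"), ("B","D"), ("C","D")]
print(has_eulerian_tour(bridges))                            # False: all degrees odd
print(has_eulerian_tour([("A","B"), ("B","C"), ("C","A")]))  # True: a triangle
```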
Edit, 1:28 pm: In the comments, Wing says that he thought the post would be about Euler beer. Apparently there is a beer named Euler. Someone's auctioning off a coaster on ebay.
It seems that some people describe the torus as the shape of a bagel, and others as the shape of a doughnut.
I wonder if this is somehow correlated with geography; bagels are more common in some places, doughnuts in others.
One way of stating the Prime Number Theorem is that the "probability" that a large number near x is prime is 1/log(x). (Here, as always, all logs are natural.)
A heuristic derivation of the prime number theorem, Frank Morgan, via Andrew Gelman, with some embellishment by me: let P(x) be the "probability" that a large integer x is prime. Then how do P(x+1)
and P(x) compare? Say x is prime, which it is with "probability" P(x); it "divides" x+1 (and all larger numbers) with probability 1/x. (Of course, this doesn't actually make sense; the idea is that
we're modeling the numbers divisible by x as a random set with density 1/x, which should have the same large-scale properties.) Other than this, x and x+1 are equally "likely" to be prime. So the
function P satisfies
(*) P(x+1) = P(x) [P(x) (1 - 1/x)] + (1 - P(x)) P(x)
since x+1 is "prime" with probability P(x) (1-1/x) if x is "prime", and P(x) otherwise. Divide through by P(x) to get
P(x+1)/P(x) = P(x) (1-1/x) + (1-P(x))
P(x+1)/P(x) = 1 - P(x)/x.
Now, P(x+1) can be approximated by P(x) + P'(x), so we have
1 + P'(x)/P(x) = 1 - P(x)/x
P'(x)/P(x) = -P(x)/x
which has the general solution P(x) = 1/(C + log x).
This is a bit of a lie. For one thing, the difference equation (*) actually seems to have solutions that differ from those of the differential equation by a constant factor, which seems to depend on
the initial conditions. (This amounts to changing the base.) For another thing, the assumption that x+1 might be divisible by x is, um, stupid if we're actually talking about prime numbers. (It's
probably possible to rephrase this heuristic derivation as a rigorous result about random sets, though.) Still, it gets the prime number theorem to within a constant factor, which isn't bad for such
a simple argument.
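The recurrence is easy to iterate numerically. In the sketch below (with an arbitrary starting value, so the constant C is whatever it is), the quantity 1/P(x) - log x appears to settle down in this run, consistent with P(x) ~ 1/(C + log x):

```python
from math import log

P, x = 0.5, 2          # arbitrary initial "probability" that 2 is prime
constants = {}
while x < 10**6:
    P *= 1 - P / x     # the recurrence P(x+1) = P(x) (1 - P(x)/x)
    x += 1
    if x in (10**3, 10**4, 10**5, 10**6):
        constants[x] = 1 / P - log(x)  # should stabilize if P ~ 1/(C + log x)
print(constants)
```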
I'm reading The Know-It-All: One Man's Humble Quest to Become the Smartest Person in the World, by A. J. Jacobs -- I remembered hearing good reviews, and they're for the most part right. It's an amusing book to read, because it's part trivia and part memoir: Jacobs, who's an editor at Esquire, read the entire Encyclopedia Britannica in a year or so, which is long enough that while he was reading it amusing stories unfolded in his life -- most of which have some trivia worked in. Is it great literature? No. But it's fun. And there's the inevitable moment when you know something that Jacobs doesn't mention he knows; that makes you feel a little smarter when you're reading any book, but especially one with this book's subtitle.
It's amusing and interesting to be reading this during my less ambitious attempt to read The Princeton Companion to Mathematics. But I'm mostly making this post because I couldn't resist sharing this, from when Jacobs joins Mensa:
Of course, I'm terrified that I'll be rejected. In fact, I'm pretty sure that they'll send me a letter thanking me for my interest, then have a nice hearty laugh and go back to their algebraic
topology and Heidegger texts and Battlestar Galactica reruns.
From britannica.com, it looks like the EB does not have an article titled "algebraic topology". But there are articles titled "topology" and "mathematics" with sections named "algebraic topology",
and they do seem to have serious treatments of mathematics in there. Jacobs admits (p. 338) that "the math sections... are my bête noire."
Kiyoshi Ito (of Ito calculus fame) is dead.
(When? I'm not sure. The New York Times says Monday, the 17th, but the Japan Times published an obituary on Saturday the 15th which said he died "Monday" -- so I'm guessing the 10th.)
See the obituaries, the MacTutor biography and Wikipedia article, or this Notices article upon his receipt of the Gauss Prize for an idea of his contributions.
From The Princeton Companion to Mathematics, in an article by János Kollár:
Finally, if we marry a scheme to an orbifold, the outcome is a stack. The study of stacks is strongly recommended to people who would have been flagellants in earlier times.
I feel this way about most of algebraic geometry, but that's only because Penn has a high enough concentration of algebraic geometers that I get tired of hearing people walking around and talking
about it all the time.
Also, I find myself screaming at the book quite often. But I do it in a good way; it's often some crucial insight that makes me think "why didn't anybody tell me that before?" (Of course, it's
possible they did and I wasn't listening.) And sometimes it's "oh, damn, I thought I came up with that myself". For example, I just read the article on computational number theory, by Carl Pomerance,
in which he explains a heuristic reason why Fermat's last theorem is true. I won't give it in full here, but it's basically the following. First, Euler showed it for n = 3. Second, consider all the
positive integers which are nth powers for some n ≥ 4. The "probability" that a number m is in this set is about m^(-3/4). So replace the set of fourth-or-higher powers with a random set S, which contains m with probability m^(-3/4). Then the probability that a given number n can be written as a sum of two elements of this set S is proportional to n^(-1/2); independently, n has probability n^(-3/4) of being in S. So the probability that n can be written as a sum of two elements of S and is also in S itself is proportional to n^(-5/4); since the sum of n^(-5/4) converges, we expect finitely many examples. This isn't quite true, because the set of fourth-or-higher powers has some nontrivial structure. But it also took a couple hundred words, instead of a couple hundred pages like the real proof.
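The heuristic is also easy to play with numerically. Here's a sketch of my own (the function name and cutoff are arbitrary, and this simulates the Erdos-Ulam random model, not anything from Pomerance's article):

```python
import random

def erdos_ulam_count(limit, seed=2008):
    """Build a random set S of integers in {2, ..., limit}, including m
    with probability m^(-3/4), then count the n in S that are also a sum
    of two elements of S -- the random-model analogue of a Fermat-style
    counterexample for exponents >= 4."""
    rng = random.Random(seed)
    s = {m for m in range(2, limit + 1) if rng.random() < m ** -0.75}
    sums = {a + b for a in s for b in s if a + b <= limit}
    return len(s & sums)
```

Running this for various seeds, the count stays small, as the convergent n^(-5/4) sum predicts -- though of course any one run proves nothing.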
But I figured out something like this when I was in college, and I was so proud of myself! So it saddens me to learn I'm not the only one who thought of it. (It also makes me happy, though, because
the idea is due to Erdos and Ulam, and there are worse people to be imitating.)
Is there some connection between the etymology of "theorem" and words like "theology" or "theist"?
For "theorem" the OED says: theorem, from the late Latin theorema, from the Greek θεωρημα, spectacle, speculation, theory, (in Euclid) a proposition to be proved, from θεωρειν to be a spectator (
θεωροσ), to look at, inspect. (This isn't an exact quote; I've expanded some of the abbreviations, and suppressed some of the accent marks. But if you're the sort of person who could actually answer
my question you probably already knew that.)
But for the words where "the-" or "theo-" is god-related, of which there are a lot, it just says things like "from Greek θεοσ, God" and doesn't go any further. And maybe you could imagine that people
"look at" or "inspect" God. Of course I recognize that the OED is not the best possible source for these things -- but I'm suspecting that someone in my audience has also noticed this apparent
coincidence of words and knows the answer.
(I just want to reiterate that the title "God Plays Dice" is not a religious thing; it's alluding to the quote of Einstein, as I've written before.)
In this week's New York Times Magazine, Clive Thompson writes about people trying to win the Netflix Prize.
Early in 2007, Netflix, the video-rental-by-mail service, made (anonymized) data on its users preferences available, and is offering $1,000,000 to the first person or team that creates an algorithm
for recommending movies that makes a certain substantial improvement upon the existing algorithms. More precisely, Netflix attempts to predict the star rating, on a one-to-five scale, that users will
give to a movie; the root mean square error in their old algorithm was about 0.95 stars, and they'll pay the $1,000,000 for improvement of this by 10%. (They also pay $50,000 for improvements of 1%.)
Why are they doing this? Well, Netflix recommends movies to lots of people, and they want their recommendations to be good. And it sounds like they're getting a lot more than a million dollars worth
of time from the various competitors working on this; if they actually had these people on their payroll it would cost more.
It's interesting that there are certain movies that are very difficult to predict; Napoleon Dynamite is apparently one of the hardest. There's a list of movies like that, although it's a year and a half old so there's been progress since then. (Hmm, I'm not sure if I like Napoleon Dynamite -- sometimes I do and sometimes I don't.)
Apparently one of the major mathematical tools that has emerged as being useful here is singular value decomposition. I'm not surprised that some Big Fancy Linear Algebra tool was necessary, as a lot
of the algorithms essentially seem to work by identifying various dimensions along which movies can vary. (Look, it's a hidden metric space!) Hmm, I wonder if the space of movies has some nontrivial
topology... no, I will not get sucked into thinking about this!
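To make the "hidden dimensions" point concrete: the toy sketch below (pure Python, and no claim to resemble any actual Netflix Prize entry) extracts the single strongest rating dimension of a small user-by-movie matrix by power iteration, the crudest way to approximate the top term of a singular value decomposition.

```python
def top_right_singular_vector(m, iters=200):
    """Power iteration on M^T M: returns (approximately) the right singular
    vector of m with the largest singular value, i.e. the one 'dimension'
    along which the rows (users) most strongly differentiate the columns
    (movies). Assumes the starting vector isn't orthogonal to the answer."""
    rows, cols = len(m), len(m[0])
    v = [1.0] * cols
    for _ in range(iters):
        # w = M v, then v = M^T w, then normalize
        w = [sum(m[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        v = [sum(m[i][j] * w[i] for i in range(rows)) for j in range(cols)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v
```

Real entries used full (regularized) SVD-style factorizations with many dimensions, but the principle is the same.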
When I go to the library, even if it's just to get a specific book, I tend to browse a bit. I've found some interesting books this way.
So I was just there, and Anatomy of Integers, the proceedings of this conference, was shelved near the calculus textbooks. In particular, it was near various books that collect integrals. "Anatomy of
integers" is a name that seems to be used for a certain branch of number theory that looks at the distribution of prime factors; for a nice introduction to the concept, see Andrew Granville's anatomy
of integers and permutations. (This paper, still in development, talks about the analogy between the prime factorizations of integers and cycle structure of permutations. For example, the
distribution of the number of cycles of a random permutation of {1, 2, ..., n} is approximately normal with mean and variance near log n; the Erdos-Kac theorem says that the distribution of the
number of prime factors of a random integer less than or equal to e^n is asymptotically normal with mean and variance near log n. Lots of results about prime factorizations or about permutations seem
to have counterparts in the other realm.) Anyway, this isn't calculus! But it was with the calculus books.
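The permutation half of that parallel is easy to check by simulation; here's a sketch of my own (function names arbitrary) that samples random permutations and compares the average cycle count to log n:

```python
import math
import random

def cycle_count(perm):
    """Number of cycles of a permutation given as a list with i -> perm[i]."""
    seen = [False] * len(perm)
    cycles = 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return cycles

def mean_cycles(n, trials, seed=0):
    """Average cycle count over random permutations of {0, ..., n-1}.
    The exact mean is the harmonic number H_n, which is about log n."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        p = list(range(n))
        rng.shuffle(p)
        total += cycle_count(p)
    return total / trials
```

For n = 1000 the average comes out near log 1000 ≈ 6.9 (a bit above, since the exact mean is H_n ≈ log n + 0.577).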
Similarly, I found a book entitled Mathematical Essays: In Honor of Su Buchin, which was a volume of mathematical research articles in honor of the Chinese differential geometer Su Buchin, shelved
with various books that consist of reflections by mathematicians on mathematics -- the sort of nontechnical writing that "essay" usually means.
Is there something wrong with Penn's library? Not at all. They were in the right place according to the Library of Congress Classification, but whoever does that classification seems to make the
occasional error. Of course the reasons for these particular errors were obvious from the titles, which is why I spotted them. I am not sure if the classification is done by people who aren't
familiar with mathematics; that would explain the integer/integral mistake, at least.
In the issue of the Notices of the American Mathematical Society that arrived in the mail today, I learned that Structure and Randomness: Pages from Year One of a Mathematical Blog, Terence Tao's book, has been published.
I saw a draft copy of it when Tao posted it on his blog back in April; he announced the book was going to exist here and posted the draft here. The book is a collection of Tao's posts on various open
problems, expository posts, summaries of lectures, and so on, and is in large part intended to be, so far as I can tell, what one might think of as an "archival" version of the blog. A lot of Tao's
blog posts tend to be longer than typical blog posts, so his blog seems particularly suited for the book medium.
The title of the draft version is "What’s new - 2007: Open questions, expository articles, and lecture series from a mathematical blog", which describes the content well -- but I like "Structure and
Randomness" better. Here's a long post I made back in July of 2007 on "pseudorandomness" in the primes, in large part inspired by watching a talk Tao gave entitled "Structure and Randomness in the
Prime Numbers".
A joke that I've seen in several places today (first from my friend Karen): "An infinite number of mathematicians walk into a bar. The first one orders a beer. The second orders half a beer. The
third, a quarter of a beer. The bartender says "You're all idiots", and pours two beers."
I can "improve" this joke. An infinite number of mathematicians walk into a bar. The first one orders a beer. The second orders two beers. The third orders three beers, and so on. The bartender takes
a twelfth of a beer from the first one and they all walk away happy.
(Why, you ask? Because ζ(-1)=-1/12.)
This is a few days old, but I didn't mention it when I first saw it: Microsoft Exploit Predictions Right 40% of Time, from Slashdot. (The link is to Slashdot; here's the original article.)
Apparently Microsoft has a mechanism for predicting which parts of its code are most likely to be exploited by malevolent hackers; they made nine predictions in October, and four actually got
exploited. Of course, the Slashdot commentary is filled with reflexive Microsoft-bashing, and people pointing out that that's worse than flipping a coin. But the correct comparison isn't to flipping
a coin. The correct comparison is to the number of hits that would have been obtained if Microsoft had picked nine pieces of their code at random, which presumably is much less than four. There are a
few people in the comments trying to put numbers on this, but nothing that really sounds informed.
From Emanuel Kowalski, quoting what is apparently folklore: "The only 'truly' divergent series is the harmonic series."
The idea is that one can assign a value somehow to basically any other divergent series; see the link for a more thorough explanation.
(But don't tell the calculus students! They already can't remember the harmonic series is divergent.)
And I usually think of the harmonic series as being "equal" to log n, although of course log n isn't a number. So I amend the folklore, somewhat facetiously: "there are no divergent series."
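To make the facetiousness precise: H_n - log n converges to the Euler-Mascheroni constant, about 0.5772, so the harmonic series is "equal" to log n up to a bounded error. A quick check:

```python
import math

def harmonic(n):
    """Partial sum H_n = 1 + 1/2 + ... + 1/n of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# harmonic(10**6) - math.log(10**6) is already 0.57721..., correct to
# several decimal places, since the error term is roughly 1/(2n)
```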
Here's an interesting idea, which falls in the very large set of "things I thought about but have no idea how to implement": M. Boguna, D. Krioukov, kc claffy. Navigability of Complex Networks,
arXiv:0709.0303v2. (Published in the November 16, 2008 issue of Nature Physics; I learned about it from physorg.com.) The basic idea is that it is possible to navigate in complex networks by
navigating in some underlying hidden metric space. Nodes are close together in the hidden metric space if they are similar to each other in some abstract sense, and they are more likely to be
connected to each other in the network if they are closer together.
To take an example where the metric space isn't hidden, one navigates between airports by thinking about distances on the surface of the Earth; if you route greedily, by at each airport taking the
flight which gets you closest to where you want to go, you usually will end up at your destination. Of course, this isn't a priori true -- you could get stuck in an infinite loop -- but the point is
that real networks seem to obey this.
This is something that I think a lot of people have an intuitive sense of. For example, there are people I've met that when I meet them, I want to ask "why haven't we met before?" -- some combination
of shared geography and shared interests makes it seem like we should know each other. (In my own life, there is at least one case of somebody who I never met when I was an undergrad, though there
were plenty of amusing stories about mutual friends of ours that we'd both heard; we met shortly after I moved back to Philadelphia for grad school.)
I'm writing a paper, and of course this requires the use of LaTeX. As many of you know, the way one creates cross-references inside a LaTeX document is a two-step process. First, you insert the
command \label{big-important-theorem} where your Big Important Theorem is in the paper. Then when you want to refer to your theorem, you write something like "And as a consequence of Theorem \ref
{big-important-theorem} we can prove Corollary \ref{million-dollar-problem}, and so I claim to be the winner of a million dollar prize."
No, I have not written that sentence. Nor do I plan to. In fact, I would add "claims to be the winner of one of the Clay prizes in the body of the paper" to John Baez's crackpot index. Baez couldn't
have put that in his list, because it was written in 1998. He does mention the Nobel, though, which carries a similar monetary value.
But this leads to a question -- how do you label your theorems, equations, etc. in your own LaTeX code? I try to come up with names that reflect what the labeled object is about, but this isn't so
easy, because sometimes there are lots of objects that are "about" the same sort of thing. I'm tempted to just find some source of extra-mathematical names. So I could name my theorems \label
{market}, \label{chestnut}, \label{walnut}, \label{locust}, ... (Philadelphia streets), say. Of course, this sequence has the disadvantage that the streets come in order; a set of names with no
natural order is probably better, because then I won't feel like I'm moving things out of order when I move them around.
On a related note, often one sees bibliographical references of the form
[Be74] Edward A. Bender. Asymptotic methods in enumeration. SIAM Review,
Vol. 16, No. 4, October 1974.
I prefer this form to the form where [Be74] is replaced by a number, because if something is cited more than once in the same paper, I only need to look at the reference once; the second time I see
[Be74] in the paper I know it's the paper by somebody whose name starts with Be, written in 1974. (And if it's a paper I've heard of already, sometimes I don't have to look at the citation at all.)
But what's the convention for picking the letters to be used? First two letters of the name seems common for a single-authored paper, but by no means universal; I've definitely seen one-letter
citations, the problem being that if you have a reasonably extensive bibliography you'll want to cite two different authors with the same initial. I'm currently using just the first letter of each
name for multiple-author works -- [FS08] for Flajolet and Sedgewick's Analytic Combinatorics (readable online!), for example. I've tried to reverse-engineer whatever convention there is from other
people's reference lists and I can't. Is there actually no convention?
Of course, there is no need for a convention, as long as each work cited has a unique identifier.
There is, I just learned from a television commercial, a Monopoly Electronic Banking Edition.
The children of the future, it appears, will perhaps not be able to claim that Monopoly is educational because it requires them to do math. (Although I'll admit that I always found Monopoly
frustrating because other people's mental math is slower than mine.)
Somewhat more seriously, are the children of the future (or, perhaps, the present) going to feel that they have less reason to learn mathematics because they don't deal with cash? That seems like it
would be something that would motivate at least some students to learn basic arithmetic.
The orbit of the moon around the sun doesn't look like what you'd expect.
(Although now that I've told you this, you might think a little harder about what the orbit looks like.)
From 360: Shinju, a geometrical game. You're given an arrangement of shells in some of the squares of a square grid. One of the shells hides a pearl. Your goal is to find it, opening at most four
shells. When you open a shell, it either contains the pearl, or a number. That number is the distance to the pearl, in the (T)chebyshev distance.
There's a nice little result hiding here:
Problem 1: Given any arrangement of shells, prove that it is possible to find the shell in four clicks.
Since you always get four clicks (at least as long as I played), the game becomes trivial if you can find a constructive proof of that fact. (If you're the sort of person that likes to rack up points
in video games, though, I think you get more points if you don't use all your clicks -- so how do you set things up so that you're likely to solve the puzzle in less than four clicks?)
Problem 2: Generalize (to higher dimensions, different metrics, etc.)
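One way to start on Problem 1 is candidate elimination: each opened shell's reported distance rules out every position at a different Chebyshev distance from it. The sketch below is my framing, not the game's code, and it makes no guarantee of finishing within four clicks -- that guarantee is exactly what Problem 1 asks for; it just probes candidates in a fixed order.

```python
def chebyshev(p, q):
    """Chebyshev (chessboard) distance between grid points p and q."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def clicks_to_find(pearl, shells):
    """Probe shells in sorted order, keeping only the candidates consistent
    with every reported distance; pearl must be one of the shells.
    Returns the number of clicks used."""
    candidates = sorted(shells)
    clicks = 0
    while True:
        probe = candidates[0]
        clicks += 1
        if probe == pearl:
            return clicks
        d = chebyshev(probe, pearl)
        candidates = [c for c in candidates[1:] if chebyshev(probe, c) == d]
```

A constructive solution to Problem 1 would amount to a probing rule for which the worst case over all pearls is at most four.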
I cashed in 5.793 kilograms of assorted coins today at the bank. That's $115.19, for a density of $19.88 per kilogram; I know the weight because the bank's coin counting machine gave a receipt that
had the numbers of each type of coin I received. I then immediately proceeded to the bookstore and purchased The Princeton Companion to Mathematics.
I want to compliment the design of it; the side of the book which faces out on the shelf has "The Princeton Companion to" in small letters and then "Mathematics" in large letters, perhaps giving the
impression that the book contains all of mathematics. This is not true, but from the portions of it I've seen online and the part I've read so far, it seems to admirably solve the optimization
problem of squishing down mathematics to an object that only weighs five pounds. That's less than the coins I hauled to the bank to get the cash I paid for it! For links to various reviews, see this
entry from Tim Gowers' blog. Gowers is the editor, and also wrote substantial parts of the book, although it has many other contributors; you can play the party game "how many of these names do I
recognize?" I was interested to see that I recognize many more than I would have when I started grad school.
Yes, I'm actually trying to read it cover-to-cover. Like many mathematicians, I have a backlog of things that people seem to assume I'm familiar with, but that I only heard about very briefly in some course in my first year of grad school in which I was holding on by the skin of my teeth. Indeed, this is one of the uses that's recommended in the introduction! I will resist the urge to turn this
blog into a Companion to the Princeton Companion to Mathematics.
Mr. K points out that the Etch-a-Sketch toy helps students understand graphing. You turn one knob and the x-coordinate changes; you turn the other knob and the y-coordinate changes.
That's a good point -- but I was surprised to learn that eighth graders (that's who Mr. K teaches) are familiar with the Etch-a-Sketch.
From the wrapping of a universal remote I recently bought: "Controls VCR/DVD combos, TV/VCR combos, and TV/DVD combos!"
This is because it controls two devices, and those are the three types of device that people have. But I guess "controls two devices" doesn't work from a marketing standpoint, so they have to list
all 3-choose-2 subsets. (And I also find elsewhere on the packaging "manage up to 8 separate devices with one remote control". Something doesn't add up here.)
Chemistry hobbyists face a labyrinth of local and state regulations -- an article about how increasing regulation makes amateur chemistry more and more difficult. Via slashdot.
Fortunately mathematics is not so easily regulated. Although if mathematics comes to rely increasingly on computers to do the "dirty work", then one can imagine a future where amateur mathematics is
concentrated in certain fields -- the fields which don't require computation -- if high-powered computers become regulated. But in the end mathematics is pure thought, and they can't regulate what
goes on inside our brains.
(I studied chemistry as well as mathematics in college. I would never have laboratory equipment in my home. But this is because whenever I touched laboratory equipment it broke, which is why I got
out of chemistry. In the hands of a competent chemist, there's nothing to fear.)
I was curious: how will the electoral vote apportionment change between now and 2012? (Reapportionment is done after each census, and censuses take place in years divisible by 10; the apportionment
takes effect the year after the census. Thus the 2004 and 2008 presidential elections were done under one apportionment, and the 2012, 2016, and 2020 elections will be done under another one.)
I don't know (my first attempt at programming the apportionment gave some really strange-looking results) but I wanted to share an amusing fact.
Each of the 50 states receives a number of electoral votes equal to its number of Representatives, plus two. So the question is really one of determining the number of Representatives that each state
gets. The way this works is as follows. First, each state receives one seat. Then, let the populations of the states be P[1], P[2], ..., P[50]; let
Q[i,j] = P[i] / sqrt(j(j-1))
for 1 ≤ i ≤ 50 and all positive integers j. Sort these numbers, and take the 385 largest of these numbers. Now state i (the state with population P[i]) gets r = r[i] representatives, where r is the
unique integer such that Q[i,r] is one of the 385 largest Q's, and Q[i,r+1] is not. (385 is 435-50; 435 is the number of seats in the House of Representatives, and 50 seats were already assigned in
the previous step, one for each state.) Essentially, this assigns the seats in the House "in sequence", so we can speak of the 51st seat, 52nd seat, ..., 435th seat.
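Spelled out in code, the procedure above looks like this (a sketch of my own; the priority queue replaces the sort-all-the-Q's description, but hands out seats in exactly the same order):

```python
import heapq
import math

def apportion(populations, seats=435):
    """Huntington-Hill apportionment as described above: every state gets
    one seat, then each remaining seat goes to the state with the largest
    priority value Q[i,j] = P[i] / sqrt(j*(j-1)) for its next seat j."""
    n = len(populations)
    alloc = [1] * n
    # max-heap via negated priorities; start with each state's value
    # for its second seat, Q[i,2] = P[i] / sqrt(2)
    heap = [(-p / math.sqrt(2), i) for i, p in enumerate(populations)]
    heapq.heapify(heap)
    for _ in range(seats - n):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        j = alloc[i] + 1  # seat number this state would get next
        heapq.heappush(heap, (-populations[i] / math.sqrt(j * (j - 1)), i))
    return alloc
```

Exact ties pop in list order here, which is precisely the arbitrariness the next paragraph worries about.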
So what if there's a tie for 385th place in that ordering? This can occur, of course, if two states have the same population, and I bet some tiebreaker is written into the law. But what if two states
have different populations, but after dividing by the square root factor, two of the Q[i,j] are the same? Surprisingly, this can happen. Let P[1] = 6P[2]. Then it's not hard to see Q[1,9] = Q[2,2];
that is, state 1 gets its ninth seat "simultaneously with" state 2 getting its second seat. More generally, if
P[1] / sqrt(m(m-1)) = P[2] / sqrt(n(n-1))
then state 1 gets its mth seat simultaneously with state 2 getting its nth seat. Note that P[1]/P[2] is rational. So a tie can only occur when sqrt(m(m-1)/(n(n-1))) is rational; when does this happen? When n = 2, this amounts to asking when m(m-1)/2 is a square; this happens for m = 2, 9, 50, ... (the indices of the square-triangular numbers in the sequence of triangular numbers). So one state can receive its second seat at the same time another one gets its 9th seat, its 50th seat, ... if the larger state has 6, 35, ... times the population of the smaller one.
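For the curious, the tie-producing seat numbers are easy to enumerate (the function name is mine):

```python
import math

def tie_seats(limit):
    """Seat numbers m < limit for which m*(m-1)/2 is a perfect square r^2.
    A state with r times the population of another then earns its mth seat
    at exactly the same priority value as the smaller state earns its 2nd."""
    out = []
    for m in range(2, limit):
        t = m * (m - 1) // 2
        r = math.isqrt(t)
        if r * r == t:
            out.append((m, r))
    return out
```

Below 300 the only cases are m = 2, 9, 50, 289, with population ratios 1, 6, 35, 204 -- matching the 6 and 35 above (the m = 2 case just means two states with equal populations).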
Somehow I doubt the law covering apportionment has a provision for this. I suspect the provision taken would be similar to what happens if there's a tie in an election; I know there are some
jurisdictions that just flip a coin in that case.
Edit, 10:53 pm: Boris points out in the comments that somebody's done the projection. Texas gains 4, Florida and Arizona each gain 2; the Carolinas, Georgia, Utah, Nevada, and Oregon each gain 1. New
York and Ohio each lose 2; Massachusetts, New Jersey, Pennsylvania, Michigan, Illinois, Minnesota, Iowa, Missouri, Louisiana, and California each lose 1. At first glance this shift seems like it
would favor the Republicans in the presidential race; nine of the seats created are in states that voted for McCain in '08, and only two of the seats destroyed are. But I'm not sure about this
analysis; states are made of people, so as a state's population grows or shrinks its political makeup changes as well. Maybe Nate Silver will have something to say about this?
Matt Yglesias makes an interesting point. The "typical" American is white, in that more than half of all Americans are white. The "typical" American is Christian, in that more than half of all
Americans are Christian. But does this mean that the "typical" American is a white Christian, in that more than half of all Americans are white Christians? Not necessarily; I don't have the numbers.
Moreover, the "typical" white Christian votes Republican. Thus typical people vote Republican, so the Republicans should have won last night. But they didn't.
The point is that most people are "typical" in some ways, but few people are "typical" in all ways. And a party that is based around just people who are "typical" in all ways (note that I'm not
saying this describes the Republican party) is doomed to fail, because most people are unusual along some dimension. I don't think this deserves the name of "paradox", but it's just something worth
keeping in mind about How Statistics Work.
I heard that the featured articles at Wikipedia were the articles on McCain and Obama, and for the first time ever they had two featured articles at once.
So I went over to check.
But Wikipedia runs on GMT... and so it's Wednesday there. The featured article is Group (mathematics).
The "We" in the subject line is "mathematicians", of course.
Scott Aaronson points out that the probability of your vote changing the results of the election scales like N^(-1/2), where N is the number of people. But the amount of change your vote creates, if it tips the election, scales like N. So the expected amount of change you will cause, by voting, scales like N^(1/2). That's a big number, so you should vote. If you live in a big country, you should vote more, although that's irrelevant; if any other country is voting today, the US media has ensured that I don't know that.
Of course, N^(1/2) seems a bit high, and it comes from modeling people as flips of a fair coin; Aaronson points out that under a more realistic prior (due to Andrew Gelman), the expected probability that your vote flips the election is N^(-1), so the expected amount of change your vote causes doesn't depend on the size of the country.
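Under the fair-coin model, the N^(-1/2) figure is just the central binomial probability, which Stirling's formula puts at about sqrt(2/(pi N)). A quick sketch (my function name):

```python
import math

def pivotal_probability(n):
    """Chance your vote breaks an exact tie among n other fair-coin voters
    (n even): C(n, n/2) / 2^n, asymptotically sqrt(2 / (pi * n))."""
    return math.comb(n, n // 2) / 2 ** n
```

Even for n = 10,000 the exact value and the sqrt(2/(pi n)) approximation agree to several decimal places.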
I won't "officially endorse" anybody. But one of the candidates in the present election trained as a lawyer, taught in a law school for a while, and likes to compare himself to a certain Senator from
Illinois. That senator, in order to equip himself to understand the law better, studied Euclid. Being a mathematician, I think this is pretty cool. Who I'm voting for is left as an exercise for the reader.
The AMS puts together a roundup of math in the (mainstream) media.
I think I may be the last person to know this, but in case I'm not, here it is.
I just googled JSTOR in order to find it. (Of course, it's at jstor.org.)
Anyway, the Google result I get reads (with links omitted):
JSTOR: Home
Scans of print journals, with 10 major math journals (requires subscription).
www.jstor.org/ - Similar pages - Note this
Does Google know I'm interested in math, or does it say this to everybody?
As it turns out, I was in fact looking for an article from the Bulletin of the American Mathematical Society, so telling me there were math journals in there worked out well for them.
Political campaigns should not campaign in such a way as to maximize their expected number of votes. Rather, they should campaign in such a way as to maximize their probability of winning.
Early in a campaign, these are pretty much the same thing, because one doesn't know how things are going to play out; late in a campaign they diverge. The candidate that's behind in the polls should,
perhaps, start doing things that are, in expectation, Bad Ideas. If there were something that McCain could do that had a 1% chance of swinging 10% of the vote to him in the next 24 hours, and a 99%
chance of scaring all the voters away, he should do it. (For example, let's say McCain turns out to secretly be an extraterrestrial; we probably don't want to be ruled by extraterrestrials, but who
knows, we might change our mind? Of course, I'm being deliberately silly here.)
This is somewhat analogous to what happens in sports; strategies change late in a game. In the early innings of a baseball game you play, basically, in such a way as to maximize the difference
between the expectation of your number of runs and your opponents' number of runs. In the late innings, when you have a better idea of how many runs you need, you change your strategies. See, for
example, the bottom of the ninth inning of Game 3 of the World Series; tie game, bases loaded, nobody out. The Rays bring in one of their outfielders to create a fifth infielder; the idea is
essentially that they want to maximize the probability of the Phillies scoring zero runs, and if the ball gets into the outfield a run is scoring anyway. (As it turns out, the Phillies won that game
-- on a hit in the infield.) Of course, nobody saw that because it happened at two in the morning. Things are different when you're playing for one run.
Basically, what's happening as I write this is that Obama is running out the clock. (Yes, baseball doesn't have a clock. But Obama's a basketball player, so I think he'd like this metaphor.)
What politics and sports have in common, of course, is that there's a huge difference between second place and first place. If you're a company of some sort, does it matter if you sell 1% more or 1%
less than your competitor? Not really, although it might have meaning for your pride. But if you're a politician, 1% of the vote makes a big difference.
Nate Silver returns to the idea of conditional probability: he's saying Pennsylvania is "in play" in this election, not because the polling in Pennsylvania is close, but because conditioned on the
election being close, Pennsylvania is close. In general, I've heard quite a few people argue that the candidates should focus on the states that would be close if the election were roughly tied, not
the ones that actually are close, because the details of which state a candidate deliberately tries to sway things in only matter in a close election anyway.
Unfortunately, subtleties like this seem to be lost on some of Silver's commenters.
In case you're wondering, my last post titled "Conditional probability is subtle" had nothing to do with politics.
Via Keith Devlin: The Theorem. (To give more of a description would spoil the punchline; I'd recommend not reading Devlin's article until after you've seen the video.) This should be familiar to
anybody who's seriously thought about mathematics, although my life doesn't come with background music like the video does.
A citation reproduced verbatim from Benoit Mandelbrot's The Fractal Geometry of Nature
Knuth, D. 1968-. The art of computer programming. Reading, MA: Addison Wesley.
The "1968-" is not Knuth's birth date, of course, but the date at which he started writing the work in question, which was fifteen years in the making when Mandelbrot wrote. It's still not done.
Incidentally, Mandelbrot's book is a good one, full of pretty pictures (although of course not as intricate as one might see now, because the book is a quarter-century old); it's also fairly casually
written. Mandelbrot describes it as an "essay", in what I take to be the original Francophone sense of the word that means roughly an "attempt" at something, and so it's rather discursive, personal,
and nonrigorous; these idiosyncrasies are good for a book one wants to read casually, as I do right now, but someone who wanted a rigorous understanding of the concepts might look elsewhere. I'm
tempted to say it's a good series of lectures crystallized into paper form, although as far as I know it was never intended as such.
I think I'd heard of the book before, but it was Nova's Hunting the Hidden Dimension (aired on Tuesday night) that got me to actually head to the library and find it. I suspect I'm not alone, because
apparently it's selling quite well on Amazon.
(The usual disclaimer on books I'm reading that I borrowed from the library applies: ignore my recommendations if you're at Penn, because I have the library's copy.)
Donald Knuth is no longer giving reward checks for those who find errors in his books. Apparently people have been stealing his banking information and he's had to close some checking accounts. He's
maintaining a web page with the amount of rewards people would receive, which is basically as good, because nobody was cashing the checks anyway. And perhaps it's better, because now everybody in the
world can see that you found a bug in one of Knuth's books! If you just had a framed check on your wall, only the people you invited in could see it.
Really, one World Series win in twenty-eight years is pretty close to average; over the last twenty-eight years there have been between twenty-six and thirty teams in Major League Baseball, so the
average team should have won the World Series about once in that time span.
Still, this blog congratulates the Philadelphia Phillies on their winning the World Series in five and a third games. Not that any of them are reading it.
(If I cared about sports other than baseball, things would be different; Philadelphia has four major professional sports teams, and until last night none of them had won the championship of their
sport since the 1983 Sixers; that's about 100 sports seasons without a championship, which is noteworthy. At that time I was a small ball of undifferentiated cells. As was Cole Hamels.)
The margin of error is only the beginning of political polling: "If one or more of the above statements [about certain red and blue balls] are true, then the formula for margin of error simplifies to
Margin of Error = Who the hell knows?"
By combing through my logs I found The electoral college and second terms, which is related to my post on translating popular votes to electoral votes. (Roughly speaking, in a close election, the
electoral-vote margin is about five times the popular-vote margin.)
From Alan Schwarz in the New York Times:
You can hear it in the same broadcast booth. One announcer will say that Joe Hitter is in a slump, suggesting that he is somehow plagued and that his chances of success are lower than normal.
Then, as if on cue, the color man (in jovial agreement) will say that, yes, Mr. Hitter is due to break out— implying that his chances of success are higher than usual.
These exchanges and more are collected on the recently released three-disc set, “Bert and Ernie Teach Probability.”
Things have been slow around here; between the World Series, obsessing over the election, and Real Work, the blog's kind of getting left behind. But I thought you might appreciate this. According to
the model implied by What Announcers Say, the hitters who are most likely to do poorly in a given game are the ones who have been performing near their long-term average.
(Oh, and by the way, "Bert and Ernie Teach Probability" doesn't exist. I was up late last night -- the game didn't end until quarter of two in the morning -- so I'm not thinking quite straight. So I
checked. It should exist. Dear Internet, get working on that.)
|
{"url":"https://godplaysdice.blogspot.com/2008/","timestamp":"2024-11-10T22:08:33Z","content_type":"text/html","content_length":"358903","record_id":"<urn:uuid:8baf1856-36eb-4c0c-b9c7-d9b3c1703092>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00309.warc.gz"}
|
Total Product, Average Product and Marginal Product: Formulae, Examples
The compilation of these Production and Costs Notes makes students exam preparation simpler and organised.
Total Product, Average Product and Marginal Product
What is the production function in economics? Let us study the definitions of Total Product, Average Product and Marginal Product in simple economic terms along with the methods of calculation for
each. We will also look at the law of variable proportions and the relationship between Marginal product and Total Product.
Production Function
The function that explains the relationship between physical inputs and physical output (final output) is called the production function. We normally denote the production function in the form:
Q = f(X1, X2)
where Q represents the final output and X1 and X2 are inputs or factors of production.
Total Product
In simple terms, we can define Total Product as the total volume or amount of final output produced by a firm using given inputs in a given period of time.
Marginal Product
The additional output produced as a result of employing an additional unit of the variable factor input is called the Marginal Product. Thus, we can say that marginal product is the addition to Total
Product when an extra factor input is used.
Marginal Product = Change in Output/Change in Input
Thus, it can also be said that Total Product is the summation of Marginal products at different input levels.
Total Product = Ʃ Marginal Product
Average Product
It is defined as the output per unit of factor inputs or the average of the total product per unit of input and can be calculated by dividing the Total Product by the inputs (variable factors).
Average Product = Total Product/Units of Variable Factor Input
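The three formulas above can be checked numerically. Below is a short Python sketch using made-up production figures (the data and variable names are illustrative, not from these notes):

```python
# Hypothetical short-run data: Total Product at each level of the
# variable input (e.g. units of labour), starting from zero input.
total_product = [0, 10, 24, 39, 50, 56, 57, 54]

# Marginal Product: change in output per extra unit of input.
marginal_product = [total_product[i] - total_product[i - 1]
                    for i in range(1, len(total_product))]

# Average Product: output per unit of input (skip the zero-input row).
average_product = [tp / units
                   for units, tp in enumerate(total_product[1:], start=1)]

print(marginal_product)  # [10, 14, 15, 11, 6, 1, -3]
print(sum(marginal_product) == total_product[-1])  # True: TP = sum of MPs
```

Note how MP first rises, then declines, and finally turns negative exactly as the law of variable proportions describes, and how the Marginal Products sum back to the final Total Product.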
Relationship between Marginal Product and Total Product
The law of variable proportions is used to explain the relationship between Total Product and Marginal Product. It states that when only one variable factor input is allowed to increase and all other
inputs are kept constant, the following can be observed:
• When the Marginal Product (MP) increases, the Total Product is also increasing, at an increasing rate. This gives the Total Product curve a convex shape in the beginning as variable factor inputs increase. This continues to the point where the MP curve reaches its maximum.
• When the MP declines but remains positive, the Total Product is still increasing, but at a decreasing rate. This gives the Total Product curve a concave shape after the point of inflection. This continues until the Total Product curve reaches its maximum.
• When the MP becomes zero, the Total Product reaches its maximum.
• When the MP turns negative, the Total Product declines.
Relationship between Average Product and Marginal Product
There exists an interesting relationship between Average Product and Marginal Product. We can summarize it as under:
• When Average Product is rising, Marginal Product lies above Average Product.
• When the Average Product is declining, the Marginal Product lies below the Average Product.
• At the maximum of Average Product, Marginal and Average Product equal each other.
What are Returns to a Factor? What do you mean by the Law of Diminishing Returns?
Returns to a Factor are used to explain the behavior of physical output as only one factor is allowed to vary and all other factors are kept constant. This is a short-run concept.
The law of diminishing returns to a factor states that as the variable factor is allowed to vary (increase), keeping all other factors constant, the Marginal Product first increases, reaches its
maximum, and then declines and even becomes negative.
|
{"url":"https://www.learncram.com/notes/total-product-average-product-and-marginal-product/","timestamp":"2024-11-10T19:38:21Z","content_type":"text/html","content_length":"63000","record_id":"<urn:uuid:b8144344-f1b0-4e94-a18a-427ee3b6b930>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00419.warc.gz"}
|
6th Grade Math Problems Worksheets With Answers
6th Grade Math Problems Worksheets With Answers
Also detailed solutions and full explanations are included. 6th grade ratios and rates worksheets pdf with answers are giving to help kids with situation or word problems which include proportional
relationship between different values.
Addition subtraction time ratios and percentages probability geometry pythagorean theorem place values even and odd numbers prime numbers fractions algebra and algebraic expressions circle areas and
more time past on clocks telling time roman numeral clocks telling time am pm.
6th grade math problems worksheets with answers. 6th grade math worksheets on. 3 write interpret and use. Worksheets math grade 6.
1 connect ratio and rate to whole number multiplication and division and use concepts of ratio and rate to solve problems. By practising with our grade 6 proportion worksheets they will enhance their
math skills with one of the most used math concept in real life. 2 complete understanding of division of fractions and extending the notion of number to the system of rational numbers including
negative numbers.
This is a comprehensive collection of free printable math worksheets for sixth grade organized by topics such as multiplication division exponents place value algebraic thinking decimals measurement
units ratio percent prime factorization gcf lcm fractions integers and geometry. Our printable grade 6 math worksheets delve deeper into earlier grade math topics 4 operations fractions decimals
measurement geometry as well as introduce exponents proportions percents and integers. Grade 6 module 1 lesson 6 problem set answers displaying top 8 worksheets found for this concept.
Two numbers n and 16 have lcm 48 and gcf 8. Grade 6 maths word problems with answers are presented. Some of the worksheets for this concept are grade 6 module 1 student file a grade 5 module 2 grade
3 module 6 eureka lesson for 6th grade unit one 2 overview eureka math homework helper 20152016 grade 6 module 1 a story of ratios eureka lessons for 7th grade unit three ratios grade 3.
Choose your grade 6 topic. Count on our printable 6th grade math worksheets with answer keys for thorough practice. They are randomly generated, printable from your browser, and include the answer key.
In sixth grade students will focus on four areas. Common core and math in sixth grade. Some of these problems are challenging and need more time to solve.
These sixth grade math worksheets cover most of the core math topics previous grades including conversion worksheets measurement worksheets mean median and range worksheets number patterns exponents
and a variety of topics expressed as word problems. With strands drawn from vital math topics like ratio multiplication division fractions common factors and multiples rational numbers algebraic
expressions integers one step equations ordered pairs in the four quadrants and geometry skills like determining area surface area and volume. Math worksheets 6th grade with answer key new collection
Free grade 6 worksheets from k5 learning.
Challenging Word Problems 6th Grade Multi Step Common Core Aligned From Blburke On Teachersnotebook Com 7 Word Problems Common Core Aligned Math Writing
6th Grade Math Word Problems Worksheets With Answers In 2020 Word Problems Fraction Word Problems Math Word Problems
Practice Your Math Skills With These 7th Grade Worksheets Word Problem Worksheets Math Word Problems Math Words
Free Printable 6th Grade Math Worksheets Ratios In 2020 Word Problem Worksheets Word Problems Ratio And Proportion Worksheet
Realistic Math Problems Help 6th Graders Solve Real Life Questions Math Word Problems Word Problem Worksheets Math Words
6th Grade Math Worksheets Pdf 6th Grade Math Test Math Worksheets 4th Grade Math Worksheets Maths Exam
Pre Algebra Worksheets Equations Worksheets Math Word Problems Word Problems Word Problem Worksheets
Word Problems Percentages Of Number Problems 3c Gif 1 000 1 294 Pixels Word Problem Worksheets Word Problems Percentages Math
6 G A 2 Geometry Word Problems 6th Grade Common Core Math Worksheets Common Core Math Worksheets Geometry Words Common Core Math
Realistic Math Problems Help 6th Graders Solve Real Life Questions Brain Buddies Math Word Problems Word Problems Math Problem Solving
Realistic Math Problems Help 6th Graders Solve Real Life Questions Math Word Problems Word Problem Worksheets Math Words
Third Grade Division Common Core Wroksheets Google Search Math Word Problems Word Problem Worksheets Math Words
Sixth Grade Math Problems For You
What Are Some Good Math World Problems For 8th Graders Math Word Problems Word Problems Word Problem Worksheets
10 6th Grade Math Worksheet With Answer Key Algebra Worksheets Basic Algebra Worksheets 6th Grade Worksheets
Practice Your Math Skills With These 7th Grade Worksheets Math Word Problems Word Problems Math Words
Pre Algebra Number Problem Worksheets With Answers Pre Algebra Word Problem Worksheets Algebra Worksheets
|
{"url":"https://thekidsworksheet.com/6th-grade-math-problems-worksheets-with-answers/","timestamp":"2024-11-08T18:06:05Z","content_type":"text/html","content_length":"135790","record_id":"<urn:uuid:264f3ed5-e8d1-4b6b-b3ad-98506e6aec13>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00858.warc.gz"}
|
Control Group Results | Verification Method
Control Group Results
The 2.6 million false relationships are derived as 500 false partners × 5,200, where 5,200 = 2 persons × 2 studies × 1,300 relationships, as follows:
500 false partners were created for every subject in the study and given a random date of birth in the same year or two years before or after their actual partner’s birth date at the time of the real
relationship. The start date used was the start date of the actual relationship. For Elizabeth Taylor, this meant that for her first relationship in the study and for her then partner Richard Burton,
500 false partners were created for each of them with dates of birth in a five-year period centered around 1925 (for Liz) and 1932 (for Richard). The start year for all of these was 1962, the start
year of the Burton-Taylor romance. The same was done for all of the partners in all relationships.
This method creates a control of 2.6 million false relationships. The resulting totals were quite close to mathematically expected results. These results were used to calculate expected outcomes
throughout this paper.
Calculation of Expected Results:
In the following, we will create an expected value for any synastry aspect based purely on mathematics and astrometry.
Here is the principle of the calculation: (number of relationships / degrees in a circle) × (total number of degrees included in the orb) × (the number of times the aspect can happen in a circle). A factor of 2 appears in both of these calculations because there are two aspects to include, and for each pair we count both the direct and the converse aspect.
This is the calculation used for 1,300 relationships at an orb of 2°01′:
Where the planets involved are different (SO-VE, VE-MA, SO-MA)
• 1,300/360*8.068 for conjunctions/oppositions = 29.13
• 1,300/360*8.068*2 for other aspects = 58.26
Where the planets involved are the same (SO-SO, VE-VE, MA-MA)
• 1,300/360*4.034 for conjunctions/oppositions = 14.56
• 1,300/360*4.034*2 for other aspects = 29.13
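The arithmetic above can be packaged into a small helper. This is an illustrative Python sketch of the chance-expectation formula only (the function and variable names are my own, not from the paper), using the paper's 1,300-relationship figure and its 8.068°/4.034° orb spans:

```python
def expected_aspect_count(n_relationships, span_degrees, aspect_multiplier):
    """Expected chance hits: (N / 360) * degrees covered by the orb *
    number of times the aspect can occur in the circle."""
    return n_relationships / 360 * span_degrees * aspect_multiplier

n = 1300
# Different planets (e.g. SO-VE): total orb span 8.068 degrees
print(round(expected_aspect_count(n, 8.068, 1), 2))  # 29.13 (conj/opp)
print(round(expected_aspect_count(n, 8.068, 2), 2))  # 58.27 (other aspects)
# Same planet (e.g. SO-SO): the span halves to 4.034 degrees
print(round(expected_aspect_count(n, 4.034, 1), 2))  # 14.57 (conj/opp)
```

The small rounding differences against the figures quoted in the text (58.26 vs 58.27) come only from where the rounding is applied.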
Finally, the scaled control is arrived at as follows: (control group result/control group total) * sample group total[1]
Planetary movements seen from the Earth are far from uniform. But in the case of the comparison of two themes, this argument about non-uniformity no longer applies and the simple calculation of the
proportionality of the ratio between the sensitive areas as a function of the orbs and the whole zodiacal circle is quite close to the true probability as approximated by the control group.
[1] Where the control is the number of results in the aspect styles added together (either N-N N-P P-N and P-P for natal and progressed OR N-P P-N and P-P for progressed) and the sample group total
is the number of real relationships under scrutiny. For instance for trine aspects of SO-VE in 1300 relationships: N-P SO – VE control result = 115,804 /2,600,000 = .04454 * 1,300 = 57.902 (N-N SO-VE
= 58.3835, P-N SO–VE = 58.0405 P-P SO–VE =57.9875) added together equals 232.3135 rounds to an expected number of 232.
With thanks to Kyosti Tarvainen, Vincent Godbout, Robert Currey, and David Cochrane for advice, review and modelling.
|
{"url":"https://www.positiveastrology.com/control-group-results","timestamp":"2024-11-11T13:24:41Z","content_type":"text/html","content_length":"464700","record_id":"<urn:uuid:5542ee61-52be-460e-96ff-2db96d49e456>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00252.warc.gz"}
|
Understanding Mathematical Functions: What Are the Most Common Spreadsheet Functions
Mathematical functions are the building blocks of countless calculations and analyses that we perform in our daily lives, whether we realize it or not. From simple addition and subtraction to complex
statistical calculations, understanding spreadsheet functions is crucial for anyone working with data. In this blog post, we will explore the most common spreadsheet functions and their importance in
everyday tasks.
Key Takeaways
• Mathematical functions are fundamental to everyday calculations and analyses.
• Understanding spreadsheet functions is crucial for working with data.
• Basic arithmetic functions include addition, subtraction, multiplication, and division.
• Statistical functions such as average, median, mode, and standard deviation are important for data analysis.
• Mastering various spreadsheet functions is essential for efficient data manipulation and analysis.
Understanding Mathematical Functions: What are the most common spreadsheet functions
Basic arithmetic functions
When working with mathematical functions in spreadsheets, it is important to understand the most common basic arithmetic functions that are frequently used.
The addition function in spreadsheets is used to add two or more numbers together. This is a simple function that is often used in various calculations and data analysis.
The subtraction function is used to subtract one number from another. It is a fundamental arithmetic function that is useful for a wide range of spreadsheet tasks.
The multiplication function is used to multiply two or more numbers together. This function is essential for performing calculations and generating results in spreadsheets.
The division function is used to divide one number by another. This function is crucial for performing various types of calculations and analysis in spreadsheets.
Statistical functions
When it comes to mathematical functions in spreadsheets, statistical functions play a crucial role in analyzing and interpreting data. These functions allow users to perform various calculations that
are essential in understanding and making decisions based on data.
• Average
The average function, also known as the mean, is used to find the central value of a dataset. It is calculated by adding up all the values in a dataset and then dividing the sum by the number of
values. In spreadsheet formulas, the average function is denoted as =AVERAGE(range), where "range" represents the cells containing the values to be averaged.
• Median
The median function is used to find the middle value in a dataset when the values are arranged in ascending or descending order. If the dataset has an odd number of values, the median is the
middle value. If the dataset has an even number of values, the median is the average of the two middle values. In spreadsheet formulas, the median function is denoted as =MEDIAN(range).
• Mode
The mode function is used to find the most frequently occurring value in a dataset. In spreadsheet formulas, the mode function is denoted as =MODE(range).
• Standard deviation
The standard deviation function is used to measure the amount of variation or dispersion of a set of values. It indicates how much the values deviate from the mean. A low standard deviation
indicates that the values are close to the mean, while a high standard deviation indicates that the values are spread out over a wider range. In spreadsheet formulas, the standard deviation
function is denoted as =STDEV(range).
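For readers who want to check these results outside a spreadsheet, Python's standard `statistics` module offers direct counterparts; note that `statistics.stdev`, like =STDEV, is the sample standard deviation (the data below is invented for the example):

```python
import statistics

values = [4, 8, 6, 5, 3, 8]

avg = statistics.mean(values)    # =AVERAGE(range)
med = statistics.median(values)  # =MEDIAN(range)
mode = statistics.mode(values)   # =MODE(range)
sd = statistics.stdev(values)    # =STDEV(range): sample standard deviation

print(round(avg, 2), med, mode)  # 5.67 5.5 8
```

With an even number of values, the median here is the average of the two middle values (5 and 6), exactly as described above.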
Logical functions
When it comes to spreadsheet functions, logical functions play a crucial role in performing calculations based on certain conditions. These functions help in making decisions and performing actions
based on whether a specified condition evaluates to true or false.
• IF function
The IF function is one of the most commonly used logical functions in spreadsheets. It allows you to test a condition and return one value if the condition is true, and another value if the
condition is false. The syntax for the IF function is: =IF(logical_test, value_if_true, value_if_false).
• AND function
The AND function is used to check if all conditions specified are true. It returns TRUE if all conditions are met, and FALSE if any of the conditions is not met. The syntax for the AND function
is: =AND(logical1, [logical2], ...)
• NOT function
The NOT function is used to reverse the logical value of its argument. If the argument is TRUE, the function returns FALSE, and if the argument is FALSE, the function returns TRUE. The syntax for
the NOT function is: =NOT(logical)
Lookup and reference functions
When it comes to working with data in a spreadsheet, lookup and reference functions are essential for finding and retrieving specific information. Let's take a closer look at the most common
functions in this category.
• VLOOKUP
• HLOOKUP
• INDEX
• MATCH
The VLOOKUP function is one of the most widely used functions in spreadsheet applications. It allows you to search for a value in the first column of a table and retrieve a corresponding value from
another column. This is extremely useful for creating dynamic reports and analyzing data.
Similar to VLOOKUP, the HLOOKUP function is used to search for a value, but in this case, it looks across the rows of a table instead of down the columns. This can be handy when working with data
organized in a horizontal format.
The INDEX function is used to return the value of a cell in a table based on the row and column number. This function provides a lot of flexibility when working with data and can be combined with
other functions for more complex calculations.
The MATCH function is used to search for a specified item in a range of cells and return the relative position of that item. This is particularly useful for finding the position of a value within a
dataset, which can then be used in conjunction with other functions.
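As a rough illustration of what VLOOKUP and MATCH do under the hood, here is a Python sketch over a hypothetical three-column table (the function names, data, and error behavior are invented for the example; real spreadsheet engines handle approximate matches and errors differently):

```python
# A tiny table: each row is (key, price, stock), like a sheet range.
table = [
    ("apple",  1.20, 30),
    ("banana", 0.50, 12),
    ("cherry", 3.00,  5),
]

def vlookup(value, table, col_index):
    """Rough analogue of =VLOOKUP(value, table, col_index, FALSE):
    exact match on the first column, return the requested (1-based) column."""
    for row in table:
        if row[0] == value:
            return row[col_index - 1]
    raise KeyError(value)  # a spreadsheet would show #N/A

def match(value, rng):
    """Analogue of =MATCH(value, range, 0): 1-based position of exact match."""
    return rng.index(value) + 1

print(vlookup("banana", table, 2))             # 0.5
print(match("cherry", [r[0] for r in table]))  # 3
```

Combining the two mirrors the common INDEX/MATCH pattern: MATCH finds the row position, and an index into the table retrieves the value.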
Text functions
Text functions are essential in spreadsheet programs as they allow users to manipulate and analyze text data. The most common text functions include:
• CONCATENATE: This function is used to combine multiple strings into one. It is particularly useful when dealing with data from different columns or cells that need to be merged together.
• LEFT: The LEFT function extracts a specified number of characters from the left side of a text string. This can be helpful when working with data that follows a consistent format and you need to
extract specific information.
• RIGHT: Similar to the LEFT function, the RIGHT function extracts a specified number of characters from the right side of a text string. It is commonly used when dealing with data that has a
consistent format and requires extraction of information from the end of the string.
• LEN: The LEN function returns the length of a text string, including spaces and punctuation. This can be handy when analyzing the length of text entries or when setting up data validation rules.
These text functions are invaluable tools for manipulating and analyzing text data within spreadsheet programs. Whether you need to combine text from different cells, extract specific characters from
a string, or determine the length of a text entry, these functions offer a convenient way to manage text data effectively.
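Python string operations mirror these text functions closely; the sketch below shows rough equivalents (the example data is invented):

```python
first, last = "Ada", "Lovelace"

# =CONCATENATE(A1, " ", B1): join strings together
full = first + " " + last

# =LEFT(text, 3) and =RIGHT(text, 4): take characters from either end
left3 = full[:3]
right4 = full[-4:]

# =LEN(text): counts every character, spaces included
length = len(full)

print(full, left3, right4, length)  # Ada Lovelace Ada lace 12
```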
Understanding and mastering spreadsheet functions are essential for anyone working with data analysis and financial modeling. By familiarizing yourself with the most common functions, such as SUM,
AVERAGE, and VLOOKUP, you will be better equipped to handle complex calculations and data manipulation efficiently.
We encourage you to practice and explore different functions through spreadsheet software, as this will not only enhance your mathematical skills but also improve your overall productivity in
handling numerical data.
|
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-most-common-spreadsheet-functions","timestamp":"2024-11-11T17:13:25Z","content_type":"text/html","content_length":"209743","record_id":"<urn:uuid:e1a3e55d-5875-4c1a-870f-8b877327d655>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00351.warc.gz"}
|
Computer Science 136
Data Structures and Advanced Programming
Williams College
Fall 2005
Lecture 33: Dijkstra's Algorithm, Maps and Dictionaries
Date: December 2, 2005
• Congratulations to this year's Darwin contest winners:
1. Zachary for his creature "Dolly"
2. Rhaad for his creature "DumbBit"
3. Kyle for his creature "Brainless"
• Lab 10 questions
• Dijkstra's Algorithm
□ The outline from last time:
T is the empty graph;
PQ is an empty priority queue;
All vertices in V are marked unvisited;
Add s to T; mark s as visited in G;
Add each edge (s,v) of G to PQ with appropriate value
while (T.size() < G.size() and PQ not empty)
    repeat
        nextEdge = PQ.remove();
    until (one vertex of nextEdge is visited and the other is unvisited)
       or there are no more edges in PQ
    // assume nextEdge = (v,u) where v is visited (in T)
    // and u is unvisited (not in T)
    Add u to T; mark u as visited in G;
    Add (u,v) to T;
    for each unvisited neighbor w of u
        add (u,w) to PQ with appropriate weight
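One way the outline above might be realized in Python (a sketch, not the course's Java structures; the graph is a made-up three-vertex example). For shortest paths, "appropriate weight" for edge (u,w) means dist(u) + weight(u,w):

```python
import heapq

def dijkstra_tree(graph, s):
    """Grow a shortest-path tree T from s, always taking the cheapest
    edge leaving T off the priority queue.
    `graph` maps vertex -> {neighbor: edge_weight}.
    Returns (dist, parent) for the tree edges."""
    dist = {s: 0}          # visited vertices and their distances from s
    parent = {s: None}
    pq = []                # entries: (distance to u through T, v in T, u)
    for v, w in graph[s].items():
        heapq.heappush(pq, (w, s, v))
    while pq and len(dist) < len(graph):
        d, v, u = heapq.heappop(pq)
        if u in dist:      # skip edges whose far end is already visited
            continue
        dist[u] = d        # add u to T via tree edge (v, u)
        parent[u] = v
        for nbr, weight in graph[u].items():
            if nbr not in dist:
                heapq.heappush(pq, (d + weight, u, nbr))
    return dist, parent

g = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(dijkstra_tree(g, "A")[0])  # {'A': 0, 'B': 2, 'C': 3}
```

The `if u in dist: continue` line plays the role of the outline's inner "remove until one endpoint is visited and the other is not" loop.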
□ Application to Massachusetts distances example.
• Maps and Dictionaries
|
{"url":"https://courses.teresco.org/cs136_f05/lectures/lect33/","timestamp":"2024-11-07T13:24:17Z","content_type":"application/xhtml+xml","content_length":"2614","record_id":"<urn:uuid:4abbda43-1157-4527-b428-d7e8a8f49311>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00282.warc.gz"}
|
DragIT - Build the gearing ratio
Are you confident that you know the formulae for the gearing ratio? See if you can construct the ratios below. Drag the appropriate blue button from the right to the orange target area to build the formula. Try to get it right the first time rather than by trial and error. Click on 'Check answer' once you are confident the answer is right. If you want another go, simply click on 'Reset'.
Gearing ratio
|
{"url":"https://textbook.stpauls.br/Accounts_and_Finance_student/page_149.htm","timestamp":"2024-11-07T19:09:35Z","content_type":"text/html","content_length":"7868","record_id":"<urn:uuid:7e71b9de-a9f0-4f79-b578-cccd77ddd008>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00540.warc.gz"}
|
Problems in calculating conductivity using the KPM method
28 Dec 2021
12:45 p.m.
Dear all: I am a PHD student and i am using the kwant to calculate the conductivity of the graphene. Excuse me, I'd like to ask you a question that in the kwant documentation, you used the KPM
method to calculate the conductivity of graphene. Does the disorder strength take a value of 0 or approach 0? If the disorder strength is 0, why is the conductivity not diverging at points where the
energy is not zero? Thank you for your answer. Best wishes Hanlin Liu
Hi Hanlin, This is a finite system, so conductivity cannot become infinite, it can only scale with the system size. Moreover, the examples we have in the documentation show the system in the quantum
Hall regime, which has a finite conductivity even in the infinite system size limit. Best, Anton On Wed, 29 Dec 2021 at 14:52, <674657065@qq.com> wrote:
Dear all: I am a PHD student and i am using the kwant to calculate the conductivity of the graphene. Excuse me, I'd like to ask you a question that in the kwant documentation, you used the KPM
method to calculate the conductivity of graphene. Does the disorder strength take a value of 0 or approach 0? If the disorder strength is 0, why is the conductivity not diverging at points where
the energy is not zero? Thank you for your answer. Best wishes
Hanlin Liu
|
{"url":"https://mail.python.org/archives/list/kwant-discuss@python.org/thread/E5IMZVNF5DQMRBXZEPHBPNVRIPPUXMUY/","timestamp":"2024-11-02T08:25:10Z","content_type":"text/html","content_length":"22878","record_id":"<urn:uuid:930faa9e-c1f6-4b2c-957d-122e6b65d84a>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00663.warc.gz"}
|
What is the difference between circularity and cylindricity - Difference Digest
Circularity is a measure of how close a given curve is to being perfectly circular, while cylindricity is a measure of how close a given cylinder is to being perfectly round. In other words,
circularity measures the deviation from perfect roundness in two dimensions, while cylindricity measures the deviation from perfect roundness in three dimensions.
What is circularity?
Circularity is a measure of how close a curve is to being a perfect circle.
Circularity is commonly quantified with the isoperimetric ratio 4πA/P², where A is the enclosed area and P is the perimeter of the cross-section. This ratio is exactly 1 for a perfect circle and falls below 1 for any other shape, so an elongated ellipse or a lobed, multi-lobed outline scores progressively lower.
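As an illustration, the widely used isoperimetric circularity metric 4πA/P² (one common choice among several shape metrics, since the article does not pin down a specific formula) equals exactly 1 for a perfect circle and falls below 1 as a shape departs from roundness:

```python
import math

def circularity(area, perimeter):
    """Isoperimetric circularity 4*pi*A / P**2: exactly 1 for a perfect
    circle, below 1 for any other shape."""
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius 2 scores a perfect 1:
r = 2
print(round(circularity(math.pi * r**2, 2 * math.pi * r), 4))  # 1.0

# A 4x1 rectangle is far from circular:
print(round(circularity(4 * 1, 2 * (4 + 1)), 4))  # 0.5027
```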
What is cylindricity?
Cylindricity is a measure of how close a surface is to being a perfect cylinder.
Cylindricity, on the other hand, is quantified by the radial gap between two coaxial cylinders that just contain the actual surface: the tighter this tolerance zone, the more cylindrical the part. Taper (a frustum-like shape), barrelling, or bowing along the axis all widen the zone.
Cylindricity is an important quality control measurement for many types of cylindrical parts and components. It checks for how closely the surface of a cylindrical object deviates from being
perfectly round. Cylindricity measurements are typically made with a special type of measurement tool called a cylindrical gauge.
Circularity Vs. cylindricity – The key difference
Circularity constrains each individual cross-section to be round, while cylindricity constrains the entire surface at once. The key difference is dimensionality: a part can pass a circularity check at every cross-section and still fail cylindricity if, for example, it is tapered or bowed along its axis.
How to measure circularity and cylindricity
To gauge circularity with a caliper, measure the diameter at several orientations around the part; the spread between the largest and smallest readings indicates how far the section departs from round. Cylindricity additionally requires measurements at several heights along the axis, which is why dedicated cylindrical gauges or coordinate measuring machines are normally used.
|
{"url":"https://differencedigest.com/education/mathematics/what-is-the-difference-between-circularity-and-cylindricity/","timestamp":"2024-11-03T09:15:17Z","content_type":"text/html","content_length":"115823","record_id":"<urn:uuid:8c0a93d1-67a2-4dce-9bc6-c79e841eedf0>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00372.warc.gz"}
|
Recommender Systems Part 3: Candidate Generation - Collaborative Filtering (continued)
Binary Labels: Favorites, Likes, and Clicks
Many important applications of recommender systems or collaborative filtering algorithms involve binary labels, where instead of a user giving a 0-5 stars rating, they give you a like or don't like.
Let's take a look at how to generalize the algorithm we looked at in yesterday's post to this setting.
Here's an example of collaborative filtering data set with binary labels:
By predicting how likely Alice, Bob, Carol, and Dave are to like the items they have not yet rated, we can then decide how much we should recommend these items to them. There are many ways of
defining the labels 1, 0, or ?, some examples include:
• did user j purchase an item after being shown?
• did user j favorite or like an item?
• did user j spend at least 30 seconds with an item?
• did user j click on an item?
Meaning of ratings for our example:
• 1: engaged after being shown item
• 0: did not engage after being shown item
• ?: item not shown yet
Previously, we predicted y(i,j) with the linear prediction function f(x) = w(j) · x(i) + b(j).
For binary labels, we instead pass that value through the logistic (sigmoid) function: f(x) = g(w(j) · x(i) + b(j)), where g(z) = 1 / (1 + e^(-z)).
The loss for a single example is then the binary cross-entropy: L(f(x), y(i,j)) = -y(i,j) log(f(x)) - (1 - y(i,j)) log(1 - f(x)).
The cost function for all examples, J(w, b, x), sums this loss over every pair (i, j) for which a label exists, i.e. r(i,j) = 1.
Recommender Systems Implementation
Mean Normalization
When building a recommender system with numbers, such as movie ratings from 0-5 stars, your algorithm will run more efficiently and perform better if you first carry out mean normalization. That is,
if you normalize the movie ratings to have consistent average value.
Let's look at an example from the movie data we have used, and add a new user 5, Eve, who has not yet rated any movie. Mean normalization helps the algorithm make better predictions for Eve: without it, the algorithm would predict that a user who has not rated anything will rate every movie 0 stars, which wouldn't be helpful.
To describe mean normalization, take all of the values including all the '?' and put them in a 2-D matrix and compute the average rating that was given:
As we can see in the image above, we subtract the row mean from each original rating, and the result shown on the right side of the image becomes our new Y values.
With this, we can learn our parameters w, x, b as we previously did, but because we have subtracted the mean from each rating, we will have to add it back when making predictions.
The effect of this for the new user Eve is that her initial predictions equal the per-movie mean of all users' ratings.
In this example, we normalized by row, and it is also possible to normalize by column.
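Here is a minimal sketch of per-row mean normalization in plain Python (the tiny ratings matrix is made up; rows are movies, columns are users):

```python
# Sketch of per-row (per-movie) mean normalization: compute each row's mean
# over rated entries only, subtract it, and add it back after prediction.
def mean_normalize(Y, R):
    # Y: ratings matrix (items x users); R[i][j] = 1 if user j rated item i
    mu = []
    Ynorm = []
    for yi, ri in zip(Y, R):
        rated = [y for y, r in zip(yi, ri) if r == 1]
        m = sum(rated) / len(rated) if rated else 0.0
        mu.append(m)
        Ynorm.append([y - m if r == 1 else 0.0 for y, r in zip(yi, ri)])
    return Ynorm, mu

Y = [[5, 0, 0], [0, 4, 0]]  # third user (like Eve) has rated nothing
R = [[1, 1, 0], [1, 1, 0]]
Ynorm, mu = mean_normalize(Y, R)
# A brand-new user's initial prediction is then w.x + b + mu[i] = mu[i].
```

Because the learned prediction is made on the normalized ratings, adding `mu[i]` back means a user with no data defaults to each movie's average rather than 0 stars.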
TensorFlow implementation of Collaborative Filtering
We're going to use a very simplified cost function for this example:
For the gradient descent algorithm below, we fix b = 0:
import tensorflow as tf

w = tf.Variable(3.0)  # parameter we want to optimize
x = 1.0
y = 1.0               # target value
alpha = 0.01
iterations = 30
for iter in range(iterations):
    # use TensorFlow's gradient tape to record the steps used to compute
    # the cost J, to enable automatic differentiation (autodiff)
    with tf.GradientTape() as tape:
        fwb = w * x
        costJ = (fwb - y) ** 2
    # use the gradient tape to calculate the gradient of the cost
    # with respect to the parameter w
    [dJdw] = tape.gradient(costJ, [w])
    # run one step of gradient descent by updating the value of w to
    # reduce the cost
    w.assign_add(-alpha * dJdw)  # tf.Variable requires a special method to modify it
Once we have found our parameters with gradient descent, this is how we can implement collaborative filtering in TensorFlow:
def cofi_cost_func_v(X, W, b, Y, R, lambda_):
    # squared error over rated entries only, plus L2 regularization
    j = (tf.linalg.matmul(X, tf.transpose(W)) + b - Y) * R
    J = 0.5 * tf.reduce_sum(j**2) + (lambda_/2) * (tf.reduce_sum(X**2) + tf.reduce_sum(W**2))
    return J
# instantiate an optimizer (requires: from tensorflow import keras)
optimizer = keras.optimizers.Adam(learning_rate=1e-1)
iterations = 200
for iter in range(iterations):
    # use TensorFlow's gradient tape to record the operations used to
    # compute the cost
    with tf.GradientTape() as tape:
        # compute the cost (the forward pass is included in the cost)
        cost_value = cofi_cost_func_v(X, W, b, Ynorm, R, lambda_)
    # use the gradient tape to automatically retrieve the gradients
    # of the trainable variables with respect to the loss
    grads = tape.gradient(cost_value, [X, W, b])
    # run one step of gradient descent by updating the value of the
    # variables to minimize the loss
    optimizer.apply_gradients(zip(grads, [X, W, b]))
R is the binary indicator matrix marking which entries have a rating.
zip() is a Python built-in that pairs each gradient with its corresponding variable, the ordering apply_gradients expects.
Finding related items
The collaborative filtering algorithm we have seen gives us a great way to find related items. As part of collaborative filtering, we learned features x(i) for every item i
(movie i, or whatever other items you're recommending).
In practice, when you use this algorithm to learn the features x(i), the individual features are difficult to interpret, but collectively they nonetheless convey something about
what the movie is like.
So, let's say you would like to find other movies or items related to item i:
• find the item k with x(k) most similar to x(i)
• the measure of similarity is the squared distance between x(k) and x(i), which can be written as ||x(k) - x(i)||^2 = Σ_l (x_l^(k) - x_l^(i))^2
If you find not just the one movie with the smallest distance between x(k) and x(i), but the 5-10 items with the most similar feature vectors, then you end up with 5-10 items related to item i.
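A small sketch of this nearest-items search in plain Python (the feature vectors below are made up):

```python
# Sketch: given learned item features x(i) as plain lists, find the items
# most similar to item i by smallest squared distance ||x(k) - x(i)||^2.
def related_items(X, i, top_n=5):
    def sq_dist(a, b):
        return sum((ak - bk) ** 2 for ak, bk in zip(a, b))
    d2 = [(sq_dist(X[k], X[i]), k) for k in range(len(X)) if k != i]
    return [k for _, k in sorted(d2)[:top_n]]

# made-up 2-dimensional features for four items
X = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [1.2, 0.0]]
print(related_items(X, 0, top_n=2))  # [1, 3]
```

In a real system this scan would run over thousands of items, but the core operation is exactly this squared-distance comparison over the learned feature vectors.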
Limitations of collaborative filtering:
• cold start problem: how do we rank new items that few users have rated, and what do we show to new users who have rated only a few items?
• it doesn't give you a natural way to use additional information about items or users, such as genres, movie stars, demographics, etc.
Mean normalization, as described above, is also how feature scaling of the ratings is carried out.
|
{"url":"https://www.joankusuma.com/post/recommender-systems-part-3-candidate-generation-collaborative-filtering-continued","timestamp":"2024-11-08T01:45:17Z","content_type":"text/html","content_length":"1050481","record_id":"<urn:uuid:3950bfa2-6acf-416a-89cf-d24330d79294>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00070.warc.gz"}
|
Volume 15, Number 3
Physics in Higher Education
V. 15, N 3, 2009
The contents
3 Solution of the X International Conference “Physics in the System of Modern Education” (PSME-09).
10 The Lorentz Contraction in Special Relativity Theory as a Result of the Non-Euclidean Geometry of 3-Dimensional Space in Inertial Frames
A.V. Khrustaliov
16 Integration of Courses in an Effort to Fundamentalize Engineering Education
N.M. Yanina, N.V. Zapatrina
28 Observation of Domain Structure in Yttrium Iron Garnet Film and Study of Its Transformation in Magnetic Field
A.D. Gladun, F.F. Igoshin and Yu.M. Tsipenyuk
43 Application of Technical Training Aids in Lecture Course at Physics at the St. Petersburg State Maritime Technical University
R.H. Bekyashev, F.F. Legusha
48 Osmotic Pressure Measurement
A.A. Tevryukov, V.V. Ouskov, F.F. Igoshin
56 Lecture Demonstrations of Conservation Laws in Mechanics
S. N. Konev
63 Golden Mean in Optics
N.V. Grushina, P.V. Korolenko, A.M. Zotov
73 Use of the High-Frequency Transformer in Demonstration Experiment
A.A. Khodyrev, K.I. Kornisik, E.E. Fiskind
80 The Effect of Electrostatic Sticking of Like-Charged Bodies
V.A. Saranin
86 Methodical Aspects of Information Technology Using in the Laboratories of Physical Practice Work
E.L. Kazakova, A.I. Nazarov
95 Content and Training System of the Physical Workshop for the Students Being Educated in Biology
E. Petrova
102 Method of Modeling in the System of Training the Masters of Education
M.Yu. Korolev
110 Synthesis of Methodological and Applied Knowledge in the Physics Course at School: Results of Pedagogical Experiment
A.A. Chervova, Yuri B. Altshuler
120 Abstracts
125 Information
Solution of the X International Conference “Physics in the System of Modern Education” (PSME-09).
A summary record of the conference, held from May 31 to June 4, 2009 in St. Petersburg at the Russian State Pedagogical University named after A.I. Herzen (RSPU).
The Lorentz Contraction in Special Relativity Theory as a Result of the Non-Euclidean Geometry of 3-Dimensional Space in Inertial Frames
A.V. Khrustaliov
Moscow Institute of the International Economic Relations
Keywords: Special relativity theory (SRT); world outlook; physical picture of the world; space and time; relativity effects; relativity of spatial interval length; non-Euclidean geometry.
Methods of teaching one of the key effects of Special Relativity Theory (SRT), the relativity of spatial interval length (the Lorentz contraction), are considered. An improvement by means of
revealing the non-Euclidean geometry of 3-dimensional space in inertial frames is suggested.
Integration of Courses in an Effort to Fundamentalize Engineering Education
N.M. Yanina, N.V. Zapatrina
Cherepovets Military Engineering School of Radioelectronics
162622, Cherepovets, Sovetskiy pr., 126, Physics Department, Mathematics Department
E-mail: yaninM@metacom.ru
Keywords: integration, fundamental disciplines, numerical methods, application mathematical packages.
This article presents the experience of resolving some educational-program issues by integrating courses in mathematics, physics, and informatics. In mathematics lessons,
a differential equation that cannot be solved by analytical methods is considered. To motivate its solution, we offer a physical problem whose law of motion corresponds exactly to
this type of equation. The problem is then solved by numerical methods with the use of computers. Applied mathematical packages can likewise help in an engineer's professional activity.
Observation of Domain Structure in Yttrium Iron Garnet Film and Study of Its Transformation in Magnetic Field
A.D. Gladun, F.F. Igoshin and Yu.M. Tsipenyuk
Moscow Institute of Physics and Technology
Keywords: ferromagnetic films, Faraday Effect, domain structure, CMD.
The laboratory work on observation and study of domain structure in ferrimagnetic yttrium iron garnet film by means of Faraday Effect is described. Behavior of magnetic structure in an external
magnetic field is studied: transition from labyrinth structures to a one-domain state and formation of cylindrical magnetic domains (CMD). The dependence of the size of CMD on intensity of an
external magnetic field, a field of elliptic instability of CMD is measured. The thickness of the investigated film is estimated from the obtained results.
Application of Technical Training Aids in Lecture Course at Physics at the St. Petersburg State Maritime Technical University
R.H. Bekyashev, F.F. Legusha
St. Petersburg State Maritime Technical University
St. Petersburg 190008 Lotsmanskaya 3
E-mail: alexilaiho@mail.ru
Key words: lecture demonstrations, technical training aids (TTA), television facilities, recommendations on using demonstrations.
Common requirements on lecture demonstrations for the lecture course at general physics are given. The paper contains the description of physics lecture-room with the set of technical training aids
(TTA), including the television facilities. Recommendations on using demonstrations of the Department of Physics of St. Petersburg State Maritime Technical University are introduced here. The paper
may be useful for physics high-school teachers and lecturers at relative subjects.
Osmotic Pressure Measurement
A.A. Tevryukov, V.V. Ouskov, F.F. Igoshin
Moscow Institute of Physics and Technology
E-mail: ouskov@nm.ru
Key words: osmosis, osmotic pressure, laboratory setup, osmometer.
A laboratory experiment on osmotic pressure measurement, the experimental procedure, and some results are described.
Lecture Demonstrations of Conservation Laws in Mechanics
S.N. Konev
Ekaterinburg, the Russian State Professional – pedagogical University
E-mail: koneff_s@mail.ru
Key words: conservation laws; computer demonstrations; animation; concepts of modern natural science
Computer demonstrations of conservation laws in mechanics, for use in lectures of the course “Concepts of Modern Natural Science”, are considered.
Animation effects and diagrams give the demonstrations maximal clarity.
Golden Mean in Optics
N.V. Grushina, P.V. Korolenko, A.M. Zotov
Faculty of Physics M.V. Lomonosov Moscow State University
Lenin Hills, 1, str.21, 19992 Moscow, Russia
E-mail: pvkorolenko@rambler.ru
Keywords: physics, golden mean, Fibonacci optical systems.
Arguments in favor of including in high-school physics courses some questions connected with the phenomenon of the golden mean are given. A method for studying the characteristics of
optical devices constructed using the Fibonacci principle is considered. It is shown that mastering the heuristic properties of the golden mean raises students' level of preparation for
future research activity.
Use of the High-Frequency Transformer in Demonstration Experiment
A.A. Khodyrev, K.I. Kornisik, E.E. Fiskind
Nizhniy Tagil Technological Institute, the department of the Urals State Technical University-UPI
Nizhniy Tagil State Social-Pedagogical Academy
Keywords: Tesla Coil; high-frequency electromagnetic field; inductive coil; inductive action; no electrode tube; glow of vacuum gases; resonance; displacement current; physiological effect of
high-frequency current.
This article considers the construction and operating principle of the high-frequency Tesla resonance transformer. Demonstration experiments using it are described.
The Effect of Electrostatic Sticking of Like-Charged Bodies
V.A. Saranin
Glazov State Pedagogical Institute
E-mail: saranin@ggpi.org
Keywords: electrostatic sticking, electrical images, charged conducting sphere and disk, force and energy electrostatic interaction.
Using the example of a disk and a ball, the effect of electrostatic sticking of like-charged bodies is described. An experiment is carried out in which a conducting ball suspended on a string
and a metal disk are brought into contact while connected to a high-voltage source. As the voltage is smoothly increased, the ball continues to hang on the string, touching the disk; only when a
certain voltage of about fifteen kilovolts is reached does it separate from the disk. A theory taking into account the electrostatic image of the ball in the disk shows that stable
equilibrium of the ball near the disk is indeed possible only when the voltage exceeds a critical value of more than ten kilovolts.
Methodical Aspects of Information Technology Using in the Laboratories of Physical Practice Work
E.L. Kazakova, A.I. Nazarov
Petrozavodsk State University
Lenin St. 33, 185910, Petrozavodsk, Russia
E-mail: ekazakova@petrsu.ru , anazarov@petrsu.ru
The possibility of using IT in the laboratories of physical practical work is discussed. Sample lab works with different goals and variants of PC use are considered. In particular,
works using equipment from the Phywe firm with a PC as a recorder are described.
Content and Training System of the Physical Workshop for the Students Being Educated in Biology
E.B. Petrova
Moscow Pedagogical State University
Keywords: natural-science education; physical practical work; invariant kernel; variant part.
The physical workshop for the students in a biology department has been described. Novel ideas in the content and training system of the labs have been offered. Distinguishing feature of the labs in
the physical workshop is their two-part system; one part is invariable and another part is variable. The latter corresponds to the specifics of professional training. The key element of the lab
workshop is the use of multi-media manuals in the process of preparations for the labs.
Method of Modeling in the System of Training the Masters of Education
M.Yu. Korolev
Moscow Pedagogical State University
Keywords: method of modeling, mathematical models, masters of education, innovative educational technologies.
The master program «Contemporary natural-science education» for direction 540200 «Physics-mathematical education» is presented in the article. The main attention is paid to the structure and to the
content of the discipline «Modeling in the natural science». The method of modeling and its application in training the masters of education are considered.
Synthesis of Methodological and Applied Knowledge in the Physics Course at School: Results of Pedagogical Experiment
A.A. Chervova, Yuri B. Altshuler
Shuisky State Pedagogical University
Keywords: synthesis of methodological and applied knowledge, updating of a school physics, school electrodynamics, methodological and applied knowledge, intellectual development of pupils,
development competence qualities of pupils.
Results of a pedagogical experiment are presented, investigating how the content of a physics course and teaching methods based on the synthesis of methodological and applied knowledge
affect learning efficiency and pupils' development. From the results, the conclusion is drawn that updating school physics education on the basis of this synthesis promotes the formation of
pupils' knowledge and the development of their intellectual abilities and competences.
|
{"url":"https://pinhe.moomfo.ru/tom15n3.en.htm","timestamp":"2024-11-02T08:17:17Z","content_type":"text/html","content_length":"16432","record_id":"<urn:uuid:e1a3a2be-8d4f-4c60-a622-b33e4b2d009d>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00334.warc.gz"}
|
Let θ be an angle s… - QuestionCove
OpenStudy (anonymous):
Let θ be an angle such that cosθ=23 and sinθ<0. Find tanθ
OpenStudy (anonymous):
Cosine cannot be larger than 1. Its range is between -1 and 1. Arc-cosine of 23 = NaN
OpenStudy (anonymous):
I meant cosθ=2/3. Sorry. :(
OpenStudy (anonymous):
If I'm not mistaken, theta is 48.189... and therefore tan(theta) is 1.11...
OpenStudy (mathsolver):
If you draw the cos curve (as I have, very badly) you will see that arccos(2/3) has two solutions, one at 48.19° and one at 311.81°. Since sin 48.19° is above zero, our theta must be
311.81°, so tan 311.81° ≈ -1.118, and that is our answer :)
OpenStudy (anonymous):
sub your value of cos into \[\cos^2 \theta + \sin^2 \theta = 1\] to get \[\sin^2 \theta = \frac{5}{9}\] as sin(theta) < 0 we know now that \[\sin(\theta) = \frac{\sqrt{5}}{3}\] now tan = sin/cos
gives \[\tan(\theta) = \frac{\sqrt{5}}{2}\]
OpenStudy (anonymous):
sorry that's meant to be \[\sin(\theta) = -\sqrt5/3\] and \[\tan \theta = - \sqrt5/2\]
OpenStudy (raden):
if θ is in the 4th quadrant, then tan must be negative
OpenStudy (anonymous):
Thank you for the help everyone! Yeah I had \[\sqrt{5}/2\] but I had forgot the negative.
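A quick numeric check of the final answer (an added sketch, not part of the original thread):

```python
import math

# cos(theta) = 2/3 with sin(theta) < 0 puts theta in quadrant IV,
# so we take the negative square root for sin.
cos_t = 2 / 3
sin_t = -math.sqrt(1 - cos_t ** 2)  # sin^2 = 1 - cos^2 = 5/9
tan_t = sin_t / cos_t
print(tan_t)  # approximately -1.118, i.e. -sqrt(5)/2
```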
|
{"url":"https://questioncove.com/updates/513f56f0e4b0b5e82bb1c1d7","timestamp":"2024-11-07T03:18:07Z","content_type":"text/html","content_length":"24264","record_id":"<urn:uuid:68d2eb89-6607-469b-9604-f72e7692ea11>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00077.warc.gz"}
|
Probabilistic Inference is Not Synergistic
Bayesian inference tools measure belief in uncertain events. The basic idea is to pit prior information (a probability estimate) against new information (posterior) and condition both cyclically. A
governing concept called meta-probability (the probability that these estimates are good) helps us calibrate our confidence in using these priors.
Nassim Nicholas Taleb shows people overestimate their confidence in their probabilistic estimates. In Antifragile, part of the Incerto series, Taleb terms entities that improve after bad events
as antifragile. Processes like risk-taking, innovation, Lindy effect, information security, and project handling are antifragile. Human bodies are antifragile. All benefit from damage. What about
probabilistic inference methods? Can we combine the predictive power of estimators to give an estimator at least as good as the best? The answer is negative.
Probabilistic inference is a fragile system. Fragility means probabilistic estimators cannot always be combined to construct an estimator that matches the performance of their best candidate. This
construction dismisses the trivial case of one-to-one mapping of the best candidate to the constructed estimator.
In a Bayesian scenario, we use Bayesian analysis for simplicity and universality. A Bayesian estimator, B, can't match the best candidate from a set of estimators, E. B's prior is centered around E,
a subset of the universal set of all measures and priors. We construct B by giving exponential weight penalties for performance deviations, known as regret-based construction. The goal is to see if B
can have zero regret.
Without loss of generality, we can assign a loss function (like KL-divergence) to the performance penalty (how close B is to the true distribution estimator of E) in a measure-theoretic space. The
countably infinite set E requires B to assign exponentially decreasing weights to each element. However, B's elements already have a measure space assigning exponentially decreasing weight to the
event space. Hence the likelihood of events of interest (based on E) has little room for any penalty. To combine E elements, we must bolster the likelihood instead of penalizing it. This is a classic
case of overfitting B to E. This would work for E, but break it for U – E.
This result applies to all estimation principles like AIC, MDL, and BIC. Combining probabilistic models complicates the analysis without improving predictive power.
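The regret-based construction described above resembles the classic exponential-weights (Hedge) aggregation rule; the following minimal sketch, added here for illustration (not from the original post), shows the combined predictor's regret against the best candidate staying bounded but strictly positive:

```python
import math

# Exponential-weights (Hedge) aggregation over a finite set of candidate
# predictors: each candidate's weight decays exponentially with its
# cumulative loss, and the combined loss each round is the weighted
# average of the candidates' losses.
def hedge_total_loss(losses_per_round, eta=0.5):
    n = len(losses_per_round[0])
    log_w = [0.0] * n  # log-weights start uniform
    total = 0.0
    for losses in losses_per_round:
        w = [math.exp(lw) for lw in log_w]
        z = sum(w)
        # combined loss this round: weight-averaged candidate losses
        total += sum(wi * li for wi, li in zip(w, losses)) / z
        # exponential penalty for each candidate's loss
        log_w = [lw - eta * li for lw, li in zip(log_w, losses)]
    return total

# Two candidates: one consistently good (loss 0.1), one bad (loss 0.9).
rounds = [[0.1, 0.9]] * 50
combined = hedge_total_loss(rounds)
best = 50 * 0.1
# combined exceeds best: the regret is bounded, but never exactly zero
```

The combination tracks the best candidate quickly, but the early rounds spent "paying" for the bad candidate are exactly the nonzero regret the post argues cannot be eliminated.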
|
{"url":"https://densebit.com/posts/2.html","timestamp":"2024-11-12T07:01:07Z","content_type":"text/html","content_length":"5014","record_id":"<urn:uuid:23524d10-964f-4804-b9c1-30bf2a9a7372>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00617.warc.gz"}
|
Finding the Area Between Curves Expressed as Functions of x | AP Calculus AB/BC Class Notes | Fiveable
This is one of the most important topics of Unit 8. In this topic, we will discuss how to find the area between two curves expressed as functions of x. This topic will set you up to understand more
complex topics moving forward. To understand how to find the area, take a look at this simple example:
Let's say we want to find the area between the curve y = x and y = x^2 from x = 2 to x = 4. In order to find the area, you can imagine we are slicing the region vertically, into a bunch of
infinitely thin slices. The area would be the sum of all the slices. To add all the slices, you can use a definite integral. Integrate the function (x^2 - x) from 2 to 4.
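Evaluating this first integral explicitly (a worked check added here, not in the original notes):

```latex
\int_2^4 (x^2 - x)\,dx
= \left[\tfrac{x^3}{3} - \tfrac{x^2}{2}\right]_2^4
= \left(\tfrac{64}{3} - 8\right) - \left(\tfrac{8}{3} - 2\right)
= \tfrac{38}{3} \approx 12.67
```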
Here's the basic formula for the concept: if f(x) ≥ g(x) on [a, b], then the area between the curves is the definite integral from a to b of [f(x) - g(x)] dx, where f is the upper curve and g is the lower curve.
Also, it is important to mention that you can only use this specific method when your functions are expressed in terms of x. In the example above, our functions were x and x^2. If the functions were
y and y^2, we would have to use a slightly different approach. To learn more, look at Topic 8.5: Finding the Area Between Curves Expressed as Functions of y.
If you're still confused, try out this example and see how you do.
Question: Find the area between the functions y = x^2 - 2 and y = -x^2.
Answer: 2.667
Solution: First, set the functions equal to each other to find the intersection points. Solving x^2 - 2 = -x^2 for x gives x = -1 and x = 1. Next, graph the functions
to figure out which one is on top (here -x^2 is the upper curve on [-1, 1]), then integrate the top minus the bottom from -1 to 1.
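As a numeric check of this answer, here is an added sketch using Simpson's rule (not part of the original notes); the exact value is 8/3 ≈ 2.667:

```python
# Numeric check of the worked example: area between y = x^2 - 2 and y = -x^2
# over [-1, 1], integrating (top - bottom) with composite Simpson's rule.
def simpson(f, a, b, n=1000):  # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

top = lambda x: -x ** 2
bottom = lambda x: x ** 2 - 2
area = simpson(lambda x: top(x) - bottom(x), -1.0, 1.0)
print(round(area, 3))  # 2.667
```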
|
{"url":"https://hours-zltil9zhf-thinkfiveable.vercel.app/ap-calc/unit-8/finding-area-between-curves-expressed-as-functions-x/study-guide/Zyj7XJuPfoWBuAJ96ZAG","timestamp":"2024-11-04T11:54:43Z","content_type":"text/html","content_length":"230818","record_id":"<urn:uuid:660c9aa8-8ce6-4b2c-91ef-7b6dd4674e82>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00745.warc.gz"}
|
ft46m.site Mortgage Refinance Breakeven Calculator
A mortgage refinance break-even calculator takes your current loan details (original loan amount, original interest rate, appraised value, term in years, years remaining, income) and the terms of
the proposed loan, and tells you how long it would take to break even on the refinance.
The most common measure is the break-even point: the number of months it takes for the accumulated savings from the new loan to offset its closing costs. If your closing costs will be $4,, for
instance, divide them by your monthly savings to get the number of months to break even. To find your total savings, add the monthly payments of any debts you're paying off to the mortgage
payment savings.
Most calculators also let you choose whether to include the closing costs in the new loan amount or pay them out of pocket, and some support options such as cash out and mortgage points. Many
lenders and sites (nbkc, Carter Bank, CalcXML, NerdWallet, among others) offer free versions; enter the specifics of your current mortgage along with the new loan's amortization, rate, and
closing costs, and the calculator reports your break-even date and the interest you could save. Note that these calculators are provided for educational purposes only.
|
{"url":"https://ft46m.site/overview/mortgage-refinance-breakeven-calculator.php","timestamp":"2024-11-10T14:35:15Z","content_type":"text/html","content_length":"11117","record_id":"<urn:uuid:54f04df7-fd2b-470a-8625-847484cc5378>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00508.warc.gz"}
|
Biography of Joseph Fourier
Joseph Fourier, a mathematician and physicist of the highest caliber, revolutionized the scientific world with his research.
Childhood and early years
Joseph Fourier, born in Auxerre, France, was a descendant of a modest family. Orphaned at an early age, Joseph receives primary education at the Cathedral School, which is run by a church music
teacher. After this, Fourier continues his studies at the Royal Military School of Auxerre. The boy exhibits remarkable literary abilities, but by the age of 15 these are outshone by his penchant
for mathematics, which he enjoys with all his heart. By fourteen, Joseph finishes studying Bézout's “Course of Mathematics”, and the next year receives his first award, for an essay on
Bossut's book “Fundamentals of Mechanics.” In 1787, Fourier becomes a novice in the Benedictine Abbey of St. Benoît-sur-Loire, intending later to take monastic vows. However, he suddenly changes
his plans, sending
his scientific notes on algebra to Paris, Jean Monette, and even declaring in a letter addressed to Bonar his desire to make a significant contribution to the development of mathematics. Such actions
reveal Fourier’s doubts as to whether he really wants to depart from worldly life. In 1789, Fourier went to Paris, where he presented his article on the subject of algebraic equations at the Royal
Academy of Sciences.
The following year, Fourier holds the position of junior instructor at the Benedictine College – the Royal Military School in Auxerre, where he studied.
To the dilemma, whether to devote one’s life to the service of God or to seriously engage in mathematics, politics is added, when Fourier joins the ranks of the local Revolutionary Committee.
Returning to his native Auxerre, Joseph teaches in college and works on the committee. In 1794 he was arrested, but soon released. A year later, he is sent to study in the Higher Normal School of
Paris – an educational institution that prepares teachers – where, of course, he is the most capable among students. Joseph learns from
the best teachers of his time – Lagrange, Laplace and Monge. Later, Fourier himself becomes a teacher at the College de France. With his teachers, he maintains good relations and, with their help,
begins his journey to great mathematical achievements. Fourier quickly moves up the career ladder, receiving a teaching position at the Central School of Public Works, which will later be renamed the
Polytechnic School. However, in his old criminal case, new circumstances are opened, as a result of which Fourier is arrested again and imprisoned. It will not last long, and very soon he will again
be free.
Late period
On September 1, 1795, Fourier again begins to teach at the Polytechnic School. Two years later, in 1797, he replaces Lagrange as head of the Department of Analysis and Mechanics. Although
Fourier had established himself as an outstanding lecturer, it is only now that he undertakes serious research work. In 1798, during the invasion of Egypt, Fourier serves as scientific adviser to the army of
Napoleon. Although at first this military campaign was extremely successful, on August 1 the French fleet suffers a complete defeat. Napoleon is stranded in the country he has captured. With the help of
Fourier, he establishes here a typical French political structure and administration. Fourier is also engaged in the opening of educational institutions in Egypt and the organization of
archaeological excavations. In Cairo, the scientist not only helps to found the Cairo Institute, but also becomes one of the twelve members of his mathematical department, along with Monge, Malius
and Napoleon himself. In view of the weakening English influence in the East, he even writes a number of mathematical articles. Later, Fourier becomes the scientific secretary of the Institute, and
will remain in this post all the time of the French occupation of Egypt. In his charge are also all scientific achievements and literary works.
In 1801, Fourier returned to Paris and holds his former post of head of the Department of Analysis at the Polytechnic School. However, Napoleon had his own plans for him. Fourier goes to Grenoble,
where he is appointed prefect of the department of Ysera. The scientist is engaged in a number of projects, including the supervision of the operation to drain the bogs of Burguin and the control of
the construction of a new road from Grenoble to Turin. It is here that Fourier begins his experiments on the propagation of heat. On December 21, 1807, at the Paris Institute, he will present his
article “Thermal Conductivity of Solids” to the scientific public, which will be included in the monumental French edition “Description of Egypt”. In the same year, he goes to England, returning six years later.
Fourier’s Proceedings
In 1822, Fourier presented his work on heat flow, "Théorie analytique de la chaleur." Based on Newton's law of cooling, Fourier concluded that the heat flux between two adjacent molecules is directly proportional to the extremely small difference in their temperatures. The work had three aspects: one mathematical and two physical. From the mathematical point of view, Fourier claimed that any function of a variable, whether continuous or discontinuous, can be expanded into a series of sines of multiples of the variable. Although this assertion is incorrect in general, the idea that some manifestly discontinuous functions can be represented by formulas involving infinite series became a discovery of great importance. Among the physical conclusions of the paper was the principle of dimensional homogeneity, according to which an equation can be formally correct only if the dimensions on both sides coincide. Another significant contribution of Fourier to the development of physics was his proposal of a partial differential equation for heat conduction. To this day, every student of mathematical physics knows this equation.
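In modern notation, the sine-series expansion and the one-dimensional heat-conduction equation described above are usually written as:

```latex
f(x) = \sum_{n=1}^{\infty} b_n \sin(n x),
\qquad
\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}
```

where u(x, t) is the temperature and α the thermal diffusivity; the symbols b_n, u and α are the conventional modern choices, not Fourier's original notation.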
To all of the above we can add Fourier's unfinished work on equations containing determinants, which Claude-Louis Navier completed and published in 1831. This paper presents Fourier's theorem for determining the number of real roots of an algebraic equation. Beyond his mathematical discoveries, Fourier was the first to propose the theory of the greenhouse effect. Having made the necessary calculations, he deduced that if the Earth were heated only by solar radiation, then, taking into account its size and its distance from the Sun, our planet should be much colder. From this the scientist concluded that the planet receives a significant portion of additional heat from interstellar radiation. His idea that the Earth's atmosphere acts as an insulating layer was the first theory in history of the phenomenon now known to us as the greenhouse effect. Referring to an experiment carried out by Horace-Bénédict de Saussure, Fourier suggested that gases in the atmosphere can create a reliable barrier, like the glass panes of a greenhouse, which laid the foundations of the modern theory of the greenhouse effect.
Death and heritage
In 1830, Fourier’s health deteriorated sharply. The first symptoms of an aneurysm of the heart are evident during his stay in Egypt and Grenoble, but with the return to Paris, the attacks of
suffocation are becoming increasingly difficult. All this complicates the fall of Fourier from the stairs, which happened on May 4, 1830. A few days later, on May 16, 1830, Fourier died. The
scientist was buried at the Pere Lachaise cemetery in Paris. His grave is decorated in the Egyptian style as a sign that he was the secretary of the Cairo Institute, and also as a reminder of his
contribution to the publication “Description of Egypt.” Fourier’s name is on the list of 72 two names of outstanding people of France, immortalized on the first floor of the Eiffel Tower.
Ceil Function in Python
Last Updated on May 16, 2023 by Prepbytes
Ceil is a function defined in Python's math module which takes a numeric input and returns the smallest integer greater than or equal to the input number. It is equivalent to the least integer function in mathematics.
What is Ceil Function in Python?
The Python math module, which is a part of the Python Standard Library, offers the ceil function. It is equivalent to the mathematical Least Integer Function or Ceil Function.
If you’re building a program to calculate the interest on a loan and the bank mandates that no such monetary quantities contain fractional amounts of rupees (like paisa) in order to facilitate
transactions, then what happens? You choose to round the resultant number off to the next nearest integer keeping in mind that the bank shouldn’t lose money.
So, what method would you use? utilizing Python’s ceil() function
In mathematical notation, if you’re given a real number x, ceil(x) is depicted using ⌈x⌉, where the upper direction of the brackets refers to ceiling operation (as the ceiling lies above your head).
Inversely, the floor(x) (which returns the greatest integer ≤ x) is depicted using ⌊x⌋, where the downward symbol depicts the flooring operation.
Defining it in a piecewise manner, ceil(x) =
x, if x ∈ Z
⌊x⌋ + 1, if x ∉ Z
What this piecewise formula says is that if the input to the ceil function is an integer, then the smallest integer greater than or equal to that number is the number itself. Otherwise, take the floor of the number and add one.
Ceil function in Python also performs the same way. It returns the smallest integer greater than or equal to the input real number. For example,
ceil(2.3) = 3
ceil(-1.3) = -1
ceil(3) = 3
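These three results can be reproduced directly with Python's math module:

```python
import math

print(math.ceil(2.3))   # 3
print(math.ceil(-1.3))  # -1
print(math.ceil(3))     # 3
```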
Syntax of Ceil Function in Python:
The general form is:

import math
math.ceil(x)

Here, x is the number or expression for which the smallest integer greater than or equal to it must be calculated, and math is the Python math module. ceil() produces an integer as a result.
Parameter Values of Ceil Function in Python
The ceil function only accepts real numbers (float) or integers as input. If the input parameter is an integer, the ceil function returns the same integer. If the input parameter is a float, the ceil
function rounds up float values to the next integer greater than or equal to the input value.
Some important things to note about the ceil function are:
• The value of the ceil function is always an integer.
• If the supplied value is already an integer, the ceil function just returns the same number.
• In Python, math.ceil() cannot return NaN or infinity: passing NaN raises a ValueError, and passing positive or negative infinity raises an OverflowError, because the result must be an integer.
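This error behavior for non-finite inputs can be verified quickly in CPython 3:

```python
import math

# math.ceil() must return an int, so non-finite floats are rejected:
try:
    math.ceil(float("nan"))
except ValueError as e:
    print("NaN ->", e)

try:
    math.ceil(float("inf"))
except OverflowError as e:
    print("inf ->", e)
```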
Example1 of Ceil Function in Python
In this example we have taken a positive float value
# your code goes here
import math
num = 17.8
ceil_num = math.ceil(num)
print(ceil_num) # Output: 18
Explanation: In the above Python program we have taken the positive float value 17.8. The smallest integer greater than or equal to 17.8 is 18, so the ceil function gives 18 as output.
Example 2 of Ceil Function in Python
In this example we have taken a positive integer value
# your code goes here
import math
num = 3
ceil_num = math.ceil(num)
print(ceil_num) # Output: 3
Explanation: In the above Python program we have taken the positive integer value 3. Since 3 is already an integer, the ceil function returns 3 itself.
Example 3 of Ceil Function in Python
In this example we have taken a negative float value
# your code goes here
import math
num = -12.5
ceil_num = math.ceil(num)
print(ceil_num) # Output: -12
Explanation: In the above Python program we have taken the negative float value -12.5. The smallest integer greater than or equal to -12.5 is -12, not -13, because -12 is greater than -12.5, so the ceil function gives -12 as output.
For rounding up values to the next integer in Python, use the ceil() function. It may be applied to a broad range of tasks, from basic math to more complex computations. The ceil() function is useful
in a variety of situations, including rounding up numbers by determining the lowest integer that is greater than or equal to a given floating-point number. For instance, if a program has to figure
out how many pages are required to print a document, it can use the ceil() method to determine how many pages are required to fit any extra text after dividing the total amount of words by the number
of words per page.
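The page-count use case in the paragraph above can be sketched like this (the word counts are hypothetical values chosen for illustration):

```python
import math

total_words = 5300     # hypothetical document length
words_per_page = 500   # hypothetical words that fit on one page

# Plain integer division (10) would drop the trailing 300 words;
# ceil rounds the quotient up so the overflow text gets its own page.
pages = math.ceil(total_words / words_per_page)
print(pages)  # 11
```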
Frequently Asked Questions (FAQ)
Here are some FAQs on ceil function in Python:
Q1. What is the difference between the ceil and floor functions in Python?
Ans. The ceil function in Python returns the smallest integer greater than or equal to the given number, while the floor function returns the largest integer less than or equal to the given number.
Q2. What is the return type of the ceil function in Python?
Ans. The ceil function in Python returns an integer value.
Q3. What happens if we pass a non-numeric value to the ceil function in Python?
Ans. If we pass a non-numeric value to the ceil function in Python, it will result in a TypeError.
Q4. Can we use the ceil function for floating-point numbers in Python?
Ans. Yes, the ceil function can be used for both integer and floating-point numbers in Python.
Q5. Can we use the ceil function for negative numbers in Python?
Ans. Yes, the ceil function can be used for negative numbers in Python. For a negative number, the ceil function still returns the smallest integer greater than or equal to the given number; for example, ceil(-12.5) is -12.
Population model
The basic idea of flexible cutoffs is that these cutoffs come from a sui generis (having its own shape) distribution of fit indices for a correct, unbiased model. Essentially, this means that the
population model is correctly specified with no errors. For example, all observed variables load on the latent variables they are supposed to load on, all correlations are estimated freely, and the
data is not excessively skewed. This assumption makes flexible cutoffs potentially more objective since no subjective modification to the model (misspecification) or data (skewness, kurtosis) is
needed. However, this also implies that the population model is not specified simply on the basis of one's own empirical model. As an analogy, one can compare this to the introduction of the
meter. Assuming that one’s own measure is exactly 1 meter is likely error prone. One needs a validated meter bar. The meter bar in this case is the population model.
The function pop_mod specifies this population model. We offer three types of population models, the NM (Niemand & Mai, 2018) option, the HB (Hu & Bentler, 1999) option, and the EM (empirical)
option. Function gen_fit internally calls function pop_mod, but the argument type is also available in the latter function for flexibility. We can compare these models by:
pop_mod(mod, x = bb1992, type = "NM")$pop.mod
#> [1] "F1=~0.7*Q5+0.7*Q7+0.7*Q8 \nF2=~0.7*Q2+0.7*Q4 \nF3=~0.7*Q10+0.7*Q11+0.7*Q12+0.7*Q13+0.7*Q18+0.7*Q19+0.7*Q20+0.7*Q21+0.7*Q22 \nF4=~0.7*Q1+0.7*Q17 \nF5=~0.7*Q6+0.7*Q14+0.7*Q15+0.7*Q16 \nF1~~1*F1 \nF2~~0.3*F1 \nF3~~0.3*F1 \nF4~~0.3*F1 \nF5~~0.3*F1 \nF2~~1*F2 \nF3~~0.3*F2 \nF4~~0.3*F2 \nF5~~0.3*F2 \nF3~~1*F3 \nF4~~0.3*F3 \nF5~~0.3*F3 \nF4~~1*F4 \nF5~~0.3*F4 \nF5~~1*F5 \n \n"
pop_mod(mod, x = bb1992, type = "HB")$pop.mod
#> [1] "F1=~0.7*Q5+0.75*Q7+0.8*Q8 \nF2=~0.75*Q2+0.75*Q4 \nF3=~0.7*Q10+0.7*Q11+0.7*Q12+0.7*Q13+0.75*Q18+0.8*Q19+0.8*Q20+0.8*Q21+0.8*Q22 \nF4=~0.75*Q1+0.75*Q17 \nF5=~0.7*Q6+0.75*Q14+0.75*Q15+0.8*Q16 \nF1~~1*F1 \nF2~~0.5*F1 \nF3~~0.4*F1 \nF4~~0.3*F1 \nF5~~0.5*F1 \nF2~~1*F2 \nF3~~0.4*F2 \nF4~~0.5*F2 \nF5~~0.4*F2 \nF3~~1*F3 \nF4~~0.3*F3 \nF5~~0.5*F3 \nF4~~1*F4 \nF5~~0.4*F4 \nF5~~1*F5 \n \n"
pop_mod(mod, x = bb1992, type = "EM")$pop.mod
#> [1] "F1=~0.807*Q5+0.637*Q7+0.876*Q8 \nF2=~0.747*Q2+0.842*Q4 \nF3=~0.519*Q10+0.568*Q11+0.597*Q12+0.697*Q13+0.603*Q18+0.557*Q19+0.534*Q20+0.578*Q21+0.552*Q22 \nF4=~0.659*Q1+0.772*Q17 \nF5=~0.727*Q6+0.719*Q14+0.826*Q15+0.752*Q16 \nF1~~1*F1 \nF2~~0.36*F1 \nF3~~0.71*F1 \nF4~~0.641*F1 \nF5~~0.807*F1 \nF2~~1*F2 \nF3~~0.349*F2 \nF4~~0.486*F2 \nF5~~0.492*F2 \nF3~~1*F3 \nF4~~0.625*F3 \nF5~~0.8*F3 \nF4~~1*F4 \nF5~~0.831*F4 \nF5~~1*F5 \n \n"
When the type is “NM”, all loadings (default: .7) and correlations (default: .3) are assumed to be equal (\n denotes a line break for the lavaan syntax). When the type is “HB”, the loadings vary by
.05 around .75, depending on the number of indicators and the correlations are either .5, .4, or .3, also depending on the number of latent variables. Finally, when the type is “EM”, the function
runs a CFA and determines the empirical loadings and correlations. Since one cannot assume one's own empirical model to be correct, we advise users not to use the "EM" type for model validation. This
type might be useful for other features (see further applications). Hence, the default is set to “NM”. Since the by far most selected standardized factor loading (afl) in our tool was .7, we set the
default value to .7. Other options between 0 and 1 are possible. The average correlation between latent variables (aco) is set to a default of .3, but this can be changed likewise. To increase
flexibility, the argument standardized (default: TRUE) can also be called, allowing for the specification of standardized (all loadings < 1, all covariances are correlations) and unstandardized
(loadings > 1, covariances, not correlations) parameters. The function returns a warning when the empirical model appears to contain standardized or unstandardized loadings in conflict with the standardized argument. See below:
pop_mod(mod, x = bb1992, type = "NM", afl = .9)$pop.mod
#> [1] "F1=~0.9*Q5+0.9*Q7+0.9*Q8 \nF2=~0.9*Q2+0.9*Q4 \nF3=~0.9*Q10+0.9*Q11+0.9*Q12+0.9*Q13+0.9*Q18+0.9*Q19+0.9*Q20+0.9*Q21+0.9*Q22 \nF4=~0.9*Q1+0.9*Q17 \nF5=~0.9*Q6+0.9*Q14+0.9*Q15+0.9*Q16 \nF1~~1*F1 \nF2~~0.3*F1 \nF3~~0.3*F1 \nF4~~0.3*F1 \nF5~~0.3*F1 \nF2~~1*F2 \nF3~~0.3*F2 \nF4~~0.3*F2 \nF5~~0.3*F2 \nF3~~1*F3 \nF4~~0.3*F3 \nF5~~0.3*F3 \nF4~~1*F4 \nF5~~0.3*F4 \nF5~~1*F5 \n \n"
pop_mod(mod, x = bb1992, type = "NM", aco = .5)$pop.mod
#> [1] "F1=~0.7*Q5+0.7*Q7+0.7*Q8 \nF2=~0.7*Q2+0.7*Q4 \nF3=~0.7*Q10+0.7*Q11+0.7*Q12+0.7*Q13+0.7*Q18+0.7*Q19+0.7*Q20+0.7*Q21+0.7*Q22 \nF4=~0.7*Q1+0.7*Q17 \nF5=~0.7*Q6+0.7*Q14+0.7*Q15+0.7*Q16 \nF1~~1*F1 \nF2~~0.5*F1 \nF3~~0.5*F1 \nF4~~0.5*F1 \nF5~~0.5*F1 \nF2~~1*F2 \nF3~~0.5*F2 \nF4~~0.5*F2 \nF5~~0.5*F2 \nF3~~1*F3 \nF4~~0.5*F3 \nF5~~0.5*F3 \nF4~~1*F4 \nF5~~0.5*F4 \nF5~~1*F5 \n \n"
pop_mod(mod, x = bb1992, type = "EM", standardized = FALSE)$pop.mod
#> Warning in pop_mod(mod, x = bb1992, type = "EM", standardized = FALSE): All
#> loadings are < 1. Consider revision of standardized.
#> [1] "F1=~0.807*Q5+0.637*Q7+0.876*Q8 \nF2=~0.747*Q2+0.842*Q4 \nF3=~0.519*Q10+0.568*Q11+0.597*Q12+0.697*Q13+0.603*Q18+0.557*Q19+0.534*Q20+0.578*Q21+0.552*Q22 \nF4=~0.659*Q1+0.772*Q17 \nF5=~0.727*Q6+0.719*Q14+0.826*Q15+0.752*Q16 \nF1~~1*F1 \nF2~~0.36*F1 \nF3~~0.71*F1 \nF4~~0.641*F1 \nF5~~0.807*F1 \nF2~~1*F2 \nF3~~0.349*F2 \nF4~~0.486*F2 \nF5~~0.492*F2 \nF3~~1*F3 \nF4~~0.625*F3 \nF5~~0.8*F3 \nF4~~1*F4 \nF5~~0.831*F4 \nF5~~1*F5 \n"
Assumptions from data
In order to follow this principle of objectivity, the data x is essentially not that important for deriving flexible cutoffs, unless specified differently. That is, x determines the sample size (N)
for the simulations and the multivariate non-normality of the data, if assume.mvn = FALSE (default: TRUE). Both are only relevant for the gen_fit function.
Internally, the function gen_fit calls the simulateData function from lavaan, which itself does not require data, but takes a population model from function pop_mod. Sample size is specified via
sample.nobs by nrow(x). Arguments skewness and kurtosis are derived from semTools::mardiaKurtosis(x) and semTools::mardiaSkew(x) in package semTools, if assume.mvn = FALSE.
Since there seems to be no unified agreement on what “no”, “low”, “moderate”, or “high” kurtosis / skewness constitutes (Niemand & Mai, 2018), we omitted the once verbally differentiated levels
available in the tool. As mentioned before, x is also needed for the empirical population model type “EM” in pop_mod.
Index guessing
For flexible cutoffs, the type of a fit index (GoF or BoF) plays an essential role, as the lower (GoF) or upper (BoF) width of a confidence interval is required. Since empirically guessing the type
may be misleading (e.g., a very bad model may produce a distribution of SRMR that is not different from a distribution for a CFI in a better fitting model), we implemented a helper function
index_guess that simply guesses the fit index by name (upper or lowercase considered):
#> [1] "GoF"
#> [1] "GoF"
#> [1] "BoF"
#> [1] "BoF"
#> [1] "not a fit index"
This function is applied in the flex_co function. If one specifies established fit indices, such as CFI or SRMR, the gof argument is not required. However, this feature does not override the gof
argument. For example:
flex_co(
  fits = fits.single,
  index = c("CFI", "SRMR"),
  gof = c(TRUE, FALSE)
)
#> Warning in flex_co(fits = fits.single, index = c("CFI", "SRMR"), gof = c(TRUE,
#> : The number of replications is lower than the recommended minimum of 500.
#> Consider with care.
#> $cutoff
#> CFI SRMR
#> 0.97826871 0.03659316
#> $index
#> [1] "CFI" "SRMR"
#> $alpha
#> [1] 0.05
#> $gof
#> CFI SRMR
#> TRUE FALSE
#> $replications
#> [1] 10
#> $`number of non-converging models`
#> [1] 0
#> $`share of non-converging models`
#> [1] 0
flex_co(
  fits = fits.single,
  index = c("CFI", "SRMR"),
  gof = c(FALSE, TRUE)
)
#> Warning in flex_co(fits = fits.single, index = c("CFI", "SRMR"), gof = c(FALSE,
#> : The number of replications is lower than the recommended minimum of 500.
#> Consider with care.
#> $cutoff
#> CFI SRMR
#> 1.00000000 0.03096027
#> $index
#> [1] "CFI" "SRMR"
#> $alpha
#> [1] 0.05
#> $gof
#> CFI SRMR
#> FALSE TRUE
#> $replications
#> [1] 10
#> $`number of non-converging models`
#> [1] 0
#> $`share of non-converging models`
#> [1] 0
The first version (the gof argument can also be omitted as the guess is correct) gives the correct flexible cutoffs, the second version does not, as CFI is not a BoF and SRMR is not a GoF. Hence,
users should be careful with specifying this argument. For novel fit indices or alternative uses, it may however be beneficial to maintain this argument.
One quintessential issue we notice in many papers, reviews, and PhD courses is that non-experts do not know which fit index or fit indicator to choose. To summarize, this is the major question one of
our papers investigates (Mai et al., 2021) and the main message is that one should follow a tailor-fit approach. Depending on three settings, a) sample size, b) research purpose (novel or established
model), and c) focus (confirming a factorial structure, i.e., CFA or investigating a theoretical, structural model), different fit indicators are recommended.
The differentiation between novel or established model might need some explanation. Fit indicators often yield different Type I and II errors. When an established model that has been empirically
investigated many times before (e.g., Theory of Planned Behavior-models) is tested, it makes sense to put more weight on Type I error. However, when the model has never been tested before, it makes
sense to put more weight on the Type II error. Since fixed cutoffs show worse hit rates for higher Type II error weights (1:3, 1:4), they may be poorly performing for novel models.
We built the function recommend to incorporate this tailor-fit approach in a user-friendly way. It requires the simulated fit indices and the arguments for purpose and focus. Sample size is
determined automatically. Results are rounded to 3 digits, but can be changed to 1 to 5 digits if needed, for example by digits = 5.
Since the most obvious application is to conduct a CFA on a novel model, the standard arguments of purpose and focus are set to this application. Hence, when we use no further arguments, we get the
following recommendation:
recommend(fits.single)
#> Warning in recommend(fits.single): The number of replications is lower than the
#> recommended minimum of 500. Consider with care.
#> $recommended
#> type fit.values
#> SRMR BoF 0.038
#> $cutoffs
#> SRMR
#> cutoff 0.001 0.037
#> cutoff 0.01 0.037
#> cutoff 0.05 0.037
#> cutoff 0.1 0.036
#> $decisions
#> SRMR
#> cutoff 0.001 rejected
#> cutoff 0.01 rejected
#> cutoff 0.05 rejected
#> cutoff 0.1 rejected
#> $replications
#> [1] 10
#> $comment
#> [1] "Recommendations based on flexible cutoffs and Mai et al. (2021)"
SRMR is recommended due to the purpose, focus, and sample size (n = 502) in line with the recommendations by Mai et al. (2021). Hence, the lone fit indicator we need to report is SRMR. The function
also returns the type of the fit indicator, which is guessed from index_guess and the actual value of the SRMR in the model.
Additionally, the function provides a sensitivity analysis for different values of uncertainty, .001 (.1 percent), .01 (1 percent), .05 (5 percent) and .10 (10 percent) and makes a decision given the
cutoff. For completeness, replications, the number of non-converging models, and their share are also provided.
The result found here demonstrates the consequences of uncertainty for the decision. When one is very conservative and assumes a high Type I error (.10), the cutoff (.036 for 100 replications) is
lower than the actual value (.038) and hence the present model should be rejected. When one is very lenient and feels safe with the model (.001), the cutoff (.040 for 100 replications) is higher than
the actual value and hence the model can be confirmed.
Let us assume for a moment that Babakus and Boller had not been as exploratory as they were and simply looked for confirmation of an established measurement model. So, we change the purpose argument
to established.
recommend(fits.single, purpose = "established")
#> $recommended
#> type fit.values
#> CFI GoF 0.96
#> $decisions
#> [1] "confirmed"
#> $comment
#> [1] "Recommendations based on fixed cutoffs and Mai et al. (2021)"
Now the recommendation changes to CFI with a fixed cutoff. Consequently, no uncertainty is provided (as they are fixed) and the recommendation is to confirm the model because the actual value (.960)
is above the fixed cutoff of .95. This demonstrates two things. First, the recommend function also recommends fixed cutoffs when it is acceptable to do so (see Mai et al. 2021). Second, it shows what
happens when assuming established models (and hence a low importance of Type II error): Type I errors become more important. Compare this with the lenient uncertainty before (.001, i.e., .040 for 100
replications), from the first recommendation, where SRMR also confirms the model. In other words, we feigned to be very certain by assuming the model to be an established model (ignoring the doubts)
and hence got a very determined answer.
Finally, for exploratory investigations, one can also override the recommendations programmed into the function by using the override argument. This however requires users to provide one or more
indices with the argument index (otherwise, an error is returned).
recommend(fits.single,
  override = TRUE,
  index = c("CFI", "SRMR"))
#> Warning in recommend(fits.single, override = TRUE, index = c("CFI", "SRMR")):
#> The number of replications is lower than the recommended minimum of 500.
#> Consider with care.
#> $recommended
#> type fit.values
#> CFI GoF 0.960
#> SRMR BoF 0.038
#> $cutoffs
#> CFI SRMR
#> cutoff 0.001 0.978 0.037
#> cutoff 0.01 0.978 0.037
#> cutoff 0.05 0.978 0.037
#> cutoff 0.1 0.980 0.036
#> $decisions
#> CFI SRMR
#> cutoff 0.001 rejected rejected
#> cutoff 0.01 rejected rejected
#> cutoff 0.05 rejected rejected
#> cutoff 0.1 rejected rejected
#> $replications
#> [1] 10
#> $comment
#> [1] "Override mode"
In the example, we provide CFI and SRMR and get the appropriate recommendations for them. Unsurprisingly, the recommendations do not change much (SRMR is confirmed for levels of .001 and .01
uncertainty for 100 replications, otherwise the model is rejected). Please note that purpose and focus are without function in this case, as the recommendations by Mai et al. (2021) are overridden.
Voltage Divider Rule - (Electrical Circuits and Systems II) - Vocab, Definition, Explanations | Fiveable
Voltage Divider Rule
from class:
Electrical Circuits and Systems II
The voltage divider rule is a simple and effective method used to determine the voltage across components in a series circuit. This rule states that the voltage across a resistor in a series circuit
is proportional to its resistance relative to the total resistance of the circuit, allowing for easy calculation of voltage drops. It becomes particularly useful when analyzing circuits with complex
impedances, as it helps to understand how different components share the applied voltage.
congrats on reading the definition of Voltage Divider Rule. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. The voltage divider rule can be expressed mathematically as $$V_x = V_{in} \times \frac{R_x}{R_{total}}$$ where $$V_x$$ is the voltage across the resistor of interest, $$V_{in}$$ is the input
voltage, and $$R_x$$ and $$R_{total}$$ are the resistance of the specific resistor and total resistance, respectively.
2. This rule applies not only to resistors but also to complex impedances in AC circuits, facilitating calculations involving capacitors and inductors.
3. In a series circuit with multiple resistors, each resistor will have a voltage drop determined by its resistance relative to the total resistance, leading to a predictable distribution of voltage across the components.
4. Understanding the voltage divider rule is essential for troubleshooting and designing circuits, especially when dealing with signal levels and ensuring proper operation of components.
5. It can be easily applied using simulation tools or practical breadboard setups, helping visualize how changes in resistance affect voltage distribution.
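As a quick numeric check of the formula $$V_x = V_{in} \times \frac{R_x}{R_{total}}$$, here is a short sketch; the 12 V source and the three resistor values are arbitrary illustration values, not taken from the text:

```python
def divider_voltage(v_in, r_x, resistances):
    """Voltage across r_x in a series chain driven by v_in."""
    return v_in * r_x / sum(resistances)

# Example: 12 V across 1 kOhm, 2 kOhm and 3 kOhm in series.
rs = [1000, 2000, 3000]
drops = [divider_voltage(12, r, rs) for r in rs]
print(drops)       # [2.0, 4.0, 6.0]
print(sum(drops))  # 12.0 -- the drops sum to the source voltage (KVL)
```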
Review Questions
• How does the voltage divider rule help in analyzing circuits with complex impedances?
□ The voltage divider rule assists in analyzing circuits with complex impedances by allowing calculations of voltage drops across components that include capacitors and inductors. Since these
components can be represented as impedances in series, applying the same principle of proportionality based on their impedance values enables precise determination of voltage across each
component. This approach simplifies understanding how AC signals behave in various scenarios.
• Compare the use of the voltage divider rule with Kirchhoff's Voltage Law in circuit analysis.
□ While both the voltage divider rule and Kirchhoff's Voltage Law are fundamental principles in circuit analysis, they serve different purposes. The voltage divider rule specifically focuses on
calculating individual voltages across series components based on their resistances. In contrast, Kirchhoff's Voltage Law provides a broader view by ensuring that the sum of all potential
differences in any closed loop equals zero. Combining these two principles allows for a more comprehensive analysis of circuits, especially when dealing with complex arrangements.
• Evaluate how the understanding of the voltage divider rule impacts practical circuit design and troubleshooting.
□ A solid understanding of the voltage divider rule significantly impacts practical circuit design and troubleshooting by enabling engineers to predict and control voltage levels across
components accurately. When designing circuits, knowing how resistances distribute input voltages helps prevent component failure due to overvoltage conditions. Furthermore, during
troubleshooting, applying this rule allows quick assessment of whether voltage drops align with expected values, which can identify faulty components or connections efficiently.
Ph.D. Program in Electrical and Electronic Engineering
Program Descriptions
Head of Department: Gunhan Dundar
Professors: Emin Anarim, M.Levent Arslan, Isil Bozma, Omer Cerid, Kemal Ciliz, Oguzhan Cicekoglu, Hakan Delic, Gunhan Dundar, Okan Kadri Ersoy•, Aysin Ertuzun, Yasemin Kahya, Okyay Kaynak, Avni
Morgul*, Kadri Ozcaldiran, Bulent Sankur, Selim Seker, Sabih Tansal†
Associate Professors: Yagmur Denizhan,
Assistant Professors: Burak Acar, Mehmet Akar, F. Kerem Harmanci, Mutlu Koca, Mehmet Kivanc Mihcak, Murat Saraclar, Arda Deniz Yalcinkaya, Heba Mohamed Yuksel
Instructor: Dr. Selcuk Ogrenci*
† Professor Emeritus
The Ph.D. Program in Electrical and Electronic Engineering is composed of a minimum number of 24 credits of course work, chosen under the supervision of an advisor from among the option courses in
Electrical and Electronic Engineering and from the courses offered by the Institute, and a dissertation carried out according to the regulations of the Institute.
EE 512 Solid State D.C. Drives (Kati Hal D.C. Suruculeri) (3+0+0) 3
Evolution of D.C. drives. Analysis and performance characteristics of single-phase, three-phase and chopper-fed D.C. drives. Reversible drives. Discontinuous current operation. Regenerative braking.
Dual converters. Closed-loop control. Analysis and design of controller structures. Phase locked loop control. Microprocessor based drive control systems. Applications.
Prerequisite: Consent of instructor.
EE 531 Television Engineering (Televizyon Muhendisligi) (3+0+0) 3
Principles of picture transmission. Color fundamentals. Camera tubes. Color television systems. Video signal. Carrier transmission of the video signal. Video recording. Television studio equipment.
Transmitters and receivers.
EE 537 Introduction to VLSI Design (3+0+0) 3
(Cok Genis Capli Tumlesik Devre Tasarimina Giris)
Electronic characteristics of logic gates. Fabrication processes for MOS technology. Layout design rules and examples. Electronic characteristics based on geometry. Design verification, Schematic
capture, analog/digital simulation. CMOS digital circuits: pads, super buffers, CMOS switch logic. Student term project.
EE 541 Computer Communication Networks I (3+0+0) 3 (ECTS: 7)
(Bilgisayar Iletisim Aglari I)
Theory and practice of computer communication networks. The layered approach and International Standardization Organization (ISO) Open Systems Interconnection Reference Model (OSI/RM) principles.
Physical communication media. Data link layer and reliability of logical links. Multiplexing, switching, and multiple access methods. Network layer routing via X.25 and Internet Protocol (IP);
congestion control. Transport protocols: Transmission Control Protocol/Internet Protocol (TCP/IP). Local area networks and performance analysis.
EE 542: Computer Communication Networks II (3+0+0) 3 (ECTS: 6)
(Bilgisayar Iletisim Aglari II)
Advanced data transport and switching concepts. Asynchronous Transfer Mode (ATM) principles. Optical networking. High speed switching. Performance issues: queuing theory and delay models in computer
networks. Elements of the presentation layer. Application protocols: message handling systems, database applications, network management, World Wide Web (WWW), multimedia.
Prerequisite: EE 541 or equivalent.
EE 544 Data Compression (Veri Sikistirma) (3+0+0) 3
Lossless and lossy compression techniques and their mathematical basis. Huffman coding, arithmetic coding, various dictionary coding techniques including the Lempel-Ziv techniques; scalar and vector
quantization, differential encoding, transform coding, and subband coding.
Prerequisites: Consent of instructor, EE 577.
EE 545 Wireless Networks and Mobile Systems I (3+0+0) 3 (ECTS: 7)
(Telsiz Aglar ve Gezgin Sistemler I )
Characteristics and operations of wireless networks: Institute of Electrical and Electronics Engineers (IEEE) 802.11 and Bluetooth wireless area networks. Mobile Internet Protocol (IP) and mobile ad
hoc routing protocol. Middleware application program interfaces (APIs) to realize mobile applications. Basic functionality provided by middleware for peer to peer (P2P) computing. Design,
implementation and testing of mobile applications.
Prerequisite: EE 541 or equivalent.
EE 546: Wireless Networks and Mobile Systems II (3+0+0) 3 (ECTS: 6)
(Telsiz Aglar ve Gezgin Sistemler II)
Routing protocols in mobile ad hoc networks; mobile Internet Protocol (IP) principles. Operation of IP Dynamic Host Configuration Protocol (DHCP). Network Address Translation (NAT). Principles of
security engineering: cryptography, vulnerability, confidentiality, integrity, modification, Public Key Encryption (PKE). The security of Institute of Electrical and Electronics Engineers (IEEE)
802.11 (WEP).
Prerequisites: EE 545 or equivalent
EE 548 Wireless Communication Applications (1+0+4) 3 (ECTS: 6)
(Telsiz Iletisim Uygulamalari )
Wireless communication and networking applications: Wireless Local Area Networks (WLAN), wireless web access, peer-to-peer communications, Bluetooth, Mobile Ad hoc Networks (MANET), Mobile Internet
Protocol (IP) and Dynamic Host Configuration Protocol (DHCP).
Prerequisites: EE 545 or equivalent.
EE 550 Artificial Neural Networks (Yapay Sinir Aglari) (3+0+0) 3
Principles of Neural Computing. Architectural analysis of different neural network models (Hopfield model, Single Perceptron, Multilayer Perceptron etc.). Learning algorithms. Back propagation
algorithm and local minima problem. Dynamics of recurrent neural networks. Applications of neural networks for control systems, system identification, associative memories, optimization problem etc.
Computer simulation homeworks and final project.
EE 551 Robust Control (Dayanikli Denetim) (3+0+0) 3
Norms for signals and systems. Internal stability. Youla parametrization of all internally stabilizing compensators. Additive and multiplicative plant uncertainty. Robust stability and robust
performance against plant uncertainties. Solution of the robust performance problem via loopshaping. Pick-Nevanlinna Interpolation Problem and its application to model matching. Modified robust
performance problem (mixed sensitivity problem): Solution via spectral factorization and model matching. Phase and gain margin optimization.
EE 552 Digital Control (Sayisal Denetim) (3+0+0) 3
Introduction to digital control of analogue systems. Sampling, quantizing and coding. The z-transformation and its properties, the inverse z-transformation. Discretization techniques, discrete-time
equivalence of continuous-time systems and filters. Transient and steady-state analysis of discrete-time systems, stability analysis. Design of digital controllers based on root locus methods and
frequency response methods. State-space analysis of discrete-time system and controller design by pole-placement. Introduction to discrete-time optimal control design.
Prerequisite: Consent of instructor.
EE 562 Microwaves (Mikro Dalgalar) (3+0+0) 3
Microwave transmission, transmission lines and waveguides. Microwave circuits. Scattering parameters. Microwave resonators. Microwave using ferrites. Generation and amplification of microwaves.
Klystrons, magnetrons, traveling wave tubes. Semiconductors in microwaves.
EE 570 Spectral Estimation (Spektral Kestirim) (3+0+0) 3 (ECTS: 7)
Spectral estimation problem. Nonparametric techniques: periodogram and correlogram methods; asymptotic properties; statistical analysis; windowed spectrum methods. Parametric methods for rational
spectra: Auto Regressive (AR), Moving Average (MA) and Auto Regressive Moving Average (ARMA) models; Yule-Walker equations and least-squares methods; Levinson-Durbin algorithm; Prony's method.
Parametric methods for line spectra; non-linear least squares; higher order Yule-Walker; methods based on eigen-decomposition; Multiple Signal Classification (MUSIC), Pisarenko, Estimation of Signal
Parameters via Rotational Invariance Techniques (ESPRIT).
EE 572 Mathematical Methods for Signal Processing (3+0+0) 3 (ECTS: 7)
(Isaret Islemede Matematiksel Yontemler)
Metric spaces, normed vector spaces, basis sets. The four subspaces of the linear transforms. Approximation in Hilbert spaces: least squares filtering and estimation, linear regression, polynomial
approximation, minimum norm solutions and system identification. Matrix factorization, eigenvectors, singular value decomposition, iterative matrix inverses, pseudoinverse. Theory of constrained
optimization and dynamic programming.
EE 573 Pattern Recognition (Oruntu Tanima ) (3+0+0) 3 (ECTS: 6)
Overview of learning and statistical decision theory. Model inference and parameter estimation. Linear models for regression and classification. Kernel methods. Nonparametric methods. Model
assessment and selection. Ensemble methods. Unsupervised learning.
EE 574 Image Analysis (Imge Cozumlemesi ) (3+0+0) 3 (ECTS: 7)
Image spaces. Variational optimization, variational image processing: restoration and denoising. Curves: representations, characterizations and evolution. Medial axis transform. Surfaces:
representations, characterizations and evolution. Interface propagation techniques. Statistical image analysis: Principal Component Analysis (PCA), Independent Component Analysis (ICA).
EE 575 Multiresolution Signal Processing (3+0+0) 3
(Coklu-Cozunurlu Sinyal Isleme)
Time-frequency signal decomposition. Block transforms, subband filters, wavelet decomposition. Orthogonality, transform efficiency, coding gain performance. Wavelet transform: regularity, 2-channel
filterbanks, wavelet families. DCT, Lapped Orthogonal Transforms, other transforms. Parametric modeling of signal sources.
EE 576 Machine Vision (Yapay Gorme) (3+0+0) 3
Extraction of low-level features, boundary and region based analysis, segmentation and grouping, lightness and color, shape from shading. Photometric and binocular stereo, optical flow and motion
estimation, strongly-modeled vision, weakly-modeled vision recognition, integration and vision systems, real-time vision.
Prerequisite: Consent of instructor.
EE 577 Statistical Signal Analysis (3+1+0) 3 (ECTS: 7)
(Istatistiksel Sinyal Cozumlemesi )
Characteristics of random processes. Correlation function and power spectral density of stationary processes. Noise mechanisms, the Gaussian and Poisson processes. Statistical estimation theory,
linear mean square filtering, optimum Wiener and Kalman filtering. Signal detection theory and statistical significance tests.
Prerequisites: Consent of instructor
EE 578 Speech Processing (Konusma Isleme) (3+0+0) 3 (ECTS: 7)
Speech production theory, acoustic tube model, linear prediction model, cepstrum analysis, homomorphic speech processing, vector quantization and speech coding, speech enhancement, text-to-speech
synthesis, hidden Markov models and their application to speech recognition.
EE 579 Graduate Seminar (Lisansustu Seminer) (0+1+0) 0 P/F
The widening of students' perspectives and awareness of topics of interest to electrical and electronic engineers through seminars offered by faculty, guest speakers and graduate students.
EE 580-589 Special Topics (Ozel Konular) (3+0+0) 3
In depth study of a special topic in the area of Communication, Control or Electronics.
EE 590-599 Selected Topics (Secilmis Konular) (3+0+0) 3
In depth study of a selected topic in the area of Communication, Control or Electronics.
EE 621 Network Synthesis (Devre Sentezi) (3+0+0) 3
Loop and node equations in s-domain, state equations, Tellegen's theorem, positive definite quadratic forms. Synthesis of immittance functions and of transfer functions: 'z', 'y' parameters, zeroes of
transmission. Active network synthesis, some classical network configurations, signal flow graph techniques. Filter characteristics and approximation techniques. Sensitivity and tolerance analysis.
EE 628 Game Theory (Oyun Kurami) (3+0+0) 3
Introduction to game theory in matrix and extensive forms of optimal strategies. Discrete and continuous zero-sum games. Games of kind and games of degree. Pontryagin's theory for linear pursuit
problems. Imperfect information. Markov games. Nonzero sum and many players' games
EE 631 Advanced Electronic Circuits (Ileri Elektronik Devreler) (3+0+0) 3
Pulse transformers, transmission lines. Broadband amplifier analysis and design. Sweep generation. Synchronization. Negative resistance devices. Parametric amplifiers. Sampling gates.
EE 632 System Design with Microelectronic Devices (3+0+0) 3
(Tumdevrelerle Sistem Tasarimi)
Classification of linear and digital integrated circuits. Digital system design; logic circuits, dividers, counters and memories, digital communication circuits. Linear system design; radio-TV
circuits, power amplifiers, linear communications circuits. Hybrid systems; SAW and digital filters. Microprocessor based intelligent systems.
EE 633 Computational Aspects of VLSI (3+0+0) 3
(Cok Genis Capli Tumlesim Algoritmalari)
Analog circuit simulation, digital circuit simulation and switch level simulation techniques. Physical design of VLSI circuits. Floorplanning, placement, channel routing, global routing, and
performance driven routing. Silicon assembly. Silicon compilation for digital VLSI, silicon compilation for analog VLSI.
EE 634 Integrated Electronics (Tumlesik Elektronik) (3+0+0) 3
Types of integrated circuits, physical processes employed in the design of integrated circuits, impurity diffusion and diffused junction properties, oxidation and surface states; thin film deposition
and properties; epitaxial growth; passive and active components for integrated electronics; integrated circuit design principles.
EE 635 Theory of Electron Devices (Elektronik Aygitlar Kurami) (3+0+0) 3
Photoconductivity, photo-emission, light amplification, metal-semiconductor diodes, field effect devices, special effects in semiconductors, properties of dielectric and magnetic materials.
EE 637 Integrated Circuit Design (Tumdevre Tasarimi) (3+0+0) 3
Introduction to device physics, A.C. and D.C. models for integrated circuit transistors, biasing techniques, current and voltage sources and references, design principles for analog circuits such as
operational amplifiers, voltage regulators, multipliers. Design principles for digital circuits such as DTL's, ECL's and TTL's.
EE 638 Advanced VLSI Design (3+0+0) 3
(Ileri Cok Genis Capli Tumlesik Devre Ileri Tasarim Teknikleri)
Dynamic circuits: clocked static logic, charge leakage, storage and sharing. Dynamic logic: Domino, MODL, LDL, NORA and Zipper CMOS logic. Electronic design automation (EDA). Chip design options:
Programmable logic, programmable gate arrays, sea-of-gates and standard cell design. CMOS subsystem design. Testing and design for testability. Student term project.
EE 640 Microprocessor System Design (Mikroislemcili Sistem Tasarimi) (3+0+0) 3
Design of dedicated purpose microprocessor systems used in electronic instrumentation, control and communications. Design of medium to large size microprocessor systems. Multi-processor system design.
EE 643 Digital Communication (Sayisal Iletisim) (3+0+0) 3 (ECTS: 6)
Signal spaces and system analysis in digital communication. Characterization of communication channels and modulation methods under the constraints of both noise and finite bandwidth. Linear and
nonlinear signaling techniques under additive white Gaussian noise. Basic equalization techniques. Adaptive equalization. Multichannel and multicarrier systems. Spread spectrum techniques and
multiuser communications.
Prerequisite: Consent of instructor.
EE 644 Error Control Coding (Hata Denetim Kodlari) (3+0+0) 3
Introduction to algebra and Galois fields. Various error control coding techniques including linear block codes, cyclic codes, BCH and Reed-Solomon codes, convolution codes. Viterbi algorithm.
Trellis coded modulation.
Prerequisite: Consent of instructor.
EE 650 H∞-Optimal Control (H∞ - Eniyi Denetim) (3+0+0) 3
Definitions of L2, L∞, H2 and H∞ spaces. Background on linear operators. Nehari Problem. Co-prime factorizations over H∞, parametrization of internally stabilizing compensators and reduction of the H∞-Optimal Control Problem to a model matching problem. Solution of the one-sided model matching problem via the Nehari Theorem for SISO systems. Canonical inner-outer, spectral and J-spectral factorizations for MIMO systems. Krein spaces and solution of the two-sided model matching problem via the Nehari Theorem. State-space solution of the H∞-Optimal Control Problem: central controller and parametrization of all H∞-Optimal controllers.
EE 652 Adaptive Control (Uyarlanir Denetim) (3+0+0) 3
Overview of basic approaches and alternatives to adaptive control; review of deterministic and stochastic signal and system models; real time parameter estimation using recursive least squares and
its derivatives, persistent excitation, covariance management estimation under feedback; model reference adaptive systems, MIT rule, hyperstability approach; self-tuning regulators. Direct and
indirect methods, minimum variance, linear quadratic and generalized predictive control strategies, convergence analysis using ODE method.
Prerequisite: Consent of instructor.
EE 653 Optimal Control Theory (Eniyi Denetim Kurami) (3+0+0) 3
Calculus of variations in system optimization. Two point boundary value problems. Optimal control function and optimal control law. Dynamic programming. Pontryagin's Minimum Principle: minimum time,
minimum fuel, minimum energy problems. Optimal control design with quadratic criteria. Regulation and tracking problems. Singular control problems.
Prerequisite: Consent of instructor.
EE 655 Stochastic Systems and Control (Rasgele Sistemler ve Denetimi) (3+0+0) 3
Modeling and analysis of discrete time and continuous time uncertain linear dynamic systems. Estimation problems: Bayesian, Fisher and Weighted Least Square estimation. Optimal prediction, filtering
and smoothing. Kalman filtering. Wiener-Hopf theory. Stochastic optimal control.
Prerequisite: Consent of instructor.
EE 656 Analysis of Nonlinear Control Systems (3+0+0) 3
(Dogrusal Olmayan Denetim Sistemleri)
Solution of nonlinear control problems, phase-plane, describing function, relay servomechanisms. Optimum and quasi-optimum relay servos. Nonlinear compensation techniques. Self-adaptive control
systems. Analog simulation.
EE 657 Robotics (Robot Sistemleri) (3+0+0) 3
Industrial automation. Manipulator and sensor technology. Review of kinematics and Lagrangian dynamics. Joint-space/work-space transformations and data structures. Point to point continuous path
control. Robot control: command languages, navigation and mapping, optical and acoustic ranging and pattern recognition, collision avoidance algorithms, positioning accuracy, resolution and
repeatability. Distribution of intelligence. Adaptive hierarchical control.
EE 658 Computer Control Techniques (Bilgisayarla Denetim Teknikleri) (3+0+0) 3
Design and implementation of digital controllers; disturbance models, pole placement design based on I/O models, optimal design based on state-space approach; self tuning control; predictive control.
Simulation studies using CAD tools.
EE 662 Electromagnetic Wave Propagation (3+0+0) 3
(Elektromanyetik Dalga Yayilimi)
Hertz potential. Plane waves in different media. Spectral representation of elementary sources. Field of a dipole in a stratified medium. Ground wave and ionospheric propagation. Scattering and
absorption of a wave by a single particle.
EE 664 Antenna Theory (Antenler Kurami) (3+0+0) 3
Radiation from a current distribution and a field distribution. Calculation of far field. Elementary dipole. Thin linear antennas, near and far fields. Impedance of antennas, self and mutual
impedance. Directional properties of antennas, antenna arrays. Receiving antennas. Radiation from aperture, horns, reflectors and lenses. Special types of antennas.
EE 670 Adaptive Filter Theory (Uyarlamali Suzgec Kurami) (3+0+0) 3
Study of the mathematical theory of various realizations of linear filters. Detailed study of linear optimum filtering, namely Wiener filtering, linear prediction, and Kalman filtering. FIR
structures versus lattice filter structures. Method of least squares, comparative study of steepest descent, least-mean square (LMS) and recursive least squares (RLS) filter design algorithms.
EE 671 Information Theory (Bilisim Kurami) (3+0+0) 3
Information measures, characterization of information sources, coding for discrete sources. Discrete channel characterization, channel capacity. Introduction to waveform channels and rate distortion theory.
EE 673 Radar and Sonar Systems (Radar ve Sonar Sistemleri) (3+0+0) 3
Microwave propagation. CW, FM, MTI doppler and tracking radar. Radar transmitters and receivers and antennas. Underwater sound propagation. Sonar transmitter and receivers. Transducers. Array
processing. Detection and processing of radar and sonar signals.
EE 675 Waves in Random Media (Rasgele Ortamlarda Dalgalar) (3+0+0) 3
The random medium and its statistical properties. The random field as a set or spectrum of plane waves. The mean field, power, field coherence, and the angular spectrum. Intensity fluctuations of the
wave field. Single and multiple scattering theory of waves in stationary scatterers. Dyadic permittivity of the medium.
EE 676 Remote Sensing (Uzaktan Algilama) (3+0+0) 3
Basic operation and applications of radar. Antenna system in microwave remote sensing. Radiometry. Microwave interaction with atmospheric constituents. Radiometer systems. Active microwave sensing of
EE 677 Detection and Estimation Theory (Sezim ve Kestirim Kurami) (3+0+0) 3
Classical statistical decision theory, decision criteria and composite hypothesis tests. Receiver operating characteristics and error probability, applications to radar and communications. Detection
of signals with unknown and random parameters, detection of stochastic signals, nonparametric detection techniques. Introduction to signal design, ambiguity function, the uncertainty principle.
Application to radar and sonar systems.
Prerequisite: EE 577.
EE 679 Pattern Recognition (Oruntu Tanima) (3+0+0) 3
Introduction to normative and generative theories of pattern recognition. Algebraic characterization of patterns. Techniques of statistical classification. Feature choice, feature extraction. Linear
discrimination and adaptive learning. Syntactic properties of conventional symbol systems.
EE 680-689 Special Topics (Ozel Konular) (3+0+0) 3
Study of the latest and current developments in the field of Electrical Engineering.
EE 691-694 Seminar (Seminer) (3+0+0) 3
Presentation and study of current research topics in computers, systems, communications, control, solid state.
EE 695-698 Special Topics (Ozel Konular) (3+0+0) 3
Study of the latest and current developments in the field of Electrical Engineering.
EE 699 Guided Research (Yonlendirilmis Arastirmalar) (2+0+4) 4
Research in the field of Electrical and Electronics Engineering, by arrangement with members of the faculty; guidance of doctoral students towards the preparation and presentation of a research
EE 790 Ph.D. Thesis (Doktora Tezi)
Mechanical Energy Balance: Screencast
The mechanical energy balance is obtained from the steady-state energy balance, and under some conditions, it simplifies to the Bernoulli equation.
We suggest that after watching this screencast, you list the important points as a way to increase retention.
\[\frac{\Delta P}{\rho} + \frac{\Delta u^2}{2} +g\Delta z + \left(\hat{\Delta U} - \frac{\dot Q}{\dot m} \right) = \frac{\dot W _S}{\dot m}\]
where \(\rho\) = fluid density (g/L),
\(P\) = pressure (bar),
\(\Delta P\) = change in pressure (bar): outlet – inlet,
\(u\) = velocity (m/s),
\(g\) = gravitational acceleration = 9.81 m/s²,
\(z\) = height (m),
\(\Delta z\) = change in height (m): outlet – inlet,
\(\hat{\Delta U}\) = specific change in internal energy (J/g),
\(\dot Q\) = rate of heat added (J/s),
\(\dot m\) = mass flow rate (g/s), and
\(\dot W _S\) = rate of shaft work added (J/s).
\[\frac{\Delta P}{\rho} + \frac{\Delta u^2}{2} +g\Delta z + \hat{\,\,\, F} = \frac{\dot W _S}{\dot m}\]
where \(\hat{\,\,\, F} = \left( \hat{\Delta U} - \frac{\dot Q}{\dot m} \right)\) = friction loss (J/g), and \(\hat{\,\,\, F} > 0\).
Bernoulli equation (obtained when there is no frictional loss and no shaft work):
\[\frac{\Delta P}{\rho} + \frac{\Delta u^2}{2} +g\Delta z = 0\]
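As a concrete check of the balance above, the sketch below evaluates the shaft work per unit mass for a simple pumping problem. All numerical values (density, elevation change, friction loss) are hypothetical, and SI units (J/kg) are used rather than the J/g of the variable list.

```python
# Sketch: evaluate the mechanical energy balance per unit mass of fluid.
# SI units are used here (J/kg); all numbers are hypothetical.

g = 9.81  # gravitational acceleration (m/s^2)

def shaft_work_per_mass(dP, rho, u_in, u_out, dz, F_hat):
    """Ws/m = dP/rho + (u_out^2 - u_in^2)/2 + g*dz + F_hat (J/kg)."""
    return dP / rho + (u_out**2 - u_in**2) / 2.0 + g * dz + F_hat

# Water pumped to a tank 12 m higher, both surfaces open to atmosphere
# (dP = 0), negligible velocity change, 30 J/kg friction loss (assumed).
w_s = shaft_work_per_mass(dP=0.0, rho=1000.0, u_in=0.0, u_out=0.0,
                          dz=12.0, F_hat=30.0)
print(w_s)  # shaft work the pump must supply, J/kg
```

With no friction, no elevation change, and no velocity change, the expression reduces to the Bernoulli equation, as the text notes.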
An investigation on helical gear pair stresses incorporating misalignment and detail modification
A finite element approach to investigating the dynamic behavior of helical gear pairs (HGPs) by incorporating misalignment error and detail modifications of tip relief and face-width crowning is
presented. Based on a C code and derived tooth profile formulas, fine finite element models of helical gear pairs (HGPs) can be constructed parametrically. Also, all elements on the driven teeth
surfaces are numbered to identify individual dynamic stresses. After analysis settings, the dynamic contact and fillet bending stresses of a theoretic HGP are first calculated. Then, the maximum
stresses with misalignment error are also obtained. Finally, the effect of tooth modification on the dynamic stresses of HGPs with misalignment errors is discussed. The results show that modification with tip relief and face-width crowning can reduce the dynamic response caused by the impact contact of HGPs.
1. Introduction
Dynamic analyses of gears are required in high-precision, high-speed, and low-vibration applications. Still, obtaining gear dynamic responses remains a demanding task even today, owing to the complicated considerations required, such as numerous geometric design parameters, errors, manufacturing modifications, backlash, deformation, and even lubrication and wear behaviors. Discrete models with equivalent mass, damping, and spring elements are the common approach; only a few of the plentiful publications can be cited here [1-3]. Nevertheless, for deeper analysis or multiphysics coverage, discrete models hardly satisfy the requirements. Therefore, continuous geometric models, in which detailed geometric design is applied, are taken into consideration.
Furthermore, with the advancement of computing techniques, gear dynamics employing continuum models has become attainable. The works [4, 5] used commercial finite element (FE) packages to find the static stress and deformation of gears. Using the dynamic stiffness method, Huang et al. [6] analyzed the tip displacement and fillet strain of a spur gear pair modeled by several non-uniform Timoshenko beams. More recently, continuum models were used in further gearing dynamic analyses [7, 8]. Finite element preparation of an HGP is a time-consuming burden due to profile complexity, local sensitivity of contact points, modification, and errors. Methods to obtain hexahedron elements of HGPs using the tooth profile [9], and element creation for wider considerations [10, 11], were presented for gearing dynamics. Additionally, Yuksel and Kahraman [12] used an FE package to calculate the dynamic meshing forces of planetary gearing and predicted the wear on its gear teeth.
The involute profile is sensitive to manufacturing and elastic errors. In practice, the negative behavior can be diminished by tooth modification. Generally, two kinds of modifications are often adopted, applied in the profile and face-width directions, respectively. A tip relief can be viewed as a kind of profile modification to avoid tooth interference at the initial contact of tooth pairs. Recently, an FE package dedicated to gears has been under development, but analysis limitations remain [13]. Therefore, an FE approach using the general-purpose FE software LS-DYNA is proposed here [14]. The dynamic behavior of HGPs including tip relief and crowning modifications is discussed.
2. Errors and modifications of gears
Assembly misalignment and tip interference are of particular concern. Misalignment error causes edge contact of the teeth and makes the contact deviate from the theoretic conjugate relation. As shown in Fig. 1, angle $\gamma$ is a misalignment error and ${\gamma }_{y}$ is an angular error around the $y$-axis. Besides, tip interference happens at the moment when the mating tooth pair starts its surface contact: the tooth tip of the driven gear cannot engage the conjugate tooth surface smoothly, and thus an extreme contact impact may occur.
Fig. 1 A misalignment error of gear pair represented by angle γ
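The angular error ${\gamma }_{y}$ about the $y$-axis can be pictured as a small rotation of the nominal gear axis. A minimal sketch, not from the paper's code; the axis vector and the 0.1° error value are illustrative assumptions:

```python
# Sketch: a misalignment error gamma_y as a rotation of the nominal
# gear axis (0, 0, 1) about the y-axis.  Illustrative values only.
import math

def misaligned_axis(gamma_y):
    """Apply the y-axis rotation matrix Ry(gamma_y) to (0, 0, 1)."""
    c, s = math.cos(gamma_y), math.sin(gamma_y)
    # Ry = [[c, 0, s], [0, 1, 0], [-s, 0, c]]; Ry @ (0, 0, 1) = (s, 0, c)
    return (s, 0.0, c)

axis = misaligned_axis(math.radians(0.1))  # a 0.1-degree angular error
print(axis)
```

For realistic misalignments the angle is tiny, so the rotated axis stays nearly (0, 0, 1), yet the resulting edge contact shifts the load toward one end of the face-width.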
Modification is the method of deliberately adjusting the tooth geometry to compensate for the effects of manufacturing and operation errors. First, a parabolic crowning, applied in the face-width direction, can be defined by a crowning factor ${C}_{c}$ as:
${C}_{c}={\left(\frac{F}{2\mathrm{cos}\beta }\right)}^{2}\delta ,$
in which $F$ is the face-width, $\beta$ is the helix angle, and $\delta$ is the tool cutting depth for crowning. Then, a tip relief modification is applied by relieving the tooth thickness near the tip region so as to smooth the tooth engagement, especially at the instant meshing starts. As shown in Fig. 2(b), the pressure angle ${\alpha }_{2}$ on a rack is given as:
${\alpha }_{2}={\mathrm{tan}}^{-1}\left(\frac{w+h\mathrm{tan}{\alpha }_{n}}{w\mathrm{cos}\beta }\right),$
in which ${\alpha }_{n}$ is the normal pressure angle of the rack, $w$ is the amount of modification in the tangential direction, and $h$ is the modification in the profile direction. Fine detail of the tooth geometry can be determined by adjusting the parameters ${C}_{c}$, $w$, and $h$ to meet design requirements.
Fig. 2 Modifications of gear tooth: a) parabolic crowning; b) tip relief
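The two modification formulas above can be evaluated directly. The sketch below only restates the expressions for ${C}_{c}$ and ${\alpha }_{2}$ in code form; all parameter values ($F$, $\delta$, $w$, $h$, ${\alpha }_{n}$) are hypothetical, with $\beta$ set to the 30° of the example gear data.

```python
# Sketch: evaluate the crowning factor and tip-relief pressure angle
# formulas from the text.  All parameter values are hypothetical.
import math

def crowning_factor(F, beta, delta):
    """C_c = (F / (2 cos(beta)))^2 * delta."""
    return (F / (2.0 * math.cos(beta)))**2 * delta

def tip_relief_angle(w, h, alpha_n, beta):
    """alpha_2 = atan((w + h tan(alpha_n)) / (w cos(beta)))."""
    return math.atan((w + h * math.tan(alpha_n)) / (w * math.cos(beta)))

beta = math.radians(30.0)  # helix angle (matches the example gear data)
Cc = crowning_factor(F=30.0, beta=beta, delta=0.01)
a2 = tip_relief_angle(w=0.5, h=0.3, alpha_n=math.radians(20.0), beta=beta)
print(Cc, math.degrees(a2))
```

Sweeping $\delta$, $w$, and $h$ through such a helper is one way to generate candidate modified profiles for the FE model.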
3. Finite element models
The process of tooth profile derivation and element creation for a helical gear, as shown in Fig. 3(a), is detailed in the previous work [11]. Only a brief description is given here: a gear tooth is divided into several regular quadrilateral areas. Six such area blocks, divided quadrilaterally, are shown in Fig. 3(b). By subsequently assigning the distances or numbers of nodes belonging to the curves of the involute, fillet, tip relief, and other fractions, the node distribution or density of the meshing elements can be conveniently adjusted. An example of the created nodes is given in Fig. 3(c). Benefiting from the above process, using the profile formulas with a C code, the 3D meshing models can be effortlessly built from the organized and regular nodes, and all the element information of the helical gears can be shown using a preprocessor of LS-DYNA. Finally, the model of an HGP with crowning and tip relief modifications is obtained as shown in Fig. 4. The 3D element model of the gear pair is constructed using the following gear data: normal module ${m}_{n}=$ 3.175 mm, tooth number $=$ 28, normal pressure angle ${\alpha }_{n}$, helix angle $\beta =$ 30°, and face-width $F=$ 30 mm.
Fig. 3 Gear element generation: a) model of helical gear applying tip relief and crowning modifications, b) six blocks divided quadrilaterally, c) nodal distribution
Fig. 4 3D element model of an HGP with gears, shafts, and other components
Using the FE model in Fig. 4 and assigning the required inputs, the gear dynamic response can be solved. Figure 5(a) shows the dynamic stress distribution of the driven gear at one instant. In this study, two driven teeth are discussed, denoted tooth 1 and tooth 2. Note that before the dynamic response reaches a steady state, three leading tooth pairs have already passed.
The dynamic stress of all elements on the teeth is to be investigated. Therefore, as shown in Fig. 5(b), all elements on an analysis tooth surface are expressed in an element matrix of 32 by 30, indexed along the profile and face-width directions, respectively. In the profile direction, the index increases from the tip to the bottom. In the face-width direction, it runs from the side that meshes late, called the late side, to the other side, called the early side. The elements of the matrix are categorized into three regions, namely the tip region, the flank region, and the fillet region, shown in Fig. 5(b). First, the elements of the tooth tip, at the first row (blank circles), tend to exhibit significantly large contact stresses, especially when no modification is used, since contact impact occurs due to tip interference. Therefore, the tooth-tip elements are depicted separately; the maximum contact stress almost always occurs on these elements. The second region is the tooth face or flank, comprising the elements from row 2 to 23 (black circles). The third region is the gear fillet region, from row 24 to 32.
In order to discuss the largest stress values separately, a symbol system is introduced. In region 1, the tooth tip, the largest dynamic stress is denoted 1EA for the first tooth pair and 2EA for the second. When a tip relief modification is applied, this element is the first element at the start of active profile (SAP). For region 2, the maximum contact stress is denoted 1E and 2E for tooth pairs 1 and 2, respectively. For region 3, the fillet region, 1R and 2R represent the maximum stresses for tooth 1 and tooth 2, respectively.
Finally, symbols representing the stresses of all elements on the tooth surfaces of the two driven teeth can be obtained. Take an element numbered "1EA-1-30" as an example. The first symbol, "1EA", indicates that the element belongs to region 1 (EA) on the first tooth (1); the second symbol, "1", means it is the first element along the profile direction from the SAP; the third symbol, "30", denotes that it is the 30th element along the face-width direction. Accordingly, "1EA-1-30" indicates an element located in the tip region (region 1) of the first tooth pair, at position 1 in the profile direction and position 30 in the face-width direction. Similarly, "2E-10-2" identifies an element in the face or flank region (region 2) of the second tooth pair that is the 10th element from the tooth tip in the profile direction and the 2nd element from the late side in the face-width direction. Likewise, "2R-5-9" is an element in the fillet region (region 3) on the second tooth, at position 5 in the profile direction and position 9 in the face-width direction.
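The labeling convention can be summarized by a small helper that decodes an element symbol into its tooth, region, and matrix indices. The parser below is a hypothetical illustration of the scheme described above, not part of the paper's C code:

```python
# Sketch: decode the element labels used in the text, e.g. "1EA-1-30".
# Region codes follow the description above; the helper is illustrative.

REGIONS = {"EA": "tip", "E": "flank", "R": "fillet"}

def parse_element(label):
    """Split 'tEA-r-c' into (tooth, region, profile_row, facewidth_col)."""
    head, row, col = label.split("-")
    tooth = int(head[0])            # leading digit: tooth pair 1 or 2
    region = REGIONS[head[1:]]      # remaining letters: region code
    return tooth, region, int(row), int(col)

print(parse_element("1EA-1-30"))  # (1, 'tip', 1, 30)
print(parse_element("2E-10-2"))   # (2, 'flank', 10, 2)
print(parse_element("2R-5-9"))    # (2, 'fillet', 5, 9)
```

Such a decoder makes it straightforward to collect, say, all fillet-region stresses of tooth 2 from a list of labeled results.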
Fig. 5 a) 2 observed teeth on driven gear, b) element matrix of 32×30 for a tooth surface
4. Results and discussions
4.1. Stresses of a theoretic HGP
The dynamic stresses of a theoretic (unmodified) HGP at 10000 rpm without misalignment error are discussed first. The gear data is that given for the model in Fig. 4. The two tooth pairs do not behave identically in the dynamic analysis, so the von Mises stresses of both adjacent tooth pairs are exhibited. The dynamic contact and bending stresses of critical elements in the three regions of the two teeth are shown in Fig. 6. In addition, the maxima of the dynamic stresses of the three regions are illustrated in 3D in Fig. 7, in which significant peaks are observed at the tooth tips.
The change of the dynamic stresses of the driven gear for the first tooth pair during the meshing period is shown in Fig. 6(a). The maximum contact stress is 703 MPa, appearing at element "1EA-1-30" at a rotation angle of 2° after the first tooth pair starts meshing, with contact very near the early side of the face width. This largest contact stress peak is caused by the impact and singular edge contact at the initial contact of the tooth pair, during which the driven tooth tip may interfere with the driving tooth flank because of the transmission error. Another significant stress peak of 584 MPa occurs at element "1EA-1-1" at a rotation angle of 20°. The cause is similar to that at 2°, but the impact is less severe. Even without considering manufacturing errors in HGPs, flexible deformation that deviates from the theoretically conjugate relation and induces edge contact still causes significant tip interference.
Subsequently, the dynamic contact stresses of the 1st tooth pair on the tooth flank surface (region 2) are discussed. In this region, the largest stress is 156 MPa, appearing at element "1E-10-14" at an angle of 19.4°. The contact location of element "1E-10-14" is near the middle of the face width, which is near the highest point of single tooth contact (HPSTC) and has the shortest line of action. The dynamic contact stress of this element during the meshing period is shown in Fig. 6(c). The final discussion for the 1st tooth concerns the dynamic bending stresses at the tooth fillet in region 3. As shown in Fig. 6(e), the largest bending stress of the 1st tooth is 58 MPa, appearing at element "1R-5-9" at a rotation angle of 19.7°, which is very close to the HPSTC instant at which the maximum stress in the flank region mentioned above occurs. The critical dynamic stresses of the 2nd tooth pair are shown in Figs. 6(b), 6(d), and 6(f). Although the values for the two tooth pairs differ noticeably, their trends are alike, and the angles at which the peaks occur are almost the same.
Fig. 6. The maximum contact and bending stresses in a mesh period of the theoretic HGP at 10000 rpm
Fig. 7. The maxima of the dynamic stresses of the theoretic HGP
5. Stresses of the theoretic HGP under misalignment errors
Next, the dynamic characteristics of an unmodified HGP at 10000 rpm under various misalignment errors are discussed. Only the maxima of the dynamic contact and bending stresses in the three regions are shown in 3D, in Figs. 8(a) and 8(b).
The first case, a small misalignment of ${\gamma }_{y}=$ 0.0053°, is discussed. An HGP with a very small misalignment behaves similarly to the case without misalignment error: rather large peaks appear right after meshing starts and just before it ends. As noted in Figs. 8(a) and 8(b), the maximum stresses of the two driven teeth in the three regions of tooth tip, flank, and fillet are 715, 194, and 64 MPa, respectively. The maximum stresses for the two larger misalignments are shown in Figs. 8(c)-8(f). For ${\gamma }_{y}=$ 0.01374° in Figs. 8(c) and 8(d), the maximum stresses in the tooth tip, flank, and fillet regions are 533, 185, and 60 MPa, respectively. For ${\gamma }_{y}=$ 0.0214° in Figs. 8(e) and 8(f), they are 485, 206, and 70 MPa, respectively. Thus, the assigned misalignments do not significantly affect the maximum stresses in the flank and fillet regions, except that they tend to slightly increase the element stresses near the late side. Notably, the maximum contact stresses decrease as the assigned misalignment increases. It is surprising that an adequate misalignment can somewhat reduce the impact contact at meshing start on the early side of the HGP: for example, from 703 MPa for the perfectly aligned HGP in Fig. 7(a) to 485 MPa for ${\gamma }_{y}=$ 0.0214° in Fig. 8(e). However, misalignment may complicate the dynamic behavior of an unmodified HGP. As is well understood, the tooth contact pattern critically affects the dynamic transmission performance of gear systems, which is why various modification techniques have been established in national and international standards. All the maximum values in the three regions of the two driven teeth from the above analyses of the unmodified HGP, with and without misalignment, are summarized in Fig. 9, which shows the reduction of the contact stress peak values at a misalignment of ${\gamma }_{y}=$ 0.01374°.
6. Stresses of HGPs with modifications applied
Finally, the dynamic stresses of HGPs at 10000 rpm with both tip relief and face-width crowning modifications applied are investigated. The tip relief values are $h=0.2{m}_{n}$ and $w=0.03{m}_{n}$ in the profile and face-width directions, respectively, as formulated in Eq. (1), and the crowning factor is ${C}_{c}=$ 0.00002 in Eq. (2). The resulting maxima of the dynamic surface contact stresses and fillet bending stresses of the two driven teeth are shown in Fig. 10.
Fig. 8. 3D illustration of the maximum stresses of the two teeth with misalignment errors
a) Tooth 1 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.0053°)
b) Tooth 2 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.0053°)
c) Tooth 1 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.01374°)
d) Tooth 2 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.01374°)
e) Tooth 1 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.0214°)
f) Tooth 2 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.0214°)
Fig. 9. The maxima of the dynamic stresses in Fig. 8 for the unmodified HGP with misalignments
Fig. 10. 3D illustration of the maximum dynamic contact and bending stresses of the two teeth for an HGP with various misalignment errors, with tip relief and crowning modifications applied
a) Tooth 1 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.0053°)
b) Tooth 2 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.0053°)
c) Tooth 1 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.01374°)
d) Tooth 2 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.01374°)
e) Tooth 1 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.0214°)
f) Tooth 2 (${\gamma }_{x}=$ 0°, ${\gamma }_{y}=$ 0.0214°)
Fig. 11. The maxima of the dynamic stresses in Fig. 10 for the modified HGP
Compared with the results of the unmodified HGP shown in Fig. 8, tip relief effectively eliminates the very large tip contact stress peaks that appear near the meshing start and end of the tooth pairs. The maxima of the dynamic contact and bending stresses of the two driven teeth in the crowned HGPs are as follows. For a misalignment error of ${\gamma }_{y}=$ 0.0053°, the maximum stresses in the tooth tip, face and flank, and fillet regions are 270, 237, and 89 MPa, respectively; they are 271, 231, and 86 MPa for ${\gamma }_{y}=$ 0.01374° and 260, 237, and 96 MPa for ${\gamma }_{y}=$ 0.0214°. Compared with the unmodified results in Fig. 8, applying the tip relief and crowning modifications effectively decreases the flank contact and fillet bending stresses. The maximum contact stress at the tooth tip is decreased by 62.1 %, from 715 MPa to 271 MPa. Tip relief together with face-width crowning mitigates the impact contact near the meshing start and end of the tooth pairs. The maximum contact stresses on the tooth faces and flanks increase from 206 MPa to 246 MPa, but not significantly. Finally, all the maximum values from this analysis of the HGP with tip relief and crowning modifications, with and without misalignment errors, are depicted in Fig. 11 and demonstrate significantly better dynamic performance than the results in Fig. 9.
7. Conclusions
The dynamic behavior of the surface contact and fillet bending stresses of HGPs, including the tooth modifications of tip relief and crowning, is analyzed using an FE package. An element-matrix numbering scheme is used to identify the critical maximum stresses. In addition to the stress response histories, the maxima of the dynamic stresses in the three regions on the tooth surfaces are illustrated in 3D. Even without misalignment error, significant dynamic contact stress peaks of the tooth pairs at the meshing start and end of a theoretic HGP are observed, and the dynamic responses of the various tooth pairs in an HGP are not completely identical. Furthermore, the dynamic characteristics of the theoretic HGP under various horizontal misalignment errors are discussed. Unexpectedly, a certain amount of misalignment may slightly reduce the impact contact at the meshing start on the early side. Finally, design parameters for the tooth modifications are given, and the influences of the tip relief and crowning modifications on the dynamic stresses in HGPs with and without assembly errors are investigated. Tooth modification with both tip relief and face-width crowning effectively improves the dynamic characteristics degraded by the impact contact of the HGP. An optimization study to facilitate tip relief and crowning modifications for spur and helical gear pairs incorporating transmission errors will be undertaken in future work.
About this article
28 February 2013
helical gear
finite element
misalignment error
Copyright © 2013 Vibroengineering
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Problem E
In chess, the bishop is the piece that can only move diagonally. It is well known that a bishop can reach only squares of one color, but it can reach all of them in some number of moves (assuming no other pieces are on the board). You are given two coordinates on a chessboard and should determine whether a bishop can reach one square from the other, and how. Coordinates in chess are given by a letter ('A' to 'H') and a number (1 to 8). The letter specifies the column, the number the row on the chessboard.
The input starts with the number of test cases. Each test case consists of one line containing the start position $X$ and the end position $Y$. Each position is given by two space-separated characters: a letter for the column and a number for the row. There are no duplicate test cases in one input.
Output one line for every test case. If it is not possible to move a bishop from $X$ to $Y$ in any number of moves, output 'Impossible'. Otherwise, output one possible move sequence from $X$ to $Y$: first the number $n$ of moves (allowed to be $4$ at most), followed by the $n + 1$ positions that describe the path the bishop takes. Every character is separated by one space. There are many possible solutions; any with at most $4$ moves will be accepted. Remember that in a chess move the chessman (here, the bishop) has to change its position for the move to be valid (i.e., two consecutive positions in the output must differ).
Sample Input 1:
3
E 2 E 3
F 1 E 8
A 3 A 3

Sample Output 1:
Impossible
2 F 1 B 5 E 8
0 A 3
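As an illustrative sketch (not part of the problem statement, and function names are my own), the reachability test and path construction can be implemented directly: squares of the same color have matching (column + row) parity, same-diagonal squares take one move, and any other same-color pair takes two moves through a square diagonal to both endpoints.

```python
def bishop_path(start, end):
    """Return one shortest bishop path in the problem's output format.
    Positions are (column_letter, row_digit) pairs, e.g. ('F', '1')."""
    x1, y1 = ord(start[0]) - ord('A') + 1, int(start[1])
    x2, y2 = ord(end[0]) - ord('A') + 1, int(end[1])

    def fmt(x, y):
        return f"{chr(ord('A') + x - 1)} {y}"

    # Bishops stay on one color: colors match iff (x + y) parities match.
    if (x1 + y1) % 2 != (x2 + y2) % 2:
        return "Impossible"
    if (x1, y1) == (x2, y2):
        return f"0 {fmt(x1, y1)}"                    # already there: zero moves
    if abs(x1 - x2) == abs(y1 - y2):
        return f"1 {fmt(x1, y1)} {fmt(x2, y2)}"      # same diagonal: one move
    # Otherwise two moves suffice: find a square diagonal to both endpoints.
    for x in range(1, 9):
        for y in range(1, 9):
            if abs(x - x1) == abs(y - y1) and abs(x - x2) == abs(y - y2):
                return f"2 {fmt(x1, y1)} {fmt(x, y)} {fmt(x2, y2)}"

print(bishop_path(('E', '2'), ('E', '3')))  # Impossible
print(bishop_path(('F', '1'), ('E', '8')))
print(bishop_path(('A', '3'), ('A', '3')))  # 0 A 3
```

In the two-move branch the intermediate square can never coincide with the start or end, since a zero-distance match would imply the one-move case already handled above.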
Second Highest Value In List Python With Code Examples
In this article, we will look at how to find the second highest value in a list in Python, with code examples.
How do you find the top 3 values in Python?
If you want to get the indices of the three largest values, you can just slice the sorted list. Sorting from smallest to largest is also supported by using the parameter reverse=False.
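A short sketch of the ideas above (the sample list is my own choice): slicing a descending sort, using heapq.nlargest from the standard library, and recovering the indices of the three largest values.

```python
import heapq

values = [10, 20, 4, 45, 99]

# sort a copy in descending order and slice off the first three
top3 = sorted(values, reverse=True)[:3]
print(top3)  # [99, 45, 20]

# heapq.nlargest gives the same result without fully sorting the list
print(heapq.nlargest(3, values))  # [99, 45, 20]

# indices of the three largest values, largest first
top3_idx = sorted(range(len(values)), key=values.__getitem__, reverse=True)[:3]
print(top3_idx)  # [4, 3, 1]
```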
# Python program to find second largest
# number in a list

# list of numbers
list1 = [10, 20, 4, 45, 99]

# new_list is a set of list1
new_list = set(list1)

# removing the largest element from the temp set;
# elements in the original list are not changed
new_list.remove(max(new_list))
# print(list1)

# the largest remaining element is the second largest
print(max(new_list))  # 45
# Python program to find second largest
# number in a list

# list of numbers - length of
# list should be at least 2
list1 = [10, 20, 4, 45, 99]

mx = max(list1[0], list1[1])
secondmax = min(list1[0], list1[1])
n = len(list1)
for i in range(2, n):
    if list1[i] > mx:
        secondmax = mx
        mx = list1[i]
    elif list1[i] > secondmax and \
         mx != list1[i]:
        secondmax = list1[i]

print("Second highest number is : ", secondmax)
How do you find the 2nd greatest element in an array?
The simple approach to find the second largest element in an array is to run two loops. The first loop finds the largest element in the array. After that, the second loop finds the largest element in the array that is smaller than first_largest.
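The two-loop approach just described can be sketched as follows (the function name is my own):

```python
def second_largest_two_loops(arr):
    """Two passes: the first finds the largest value, the second finds
    the largest value strictly smaller than it."""
    first = arr[0]
    for x in arr:            # loop 1: largest element
        if x > first:
            first = x
    second = None
    for x in arr:            # loop 2: largest element below `first`
        if x < first and (second is None or x > second):
            second = x
    return second            # None if all elements are equal

print(second_largest_two_loops([10, 20, 4, 45, 99]))  # 45
```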
How do you find the second largest number without an array?
Logic to find the first and second biggest of N numbers, without using arrays: first we ask the user to enter the length of the numbers list. If the user enters a limit value of 5, then we ask the user to enter 5 numbers. Once the user enters the limit value, we iterate a while loop until the limit value reaches 0.
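The core of that logic, tracking only two running values instead of an array, might look like this (an illustrative sketch that takes any iterable in place of interactive input; the function name is my own):

```python
def first_and_second_biggest(numbers):
    """Track the two biggest values while reading numbers one at a time,
    without storing them all."""
    first = second = float("-inf")
    for x in numbers:
        if x > first:
            second = first   # old maximum becomes the runner-up
            first = x
        elif x > second:
            second = x
    return first, second

# works on any iterable, including a stream that is never stored whole
print(first_and_second_biggest(iter([7, 2, 9, 4])))  # (9, 7)
```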
How do I find the second largest integer in Python?
By removing the maximum number and then using the max() function to get the second-largest number.
How do I find the second largest value in a list Python?
How to Find the Second Largest Element in a List in Python
• Sort the list and save the ordered list somewhere.
• Remove the largest element.
• Get the remaining largest element.
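The three steps above can be sketched directly (the function name is my own):

```python
def second_largest_sorted(lst):
    ordered = sorted(lst)   # sort the list and save the ordered copy
    ordered.pop()           # remove the largest element
    return max(ordered)     # get the remaining largest element

print(second_largest_sorted([10, 20, 4, 45, 99]))  # 45
```

Note that pop() removes only one occurrence, so if the maximum appears twice this returns the maximum again; use a set first (as in the earlier example) to get the second-largest distinct value.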
How do you find the third highest number in an array?
• First, iterate through the array and find maximum.
• Store this as first maximum along with its index.
• Now traverse the whole array finding the second max, excluding the maximum element.
• Finally traverse the array the third time and find the third largest element i.e., excluding the maximum and second maximum.
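A compact sketch of the three-pass idea above, excluding by value rather than by index (so it assumes at least three distinct values; the function name is my own):

```python
def third_largest(arr):
    """Three passes over the array, each excluding the maxima found so far."""
    first = max(arr)                              # pass 1: the maximum
    second = max(x for x in arr if x < first)     # pass 2: exclude the maximum
    third = max(x for x in arr if x < second)     # pass 3: exclude both
    return third

print(third_largest([10, 20, 4, 45, 99]))  # 20
```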
How do you find the third largest number in Python?
• First, iterate through the array and find maximum.
• Store this as first maximum along with its index.
• Now traverse the whole array finding the second max, excluding the maximum element.
• Finally traverse the array the third time and find the third largest element i.e., excluding the maximum and second maximum.
How do you find the second largest number in an array algorithm?
• Step 1 − Declare and read the number of elements.
• Step 2 − Declare and read the array size at runtime.
• Step 3 − Input the array elements.
• Step 4 − Arrange numbers in descending order.
• Step 5 − Then, find the second largest and second smallest numbers by using an index.
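Steps 4 and 5 of this outline can be sketched with a descending sort and indexing (the sample array is my own choice):

```python
arr = [10, 20, 4, 45, 99]
desc = sorted(arr, reverse=True)        # step 4: arrange in descending order
second_largest = desc[1]                # step 5: second entry from the front
second_smallest = desc[-2]              # step 5: second entry from the back
print(second_largest, second_smallest)  # 45 10
```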
How do you find the highest number in a list Python?
In Python, there is a built-in function max() you can use to find the largest number in a list. To use it, call the max() on a list of numbers. It then returns the greatest number in that list.
How do I find the second largest number in Python 3 numbers?
Python Program to Find Second Largest Number in a List
• Take in the number of elements and store it in a variable.
• Take in the elements of the list one by one.
• Sort the list in ascending order.
• Print the second last element of the list.
• Exit.
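An illustrative version of these steps, using a fixed list in place of interactive input():

```python
# steps 1-2: the elements, here already read into a list
list1 = [10, 20, 4, 45, 99]
list1.sort()        # step 3: sort the list in ascending order
print(list1[-2])    # step 4: print the second last element (45)
```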