content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Mimo-ofdm system using eigenbeamforming method - Choi, In-kyeong
This application claims priority to and the benefit of Korea Patent Application Nos. 2003-98216 and 2003-98217 filed on Dec. 27, 2003 in the Korean Intellectual Property Office, the entire content of
which is incorporated herein by reference.
(a) Field of the Invention
The present invention relates to a Multiple Input Multiple Output (MIMO)-Orthogonal Frequency Division Multiplexing (OFDM) system. More specifically, the present invention relates to a MIMO-OFDM
system using eigenbeam forming in a downlink.
(b) Description of the Related Art
A beam forming method has been used to obtain the antenna array gain to improve performance. Also, the beam forming method can be used to use a space domain in a downlink channel of a MIMO system.
Generally, a base station is required to have instantaneous channel information in a downlink to apply a closed-loop downlink beam forming method. In a Frequency Division Duplexing (FDD) mode, a
mobile station is required to feedback the instantaneous information to the base station, since frequency bands are different between uplink channel and downlink channel. Here, when the amount of the
feedback information is large, the feedback information hinders the closed loop beam forming. Thus, a method capable of reducing the feedback information is required to be investigated.
The blind beam forming method adaptively forms a downlink beam by measuring an uplink channel, under the assumption that spatial statistical properties of the channels are similar, since
constructions for transferring conditions are similar in both uplink and downlink. The method does not require feedback information since the method uses reciprocity of the channels; however the
method does not satisfy diversity gain since the beam forming vector does not include the instantaneous channel variation. To obtain space diversity gain, it is necessary to feed back the
instantaneous channel information in the downlink. Here, the amount of feedback information increases, and the feedback rate for tracking channel variation increases when the number of transmit
antennas increases. Thus, it is difficult to apply the beam forming method when the number of transmit antennas is large or the speed of the mobile station is high. To solve the above problems,
several methods are proposed, as follows.
An eigenbeam forming method proposed by the 3^rd Generation Partnership Project (3GPP) uses spatial correlation and selection diversity. The spatial correlation can allows long-term feedback with
much feedback information, and the selection diversity can requires a very small amount of feedback information when short-term feedback is required in accordance with the instantaneous channel
variation. That is, according to the eigenbeam forming method, the mobile terminal finds a dominant eigenmode by using a spatial covariance matrix of which a short term update is not necessary and
feeds back the dominant eigenmode; and the mobile terminal feeds back the strongest eigenmode in the uplink by using the instantaneous channel variation among dominant eigenmodes. The base station
selects the strongest eigenmode and transmits the signals. Thus, the eigenbeam forming method can obtain the selection diversity gain in addition to the signal-to-noise ratio gain.
From the situation that the antenna array of the base station is generally located on the top of some buildings, the down link channel exhibits to have a high spatial correlation or few dominant
eigenmodes. Since there are no local scatters around the antenna array of the base station, the signal can be spatially selectively transmitted with only few directions. It is regarded that the
eigenmode generates an independent path between the base station and the mobile station. The eigenbeam forming method can be effectively used in this condition.
However, when the eigenbeam forming method is applied to the OFDM system, each subcarrier of OFDM is selectively faded at different frequencies in the OFDM system. Thus, each subcarrier has a
different beam forming vector and all subcarriers are required to feed back their beam forming vectors. In this case, the amount of the feedback information becomes very much larger than that of the
single subcarrier, and the feedback information provides a severe burden to the system.
It is an advantage of the present invention to reduce an amount of feedback information for eigenbeam forming in an OFDM system.
To achieve the advantage, one aspect of the present invention is a Multiple Input Multiple Output (MIMO)-Orthogonal Frequency Division Multiplexing (OFDM) system comprising a transmitter with L
transmit antennas, a receiver with M receive antennas, and an uplink feedback device for providing information of the receiver to the transmitter, wherein the transmitter comprises: a serial/parallel
converter for converting continuously inputted symbols of the number of subcarriers to K parallel signals; a signal reproducer for reproducing K parallel signals by the number of transmit antennas L;
an eigenmode generator for generating an eigenbeam of the reproduced signals outputted from the signal reproducer at each subcarrier, on the basis of N[f ]eigenbeam forming vectors which are fed back
long-term and information of a best eigenbeam forming vector at each subcarrier which is fed back short-term, through the feedback device; and a plurality of inverse Fourier converters for receiving
the signals outputted from the eigenmode generator and generating an OFDM symbol.
Another aspect of the present invention is an MIMO-OFDM system comprising: a serial/parallel converter for converting continuously inputted symbols of the number of subcarriers to K parallel signals;
a signal reproducer for reproducing K parallel signals outputted from the serial/parallel converter by the number of extant transmit antennas; an eigenbeam calculator for calculating an instantaneous
channel covariance and a spatial covariance matrix by using the uplink channel information, providing N[f ]dominant eigenbeam forming vectors from the spatial covariance matrix, and providing the
eigenvalue of the instantaneous channel covariance; an eigenmode selector for selecting an eigenmode of which the eigenvalue of the instantaneous channel covariance is maximum among N[f], whenever N
[f ]eigenbeam forming vectors are inputted from the eigenbeam calculator and the instantaneous channel covariance is updated; and a plurality of inverse Fourier converters for receiving the signals
outputted from the eigenmode selector and generating an OFDM symbol.
Another aspect of the present invention is a MIMO-OFDM system comprising a transmitter with L transmit antennas, a receiver with M receive antennas, and an uplink feedback device for providing
information of the receiver to the transmitter, wherein the transmitter comprises: a serial/parallel converter for converting continuously inputted symbols of the number of subcarriers to K parallel
signals; a signal reproducer for reproducing K parallel signals outputted from the serial/parallel converter by the number of transmit antennas L; an eigenmode generator for generating one eigenbeam
for each group of subcarriers, on the basis of long-term feedback information corresponding to N[f ]eigenbeam forming vectors and short-term feedback information corresponding to a group of
subcarriers which are provided through the feedback device; and a plurality of inverse Fourier converters for receiving the signals outputted from the eigenmode generator and generating an OFDM
Another aspect of the present invention is a beam forming method for a MIMO-OFDM system comprising a transmitter with L transmit antennas and a receiver with M receive antennas, comprising: (a)
converting continuously inputted symbols of the number of subcarriers to K parallel signals; (b) reproducing K parallel signals by the number of transmit antennas L; and (c) generating one eigenbeam
for each group of subcarriers, on the basis of long-term feedback information corresponding to N[f ]eigenbeam forming vectors and short-term feedback information corresponding to the group of
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate an embodiment of the invention, and, together with the description, serve to explain the
principles of the invention.
FIG. 1 shows a MIMO-OFDM system according to a first exemplary embodiment of the present invention.
FIG. 2 shows a MIMO-OFDM system according to a second exemplary embodiment of the present invention.
FIG. 3 shows an eigenmode generator according to a third exemplary embodiment of the present invention.
FIG. 4 shows a beamforming weight vector determiner shown in FIG. 3.
FIG. 5 shows an eigenbeam calculator according to a third exemplary embodiment of the present invention.
In the following detailed description, only the preferred embodiment of the invention has been shown and described, simply by way of illustration of the best mode contemplated by the inventor(s) of
carrying out the invention. As will be realized, the invention is capable of modification in various obvious respects, all without departing from the invention. Accordingly, the drawings and
description are to be regarded as illustrative in nature, and not restrictive. To clarify the present invention, parts which are not described in the specification are omitted, and parts for which
similar descriptions are provided have the same reference numerals.
In a MIMO system with a single carrier wherein the number of transmit antennas is L and the number of receive antennas is M, a received signal vector r(q) at the q^th symbol period is as in the
following Equation 1.
r(q)=√{square root over (γ)}H(q)ws(q)+n(q) [Equation 1]
Here, r is a transmitting signal to noise ratio, r(q)=[r[1](q)r[2](q) . . . r[M](q)]^r, H(q) ([H(q)][m,l]=h[m,l, m=1, . . . , M,l=1, . . . ,L]) is a channel, and w(w=[w[1], . . . ,w[L]]^r) is a
weight vector. Here ∥w∥=1 is assumed. And a noise vector n(q), (n(q)=[n[1], . . . ,n[M]]^r) satisfying the equation E└n(q)n^H(q)┘=I means white nose in space.
The best suited weight vector with a maximum average signal to noise ratio of the received signal r(q) defined in Equation 1 is a maximum eigenvector corresponding to a maximum eigenvalue of a
spatial covariance matrix R[H](q)(R[H](q)=E└H^H(q)H(q)┘).
Assuming that R[H](q)=R[H ](R[H ]is referred to as a long-term spatial covariance matrix), the long-term spatial matrix R[H ]can be calculated as in the following Equation 2.
R[H]=(1−ρ)R[H]+ρR[st](q) [Equation 2]
Here, R[st](R[st](q)=H^H(q)H(q)) is an instantaneous channel covariance and ρ(0≦ρ≦1) is a forgetting factor. To obtain an eigenbeam forming vector, eigendecomposition can be applied to R[H ]as in the
following Equation 3.
R[H]=EDE^H [Equation 3]
Here, D is a diagonal matrix (D=diag(λ[1],λ[2], . . . ,λ[L]), E is a unit matrix (E=[e[1],e[2], . . . ,e[L]]), λ[1]≧λ[2]≧ . . . ≧λ[L ]are eigenvalues, and e[l ]is a eigenvector corresponding to the
eigenvalue λ[l].
When the base station finds an N[f](<L) number of the dominant eigenvector corresponding to the biggest eigenvalue on the basis of the feedback information, the long-term spatial covariance matrix is
gradually changed. Thus, the feedback rate for transmitting the eigenbeam vector is reduced, and the amount of feedback is also reduced. At this time, the eigenbeam vector is used as the weight
vector for forming a beam in the downlink, and the eigenbeam vector has orthogonality. Thus, the eigenbeam vector can generate an independent channel or transmitting mode to the mobile terminal, and
the eigenbeam vector is referred to the eigenmode. Here, the eigenvalue of the instantaneous channel covariance is calculated from the long-term feedback of N[f ]eigenvectors and fast fading, and the
information is fed back to select a best eigenvector with a maximum eigenvalue among N[f ]eigenvectors.
Meanwhile, assuming that the base station recognizes an N[f](<L) number of the dominant eigenvectors corresponding to the biggest eigenvalue on the basis of the feedback information, the eigenvalue
of the instantaneous channel covariance is calculated from the long-term feedback of N[f ]eigenvectors and fast fading, and the information is fed back to select a best eigenvector with a maximum
eigenvalue among N[f ]eigenvectors and the information is fed back short-term to select a best eigenvector with a maximum eigenvalue among N[f ]eigenvectors. The short-term feedback ratio is higher
than the long-term feedback ratio, but the amount of the feedback information is only log[2](N[f]), since the feedback is for simply selecting the best suited one among the N[f ]eigenvectors.
The best suited eigenvector w(q) for maximizing the instantaneous signal to noise ratio can be calculated from the maximum short-term channel gain, as in the following Equation 4. $w(q)=arg
maxen,n=1,2,… ,NfH(q)en2[Equation 4]$
Here, the short-term feedback ratio is higher than the long-term feedback ratio, but the amount of the feedback information is only log[2](N[f]) since the feedback is for simply selecting the best
suited one among the N[f ]eigenvectors.
However, as described above, when the beam forming method is applied to the OFDM system, each subcarrier may use a different beam forming vector, since each subcarrier is differently faded at the
frequency selective fading channel. Therefore, the amount of feedback increases when using different beam forming vectors.
The exemplary embodiment of the present invention shows that the spatial covariance matrix is the same for all subcarriers, and the eigenbeam forming method in which the amount of the feedback of the
eigenvector is reduced is significantly effective in the OFDM system.
It is assumed that K subcarriers are assigned in downlink of the MIMO-OFDM system, and the number of transmit antennas is L and the number of receive antennas is M.
Here, the K×1 OFDM symbol is s(t), and the L×1 weight vector w[k](t) is a beamforming vector for the k^th symbol s[k](t) of s(t). Then, the transmitting signal S(t) in a space and frequency domain is
as in the following Equation 5.
S(t)=[w[1](t) w[2](t) . . . w[K](t)]O(t) [Equation 5]
Here, D(t) is a diagonal matrix of data symbols, and D(t)=diag{s[1](t), s[2](t), . . . ,s[K](t)}.
The frequency response of a channel between the transmit antenna I and the receive antenna m is as in the following Equation 6. $h~k,m,l=∑p=0P-1hp,m,ld-j 2πpk/K[Equation 6]$
Here, {h[p,m,l]}[p=0,1, . . . P−1,m=1, . . . ,M,=1, . . . ,L ]is a channel impulse response (CIR) between the transmit antenna I and the receive antenna m; P is a length of the channel impulse
response, that is, the number of multipaths; and k is an index of the subcarriers. It is assumed that the channel impulse response is any sequence of which average is 0 and satisfies the following
Equation 7.
E└H[p]^HH[p′]┘=σ[h,p]^2R[H][]δ[p,p′] [Equation 7]
Here, └H[p]┘[m,l]=h[p,m,l], σ[h,p]^2 is a power delay profile of the channel impulse response, and $[RHp]s,t=1σh,p2E[hp,m,s*hp,m,t] s,t=1,2,… L.$
According to the Equation 7, it is assumed that an R[H ]normalized spatial covariance matrix in the time domain is the same at all multipaths, and there is no correlation between multipath
coefficients. The MIMO channel matrix corresponding to the k^th subcarrier can be described as in the following Equation 8.
└{tilde over (H)}[k]┘[m,l]={tilde over (h)}[k,m,l], m=1,2, . . . ,M, l=1,2, . . . ,L [Equation 8]
Then, the spatial covariance matrix of the channel {tilde over (H)}[k ]in the frequency domain can be described as in the following Equation 9.
R[{tilde over (H)}][k]=E└{tilde over (H)}[k]^H{tilde over (H)}[k]┘ [Equation 9]
The Equation 9 can be developed by using Equations 6 and 7 as in the following Equation 10. $[RH~k]s,t=∑m=1ME[h~k,m,sh~k,m,t] s,t=1,2,… L=∑m=1ME[(∑p=0P-1hp,m,sⅇ-j2 πpk/N)*(∑p′=
0P-1hp′,m,tⅇ-j2π p′k/N)]=∑m=1M∑p=0P-1E[hp,m,s*hp,m,t]=∑p=0P-1σh,p2[RHp]s,t=RH~=Δ[∑p=0P-1σh,p2RHp]s,t[Equation 10]$
Equation 10 shows that the spatial covariance matrix R[{tilde over (H)}][k ]of the channel {tilde over (H)}[k ]of each subcarrier is independent from any subcarrier k, and is always the same.
Each subcarrier has a different channel property in the OFDM system, since each subcarrier is selectively faded at different frequencies. However, Equations 6, 7, and 10 show that all subcarriers
have the same spatial covariance matrix.
Thus, it is not necessary to calculate a spatial covariance matrix at subcarriers, and the spatial covariance calculated at one subcarrier can be used for forming eigenbeams at the subcarriers. Thus,
the amount of calculation can be significantly reduced. Further, the averaging length can be reduced by calculating the spatial covariance in a two dimensional domain which uses the frequency domain
and the time domain of the subcarriers at the same time. The OFDM system can more actively meet a channel change. Further, since the subcarriers have the same eigenvector group, the amount of
feedback information is properly reduced and becomes the same as in the case of the system with the single subcarrier. Thus, the eigenbeam forming method can be easily applied to the OFDM system.
Hereinafter, a first exemplary embodiment of the present invention is described in detail with reference to appended drawings.
FIG. 1 shows a MIMO-OFDM system according to a first exemplary embodiment of the present invention. FIG. 1 is a block diagram for describing the idea and construction of the present invention in an
FDD mode.
As referred to in FIG. 1, the MIMO-OFDM system according to the first exemplary embodiment of the present invention is an OFDM system with K subcarriers. The OFDM system comprises a transmitter 10
with L transmit antennas 131a, 131b, . . . ,131L, a receiver 20 with M receive antennas 231a, 231b, . . . , 231M, and an uplink feedback device 40 for transferring information of the receiver 20 to
the transmitter 10.
The transmitter 10 comprises a serial/parallel converter (S/P converter) 100, a signal reproducer 110, an eigenmode generator 120, inverse fast Fourier transformers 130a, 130b, . . . , 130L, and L
transmit antennas 131a, 131b, . . . , 131L.
The receiver 20 comprises receive antennas 231a, 231b, . . . , 231M, fast Fourier transformers 230a, 230b, . . . , 230M, an eigenbeam calculator 220, a symbol detector 210, and a parallel/serial
converter 200.
The serial/parallel converter 100 of the transmitter 10 is a device for converting continuously inputted K symbols to K parallel signals. K indicates the number of subcarriers. The signal reproducer
110 is a device for reproducing K parallel signals 101a, 101b, . . . , 101K outputted from the serial/parallel converter 100 L times, which is the number of transmit antennas. That is, the I^th
signal among the reproduced signals 111a, 111b, . . . , 111L outputted from the signal reproducer 110 are the same (I=1,2 to L).
The eigenmode generator 120 is a device for generating eigenbeams of the reproduced signals 111a, 111b, . . . , 111L outputted from the signal reproducer 110 at each subcarrier, on the basis of N[f ]
eigenbeam forming vectors and information of a best eigenbeam forming vector at each subcarrier. Here, the eigenbeam forming vectors are calculated by the eigenbeam calculator 220 of the receiver and
are fed back long-term by the uplink feedback device 40, but the subcarriers have the same eigenbeam forming vector group. Further, the information of a best eigenbeam forming vector is fed back
short-term by the uplink feedback device 40. That is, the eigenmode generator 120 is a device for generating N[f ]eigenmodes by using N[f ]eigenbeam forming vectors fed back long-term and selecting a
best eigenmode among N[f ]eigenmodes generated in accordance with the best beamforming eigenbeam forming vector fed back short-term. At this time, the information of the best eigenbeam forming vector
is required to be fed back within a coherent time. The N[f ]eigenmodes are updated whenever the eigenbeam forming vectors are fed back, and the best eigenbeam forming vector among those is short-term
The L inverse Fourier converters 130a, 130b, . . . , 130L are devices for receiving K signals respectively and generating one OFDM symbol. The OFDM symbols generated from the L inverse Fourier
converters 130a, 130b, . . . , 130L are the same. The OFDM symbols generated from the inverse Fourier converters are transmitted through the corresponding antennas 130a, 130b, . . . , 130L.
The Fourier converters 230a, 230b, . . . , 230M of the receiver 20 receive signals received through M receive antennas and perform Fourier conversion to the signals and output K signals 221a, 221b, .
. . , 221M. The eigenbeam calculator 220 is a device for estimating a channel to the signals outputted from the Fourier converters 230a, 230b, . . . , 230M and calculating the instantaneous
covariance and the spatial covariance according to Equation 2 and the N[f ]dominant eigenvectors according to Equation 3. At this time, the spatial covariance matrix can be obtained from only one
subcarrier, or from the two dimension domain using both the frequency domain and the time domain according to Equation 10. The instantaneous channel covariance is calculated for each channel. The
eigenbeam calculator 260 selects one vector with the maximum eigenvalue among N[f ]eigenbeam forming vectors for the instantaneous channel covariance, and transfers the number of vectors to the
uplink feedback device 40.
The symbol detector 210 is a device for detecting K symbols inputted to the signal reproducer 110 of the receiver 10 at the same time by using the channel estimate obtained from the eigenbeam
calculator 260. The parallel/serial converter 200 is a device for converting the K symbols to the serial signals.
The uplink feedback device 40 is a device for long-term feedback of the eigenbeam forming vector obtained from the eigenbeam calculator 260 of the receiver 20 and short-term feedback of the number of
the best eigenbeam forming vector. According to Equation 10, the subcarriers have the same eigenbeam forming vector. Thus, the feedback can be achieved through one subcarrier instead of all
subcarriers. Further, the information of the feedback can be divided into each subcarrier to reduce a feedback delay. However, the instantaneous channel covariance is different for each subcarrier,
and thus the instantaneous channel covariance is required to be fed back for all subcarriers.
As such, according to the first exemplary embodiment of the present invention, it is not necessary to calculate a spatial covariance matrix at the subcarriers, and the spatial covariance calculated
at only one subcarrier can be used for forming eigenbeams at the subcarriers. Thus, the amount of calculation can be significantly reduced. Further, the averaging length can be reduced by calculating
the spatial covariance in a two dimensional domain using both the frequency domain and the time domain of all subcarriers. The OFDM system according to the first exemplary embodiment can more
actively meet channel change. More specially, in the FDD mode wherein the information for the eigenbeam forming of the transmitter is required to be fed back from the receiver, the amount of the
long-term feedback information can be significantly reduced, since the eigenbeam forming vector for only one subcarrier is required to be fed back, and the eigenbeam forming vectors for all
subcarriers are not required.
FIG. 2 shows a MIMO-OFDM system according to a second exemplary embodiment of the present invention. FIG. 2 is a block diagram for explaining the idea and construction of the present invention in the
time division duplexing (TDD) mode.
As referred to in FIG. 2, only the receiver of the base station is described, since it is not necessary to feed back the channel information due to channel reciprocity in the TDD mode, different from
the first exemplary embodiment shown in FIG. 1.
According to FIG. 2, the OFDM system according to the second exemplary embodiment of the present invention is a transmitter of the MIMO-OFDM system with K subcarriers. Thus, the transmitter according
to the exemplary embodiment is set in the base station.
As referred to in FIG. 2, the transmitter 30 comprises a serial/parallel converter (S/P converter) 300, a signal reproducer 310, an eigenmode calculator 320, an eigenmode selector 330, inverse fast
Fourier transformers 340a, 340b, . . . , 340L, and L transmit antennas 341a, 341b, . . . , 341L. The transmitter 30 transmits the eigenbeam forming signals through L transmit antennas.
The serial/parallel converter 300 of the transmitter 30 is a device for converting continuously inputted K symbols to K parallel signals. The K indicates the number of subcarriers. The signal
reproducer 310 is a device for reproducing K parallel signals 301a, 301b, . . . , 301K L times. The L indicates the number of transmit antennas.
The eigenmode calculator 320 is a device for calculating the instantaneous channel covariance and the spatial covariance from the uplink channel information obtained from the receiver (not shown) of
the base station according to Equation 2; calculating N[f ]dominant eigenbeam forming vectors according to Equation 3; and calculating the eigenvalue of the instantaneous channel covariance. At this
time, the instantaneous channel covariance is obtained from only one subcarrier, or from the two dimensional domain using both the frequency domain and the time domain according to Equation 10. The
instantaneous channel covariance is frequently updated within coherent time, but since the spatial covariance matrix needs averaging lengths, the spatial covariance matrix is slowly updated every
averaging length.
The eigenmode selector 330 is a device for selecting only one eigenmode of which the eigenvalue of the instantaneous channel covariance is maximum among N[f], whenever N[f ]eigenbeam forming vectors
are inputted from the eigenbeam forming calculator 320 and the instantaneous channel covariance is updated. Each of the inverse Fourier converters 340a, 340b, . . . , 340L is a device for receiving K
signals and generating one OFDM symbol. The OFDM symbols generated from the L inverse Fourier converters 340a, 340b, . . . , 340L are the same.
Hereinafter, the third exemplary embodiment of the present invention is described.
In the OFDM system, the subcarriers have N[f ]dominant eigenvectors, but the best eigenmode to be selected can be different at each subcarrier since each subcarrier has a different frequency
selective fading channel. However, close subcarriers are similarly faded, thus the same eigenmode can be selected for close subcarriers.
The K subcarriers can be divided into K[f](≦K) groups. Each group includes K close subcarriers, and each group selects the same eigenmode. Thus, total amount of feedback becomes K[f]·log[2](N[f]).
That is, the amount of feedback becomes (K/ K)·log[2](N[f]), so the amount of feedback is reduced 1/ K times.
When G[g]={ Kg+1, Kg+2, . . . , K(g+1)}, g=1,2, . . . ,K[f ]is a g^th group of subcarriers, the beamforming vector of the g^th group of subcarriers can be expressed as in the following Equation 11.
$wg(t)=arg maxex,n=1,2,… ,Nf∑k∈GgH~k(t)en2[Equation 11]$
As such, the third exemplary embodiment of the present invention divides the subcarriers into groups of close subcarriers and reduces the amount of feedback by selecting the same eigenmode for each
Hereinafter, the third exemplary embodiment of the present invention is described in detail with reference to the appended drawings.
The MIMO-OFDM system according to the third exemplary embodiment of the present invention has similar construction to the MIMO-OFDM system according to the first exemplary embodiment. Thus repeated
explanation is not given.
FIG. 3 shows an eigenmode generator 120 according to a third exemplary embodiment of the present invention.
As shown in FIG. 3, the input for the eigenmode generator 120 according to the third exemplary embodiment of the present invention includes L parallel signals 111a, 111b, . . . , 111L reproduced from
K parallel signals s(t), and the short-term feedback information and the long-term feedback information provided through the uplink feedback device 40.
The eigenmode generator 120 divides L parallel signals reproduced from K parallel signals s(t) into K[f ]groups of the K parallel signals respectively. That is, the eigenmode generator 120 divides K
parallel signals 111a into K[f ]groups G[1],G[2], . . . ,G[K][f ]223a-1, 223a-2, . . . , 223a-Kf of the K parallel signals, and divides K parallel signals 111b into K[f ]groups 223b-1, 223b-2, . . .
, 223b-Kf of the K parallel signals. The process is repeated by the number of the transmit antennas 223L-1, 223L-2, . . . , 223L-Kf.
Further, the eigenmode generator 120 multiplies Kf weight vectors by the signals of the group. The Kf weight vectors are obtained from the weight vector determiner 221. In detail, the eigenmode
generator 120 multiplies the first vector w[1]=(w[11],w[12], . . . ,w[1L]) (221-1) among the Kf weight vectors by G[1 ]223a-1, 223b-1, . . . , 223L-1, the signal of the first group of each antenna.
That is, the eigenmode generator 120 multiplies signals s[1],s[2], . . . ,s[K ]corresponding to G[1 ]of the first antenna by w[11], multiplies signals s[1],s[2], . . . , s[K ]corresponding to G[1 ]of
the second antenna by w[12], and multiplies signals s[1],s[2], . . . , s[K ]corresponding to G[1 ]of the L^th antenna by w[1L]. Thus, subcarrier signals s[1],s[2], . . . ,s[K ]generate one eigenbeam.
In the same manner, the eigenmode generator 120 multiplies the second vector w[2]=(w[21],w[22], . . . ,w[2L]) 222-2 by G[2 ]223a-2, 223b-2, . . . , 223L-2 of the second group of each antenna. Here,
the signals s[ K+1],s[ K+2], . . . ,s[2 K] of the subcarrier are owned in common. This process is repeated until the eigenmode generator 120 multiplies the Kf^th vector W[K][f]=(W[k][f][1],W[K][f]
[2], . . . ,W[K][f][L]) 222-Kf by G[K][f ]223a-Kf, 223b-Kf, . . . , 223L-Kf of the Kf^th group of each antenna. Here, the signals s[K− K+1],s[K− K+2], . . . ,s[K ]of the subcarrier are owned in
As a result, the eigenmode generator 120 generates one eigenbeam for each group of subcarriers and the subcarriers in the group own the eigenbeam. Thus, the eigenmode generator 120 generates Kf
eigenbeams for all subcarriers.
FIG. 4 shows a weight vector determiner 221 in the eigenmode generator in detail. As shown in FIG. 4, the weight vector determiner 221 comprises an eigenbeam update device 321 and Kf eigenmode
determiners 322-1, 322-2, . . . , 322-Kf.
As referred to in FIG. 4, the eigenbeam update device 321 updates N[f ]eigenbeam vectors through the uplink feedback device 40 whenever the long-term feedback information is provided. At this time,
the eigenbeam vector being updated is the same for all subcarriers. The Kf eigenmode determiners 322-1, 322-2, . . . , 322-Kf receive N[f ]same eigenbeam vectors outputted from the eigenbeam update
device 321, and the eigenmode determiner selects one among N[f ]eigenbeam vectors inputted in accordance with the uplink feedback device 40 and determines the eigenmode. The eigenmodes selected by
each eigenmode determiner are expressed as a weight vector. The Kf eigenmode determiners 322-1, 322-2, . . . , 322-Kf output w[1]=(w[11],w[12], . . . ,w[1L]), w[2]=(w[21],w[22], . . . ,w[2L]), and W
[K][f]=(W[K][f][1],W[K][f][2], . . . ,W[K][f][L]) respectively.
FIG. 5 shows an eigenbeam calculator 260 according to the third exemplary embodiment of the present invention.
As shown in FIG. 5, the eigenbeam calculator 260 comprises M channel estimators 261a, 261b, . . . , 261M, Kf instantaneous power measuring devices 262-1, 262-2, . . . , 262-Kf, eigenvector calculator
263, and Kf eigenvector selectors 264-1, 262-2, . . . , 262-Kf.
The channel estimators 261a, 261b, . . . , 261M estimate channels to M pairs of parallel signals inputted respectively at each subcarrier. The eigenvector calculator 263 obtains the channel spatial
covariance from the signals outputted from the channel estimators 261a, 261b, . . . , 261M by using Equation 7 and Equation 10, the channel spatial covariance being the same for all subcarriers. Then
the eigenvector calculator 263 calculates N[f ]dominant eigenvectors e[1],e[2], . . . ,e[N][f ]according to Equation 2, and provides N[f ]dominant eigenvectors to Kf eigenvector selectors 264-1, 262-
2, . . . , 262-Kf.
The instantaneous power measuring devices 262-1, 262-2, . . . , 262-Kf receive signals outputted from the each channel estimator and measure the instantaneous power. That is, the estimated channel
values for each subcarrier by each channel estimator are orderly divided into Kf groups of K signals. The first K signals are provided to the instantaneous power measuring device 1 262-1, then K
signals are provided to the instantaneous power measuring device 2 262-2, and last K signals are provided to the instantaneous power measuring device Kf 262-Kf.
Each instantaneous power measuring device measures the instantaneous power by using the M pairs of estimated K signals, and provides the measured instantaneous power to the eigenvector selectors 264-
1, 264-2, . . . , 264-Kf.
The eigenvector selectors 264-1, 264-2, . . . , 264-Kf select one eigenvector with maximum instantaneous power among N[f ]dominant eigenvectors e[1],e[2], . . . ,e[N][f ]inputted by using the
instantaneous powers inputted from the corresponding instantaneous measuring devices. Then, the eigenvector with the maximum instantaneous power becomes the short-term feedback information.
In detail, the first eigenvector selector selects one eigenvector of which the instantaneous power is maximum among N[f ]dominant eigenvectors e[1],e[2], . . . ,e[N][f ]inputted, by using the
instantaneous power inputted from the first instantaneous measuring device 262-1. Then, the eigenvector with maximum instantaneous power becomes the short-term feedback information 265-1. The second
eigenvector selector selects one eigenvector with the maximum instantaneous power from among the N[f ]dominant eigenvectors e[1],e[2], . . . ,e[N][f ]inputted by using the instantaneous power
inputted from the second instantaneous measuring device 262-2. Then, the eigenvector with the maximum instantaneous power becomes the short-term feedback information 265-2.
The process is repeated until the Kf^th eigenvector selector selects one eigenvector with the maximum instantaneous power from among the N[f ]dominant eigenvectors e[1],e[2], . . . ,e[n][f ]inputted
by using the instantaneous power inputted from the Kf^th instantaneous measuring device 262-2. Then, the eigenvector with the maximum instantaneous power becomes the short-term feedback information
265-Kf. Each short-term feedback information determined by the eigenvector selectors is constructed by log[2](N[f]) bit, wherein Nf is the number of the eigenvector. However, since K close
subcarriers among K subcarriers own the eigenvector in common, the short-term feedback is not performed at each subcarrier. Since the K subcarriers provide one set of feedback information, the amount
of the short-term feedback information is reduced 1/ K times.
Further, the long-term feedback information 266 is obtained by quantizing the amplitude and phase of each dominant eigenvector e[1],e[2], . . . ,e[N][f ]of the channel spatial covariance matrix
obtained from the eigenvector calculator 263. The long-term feedback information is slowly updated, since the channel spatial covariance matrix is slowly changed.
The short-term feedback information and the long-term feedback information are inputted to the eigenmode generator 120 in the receiver 10 through the uplink feedback device 40 of FIG. 1. For the
short-term feedback information, K close subcarriers are required to feed back Kf feedback information at least one time within a coherent time. Otherwise, for the long-term feedback information, all
subcarriers slowly feedback only one information set.
As described above, according to the exemplary embodiment of the present invention, K close subcarriers among all K subcarriers form a group and the all K subcarriers are divided into K[f](≦K)
groups, and each group selects the same eigenvector. Thus, since the total amount of feedback becomes (K/ K)·log[2](N[f]), the total amount of feedback is reduced to 1/ K. The burden to the system
can therefore be reduced.
While this invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not limited to
the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. For example, the
device according to the exemplary embodiment of the present invention can be embodied as hardware or software. Also, the present invention can be embodied as code on a readable medium which a
computer can read.
As described above, according to the present invention, when the eigenbeam forming method is applied to the OFDM system, the spatial covariance matrix being necessary for the eigenbeam forming can be
calculated for one subcarrier instead of all subcarriers, and thus the amount of calculations can be significantly reduced. Further, the averaging length can be reduced by calculating the spatial
covariance in a two dimensional domain which uses the frequency domain and the time domain of all subcarriers at the same time. Thus, the present invention can more actively meet channel change.
Further, according to the present invention, when the eigenbeam forming method is applied to the OFDM system, close subcarriers among all K subcarriers form a group and all K subcarriers are divided
into a predetermined number of groups, and each group selects the same eigenvector. Thus, since the total amount of feedback is reduced, the burden to system can be reduced. | {"url":"http://www.freepatentsonline.com/y2007/0177681.html","timestamp":"2014-04-17T21:39:01Z","content_type":null,"content_length":"92264","record_id":"<urn:uuid:a98e5646-ea49-417c-8a86-5077ddab7c1c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
In what sense do the categorical trace and coend count fixed points?
up vote 4 down vote favorite
According to the nlab, the categorical trace of a 1-endomorphism $F:C\to C$ in a 2-category is the set hom$(1_C, F)$ of global elements of $F$. If $F$ is a functor in the 2-category Cat, the
categorical trace is a set of natural transformations that assign to each object of $C$ a coalgebra of $F$ such that the obvious square commutes.
Any functor can be considered a special kind of profunctor; given an endofunctor, we can compute the coend of the corresponding profunctor.
Both of these concepts are generalizations of the trace, which for a function counts the number of fixpoints. In what sense do these "count" the fixpoints of a functor? I don't see how the
categorical trace of a functor relates to fixpoints at all.
Also, does the notion of what constitutes a fixpoint change? The coend, in particular, seems like it might count an object $c$ as a fixpoint of $F$ if it's in the same endomorphism class rather than
the same isomorphism class as $Fc$.
1 What's a point? – Tom Goodwillie Jun 22 '11 at 2:19
1 omg, I first read "counit" instead of "count". – Martin Brandenburg Jun 22 '11 at 2:24
Without totally understanding your question, I am going to suggest looking at the very interesting paper of Ganter and Kapranov, which is certainly about categorical traces and about fixed points
-- whether they are about YOUR categorical traces and YOUR fixed points I cannot say. – JSE Jun 22 '11 at 3:36
If this definition is compatible with the usual definition for monoidal categories with duals, then a nice example is the Lefschetz number of an endomorphism of a simplicial complex (regarded as a
chain complex); the relationship with fixed points is given by the Lefschetz fixed point theorem. – Qiaochu Yuan Jun 22 '11 at 23:00
add comment
2 Answers
active oldest votes
Simon Willerton explains it all very well here: http://www.simonwillerton.staff.shef.ac.uk/ftp/TwoTracesBeamerTalk.pdf
up vote 1 down vote accepted
add comment
Here's a partial answer: in the case of an endofunctor $F$ on a discrete category $C$ (i.e. $F$ is a function), the coend of $F$ gives the set of fixpoints rather than the number: A
profunctor $F:C \not\to C$ adds extra morphisms to $C$ so that the result is still a category. I'll say these morphisms are "in $F$". The coend of $F$ is the set of endomorphisms in $F$ mod
conjugation by the morphisms in $C$; since the morphisms of $C$ are all identities, we just get the set of endomorphisms in $F$, i.e. fixed points of $F$.
up vote 0 The categorical trace doesn't reduce to anything useful in the case of a discrete category. A natural transformation $\alpha:1_C \Rightarrow F$ chooses for each $c \in C$ a morphism $\
down vote alpha_c:c \to Fc$ in $C$. Since we're assuming all the morphisms in $C$ are identities, $\alpha_c$ can't exist unless $Fc = c$. So it looks to me like the set hom$(1_C, F)$ is empty unless
$F$ is the identity functor on $C$, in which case it's the terminal set.
add comment
Not the answer you're looking for? Browse other questions tagged fixed-point-theorems traces profunctors or ask your own question. | {"url":"http://mathoverflow.net/questions/68424/in-what-sense-do-the-categorical-trace-and-coend-count-fixed-points/68537","timestamp":"2014-04-17T04:18:03Z","content_type":null,"content_length":"59077","record_id":"<urn:uuid:9c29c67c-a504-46e0-9ba9-d1f929e5f87f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00279-ip-10-147-4-33.ec2.internal.warc.gz"} |
Series/Sequences Questions
March 2nd 2010, 06:26 PM #1
May 2008
Series/Sequences Questions
The first question is to find $\Sigma^{b}_{n = 1}\frac{\sin n}{2^n}$ and the second being to find $\Sigma^{\infty}_{n = 1}\frac{\sin n}{2^n}$.
The thing is I'm not sure how to even begin these, so if someone could help me out that would be great. Thanks.
Are you asked to evaluate them or to show that they are convergent?
Showing convergence is easy, just use the comparison test with
$\sum_{n = 1}^{\infty}\frac{1}{2^n}$ (this is a geometric series).
$\sin n = \frac{e^{in} - e^{-in}}{2i}$ (1)
... so that...
$\sum_{n=1}^{\infty} \frac{\sin n}{2^{n}} = \frac{1}{2i}\cdot \{\sum_{n=1}^{\infty} (\frac{e^{i}}{2})^{n} - \sum_{n=1}^{\infty} (\frac{e^{-i}}{2})^{n}\}=$
$= \frac{1}{2i}\cdot \{\frac{e^{i}}{2-e^{i}} - \frac{e^{-i}}{2-e^{-i}} \}$ (2)
The numerical evaluation of (2) is tedious but not difficult and it is left to the reader
Kind regards
March 2nd 2010, 06:41 PM #2
March 2nd 2010, 07:17 PM #3
May 2008
March 2nd 2010, 08:24 PM #4 | {"url":"http://mathhelpforum.com/calculus/131752-series-sequences-questions.html","timestamp":"2014-04-16T04:29:54Z","content_type":null,"content_length":"42571","record_id":"<urn:uuid:210905e2-e28b-4cf3-8da7-c06190f20964>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
To calculate world coordinates from screen coordinates with OpenCV
up vote 3 down vote favorite
I have calculated the intrinsic and extrinsic parameters of the camera with OpenCV. Now, I want to calculate world coordinates (x,y,z) from screen coordinates (u,v).
How I do this?
N.B. as I use the kinect, I already know the z coordinate.
Any help is much appreciated. Thanks!
opencv kinect camera-calibration calibration
So you are saying that you have Xscreen,Yscreen,and Zworld? And you want Xworld,Yworld,Zworld? – Hammer Aug 17 '12 at 17:39
Yes, it's right – Luke Aug 19 '12 at 10:25
add comment
2 Answers
active oldest votes
First to understand how you calculate it, it would help you if you read some things about the pinhole camera model and simple perspective projection. For a quick glimpse, check this.
I'll try to update with more.
So, let's start by the opposite which describes how a camera works: project a 3d point in the world coordinate system to a 2d point in our image. According to the camera model:
P_screen = I * P_world
or (using homogeneous coordinates)
| x_screen | = I * | x_world |
| y_screen | | y_world |
| 1 | | z_world |
| 1 |
I = | f_x 0 c_x 0 |
| 0 f_y c_y 0 |
| 0 0 1 0 |
is the 3x4 intrinsics matrix, f being the focal point and c the center of projection.
If you solve the system above, you get:
x_screen = (x_world/z_world)*f_x + c_x
y_screen = (y_world/z_world)*f_y + c_y
But, you want to do the reverse, so your answer is:
x_world = (x_screen - c_x) * z_world / f_x
y_world = (y_screen - c_y) * z_world / f_y
z_world is the depth the Kinect returns to you and you know f and c from your intrinsics calibration, so for every pixel, you apply the above to get the actual world coordinates.
Edit 1 (why the above correspond to world coordinates and what are the extrinsics we get during calibration):
First, check this one, it explains the various coordinates systems very well.
Your 3d coordinate systems are: Object ---> World ---> Camera. There is a transformation that takes you from object coordinate system to world and another one that takes you from world
to camera (the extrinsics you refer to). Usually you assume that:
• Either the Object system corresponds with the World system,
• or, the Camera system corresponds with the World system
1. While capturing an object with the Kinect
When you use the Kinect to capture an object, what is returned to you from the sensor is the distance from the camera. That means that the z coordinate is already in camera coordinates.
up vote 13 By converting x and y using the equations above, you get the point in camera coordinates.
down vote
accepted Now, the world coordinate system is defined by you. One common approach is to assume that the camera is located at (0,0,0) of the world coordinate system. So, in that case, the
extrinsics matrix actually corresponds to the identity matrix and the camera coordinates you found, correspond to world coordinates.
Sidenote: Because the Kinect returns the z in camera coordinates, there is also no need from transformation from the object coordinate system to the world coordinate system. Let's say
for example that you had a different camera that captured faces and for each point it returned the distance from the nose (which you considered to be the center of the object coordinate
system). In that case, since the values returned would be in the object coordinate system, we would indeed need a rotation and translation matrix to bring them to the camera coordinate
2. While calibrating the camera
I suppose you are calibrating the camera using OpenCV using a calibration board with various poses. The usual way is to assume that the board is actually stable and the camera is moving
instead of the opposite (the transformation is the same in both cases). That means that now the world coordinate system corresponds to the object coordinate system. This way, for every
frame, we find the checkerboard corners and assign them 3d coordinates, doing something like:
std::vector<cv::Point3f> objectCorners;
for (int i=0; i<noOfCornersInHeight; i++)
for (int j=0; j<noOfCornersInWidth; j++)
objectCorners.push_back(cv::Point3f(float(i*squareSize),float(j*squareSize), 0.0f));
where noOfCornersInWidth, noOfCornersInHeight and squareSize depend on your calibration board. If for example noOfCornersInWidth = 4, noOfCornersInHeight = 3 and squareSize = 100, we
get the 3d points
(0 ,0,0) (0 ,100,0) (0 ,200,0) (0 ,300,0)
(100,0,0) (100,100,0) (100,200,0) (100,300,0)
(200,0,0) (200,100,0) (200,200,0) (200,300,0)
So, here our coordinates are actually in the object coordinate system. (We have assumed arbitrarily that the upper left corner of the board is (0,0,0) and the rest corners' coordinates
are according to that one). So here we indeed need the rotation and transformation matrix to take us from the object(world) to the camera system. These are the extrinsics that OpenCV
returns for each frame.
To sum up in the Kinect case:
• Camera and World coodinate systems are considered the same, so no need for extrinsics there.
• No need for Object to World(Camera) transformation, since Kinect return value is already in Camera system.
Edit 2 (On the coordinate system used):
This is a convention and I think it depends also on which drivers you use and the kind of data you get back. Check for example that, that and that one.
Sidenote: It would help you a lot if you visualized a point cloud and played a little bit with it. You can save your points in a 3d object format (e.g. ply or obj) and then just import
it into a program like Meshlab (very easy to use).
Thank you very much. Now using the following extrinsic parameters can I pass by the coordinates of CAM in the coordinates world? – Luke Aug 20 '12 at 10:12
The calibration I have the following extrinsic parameters (for a single installation of the board):1.7261576010447846e-01 3.1158880577193560e-01 1.2720406228471280e-02
-1.1592911113815259e+02 -2.2406582979927950e+02 8.1420941356557194e+02 – Luke Aug 20 '12 at 10:14
You find those extrinsic parameters during the calibration, right? When you do the capture with the Kinect, do you capture the same board in the same position? – Chrys Aug 20 '12 at
When I do the capture with the kinect, I use same board but with different position, then I obtain as many rows (with 6 values) as the number of images that use. For example, I have
obtained these extrinsic parameters (3 for rotation and 3 for translation) for a single installation of the chessboard: 1.7261576010447846e-01 3.1158880577193560e-01
1.2720406228471280e-02 -1.1592911113815259e+02 -2.2406582979927950e+02 8.1420941356557194e+02. – Luke Aug 20 '12 at 16:13
What I am asking is, are those the extrinsics during the calibration or during the capture (if the position is different)? Sorry, I didn't understand that from your comment above. –
Chrys Aug 20 '12 at 17:10
show 14 more comments
Edit 2 (On the coordinate system used):
This is a convention and I think it depends also on which drivers you use and the kind of data you get back. Check for example that, that and that one.
up vote 0 down vote
if you for instance use microsoft sdk: then Z is not the distance to the camera but the "planar" distance to the camera. This might change the appropriate formulas.
add comment
Not the answer you're looking for? Browse other questions tagged opencv kinect camera-calibration calibration or ask your own question. | {"url":"http://stackoverflow.com/questions/12007775/to-calculate-world-coordinates-from-screen-coordinates-with-opencv","timestamp":"2014-04-18T23:35:37Z","content_type":null,"content_length":"81131","record_id":"<urn:uuid:d33479a9-8eea-4e0d-92d5-bfaf313bcf02>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Special Properties of Lifetime Data
Some features of lifetime data distinguish them other types of data. First, the lifetimes are always positive values, usually representing time. Second, some lifetimes may not be observed exactly, so
that they are known only to be larger than some value. Third, the distributions and analysis techniques that are commonly used are fairly specific to lifetime data
Let's simulate the results of testing 100 throttles until failure. We'll generate data that might be observed if most throttles had a fairly long lifetime, but a small percentage tended to fail very
lifetime = [wblrnd(15000,3,90,1); wblrnd(1500,3,10,1)];
In this example, assume that we are testing the throttles under stressful conditions, so that each hour of testing is equivalent to 100 hours of actual use in the field. For pragmatic reasons, it's
often the case that reliability tests are stopped after a fixed amount of time. For this example, we will use 140 hours, equivalent to a total of 14,000 hours of real service. Some items fail during
the test, while others survive the entire 140 hours. In a real test, the times for the latter would be recorded as 14,000, and we mimic this in the simulated data. It is also common practice to sort
the failure times.
T = 14000;
obstime = sort(min(T, lifetime));
We know that any throttles that survive the test will fail eventually, but the test is not long enough to observe their actual time to failure. Their lifetimes are only known to be greater than
14,000 hours. These values are said to be censored. This plot shows that about 40% of our data are censored at 14,000.
failed = obstime(obstime<T); nfailed = length(failed);
survived = obstime(obstime==T); nsurvived = length(survived);
censored = (obstime >= T);
plot([zeros(size(obstime)),obstime]', repmat(1:length(obstime),2,1), ...
line([T;3e4], repmat(nfailed+(1:nsurvived), 2, 1), 'Color','b','LineStyle',':');
line([T;T], [0;nfailed+nsurvived],'Color','k','LineStyle','-')
text(T,30,'<--Unknown survival time past here')
xlabel('Survival time'); ylabel('Observation number')
Ways of Looking at Distributions
Before we examine the distribution of the data, let's consider different ways of looking at a probability distribution.
● A probability density function (PDF) indicates the relative probability of failure at different times.
● A survivor function gives the probability of survival as a function of time, and is simply one minus the cumulative distribution function (1-CDF).
● The hazard rate gives the instantaneous probability of failure given survival to a given time. It is the PDF divided by the survivor function. In this example the hazard rates turn out to be
increasing, meaning the items are more susceptible to failure as time passes (aging).
● A probability plot is a re-scaled CDF, and is used to compare data to a fitted distribution.
Here are examples of those four plot types, using the Weibull distribution to illustrate. The Weibull is a common distribution for modeling lifetime data.
x = linspace(1,30000);
title('Prob. Density Fcn')
title('Survivor Fcn')
wblhaz = @(x,a,b) (wblpdf(x,a,b) ./ (1-wblcdf(x,a,b)));
title('Hazard Rate Fcn')
title('Probability Plot')
Fitting a Weibull Distribution
The Weibull distribution is a generalization of the exponential distribution. If lifetimes follow an exponential distribution, then they have a constant hazard rate. This means that they do not age,
in the sense that the probability of observing a failure in an interval, given survival to the start of that interval, doesn't depend on where the interval starts. A Weibull distribution has a hazard
rate that may increase or decrease.
Other distributions used for modeling lifetime data include the lognormal, gamma, and Birnbaum-Saunders distributions.
We will plot the empirical cumulative distribution function of our data, showing the proportion failing up to each possible survival time. The dotted curves give 95% confidence intervals for these
[empF,x,empFlo,empFup] = ecdf(obstime,'censoring',censored);
hold on;
stairs(x,empFlo,':'); stairs(x,empFup,':');
hold off
xlabel('Time'); ylabel('Proportion failed'); title('Empirical CDF')
This plot shows, for instance, that the proportion failing by time 4,000 is about 12%, and a 95% confidence bound for the probability of failure by this time is from 6% to 18%. Notice that because
our test only ran 14,000 hours, the empirical CDF only allows us to compute failure probabilities out to that limit. Almost half of the data were censored at 14,000, and so the empirical CDF only
rises to about 0.53, instead of 1.0.
The Weibull distribution is often a good model for equipment failure. The function wblfit fits the Weibull distribution to data, including data with censoring. After computing parameter estimates,
we'll evaluate the CDF for the fitted Weibull model, using those estimates. Because the CDF values are based on estimated parameters, we'll compute confidence bounds for them.
paramEsts = wblfit(obstime,'censoring',censored);
[nlogl,paramCov] = wbllike(paramEsts,obstime,censored);
xx = linspace(1,2*T,500);
[wblF,wblFlo,wblFup] = wblcdf(xx,paramEsts(1),paramEsts(2),paramCov);
We can superimpose plots of the empirical CDF and the fitted CDF, to judge how well the Weibull distribution models the throttle reliability data.
hold on
handles = plot(xx,wblF,'r-',xx,wblFlo,'r:',xx,wblFup,'r:');
hold off
xlabel('Time'); ylabel('Fitted failure probability'); title('Weibull Model vs. Empirical')
Notice that the Weibull model allows us to project out and compute failure probabilities for times beyond the end of the test. However, it appears the fitted curve does not match our data well. We
have too many early failures before time 2,000 compared with what the Weibull model would predict, and as a result, too few for times between about 7,000 and about 13,000. This is not surprising --
recall that we generated data with just this sort of behavior.
Adding a Smooth Nonparametric Estimate
The pre-defined functions provided with the Statistics Toolbox™ don't include any distributions that have an excess of early failures like this. Instead, we might want to draw a smooth, nonparametric
curve through the empirical CDF, using the function ksdensity. We'll remove the confidence bands for the Weibull CDF, and add two curves, one with the default smoothing parameter, and one with a
smoothing parameter 1/3 the default value. The smaller smoothing parameter makes the curve follow the data more closely.
[npF,ignore,u] = ksdensity(obstime,xx,'cens',censored,'function','cdf');
npF3 = ksdensity(obstime,xx,'cens',censored,'function','cdf','width',u/3);
xlim([0 1.3*T])
title('Weibull and Nonparametric Models vs. Empirical')
legend('Empirical','Fitted Weibull','Nonparametric, default','Nonparametric, 1/3 default', ...
The nonparametric estimate with the smaller smoothing parameter matches the data well. However, just as for the empirical CDF, it is not possible to extrapolate the nonparametric model beyond the end
of the test -- the estimated CDF levels off above the last observation.
Let's compute the hazard rate for this nonparametric fit and plot it over the range of the data.
hazrate = ksdensity(obstime,xx,'cens',censored,'width',u/3) ./ (1-npF3);
title('Hazard Rate for Nonparametric Model')
xlim([0 T])
This curve has a bit of a "bathtub" shape, with a hazard rate that is high near 2,000, drops to lower values, then rises again. This is typical of the hazard rate for a component that is more
susceptible to failure early in its life (infant mortality), and again later in its life (aging).
Also notice that the hazard rate cannot be estimated above the largest uncensored observation for the nonparametric model, and the graph drops to zero.
For the simulated data we've used for this example, we found that a Weibull distribution was not a suitable fit. We were able to fit the data well with a nonparametric fit, but that model was only
useful within the range of the data.
One alternative would be to use a different parametric distribution. The Statistics Toolbox includes functions for other common lifetime distributions such as the lognormal, gamma, and
Birnbaum-Saunders, as well as many other distributions that are not commonly used in lifetime models. You can also define and fit custom parametric models to lifetime data, as described in the
Fitting Custom Univariate Distributions, Part 2 example.
Another alternative would be to use a mixture of two parametric distributions -- one representing early failure and the other representing the rest of the distribution. Fitting mixtures of
distributions is described in the Fitting Custom Univariate Distributions example. | {"url":"http://www.mathworks.co.uk/help/stats/examples/analyzing-survival-or-reliability-data.html?nocookie=true","timestamp":"2014-04-23T07:07:48Z","content_type":null,"content_length":"45153","record_id":"<urn:uuid:070718d9-6785-4641-a48d-d8044c7e5dd9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
Catalina Algebra 1 Tutor
Find a Catalina Algebra 1 Tutor
...For the last seven years I have worked for a charter school as a math and science tutor for grades 6-12. In math I can tutor in elementary, geometry, pre algebra, algebra I, algebra II and ACT
and in science I can tutor in elementary, biology, chemistry, physiology, anatomy and ACT. My strength...
23 Subjects: including algebra 1, chemistry, geometry, biology
...SAT tests mostly cover material from Algebra and Geometry. Statistics show the best way to improve your score by around 100 points is to take another year of math like Algebra 2. Working
problems from an SAT prep manual can improve your score by around 50 points.
26 Subjects: including algebra 1, English, reading, ASVAB
...I taught high school geometry for 12 years and I have tutored many geometry students. I am able to help a student to develop confidence in her/his ability to be successful in Geometry. I am
very good a helping a student to understand how to organize proofs.
7 Subjects: including algebra 1, geometry, algebra 2, precalculus
...I love meeting people from different countries, and I look forward to working with you!I received my K - 8 teaching credential from San Jose State University in 1989. I taught elementary school
for 3 years before having children, and since then I have substitute taught and tutored. I have worked with special needs children as well as typical students.
25 Subjects: including algebra 1, English, ESL/ESOL, GED
...I received a BS in astrophysics at New Mexico Tech (2001) and worked at the National Solar Observatory in Tucson for several years before pursuing graduate studies. As a graduate student
teaching assistant, I have worked extensively in the classroom. My experience ranges from delivering lecture...
35 Subjects: including algebra 1, chemistry, physics, calculus | {"url":"http://www.purplemath.com/Catalina_algebra_1_tutors.php","timestamp":"2014-04-16T04:35:31Z","content_type":null,"content_length":"23843","record_id":"<urn:uuid:9d430126-4ae0-4e80-850b-6d6674e7a610>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exponents of Variables
The problem below has two key differences.
• First, it has a term with two variables, and as you can see the exponent from outside the parentheses must multiply EACH of them.
• Second, there is a negative sign inside the parentheses. Since the exponent on the parentheses is 3, the negative sign is written in front of the term three times. Then the multiple signs are
Both the problem above and below this have a negative sign inside a set of parentheses which is raised to some power. If you did a lot of these you'd notice that when the parentheses are raised to an
odd power such as 3, the answer will be negative. If the parentheses are raised to an even power like the one below, the answer will be positive.
The last problem, shown below has a negative sign outisde the parentheses. Again, because of the Order of Operations which is presented in a later lesson, the exponent must be simplified before you
do anything with the negative sign. Look at the work below:
Note that even though the exponent on the parentheses was a 4 which is an even number, the final answer is negative. This is because the negative sign was outside of the parentheses, not inside as in
the previous example.
The next page has resources and worksheets for this lesson. | {"url":"http://www.algebrahelp.com/lessons/simplifying/varexp/pg5.htm","timestamp":"2014-04-20T13:32:31Z","content_type":null,"content_length":"7125","record_id":"<urn:uuid:de07fc61-9f06-4b98-bd5c-ec2235dc1c48>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trying to understand shifting in Java.
Author Trying to understand shifting in Java.
Ranch Hand
Joined: Jul I have not been able to understand fully the unsignied right shift (>>>
05, 2002 I know that negative numbers have 'ones' instead of 'zeros' in binary form. But does "most significant position" mean and how do I implement the shifts on negative numbers?
Posts: 79 The authors of R & H book and all the other books I've used must've felt the meaning of twos compliment operators was so obvious that they didn't even bother to explain it. I don't know
what that means: Can anyone please explain this to me too?
Ranch Hand
Joined: Jul To convert a positive number to a negative number represented in two's compliment format, just invert each bit and add one. Do the same to convert from negative to positive.
02, 2002
Posts: 1865
Dan Chisholm<br />SCJP 1.4<br /> <br /><a href="http://www.danchisholm.net/" target="_blank" rel="nofollow">Try my mock exam.</a>
Ranch Hand
Maybe I should give you a few examples of converting positive to negative and vice versa. I will use eight bit bytes just to keep things simple.
Joined: Jul +1 = 00000001;
02, 2002 -1 = 11111111;
Posts: 1865 Notice that the most significant bit (the first bit on the left) of positive one is zero. Notice that the most significant bit (msb) of negative one is a one. That's the sign bit.
If you add the positive one and the negative one the result is 00000000 as you would expect. Of course, you have to accept the fact that we ignore the bit that overflows as a result of
the addition.
Ranch Hand
The most significant bit of a number represented in two's compliment format is the sign bit. As a result, positive number can become negative number when they become too large. If one is
Joined: Jul added to a byte that previously held the value 127, then the byte will become negative 128.
02, 2002 +127 = 01111111
Posts: 1865 If you add one, the result is
-128 = 10000000
The above is an obvious example of why the range of a byte is -128 to +127.
In case it's not clear - "most significant bit" means the leftmost bit, and "least significant bit" is the rightmost bit.
Joined: Jan
30, 2000
Posts: "I'm not back." - Bill Harding, Twister
Joined: Jul Have you read this campfire story?
22, 2000
Posts: 9043
10 "Yesterday is history, tomorrow is a mystery, and today is a gift; that's why they call it the present." Eleanor Roosevelt
Ranch Hand
The behavior of the shift operator is fairly obvious when working with 32 bit integers. However, the behavior of the shift operator is not entirely obvious when working with smaller
Joined: Jul primitive types such as byte, short, and char. That's because the shift operator promotes the operands to int primitives before doing the shift. For example, you would expect the
02, 2002 following code to print a value of 127.
Posts: 1865 byte b = -1;
b = (byte)(b>>>1);
Instead, the result is -1.
To produce a value of 127, you actually have to shift the byte 25 times to the right. Try running the following.
I think I will add the above to my mock exam. Thanks for the idea.
subject: Trying to understand shifting in Java. | {"url":"http://www.coderanch.com/t/369544/java/java/understand-shifting-Java","timestamp":"2014-04-18T08:17:25Z","content_type":null,"content_length":"34050","record_id":"<urn:uuid:d68e262c-557d-49da-85a9-e889084d3b4e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00160-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shell correction energy for bubble nuclei
Next: References
Shell correction energy for bubble nuclei
Yongle YU
February 4, 1999
The positioning of a bubble inside a many fermion system does not affect the volume, surface or curvature terms in the liquid drop expansion of the total energy. Besides possible Coulomb effects, the
only other contribution to the ground state energy of such a system arises from shell effects. We show that the potential energy surface is a rather shallow function of the displacement of the bubble
from the center and in most cases the preferential position of a bubble is off center. Systems with bubbles are expected to have bands of extremely low lying collective states, corresponding to
various bubble displacements.
PACS numbers: 21.10.-k, 21.10.Dr, 21.60.-n,24.60.Lz
There are a number of situations when the formation of voids is favored. When a system of particles has a net charge, the Coulomb energy can be significantly lowered if a void is created [1, 2] and
despite an increase in surface energy the total energy decreases. One can thus naturally expect that the appearance of bubbles will be favored in relatively heavy nuclei. This situation has been
considered many times over the last 50 years in nuclear physics and lately similar ideas have been put forward for highly charged alkali metal clusters [3].
The formation of gas bubbles is another suggested mechanism which could lead to void(s) formation [4]. The filling of a bubble with gas prevents it from collapsing. Various heterogeneous atomic
clusters [5] and halo nuclei [6] can be thought of as some kind of bubbles as well. In these cases, the fermions reside in a rather unusual mean-field, with a very deep well near the center of the
system and a very shallow and extended one at its periphery. Since the amplitude of the wave function in the semiclassical limit is proportional to the inverse square root of the local momentum, the
single particle wave functions for the weakly bound states will have a small amplitude over the deep well. If the two wells have greatly different depths, the deep well will act almost like a hard
wall (in most situations).
Several aspects of the physics of bubbles in Fermi systems have not been considered so far in the literature. It is tacitly assumed that a bubble position has to be determined according to symmetry
considerations. For a Bose system one can easily show that a bubble has to be off-center [7]. In the case of a Fermi system the most favorable arrangement is not obvious [8]. The total energy of a
many fermion system has the general form
where the first three terms represent the smooth liquid drop part of the total energy and 9]. We shall consider in this work only one type of fermions with no electric charge. In a nuclear system the
Coulomb energy depends rather strongly on the actual position of the bubble, but in a very simple way. In an alkali metal cluster, as the excess charge is always localized on the surface, the Coulomb
energy is essentially independent of the bubble position. The character of the shell corrections is in general strongly correlated with the existence of regular and/or chaotic motion [10, 11]. If a
spherical bubble appears in a spherical system and if the bubble is positioned at the center, then for certain ``magic'' fermion numbers the shell correction energy E(N), has a very deep minimum.
However, if the number of particles is not ``magic'', in order to become more stable the system will in general tend to deform. Real deformations lead to an increased surface area and liquid drop
energy. On the other hand, merely shifting a bubble off-center deforms neither the bubble nor the external surface and therefore, the liquid drop part of the total energy of the system remains
Moving the bubble off-center can often lead to a greater stability of the system due to shell correction energy effects. In recent years it was shown that in a 2-dimensional annular billiard, which
is the 2-dimensional analog of spherical bubble nuclei, the motion becomes more chaotic as the bubble is moved further from the center [12]. One might thus expect that the importance of the shell
corrections diminishes when the bubble is off-center. We shall show that this is not the case however.
One can anticipate that the relative role of various periodic orbits (diameter, triangle, square etc.) is modified in unusual ways in systems with bubbles. In 3-dimensional systems the triangle and
square orbits determine the main shell structure and produce the beautiful supershell phenomenon [10, 13]. A small bubble near the center will affect only diameter orbits. After being displaced
sufficiently far from the center, the bubble will first touch and destroy the triangle orbits. In a 3-dimensional system only a relatively small fraction of these orbits will be destroyed. Thus one
might expect that the existence of supershells will not be critically affected, but that the supershell minimum will be less pronounced. A larger bubble will simultaneously affect triangular and
square orbits, and thus can have a dramatic impact on both shell and supershell structure.
The change of the total energy of a many fermion system can be computed quite accurately using the shell corrections method, once the single-particle spectrum is known as a functions of the shape of
the system [9, 11]. The results presented in this Letter have been obtained using the 3d-version of the conformal mapping method described in [8] as applied to an infinite square well potential with
Dirichlet boundary conditions. The magic numbers are hardly affected by the presence or absence of a small diffuseness [14]. The absence of a spin-orbit interaction leads to quantitavive, but to no
qualitative differences. Consequently, we expect that our results are generic.
In Fig. 1 we show the unfolded single-particle spectrum for the case of a bubble of half the radius of the system, a=R/2, as a function of the displacement d/R of the bubble from the center. The size
of the system is determined as usual from 15] for the average cumulative number of states.
Figure 1: The unfolded single-particle spectrum for the case of a bubble of radius a= R/2 as a function of the bubble dispalcement d/R. Only the lowest 128 levels are shown here.
where e in a 3d-cavity and
As the bubble is moved off center, the classical problem becomes more chaotic [12] and one can expect that the single particle spectrum would approach that of a random Hamiltonian [16] and that the
nearest-neighbor splitting distribution would be given by the Wigner surmise[17]. A random Hamiltonian would imply that ``magic'' particle numbers are as a rule absent. There is a large number of
level crossings in Fig. 1 and one can clearly see a significant number of relatively large gaps in the spectrum. These features are definitely not characteristic of a random Hamiltonian. If the
particle number is such that the Fermi level is at a relatively large gap, then the system at the corresponding ``deformation'' is very stable. This situation is very similar to the celebrated
Jahn-Teller effect in molecules. A simple inspection of Fig. 1 suggests that for various particle numbers the energetically most favorable configuration can either have the bubble on- or off-center.
Consequently, a ``magic'' particle number could correspond to a ``deformed'' system. In this respect this situation is a bit surprising, but not unique. It is well known that many nuclei prefer to be
deformed, and there are particularly stable deformed ``magic'' nuclei or clusters [11, 13, 14, 20].
The variation of the ground state energy of an interacting N-fermion system, with respect to shape deformation or other parameters, is quite accurately given by the shell correction energy [11]. In
our case, the eigenspectrum and the shell correction energy are functions of N, R, a and d. When the particle number N is varied at constant density, we have 18] or the critical Casimir energy in a
binary liquid mixture near the critical demixing point [19]. In Fig. 2 we show the contour plot of the shell correction energy for a system with a=R/2 as a function of the bubble displacement d/R
versus N's the bubble ``prefers'' to be in the center, while for other values that is the worst energy configuration. For a given particle number N the energy is an oscillating function of the
displacement d and many configurations at different d value have similar energies. However, in all cases, moving the bubble all the way to the edge of the systems leads to the lowest values of d is
preceeded by the highest ``mountain range''. Thus, with the exception of the alternating peaks and deep lakes for the on-center configuration, the largest variations in the shell correction energy
occur when the bubble is close to the surface.
Figure 2: Contour plot of the shell correction energy for the case of a bubble of radius a = R/2 for up to N=8,000 spinless fermions. Calculations have been perfomed up to
In Fig. 3 we show the unfolded single-particle spectra for a bubble with a smaller radius a=R/5. The number of level crossing is significantly smaller than in Fig. 1 and as a result, the shell
correction energy contour plot has less structure, see Fig. 4, and thus a system with a smaller bubble is also significantly softer.
Pairing correlations can lead to a further softening of the potential energy surface of a system with one or more bubbles. We have seen that the energy of a system with a single bubble is an
oscillating function of the bubble displacement. When the energy of the system as a function of this displacement has a minimum, the Fermi level is in a relatively large gap, where the
single-particle level density is very low. When the energy has a maximum, just the opposite takes place. Pairing correlations will be significant when the Fermi level occurs in a region of high
single-particle level density and it is thus natural to expect that the total energy is lowered by paring correlations at ``mountain tops'', and be less affected at ``deep valleys". All this
ultimately leads to a further leveling of the potential energy surface.
Figure 3: The same as in Fig. 1 but for a= R/5.
Figure 4: The same as in Fig. 2 but for a = R/5 and for up to N=1,000 spinless fermions.
With increasing temperature the shell correction energy decreases in magnitude, but the most probable position of a bubble is still off-center. The reason in this case is however of a different
nature, the ``positional'' entropy of such a system favours configurations with the bubble off-center, as a simple calculation shows, namely
A system with one or several bubbles should be a very soft system. The energy to move a bubble is parametrically much smaller than any other collective mode. All other familiar nuclear collective
modes for example involve at least some degree of surface deformation. For this reasons, once a system with bubbles is formed, it could serve as an extremely sensitive ``measuring device'', because a
weak external field can then easily perturb the positioning of the bubble(s) and produce a system with a completely different geometry. There are quite a number of systems where one can expect that
the formation of bubbles is possible [8]. Known nuclei are certainly too small and it is difficult at this time to envision a way to create nuclei as big as those predicted in Ref. [2]. On the other
hand voids, not always spherical though, can be easily conceived to exist in neutron stars [21]. Metallic clusters with bubbles, one or more fullerenes in a liquid metal or a metallic ball placed
inside a superconducting microwave resonator [22] in order to study the ball energetics and maybe even dynamics, are all very promising candidates.
Financial support for this research was provided by DOE.
Next: References Aurel Bulgac
Thu Feb 4 15:40:17 PST 1999 | {"url":"http://www.phys.washington.edu/~bulgac/Papers/bubble/index.html","timestamp":"2014-04-18T18:13:11Z","content_type":null,"content_length":"19596","record_id":"<urn:uuid:b50d459f-c9b2-45ce-9fa3-7d5096d9540b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
Water Tank Minimization
A copper water tank in the form of a rectangular parallelopiped is to made. If its length is to be a times its breadth, how high should it be that for a given capacity it should cost as little as
W.E. Berely, Problems in Differential Calculus , 1895
Click here to reveal the answer | {"url":"http://www.maa.org/publications/periodicals/convergence/water-tank-minimization?device=mobile","timestamp":"2014-04-19T05:58:02Z","content_type":null,"content_length":"19922","record_id":"<urn:uuid:b54cad0d-75af-4a6f-91ea-f51a5dd808d2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spectrum Lab Glossary
< under construction >
Analog- to Digital Converter. Used, for example, in the input section of a soundcard, to convert the analog audio signal into "numbers" which can be processed with this program.
Digital- to Analog Converter. Used, for example, in the output section of a soundcard, to convert the processed signal back into analog audio signals.
Digital Signal Processing, or digital signal processor. In Spectrum Lab, DSP is done entirely in software (no "hardware DSP" is used here).
Extremely Low Frequency: 3 ... 30 Hz, Super Low Frequency: 30 ... 300 Hz, Ultra Low Frequency: 300 ... 3000 Hz .
SLF and ULF can be easily processed with a soundcard. ELF can be a problem due to the lower edge frequency of most soundcards, caused by the coupling capacitors between the line input jack, and
the ADC.
Fast Fourier Transform. An algorithm which can transform a signal from the time domain into the frequency domain. The number of samples in the time domain, and the number of samples in the
frequency domain, are often restricted to powers of two (as in SpecLab). The FFT is the heart of SpecLab's frequency analyser, and also used in the DSP filter when running in "FFT mode". In the
SL manual, a single sample in the frequency domain (=often the "result" of the FFT as we use it here) is often called an "FFT bin".
A single bin in the fourier transform. Contains the energy of a signal in a narrow frequency range.
Low Frequency = radio spectrum between 30 and 300 kHz. This is beyond the frequency range which can be handled with most audio soundcards, so you will need an extra receiver to catch these
signals. Some "shortwave" receivers go down to 30 kHz, so they make decent LF receivers (but beware, their sensitivity in the LF spectrum can be very limited, or, as radio amateurs call it, they
are "pretty deaf down there").
Very Low Frequency = radio spectrum between 3 and 30 kHz. Can be processed directly with modern soundcards. | {"url":"http://www.qsl.net/dl4yhf/speclab/glossary.htm","timestamp":"2014-04-17T21:25:54Z","content_type":null,"content_length":"3390","record_id":"<urn:uuid:d4c13ed9-1367-43d9-99ed-7021d74e9de1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
Launching of a Potato (Involve kinetic and potential energy and an angle!!)
a. What is the horizontal component of the potato’s velocity at the time it is launched?
A) You can seperate the velocity into two vectors, lets say v[x] and v[y]. The find the horizontal velocity, you simply make a triangle with the v[initial] as the hypotenuse with an angle of 47
degrees. Use v[initial]sin(x) to find v[x] (the horizontal velocity). All this is assuming there is no air friction.
What is the speed of the potato when it is its maximum height? Hint: the answer is not 0!
The speed of the potato I interpret as the magnitude of its velocity. Since v[x] is 0m/s at max height we can neglect that and use v[y]. The answer would be v[i]sin(x), the same as your question A.
. Calculate the kinetic energy of the potato when it is launched.
d. Calculate the gravitational potential energy of the potato when it is launched.
Use PE=mgh. Keep in mind that h has a non-zero value because you are not on the ground.
e. Calculate the kinetic energy of the potato when it reaches its maximum height.
Gotta go to class! My answers might not necessarily be correct, please check them over! I'm just a student myself lol. | {"url":"http://www.physicsforums.com/showthread.php?t=539317","timestamp":"2014-04-20T08:48:33Z","content_type":null,"content_length":"43586","record_id":"<urn:uuid:fc7388e0-4dc1-47ac-a19f-8467953f876f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wiggling Sums
The problem
The context of this problem is related to optimization problems: given some value, we want to produce a bunch of related values.
An example of where such an operation can be found is shrink :: Arbitrary a => a -> [a], found in the QuickCheck library.
Alex and I encountered an fun problem while working on something similar at Tsuru. This blogpost is not really aimed at people who have just begun reading about Haskell as it contains little text and
requires some intuition about sums and products (in the more general sense).
> {-# LANGUAGE FlexibleInstances #-}
> import Control.Applicative
> import Data.Traversable
We can capture our idea of related values in a typeclass:
> class Wiggle a where
> wiggle :: a -> [a]
And define a simple instance for Int or Double:
> instance Wiggle Int where
> wiggle x = [x - 1, x, x + 1]
> instance Wiggle Double where
> wiggle x = let eps = 0.03 in [x - eps, x, x + eps]
The interesting notion is to define instances for more general (combined) types. Given a tuple, we can wiggle it in two ways: either wiggle one of its components, or wiggle them both. Let’s express
both notions using two simple newtypes ^1:
> newtype Product a = Product {unProduct :: a}
> deriving (Show)
> instance (Wiggle a, Wiggle b) => Wiggle (Product (a, b)) where
> wiggle (Product (x, y)) =
> [Product (x', y') | x' <- wiggle x, y' <- wiggle y]
> newtype Sum a = Sum {unSum :: a}
> deriving (Show)
> instance (Wiggle a, Wiggle b) => Wiggle (Sum (a, b)) where
> wiggle (Sum (x, y)) =
> [Sum (x', y) | x' <- wiggle x] ++
> [Sum (x, y') | y' <- wiggle y]
The same applies to structures such as lists. We can wiggle all elements of a list, or just a single one (if the list is non-empty). Both instances are reasonably straightforward to write.
The interesting question is if and how we can do it for a more general family of structures than lists? Foldable? Traversable?
A Wiggle instance for traversable products is not that hard:
> instance (Traversable t, Wiggle a) => Wiggle (Product (t a)) where
> wiggle (Product xs) = map Product $ traverse wiggle xs
But how about the instance:
> instance (Traversable t, Wiggle a) => Wiggle (Sum (t a)) where
> wiggle = wiggleSum
The solution
Is it possible? Can you come up with a nicer solution than we have?
1. These newtypes are also defined in Data.Monoid. I defined them again here to avoid confusion: this code does not use the Monoid instance in any way.↩ | {"url":"http://jaspervdj.be/posts/2012-10-17-wiggling-sums.html","timestamp":"2014-04-16T14:06:23Z","content_type":null,"content_length":"15236","record_id":"<urn:uuid:a8e313c9-799c-4236-946d-a9385ebe2a80>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
Divergence of a series
March 15th 2010, 03:53 AM #1
Junior Member
Jan 2008
Divergence of a series
Hi, I want to show that $\sum_{n=1}^{\infty}b_n$, where $b_n = \ln(1 - \frac{1}{2^n})$, is divergent (approaches negative infinity).
How would I go about approach this? I was thinking of using direct comparison or limit form of comparison test. However, I am not sure of what to compare $b_n$ with.
Any help will be kindly appreciated.
Last edited by shinn; March 15th 2010 at 04:49 AM.
Hi, I want to show that $\sum_{n=0}^{\infty}b_n$, where $b_n = \ln(1 - \frac{1}{2^n})$, is divergent (approaches negative infinity).
How would I go about approach this? I was thinking of using direct comparison or limit form of comparison test. However, I am not sure of what to compare $b_n$ with.
Any help will be kindly appreciated.
It should be obvious that the very first term $\to \ln{0} \to -\infty$.
If you can show the rest of the terms create a convergent series, when you add them, the sum is still going to $\to -\infty$.
sorry I made a mistake when typing up the question, n is suppose to start at 1. thanks for pointing that out.
$\sum_{n = 1}^{\infty}\ln{\left(1 - \frac{1}{2^n}\right)} = \ln{\left(1 - \frac{1}{2}\right)} + \ln{\left(1 - \frac{1}{4}\right)} + \ln{\left(1 - \frac{1}{8}\right)} + \ln{\left(1 - \frac{1}{16}\
right)} + \dots$
$= \ln{\frac{1}{2}} + \ln{\frac{3}{4}} + \ln{\frac{7}{8}} + \ln{\frac{15}{16}} + \dots$
$= \ln{\left(\frac{1}{2} \cdot \frac{3}{4} \cdot \frac{7}{8} \cdot \frac{15}{16} \cdot \dots\right)}$
$\to \ln{0}$
$\to -\infty$.
the series converges.
No. It converges.
Here is a hint:
Make a common denominator.
And use one of the logarithmic functions' properties.
Sorry but I don't see how
$\ln{\left(1 - \frac{1}{2^n}\right)} = \ln{\left(\frac{2^n - 1}{2^n}\right)} = \ln{(2^n - 1)} - \ln{(2^n)}$
helps us in this case.
When you take the sum you just end up with
$\ln{1} - \ln{2} + \ln{3} - \ln{4} + \ln{7} - \ln{8} + \dots$.
This is not a telescopic series so I don't see how you get that the series is convergent...
Sorry but I don't see how
$\ln{\left(1 - \frac{1}{2^n}\right)} = \ln{\left(\frac{2^n - 1}{2^n}\right)} = \ln{(2^n - 1)} - \ln{(2^n)}$
helps us in this case.
When you take the sum you just end up with
$\ln{1} - \ln{2} + \ln{3} - \ln{4} + \ln{7} - \ln{8} + \dots$.
This is not a telescopic series so I don't see how you get that the series is convergent...
But I did not say it will be a telescoping series.
alright, so suppose i have the series $\sum{\ln \left( 1+\frac{1}{n^{2}} \right)}=\ln 2+\ln \frac{5}{4}+\ln +\frac{10}{9}+\cdots$ and i'm seeing here that the numerator grows faster than the
denominator, thus the series diverges.
that's just wrong, doesn't provide a solid proof of the convergence of the series.
my series converges by limit comparison test with $b_n=\frac1{n^2}$ which is a solid proof of the convergence of the series.
Considering that the OP has stated that he/she needs help getting to the answer of $-\infty$ he/she has been given, I have shown correctly how to get that answer, as the logarithm laws and logic
I have used is not flawed.
alright, so suppose i have the series $\sum{\ln \left( 1+\frac{1}{n^{2}} \right)}=\ln 2+\ln \frac{5}{4}+\ln +\frac{10}{9}+\cdots$ and i'm seeing here that the numerator grows faster than the
denominator, thus the series diverges.
that's just wrong, doesn't provide a solid proof of the convergence of the series.
my series converges by limit comparison test with $b_n=\frac1{n^2}$ which is a solid proof of the convergence of the series.
Actually it does work in this case.
Simple application of the same logarithm rule $\ln{a} + \ln{b} = \ln{(ab)}$.
So $\ln{2} + \ln{\frac{5}{4}} + \ln{\frac{10}{9}} + \dots = \ln{\left(2 \cdot \frac{5}{4} \cdot \frac{10}{9} \cdot \dots \right)}$
and as stated before, since you're multiplying each term in the product by a number $>1$, the product grows without bound (albeit slowly), as does the logarithm. So this series diverges.
$a_n=\ln\bigg(1-\frac1{2^n}\bigg)$ is strictly negative, so there's no difference on studying the series $\sum\ln\bigg(1-\frac1{2^n}\bigg)$ or $-\sum \ln\bigg(1-\frac1{2^n}\bigg)$ where the first
one is negative and the second one positive, so limit comparison test with $\frac1{2^n}$ does apply, and the series converges.
March 15th 2010, 04:27 AM #2
March 15th 2010, 04:50 AM #3
Junior Member
Jan 2008
March 15th 2010, 05:07 AM #4
March 15th 2010, 05:24 AM #5
March 15th 2010, 05:25 AM #6
March 15th 2010, 05:30 AM #7
March 15th 2010, 05:31 AM #8
March 15th 2010, 05:39 AM #9
March 15th 2010, 05:44 AM #10
March 15th 2010, 05:52 AM #11
March 15th 2010, 05:54 AM #12
March 15th 2010, 05:59 AM #13
March 15th 2010, 06:00 AM #14
March 15th 2010, 06:06 AM #15 | {"url":"http://mathhelpforum.com/calculus/133884-divergence-series.html","timestamp":"2014-04-18T13:00:27Z","content_type":null,"content_length":"95164","record_id":"<urn:uuid:f53f898b-1bfa-4a8a-b727-bfef16d9e4c9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
Control Valve Simulation in the Age of the Black Box
The Black Box: All It Takes to Gain Answers to Complex Questions Is to Push Buttons on a Box, and the Canned Software Will Provide the Answer, Right?
"Ask the Experts" is moderated by Béla Lipták (http://belaliptakpe.com/), process control consultant and editor of the three-volume Instrument Engineer's Handbook (IEH). He is recruiting contributors
for the 5th edition. If you would like to contribute or if you have questions for our team of "experts, please send them to liptakbela@aol.com.
Q: My background is in electronics, and I need to simulate the behavior of a valve in Simulink (MATLAB). Both the inlet and outlet pipe diameter to the valve are 2 inches. How do I select the
diameter inside the valve? Should it be more or less than the size of inlet? Because I already have the model in MATLAB, I just need to choose the diameters of the duct inside the valve and the inlet
and outlet ducts connecting the valve. My valve is a straightforward one run by a stepper motor. I don't know the exact name of it in mechanical terms, but I believe that the output from the motor
(cross-sectional area) is fed to the valve as an input based on which valve will open or close. Is there any site where I can get some ready-made simulated valves that can be run in Simulink?
Mohammed Hanneef
A: This question reflects today's culture of the "black box." This culture suggests that all it takes to gain answers to complex questions is to push some buttons on a box or to plug in a number, and
the canned software will provide the answer. This is wrong! The question, "What port size is required if the pipe is 2 in?" is similar to asking, "What size shoe should I buy if my waste is 34?"
If process control in general or control valve simulation in particular were that simple, we would not need process control engineers! Gadgets like black boxes are only as good as the software
inside, and if the programmer did not understand the problem, the software is useless. The rule of "garbage in, garbage out" will apply! In other words, if the instructions for a valve simulator
require only to provide the inside diameter of the valve, that simulator model is useless.
This does not mean that there is anything wrong with Simulink. It is a commercial tool for modeling, simulating and analyzing multi-domain dynamic systems. It is used in the model-based simulation of
the control of both simple processes, such as the thermostat control of a home, or as complex processes as the simulation of the automated transport vehicle (ATV) used in the international space
station. The problem is not with the capabilities of this software; it is with the understanding of what information it needs to make the simulation meaningful.
In order to simulate the behavior of a control valve station, first the purpose of the simulation must be defined, and then both the characteristics (the "personality") of the valve and the nature of
the process must be described. The process description includes both the nature of the installation and the properties of the flowing fluid, including its Reynolds number. As to the valve/actuator/
positioner package itself, both its steady state and dynamic behavior should be described.
Its steady-state behavior (characteristics) describes the relationship between valve position and flow as a function of pressure drop (on the left of Figure 1). This inherent characteristic (linear,
equal-percentage, hyperbolic, quick opening, square root, etc.) is affected by the variation in the system pressure drop, which determines the distortion coefficient (Dc on the right of Figure 1) and
results in the actual characteristics of the valve.
The dynamic behavior of the valve station describes the relationship between the controller output signal (desired valve position) and the actual valve position, which is affected by the dead band
and velocity limits of the installation. Naturally, these effects are reduced if the valve is provided with a positioner.
The dynamic behavior of a control valve may be represented by a time lag of first or second order, with a limited velocity in stem movement. The time lags are simulated by lag or lag-lead elements.
Velocity limiting may be expressed by the differential equation below,
where x is the stem position, vL denotes the velocity limit, and xideal is the input position of an ideal, unconstrained valve. When the velocity exceeds the limiting value, the gain of the actuator
decreases, and its phase lag increases.
The electrical engineer asking this question probably did not understand what I wrote here, but it should show that simulation is a complex and sophisticated field of engineering that requires
process control knowledge, and if somebody suggests that all you need to simulate a valve is to plug in the port diameter, that person does not possess that knowledge. If you want to learn about
valve simulation, read Chapter 8.11 in the 2nd volume of my handbook, the many ISA documents (such as 75.25) on valve simulation, or see my 2007 article on the subject at www.controlglobal.com/
Bela Liptak
A: The simulation you are using might not be adequate.
First your request does not say why the simulation is being done. That is, is it to learn flow dynamics, control dynamics or simply to estimate flows and pressure drops under a static situation? Each
objective will require a different approach.
For the valve flow, see the ISA valve sizing program if the valve is to be used over a wide range. Flow relationships will vary with pressure drop and be linear versus stem position at very small
openings; go into a region where the square root of pressure difference sets flow; and then may well go into a choked-flow regime where the flow is constant regardless of pressure changes. Also, the
valve flow coefficient might not be linear with stem position.
For dynamics, see ISA 75.25 for the impact of imperfect stem positioning versus control signal to describe control problems caused by valve actuator dynamics.
If your concerns are simple, the valve simulation can be simple. If you are looking for subtle issues or for a difficult application, the simulation will have to be more detailed. A valve is not a
simple resistor.
Simulation can be a very powerful tool. The question is always, is it valid? Can it be proven adequate for the questions asked? I know very well how to get the desired answer, but that might not
represent the truth.
Cullen Langford
former chair of ISA 75.25 and 75.07
A: In Simulink simulations, I recall that there is an input function block for step inputs. Please check. (I could be wrong, since I have not used MATLAB, Simulink, Control Toolbox, Signal
Processing, etc. for awhile). But having an input function block is of no use. You need to write your own simulation codes for the control valve. This means you need to know the dynamics of your
valve—the time constant, the gain, the time lag, etc., of its responses to step changes. More important is the resolution of your valve; typically a diaphragm-operated valve has a resolution of 2%,
while a piston-operated valve has a resolution of only 1%. I have not seen any stated resolution for step-motor actuator. (The resolution here means the changes in output fluid pressure caused by
the valve for every unit change in valve stroke).
You can study the dynamics of your valve by statistics—clusters analysis, PCA (principle component analysis), DMC (dynamic matrix), etc., from historical data captured with a sampling time at least
several times faster than your step motor. (I am assuming that your plant data were captured once every second for a total number of 6000 plant variables). The changes in the output (ideally the flow
rate of the valve), if not, the valve stroke, versus the input (the step motor input signal and dynamics of the motor) have to be given.
There is software in the market called Process Doc, written in MATLAB codes, which can help you to pinpoint the key dependent variables as a function of the independent variable (the flow rate of the
valve or the valve stroke). But again you need to provide enough statistical data, both for building a model for the valve and for model validation. You can start out by assuming a model, such as
Box-Jenkins or ARMA models, etc., and work your way by including white noise, using your Model Identification Toolbox. For reference, please read and examine the codes in IDDEMO. To build a model
using MATLAB and Simulink is fast, but garbage in, garbage out, if the dynamics of the valve are unknown or assumed incorrectly.
It is not impossible to build a valve model analytically. But it is very difficult to do so, and basic chemical engineering assumptions have to be used. Sometimes the viscous nature of the fluid can
affect the response of your valve to an unknown extent for every change in your step motor input.
I do not believe anyone would, or could, release ready-made software that will simulate your valve for free. I have seen MATLAB + Simulink crashed in non-linear engineering applications.
For any modeling of a control valve, it has to meet the published Cv versus lift in the valve catalog provided by the specific manufacturer. Lift here means the percent opening or travel of the valve
plug inside the valve trim. Such published Cv versus lift data is used to validate the valve model. For a given valve size, the Cv versus lift varies from manufacturer to manufacturer because the
loss coefficients of their trims are not the same, since their trim geometries are not the same. Even among the various valve types or valve models of the same size made by the same valve
manufacturer, the loss coefficient of the trims are not the same. The loss coefficient can be calculated from the geometry of the trim based on knowledge (internal flows) in chemical engineering
transport phenomenon or in mechanical engineering fluid mechanics.
For example Crane Technical Publication 410 bases the flow geometry model on a circular flow geometry and suggests the following relationship:
Cv/A = 38/sq.root (k)
Where A = 3.1416d2/4 and is the effective flow area of the valve trim in inches squared,
k is the loss coefficient of the valve trim (dimensionless), and d is the equivalent flow diameter inside the valve trim.
The above equation assumes that the capacity of a control valve (Cv) is a function of the upstream resistance (i.e., loss coefficient) of the valve trim. Knowing the loss coefficient and the Cv, the
effective flow area (A) can be calculated. This area is always smaller than the physical flow area of the valve trim, depending on the upstream loss coefficient, k. Cv is called the capacity of the
control valve, while Cv/A is called the flow capability of the control valve because it is a function of the trim loss coefficient.
For complicated valve trim geometries, the loss coefficient can be calculated by an iterative method: First assume a numerical value for k; measure the outlet flow area of the trim or from its
photograph; calculate the Cv/A at the trim outlet; then calculate the Cv/A at the trim inlet and, therefore, the Cv. If the calculated Cvs match with the published Cv at various strokes to within 5%,
then the assumed loss coefficient is the k of the valve trim. Such a reversed engineering method has been published by Joe Steinke of CCI.
There are valve trims with variable loss coefficients as the valve stroke changes. Examples are those control valves used in regulating boiler feedwater flow, where the trim loss coefficient varies
to take into account the boiler pump curve for optimized control—low flow rate at small valve opening with high loss coefficient; high flow rate at large valve opening with low loss coefficient.
Gerald Liu, P.Eng.
Control Valve Consultant | {"url":"http://www.controlglobal.com/articles/2011/valvesimulation1101/?show=all","timestamp":"2014-04-17T09:46:45Z","content_type":null,"content_length":"70506","record_id":"<urn:uuid:2e5ba685-9c60-4893-85fd-95822c5b81ee>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
Learning to Program - A Beginners Guide - Part Seven - Representing Numbers
Learning to Program – A Beginners Guide – Part Seven – Representing Numbers
by Matthew Adams
In this section we’re going to look at the Data Memory in more detail.
So far, we’ve assumed we know what “store 1 in the memory location at offset 4″ means. If we’ve told it to store 1 at offset 4, when we read back the value at offset 4, we get 1 as expected.
We know, then, that a memory location can hold a whole number greater than or equal to zero (the “positive integers”). Is there a maximum number that we can hold in that location?
Let’s write a short program to find out. This uses a loop like the one we saw in the previous exercise to increment r0 all the way up to 255. When r0 reaches 255, we end the loop and proceed to add
another 1 to the result, and write it back to the memory at offset 0.
(Don’t forget that you can get the tool we use to run these “assembly language” programs from here. The setup instructions for this F# environment are here.)
load r0 0 ; set the result to 0
add r0 1 ; add 1 to the result (<--- jump back here)
compare r0 255 ; is the result 255?
jumpne -2 ; if not, jump back up 2
add r0 1 ; add another 1 to the result (255 + 1=??)
write 0 r0 ; write the result to the memory at offset 0
If you run this, you'll see R0 increase one-by-one, until it reaches 255. Then the final few lines of the output are as follows:
compare r0 255
R0=255 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=3, FL=0
jumpne -2
R0=255 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=4, FL=0
add r0 1
R0=0 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=5, FL=0
write 0 r0
R0=0 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=6, FL=0
R0=0 R1=0 R2=0, WR0=0, WR1=0, WR2=0, PC=65535, FL=0
You can see that when we reached the value 255 in R0, the comparison set the FL register to 0, so the JUMPNE instruction did not decrement the program counter, and we went on to execute the ADD
When we added 1 again, R0 seemed to reset back to 0!
What's going on?
Well, it's all to do with the fact that each memory location in our computer is a fixed size called a byte (remember that the x86 MOV to memory instruction referred to a BYTE in the previous section
). What's a byte? And what's the maximum number we can represent with a byte? To understand, we need to look at binary arithmetic.
Binary Arithmetic
Remember in the "How does a computer work" section we talked about transistors being switches that can be On and Off. It turns out that we can represent integers just using On and Off values.
Representing numbers in different bases
We're all familiar with decimal arithmetic - counting 0…1…2…3…4…5…6…7…8…9… When we run out of the symbols we use for numbers, we then add a second digit 10…11…12…13…14…15…16…17…18…19, then
20…21…22…23…24…25…26…27…28…29 and so on, until we run out of symbols again. Then we add a third digit 100…101…102…103…104… and so on. We start out by counting up digit1 (0-9). Then, when we add a
second digit, the number is (ten x digit2) + digit1. When we add a third digit, the number is (ten x ten x digit3) + (ten x digit2) + digit1. And so on.
Here are a couple of examples (like you need examples of the number system you use every day!). But notice specifically that each digit represents an additional power of 10. So we call this "base 10"
(or decimal). (Remember that anything to the power 0 is 1.)
Digit Multiplier 100 10 1
(As power) 10^2 10^1 10^0
Base 10: 234 2 3 4
Base 10: 84 0 8 4
This is so familiar to us, we've usually forgotten that we were once taught how to interpret numbers like this. (Often, this was done with cubes of wood or plastic in blocks of 1, 10, 100 etc. when
we were very tiny.)
Imagine, instead, that we only had symbols up to 7.
We'd count 1…2…3…4…5…6…7… Then, when we ran out of symbols, we would have to add a digit and start counting again 11…12…13…14…15…16…17… and then 20…21…22…23…24…25…26…27… and so on until
As with decimal, we start out by counting up digit1 (0-7). Then, when we add a second digit, the number is (eight x digit2) + digit1. When we add a third digit, the number is (eight x eight x digit3)
+ (eight x digit2) + digit1. And so on.
We call this counting system "base 8" (or octal). As you might expect, each digit represents an increasing power of 8.
Here are some examples of values in base 8
Digit Multiplier 64 8 1
(As power) 8^2 8^1 8^0
Base 10: 256 4 0 0
Base 8: 400
Base 10: 84 1 2 4
Base 8: 124
What if we had more symbols for numbers than the 10 (0-9) that we're familiar with? What if we had sixteen, say? We call this "base 16" or hexadecimal (hex for short). Rather than resort to Wingdings
or Smiley Faces for the extra symbols, it is conventional to use the letters A-F as the 'numbers' greater than 9.
Digit Multiplier 256 16 1
(As power) 16^2 16^1 16^0
Base 10: 256 1 0 0
Base 16: 100
Base 10: 84 0 5 4
Base 16: 54
Base 10: 255 0 F F
Base 16: FF
Base 10: 12 0 0 C
Base 16: C
Now, imagine we're really symbol-poor. We've only got 0 and 1 (the "off" and "on" of the transistor-switches in our processor). It works just the same. First, we count units: 0…1 We've run out of
symbols, so we add a digit and count on 10…11. Out of symbols again, so add another digit: 100…101…110…111. We run out of symbols really quickly, so we need another digit:
1000…1001…1010…1011…1100…1101…1110…1111 and so on.
Here are some examples in "base 2" (or binary).
Digit multiplier 128 64 32 16 8 4 2 1
(As power of 2) 2^7 2^6 2^5 2^4 2^3 2^2 2^1 2^0
Base 10: 255 1 1 1 1 1 1 1 1
Base 10: 84 0 1 0 1 0 1 0 0
Look at the first example in that table. The decimal (base 10) number 255 is represented by a full set of 1s in each of 8 binary digits.
(Saying "binary digit" is a bit long winded, so we've shortened it in the jargon to the word bit.)
We call a number made up of 8 bits a byte.
We number the bits from the right-most (0) to the left-most (7). Because the right-most represents the smallest power of 2, we call it the least significant bit (LSB). The left-most bit represents
the largest power of 2, so we call it the most significant bit (MSB).
MSB LSB
Bit number 7 6 5 4 3 2 1 0
Power of 2 2^7 2^6 2^5 2^4 2^3 2^2 2^1 2^0
As we've seen, the maximum (decimal) number we can store in a byte is 255 (a '1' in all 8 bits).
MSB LSB
Bit number 7 6 5 4 3 2 1 0
Power of 2 2^7 2^6 2^5 2^4 2^3 2^2 2^1 2^0
Base 10: 255 1 1 1 1 1 1 1 1
The fact that something screwy happens when we exceed this 8-bit maximum value hints that our register R0 is probably 1 byte in size.
But why does something screwy happen?
To understand that, we need to learn about binary addition.
Binary Addition
We're so used to doing decimal addition, that we probably don't even think about it. But let's remind ourselves how we add up regular decimal numbers by hand.
We write the numbers to be added in a stack, with each equivalent digit lined up in columns. Then, starting from the right hand column (which, remember, we call the least significant) we add up the
total of that column and record it at the bottom. If the total is greater than 9 (the biggest number we can represent in a single column), we "carry" that number into the next column, and include it
in the addition for that column.
If a particular number has no value in the column, we treat it as a 0
Here are a couple of examples
+ 163
carry 1
carry 111
We can do exactly the same thing for binary addition. Here's an example of two bytes being added together:
carry 1111
Now, let's see what happens when we add 1 to a byte containing the decimal value 255:
carry 11111111
But, hang on a minute - that's 9 bits! And a byte can contain only 8 bits. We can't just make up a new '9th' bit.
What happens is that we're left with the least significant 8 bits (i.e. the right-most 8 bits) - and they are binary 00000000 - or 0 in decimal!
And that's why our program wrapped round to 0 when we added 1 to 255.
It's all very well knowing this limitation, but how can we represent larger numbers?
One way is to increase the number of bits in your storage.
It turns out that our computer has some larger registers called WR0, WR1, and WR2 that can do just that. Each of these registers can store values up to 16 bits (or two bytes) in size. We sometimes
call a 16 bit value a short (and on computers with a 16-bit heritage, a word).
Spot test: What's the largest number you can represent with 16 bits?
Hint: 2^8 = 256
Answer: If the largest decimal you can store in 8 bits is 255 (= 2^8 - 1), then the largest you can store in 16 bits is (2^16 - 1) = 65535
With n bits, you can store a positive decimal integer up to (2^n - 1).
Let's load up a number larger than 255 into one of these 16 bit registers. We'll create a new program to do that.
LOAD WR0 16384
If you run that, you should see the following output. 16384 is loaded into the WR0 register.
load wr0 16384
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=1, FL=0
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=65535, FL=0
But what happens when we write the contents of that register to memory?
LOAD WR0 16384
WRITE 0 WR0
This time the output looks like this:
load wr0 16384
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=1, FL=0
write 0 wr0
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=2, FL=0
R0=0 R1=0 R2=0, WR0=16384, WR1=0, WR2=0, PC=65535, FL=0
It didn't write 16384 to the memory location at offset 0, it wrote 64 to the memory location at offset 1?!
Try it with a different value: (16384 + 1) = 16385
LOAD WR0 16385
WRITE 0 WR0
This is the output:
load wr0 16385
R0=0 R1=0 R2=0, WR0=16385, WR1=0, WR2=0, PC=1, FL=0
write 0 wr0
R0=0 R1=0 R2=0, WR0=16385, WR1=0, WR2=0, PC=2, FL=0
R0=0 R1=0 R2=0, WR0=16385, WR1=0, WR2=0, PC=65535, FL=0
This time, it has stored 1 in the memory at offset 0, and 64 at offset 1.
What will the result be if we if store 16386? Have a guess before you look at the answer.
LOAD WR0 16386
WRITE 0 WR0
Here’s the answer:
R0=0 R1=0 R2=0, WR0=16385, WR1=0, WR2=0, PC=65535, FL=0
Did you guess right? OK - how about 16640? Again, have a guess before you look at the answer. (There's a hint just below if you need it.)
Hint: 16640 = 16384 + 256
LOAD WR0 16640
WRITE 0 WR0
And here's the answer again:
R0=0 R1=0 R2=0, WR0=16640, WR1=0, WR2=0, PC=65535, FL=0
If you haven't spotted the pattern yet, this should give you an even bigger clue - what happens if we store the number 256?
LOAD WR0 256
WRITE 0 WR0
R0=0 R1=0 R2=0, WR0=16640, WR1=0, WR2=0, PC=65535, FL=0
It is rather like these two memory locations are storing the number in base 256!
We've seen that we can hold the numbers 0-255 in 1 byte, so when we need to store a larger number, we add a second byte, and count in the usual manner (256 * second byte) + (first byte). You could
imagine storing a 24 bit number by adding a third byte, and a 32 bit number by adding a fourth byte and so on. (We sometimes call a 32-bit number a dword, from 'double word' and a 64-bit number a
qword, from 'quadruple word'.)
Byte 1 0
(As power of 256) 256^1 256^0
Base 10: 256 1 0
Base 10: 16384 64 0
As with our Most Significant Bit and Least Significant Bit, we call the byte that stores the largest power of 256 the Most Significant Byte, and the other the Least Significant Byte - or sometimes
High Byte and Low Byte.
Ordering the bytes
When we were ordering the bits in our byte, we numbered them from right-to-left, low-to-high. You'll notice that when we store the 16bit number in our 2 8-bit memory locations, we're storing the high
byte in the memory location at offset 1, and the low byte in the memory location at offset 0.
High (most-significant) byte Low (least-significant) byte
Offset in memory 1 0
(As power of 256) 256^1 256^0
We could equally well have chosen to do that the other way around!
Computers that choose this particular byte ordering are called little endian memory architectures. Our computer does it this way, as does Intel's x86 series. The Z80 and PowerPC are both big endian -
they store the high byte in the lower memory offset, and the low byte in the higher memory offset. Some architectures (like the ARM) let the programmer choose which to use.
(Another quick jargon update: when we send data over a network, it is often encoded in a big-endian way, so you sometimes see big-endian ordering called network ordering.)
This is an example of encoding a value into computer memory, in this case, a positive integer. Notice that even with something as simple as a positive integer, we have to think about how it is
represented in the computer! We'll see this need for encoding again and again for decimals, text, images and all sorts of data. We need to encode whenever we have to represent some information in a
computer. Most of the time, higher-level languages or other people's code will take care of the details for us, but if we forget about it entirely, we open ourselves up to all sorts of
seemingly-mysterious behaviour and bugs.
Displaying multi-byte values
Sometimes, displaying numbers in decimal is not the most convenient way to read them - particularly when we are looking at multi-byte values.
Here's the decimal number 65534 represented as two decimal bytes
Low byte High byte
Now, look what happens if we represent it hexadecimal (base 16) instead of decimal.
Here's 65534 in hex:
And here it is represented as two hexadecimal bytes in memory
Notice that they are represented using exactly the same digits (barring the byte-ordering). Whereas the decimal values "255 254" look nothing like "65534".
It can be very convenient to get used to reading numbers in base 16 (hex), because each byte is always just a pair of hex digits, and their representation in memory is very similar to the way you
would write them on the page.
We've got a handy switch in our program that lets us start to represent the output in hex, instead of decimal.
Just below your program, you'll see the following lines:
(* THIS IS THE START OF THE CODE THAT 'RUNS' THE COMPUTER *)
let outputInHex = false
Change this second line to read:
let outputInHex = true
Now, let's update our program to load 65534 in decimal into memory:
LOAD WR0 65534
WRITE 0 WR0
And run the program again. The output is now displayed in hex.
load wr0 65534
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0001, FL=00
write 0 wr0
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0002, FL=00
FE FF 00 00 00 00 00 00 00 00
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=FFFF, FL=00
FE FF 00 00 00 00 00 00 00 00
What if we want to write the values in our program in hex? How do we distinguish between the decimal value 99, and the hex value 99 (=153 decimal)? Different languages use different syntax, but our
model computer supports two of the most common - both of which use a prefix. You either add the prefix 0x, or use the prefix #.
(You might have seen that # prefix used when specifying colours in HTML mark-up)
LOAD WR0 #FFFE
WRITE 0 WR0
LOAD WR0 0xFFFE
WRITE 0 WR0
Try updating the program to use one of these hex representations and run it again.
load wr0 0xFFFE
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0001, FL=00
write 0 wr0
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0002, FL=00
FE FF 00 00 00 00 00 00 00 00
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=FFFF, FL=00
FE FF 00 00 00 00 00 00 00 00
If you want to, you can also use a binary notation to specify a number. For that we use the prefix 0b
(This is a less common notation, and is not supported across all languages, but it is quite useful.)
LOAD WR0 0b1111111111111110
WRITE 0 WR0
Here's the result of running that version of the program.
load wr0 0b1111111111111110
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0001, FL=00
write 0 wr0
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=0002, FL=00
FE FF 00 00 00 00 00 00 00 00
R0=00 R1=00 R2=00, WR0=FFFE, WR1=0000, WR2=0000, PC=FFFF, FL=00
FE FF 00 00 00 00 00 00 00 00
Representing negative numbers
So far, (almost) all of the numbers that we've represented have been the positive integers.
How do we represent negative numbers?
This is much the same question as "how do we do subtraction". Why? You'll probably recall from basic maths that subtracting the number B from the number A is equivalent to adding the negative of B to
A. You might have seen that written down in this form:
(A - B) = A + (-B)
So, perhaps we can use the idea of subtraction to help us to represent a negative number?
We've already seen a sort of example of subtraction: what happened when we saw the addition of 8-bit numbers carry beyond an 8-bit value? Let's remind ourselves.
For 8-bit values, if we try to add two numbers such that the result would be greater than 255, we are left with the least significant 8-bits, and we drop the carried-over 9th bit.
Here are a few examples
255 + 1 = 0 (with carry)
255 + 2 = 1 (with carry)
255 + 3 = 2 (with carry)
(You can write a short program to test that, if you like.)
load r0 255
load r1 255
load r2 255
add r0 1
add r1 2
add r2 3
What we know about basic arithmetic tells us that:
255 - 255 = 0
255 - 254 = 1
255 - 253 = 2
If we replace the right-hand-side of the first expression with the left-hand-side of the second expression, we get
255 + 1 = 255 - 255
255 + 2 = 255 - 254
255 + 3 = 255 - 253
This would seem to imply that
1 = (- 255)
2 = (-254)
3 = (-253)
How is this possible?!
Well, we've already answered that question: it is precisely because there is a maximum number we can represent in a fixed number of bits, and the way that larger numbers carry over.
Let's represent those numbers in binary:
00000001 = -(11111111)
00000010 = -(11111110)
00000011 = -(11111101)
A quick rearrangement of the expression shows us that that that is really true!
(remember that the expression a = -b is equivalent to a + b = 0)
00000001 + 11111111 = 00000000 (with carry)
00000010 + 11111110 = 00000000 (with carry)
00000011 + 11111101 = 00000000 (with carry)
In fact, we could write a little program to test that:
load r0 0b00000001
load r1 0b00000010
load r2 0b00000011
add r0 0b11111111
add r1 0b11111110
add r2 0b11111101
So, for every positive number that we can represent in binary, there is a complementary negative number.
We call this representation of a negative number the two's complement representation.
It is fairly easy to calculate - you just take the binary representation of the absolute number (the number without its sign) and flip all of the 0s to 1s, and 1s to 0s (we call this the one's
complement of the number), then add 1, to make the two's complement.
Let's have an example of that: what's the two's complement representation of the number -27, in an 8-bit store?
First, we need to represent 27 in binary:
Then we flip all the bits to get the one's complement representation:
Finally, we add 1 to get the two's complement representation:
So, -27 in two's complement binary notation is 11100101
We can test this
In decimal, 60 - 27 = 33
60 in binary is 00111100
-27 in two's complement binary notation is 11100101
So, 60 - 27 in binary is 00111100 + 11100101
That is 00100001 (with a carry), which is 33 in decimal, as required!
Originally, we stressed that we could store positive integers from 0 - 255 in an 8-bit value.
Now, we have added the possibility of storing both positive and negative integers in an 8-bit value, by the use of a two's complement representation.
So, every number has an equivalent two's complement "negative" number. How do we tell the difference between 11100101 when it is meant to be -27 and 11100101 when it is meant to be 229?
The short answer is that you can't - it is up to the programmer to determine when a result might be negative and when it might be positive.
We could specify (by documentation, for example) that a particular value we were storing was unsigned (i.e. a positive integer) or signed (i.e. any integer, positive or negative).
It turns out we can remove the ambiguity in the case of the signed integer by limiting the maximum and minimum values we can represent.
Think about the decimal numbers 1…127.
These can be represented in binary as 0b00000001…0b01111111.
Now, think about the decimal numbers -127…-1.
These can be represented in binary as 0b10000001…0b11111111.
Notice how the positive numbers all have a 0 in the top bit, and the negative numbers all have a 1 in the top bit. We often call this the sign bit.
If we choose to limit the numbers we represent in a single byte to numbers that can be represented in 7 bits, rather than 8, then we have room for this sign bit, and there is no ambiguity about what
we mean. But how can we tell whether some memory is signed or not? The short answer is that we can't. The only way to tell is to document how the storage is being used. We're reaching the limits of
what we can conveniently express in such a low-level language. We need to move to a higher-level of abstraction, and a richer language, to help us with that.
Copying a value from a smaller representation into a larger representation
There's a little wrinkle that happens when we copy a value from a smaller representation. Here's decimal 1 as a byte
0b00000001. And here's decimal 1 as a 16-bit value: 0b0000000000000001. So far so good.
What about -127? That's 0b10000001 in a byte, but it's 0b1111111110000001 when stored in a word.
Notice that in each case, the sign bit gets extended across the whole of the most-significant byte when you copy from a byte to a word (the zero for the case of a positive number, or the 1 for a
negative number.)
This sign extension when you copy between storage sizes is very important: 0b10000001 is -127 decimal when stored in a byte, but naively copy that into a word and it becomes 0b00000000100000001,
which is 129 decimal!
Copying a value from a larger representation into a smaller representation
There's an obvious danger with copying numbers from a larger representation (e.g. a word) to a smaller one (e.g. a byte) and that's called truncation.
There's no problem if the number can be fully represented in the number of bits of the target representation e.g. 1 is 0x0001 in 16 bits, and 0x01 in 8 bits - you just lose the high byte. Similarly,
-1 is 0xFFFF in 16 bits, and 0xFF in 8 bits - you just lose the high byte again, and it remains correct. But what about 258? That's 0x0102 in 16-bits, but if you lose the high byte, it becomes 0x02 -
not the number you were thinking of at all!
Most higher level languages will warn you about possible truncation (or even prevent you from doing it directly). But it is a very common source of bugs in programs.
In this section, we've looked at how we can represent numbers (specifically the integers) in data memory.
We've learned how we can convert the number between different bases - in particular base 10 (decimal), base 16 (hexadecimal) and base 2 (binary).
We then looked at binary arithmetic, and saw what happens when we add binary numbers together.
We investigated the largest value we can store in one of our registers, and determined that it represented an 8-bit value (which we call a byte).
We then looked at what happens when we try to store a number larger than that which a particular register can hold, and introduced the concept of a 16-bit number.
Next, we looked at how a 16-bit (or larger) number can be represented in memory, including the concepts of big-endian and little-endian memory layouts. We also switched our computer over to
displaying values using hex instead of decimal, and looked at why we might want to do that.
Finally, we looked at how to store negative numbers in a consistent way, such that we could perform basic addition and subtraction, by using the two's complement representation, and how to constrain
the range of numbers we can store, to remove the ambiguity when storing positive and negative numbers.
Next time, we'll look at how we can bring logic to bear on our problems.
Learning To Program – A Beginners Guide – Part One - Introduction
Learning To Program – A Beginners Guide – Part Two - Setting Up
Learning To Program – A Beginners Guide – Part Three - What is a computer?
Learning To Program – A Beginners Guide – Part Four - A simple model of a computer
Learning To Program – A Beginners Guide – Part Five - Running a program
Learning To Program – A Beginners Guide – Part Six - A First Look at Algorithms
Learning To Program – A Beginners Guide – Part Seven - Representing Numbers
Learning To Program – A Beginners Guide – Part Eight - Working With Logic
Learning To Program – A Beginners Guide – Part Nine - Introducing Functions
Learning To Program – A Beginners Guide – Part Ten - Getting Started With Operators in F#
Learning to Program – A Beginners Guide – Part Eleven – More With Functions and Logic in F#: Minimizing Boolean Expressions
Learning to Program – A Beginners Guide – Part Twelve – Dealing with Repetitive Tasks - Recursion in F# | {"url":"http://blogs.endjin.com/2013/06/learning-to-program-a-beginners-guide-part-seven-representing-numbers/","timestamp":"2014-04-17T13:11:59Z","content_type":null,"content_length":"66351","record_id":"<urn:uuid:df8ccc64-b0af-49e2-9d85-660cbf57705d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00399-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics 621
BACK TO Galileo and Einstein HOME PAGE
More Stuff!
This is a grab bag of lectures, etc., that I've given at various times in the past that do not fit in well with Physics 109N or Physics 252 as I'm teaching them now. Much of this material was
originally presented in summer courses for high school physics teachers.
Using History to Teach Science:
Virginia Beach presentation, January 11, 2000
Physics Using Excel
Physics 581 for High School Teachers, taught in the Summer of 1998, gave detailed instructions for constructing Excel spreadsheets to analyze a wide variety of dynamical phenomena, including
projectiles with air resistance, and planetary orbits. You can also download the spreadsheets and play with them yourself -- this is a great way to teach and learn dynamics!
I've also written notes on how to use it to solve a differential equation, Schrödinger's equation, in two of my Physics 252 homework assignments, here and here.
Electricity and Magnetism
This first lecture covers E&M from the earliest times up to Michael Faraday.
Summer 1995 Lectures on History of Theories of Electricity and Magnetism
This second lecture doesn't really follow from the first, it's an attempt to show -- in as elementary a fashion as possible -- how Maxwell found the speed of light from electrostatics and forces
between current-carrying wires.
Summer 1995 Lecture on Maxwell's Equations
Some Math
Proof of Heron's Formula for the area of a triangle.
Physics 621
: Curriculum Enhancement for Physics Teachers, 1997 | {"url":"http://galileo.phys.virginia.edu/classes/109N/more_stuff/home.html","timestamp":"2014-04-17T21:39:34Z","content_type":null,"content_length":"3475","record_id":"<urn:uuid:c0bd4c95-4e82-480e-9c9d-a0af585fc953>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00609-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics ~really need help!~
Posted by Isis on Monday, February 25, 2013 at 6:01pm.
1) A 1 kg mass moving at 1 m/s has a totally inelastic collision with a 0.7 kg mass. What is the speed of the resulting combined mass after the collision?
2) A cart of mass 1 kg moving at a speed of 0.5 m/s collides elastically with a cart of mass kg at rest. The speed of the second mass after the collision is 0.667 m/s. What is the speed 1 kg mass
after the collision?
3) A 0.010 kg bullet is shot from a 0.500 kg gun at a speed of 230 m/s. Find the speed of the gun.
4)Two carts with a masses of 4 kg and 3 kg move toward each other on a frictionless track with speeds of 5.0 m/s and 4 kg m/s respectively. The carts stick together after the colliding head on. Find
the final speed.
5) A cart of mass 1.5 kg moving at a speed of 1.2 m/s collides elastically with a cart of mass 1.0 kg moving at a speed of 0.75 m/s. (the carts are moving at the same direction)The speed
of the second mass (1.0 kg) after the collision is 0.85 m/s. What is the speed
of the 1.5 kg mass after the collision?
please help me!
J=Ft or J=p
P= Fd/t
please please!!!
• Physics ~really need help!~ - Elena, Monday, February 25, 2013 at 6:22pm
v₁= {-2m₂v₂₀ +(m₁-m₂)v₁₀}/(m₁+m₂)=
v₂={ 2m₁v₁₀ - (m₂-m₁)v₂₀}/(m₁+m₂)=
= 2m₁v₁₀ /(m₁+m₂)=
0= m₁v₁-m₂v₂=>
m₁v₁-m₂v₂=(m₁+m₂)u =>
u= (m₁v₁-m₂v₂)/(m₁+m₂)
When the balls are moving in the same direction
v₁= {+2m₂v₂₀ +(m₁-m₂)v₁₀}/(m₁+m₂)
v₂={ 2m₁v₁₀ + (m₂-m₁)v₂₀}/(m₁+m₂)
Related Questions
Physics - An old Chrysler with mass 1800 kg is moving along a straight stretch ...
physics - A 4.5 kg body moving in the +x direction at 5.5 m/s collides head-on ...
Really Physics - A 2.4 Kg object moving to the right at 0.8c collides with a 3.2...
physics - A 5.0 kg object moving in the +x direction at 5.5 m/s collides head-on...
Physics -Conservation of Momentum- - 1) A 1 kg mass moving at 1 m/s has a ...
Physics -Conservation of Momentum- - 1) A 1 kg mass moving at 1 m/s has a ...
Physics - An old Chrysler with mass 1900 kg is moving along a straight stretch ...
physical science - Which of the following four bodies has the greatest kinetic...
physics - Calculate the magnitude of the linear momentum for the following cases...
physics - Skater 1 has a mass of 45 kg and is at rest. Skater 2 has a mass of 50... | {"url":"http://www.jiskha.com/display.cgi?id=1361833269","timestamp":"2014-04-16T11:43:10Z","content_type":null,"content_length":"10262","record_id":"<urn:uuid:2e14f097-1356-4c96-9cb0-b52a4be39f72>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00618-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating the surface area distribution of two-dimensional projections for a polytope
up vote 1 down vote favorite
My question concerns the existence of a nice (deterministic?) method/algorithm for calculating the distribution of surface areas for two-dimensional projections of an arbitrary polytope (or convex
approximation of a polytope). Less optimistically, a method of finding the minimum, maximum, and perhaps, mean surface area of the polytope's projections.
It is a relatively straightforward procedure to calculate a given two-dimensional surface projection along some orientational vector, and then calculate the approximate surface area of the projection
(or its convex hull). But, beyond statistical sampling or methods related to simulated annealing, I'm having trouble imagining how to go about characterizing the full set of projections along all
arbitrary vectors... and I haven't had any luck with a literature search (so far).
Note - This question is directly related to computations one might like to perform for - Characterizing a tumbling convex polytope from the surface areas of its two-dimensional projections. I hope
this follow-up post is appropriate...
geometry computer-science algorithms
add comment
2 Answers
active oldest votes
If you take the arrangement of planes determined by the faces of your polytope, then the combinatorial structure of the projection is constant throughout all the viewpoints within one
cell of the arrangement. An alternative viewpoint is to partition $S^2$ by these planes moved to the center of that sphere. Within each cell of this arrangement of great circles on $S^2$,
the area of the projection changes in a regular, computable manner (as a function of coordinates on $S^2$). None of this would be easy to implement, but it is computable in roughly $O(n^
up vote 3 2)$ time for a 3-polytope of $n$ vertices. (Note here I am using $n$ for the number of vertices, and assuming you are working in $R^3$, whereas in Robby McKilliam's posting, $n$ is the
down vote dimension).
For this arrangements viewpoint, see the paper by Michael McKenna and Raimund Seidel, "Finding the optimal shadows of a convex polytope," http://portal.acm.org/citation.cfm?id=323237 .
add comment
There is a nice paper on a similar topic by Burger, Gritzmann and Klee "Polytope projection and projection polytopes" . They describe an $O(n^2)$ algorithm to compute the minimum surface
up vote 1 area projection of an n-dimensional simplex. According to the paper it is NP-hard to find the maximum surface area projection of a n-dimensional simplex.
down vote
add comment
Not the answer you're looking for? Browse other questions tagged geometry computer-science algorithms or ask your own question. | {"url":"http://mathoverflow.net/questions/24700/calculating-the-surface-area-distribution-of-two-dimensional-projections-for-a-p?sort=oldest","timestamp":"2014-04-18T10:56:41Z","content_type":null,"content_length":"56406","record_id":"<urn:uuid:bd6abf53-920c-4fb8-8433-725fa48bbe4a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
seems easy but...
November 17th 2013, 10:13 PM #1
Nov 2010
seems easy but...
find all the triplets (a,b,k) of positive integers such that a+b divide a^(2k)+b^(2k).
(a,a,k) is trivial
but there is also
I guess there are some other non trivial solutions, but I cannot figure them out :-(
can anyone give a clue on this? tkx
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/number-theory/224380-seems-easy-but.html","timestamp":"2014-04-19T00:37:30Z","content_type":null,"content_length":"28429","record_id":"<urn:uuid:9328ca9d-4c84-484c-a9bb-a46ee108fe8b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matches for:
Contemporary Mathematics
1989; 350 pp; softcover
Volume: 96
ISBN-10: 0-8218-5102-0
ISBN-13: 978-0-8218-5102-9
List Price: US$60
Member Price: US$48
Order Code: CONM/96
This book will provide readers with an overview of some of the major developments in current research in algebraic topology. Representing some of the leading researchers in the field, the book
contains the proceedings of the International Conference on Algebraic Topology, held at Northwestern University in March, 1988. Several of the lectures at the conference were expository and will
therefore appeal to topologists in a broad range of areas.
The primary emphasis of the book is on homotopy theory and its applications. The topics covered include elliptic cohomology, stable and unstable homotopy theory, classifying spaces, and equivariant
homotopy and cohomology. Geometric topics--such as knot theory, divisors and configurations on surfaces, foliations, and Siegel spaces--are also discussed. Researchers wishing to follow current
trends in algebraic topology will find this book a valuable resource.
• A. Adem, R. L. Cohen, and W. G. Dwyer -- Generalized Tate homology, homotopy fixed points and the transfer
• D. J. Anick -- On the homogeneous invariants of a tensor algebra
• A. Bahri, M. Bendersky, and P. Gilkey -- The relationship between complex bordism and \(K\)-theory for groups with periodic cohomology
• A. Baker -- Elliptic cohomology, \(p\)-adic modular forms and Atkin's operator \(U_p\)
• R. Brown -- Triadic Van Kampen theorems and Hurewicz theorem
• D. Burghelea -- The free-loop space
• D. P. Carlisle and N. J. Kuhn -- Smash products of summands of \(B(\Bbb Z/p)^n+\)
• F. R. Cohen, R. L. Cohen, B. Mann, and R. J. Milgram -- Divisors and configuration on a surface
• M. C. Crabb -- Periodicity in \(\Bbb Z/4\)-equivariant stable homotopy theory
• E. B. Curtis and M. Mahowald -- The unstable Adams spectral sequence for \(S^3\)
• D. H. Gottlieb -- Zeroes of pullback vector fields and fixed point theory for bodies
• B. Gray -- Homotopy commutativity and the EHP sequence
• J. R. Harper -- A proof of Gray's conjecture
• H.-W. Henn, J. Lannes, and L. Schwartz -- Analytic functors, unstable algebras and cohomology of classifying spaces
• L. Kauffman -- Statistical mechanics and the Alexander polynomial
• J. Kulich -- A quotient of the iterated Singer construction
• R. Lee and S. H. Weintraub -- A generalization of a theorem of Hecke to the Siegel space of degree two
• W.-H. Lin -- A Koszul algebra whose cohomology is the \(E_3\)-term of the May spectral sequence for the Steenrod algebra
• M. Mahowald and R. Thompson -- \(K\)-theory and unstable homotopy groups
• H. Miller -- The elliptic character and the Witten genus
• G. Mislin -- On the characteristic ring of flat bundles defined over a number field
• N. Ray -- Loops on the \(3\)-sphere and umbral calculus
• C. B. Thomas -- Characteristic classes and \(2\)-modular representations of some sporadic simple groups
• T. Tsuboi -- On the connectivity of the classifying spaces for foliations
• J. A. Wood -- Maximal abelian subgroups of Spinor groups and error-correcting codes | {"url":"http://ams.org/bookstore?fn=20&arg1=conmseries&ikey=CONM-96","timestamp":"2014-04-19T09:55:36Z","content_type":null,"content_length":"16772","record_id":"<urn:uuid:c93271f3-5f2c-4e26-963e-6c7191876429>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Twenty-Fourth Annual University of Alabama System Applied Mathematics Meeting
Twenty-Fourth Annual University of Alabama System Applied Mathematics Meeting
Saturday, November 5, 2011
University of Alabama at Birmingham
All sessions will be held on the fourth floor of Campbell Hall (CH) on the campus of the University of Alabama at Birmingham.
10:00 Refreshments CH 451
10:15 Welcoming remarks Rudi Weikard (UAB) CH 405
10:25 Traveling Wave Solutions for a Class of Diffusive Predator-Prey Systems Wenzhang Huang (UAH) CH 405
11:05 Most Likely Path to The Shortfall Risk in Long-Term Hedging with Short-Term Futures Contracts Jing Chen (UA) CH 405
11:25 Distribution of spiral points on surfaces of revolution Eric Askelson (UAB) CH 405
11:45 Lunch Area Restaurants
1:00 Random Walks at Random Times: A tool for constructing self-similar processes Paul Jung (UAB) CH 405
1:40 Global dynamics of four-stage-structured mosquito population models Junliang Lu (UAH) CH 405
2:00 Treecode-Accelerated Boundary Integral Poisson-Boltzmann Solver Weihua Geng (UA) CH 405
2:40 Refreshments CH 451
3:00 Faculty discussion CH 405
3:15 Student discussion CH 445
Random Walks at Random Times: A tool for constructing self-similar processes, Paul Jung (UAB)
Random walks in random scenery (RWRS) were first introduced by Kesten and Spitzer (1979). Cohen and Samorodnitsky (2006) studied a certain renormalization of RWRS and proposed self-similar, symmetric
alpha-stable processes, which generalize fractional Brownian motion, as their scaling limits. The limiting processes have self-similarity exponents H>1/α. We consider a modification in which a sign
associated to the scenery alternates upon successive visits. The resulting process is what we call a random walk at random time. We will discuss their scaling limits, and show that the alternating
scenery leads to processes which are stochastic integrals of indicator kernels. Our results complement the above results in that the processes have self-similarity exponents H<1/α.
Distribution of spiral points on surfaces of revolution, Eric Askelson (UAB)
Much research has been completed on the subject of evenly distributing points across the surface of a sphere, a problem which is both interesting from a purely mathematical perspective and important
from a scientific one in the areas of chemistry and physics. One algorithm in particular has previously been shown to provide near-optimal results on the sphere with large N; this algorithm
distributes points in a spiral across the surface of the sphere, forming a characteristic hexagonal lattice which maximizes the distance between neighboring points. In this work, the spiral points
algorithm is extended to generalized surfaces of revolution through modification of the algorithm's iteration method; where the original algorithm iterated over z linearly, the modified algorithm
iterates over arc length with a correction for uniformity. The spiral points algorithm includes a scaling factor, and this modification introduces another scaling factor. As such, numerical
optimization over two parameters is required for the calculation of spiral point sets. This optimization is performed here on the sphere, oblate and prolate ellipsoids, and the torus, and the results
Traveling Wave Solutions for a Class of Diffusive Predator-Prey Systems, Wenzhang Huang (UAH)
A shooting method, with the application of a Liapunov function, has been developed to show the existence of traveling wave fronts for a class of Lotka-Volterra diffusive predator-prey systems. In
addition, the minimum wave speed has been identified.
Global dynamics of four-stage-structured mosquito population models, Junliang Lu (UAH)
Mosquitoes are the main vector for malaria. There are at least 350-500 million cases of malaria annually in the world, which results in about between 1.5 and 2.7 million deaths annually. An effective
way to prevent these diseases is to control mosquitoes. In this paper, we construct and study continuous and discrete mosquito population models. In continuous model, we obtain the inherent net
reproductive number r[0]. If r[0]<1, the continuous model has only one trivial equilibrium point, which is globally asymptotically stable. If r[0]>1, besides a trivial equilibrium point, which is
unstable, the continuous model has a unique positive equilibrium point, which is globally asymptotically stable. Similarly, for the discrete model, we obtain the inherent net reproductive number R
[0]: If R[0]<1; the discrete model has a unique trivial fixed point, which is globally asymptotically stable. If R[0]>1, the discrete model has a trivial fixed point, which is unstable, and a unique
positive fixed point, which is locally asymptotically stable.
Treecode-Accelerated Boundary Integral Poisson-Boltzmann Solver, Weihua Geng (UA)
Implicit solvent models based on the Poisson-Boltzmann (PB) equation greatly reduce the cost of computing electrostatic potentials of solvated biomolecules, in comparison with explicit solvent
models. Even so, PB solvers still encounter numerical difficulties stemming from the discontinuous dielectric constant across the molecular surface, boundary condition at spatial infinity, and charge
singularities representing the biomolecule. To address these issues we present a linear PB solver employing a well-conditioned boundary integral formulation and GMRES iteration accelerated by a
treecode algorithm. The accuracy and efficiency of the method are assessed for the Kirkwood sphere and a solvated protein. We obtain numerical results for both the Poisson-Boltzmann and Poisson
equations. Results are compared with those obtained using the mesh-based APBS method. The present scheme offers the opportunity for relatively simple implementation, efficient memory usage, and
straightforward parallelization.
Most Likely Path to The Shortfall Risk in Long-Term Hedging with Short-Term Futures Contracts, Jing Chen (UA)
In this talk, the most likely paths to the shortfall risk in long-term hedging with short-term futures contracts are discussed. Base on a simple model initially discussed in Culp and Miller, Mello
and Parsons, Glasserman and a simple discussion about comparing risks of a cash shortfall and the most likely path to a shortfall by Glasserman, we calculate the most likely path for four basic
cases: mean reverting or not, hedged or not. These "optimal" paths give information about how risky events occur and not just their probability of occurrence. | {"url":"http://www.uah.edu/science/departments/math/research/applied-mathematics-meetings/290-main/science/science-mathematical-science/3849-twenty-fourth-annual-university-of-alabama-system-applied-mathematics-meeting","timestamp":"2014-04-19T17:38:59Z","content_type":null,"content_length":"54612","record_id":"<urn:uuid:041f9720-03f9-4e9f-93c6-407646090da7>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
Metric Mass (Weight)
Mass: how much matter is in an object.
We measure mass by weighing, but Weight and Mass are not really the same thing.
These are the most common measurements:
Grams are the smallest, Tonnes are the biggest.
Let’s take a few minutes and explore how heavy each of these are.
A paperclip weighs about 1 gram.
Hold one small paperclip in your hand. Does that weigh a lot? No! A gram is very light. That is why you often see things measured in hundreds of grams.
Grams are often written as g (for short), so "300 g" means "300 grams".
A loaf of bread weighs about 700 g (for a nice sized loaf)
Once you have 1,000 grams, you have 1
1 kilogram = 1,000 grams
A dictionary has a mass of about one kilogram.
Kilograms are great for measuring things that can be lifted by people (sometimes very strong people are needed of course!).
Kilograms are often written as kg (that is a "k" for "kilo" and a "g" for "gram), so "10 kg" means "10 kilograms".
When you weigh yourself on a scale, you would use kilograms. An adult weighs about 70 kg. How much do you weigh?
But when it comes to things that are very heavy, we need to use the tonne.
Once you have 1,000 kilograms, you will have 1 tonne.
1 tonne = 1,000 kilograms
Tonnes (also called Metric Tons) are used to measure things that are very heavy.
Things like cars, trucks and large cargo boxes are weighed using the tonne.
This car weighs about 2 tonnes.
Tonnes are often written as t (for short), so "5 t" means "5 tonnes".
Final thoughts about measuring weight:
1 kilogram = 1,000 grams
1 tonne = 1,000 kilograms
Weight or Mass?
We have used the word "weight" only because that is what people commonly say.
But we really should say "Mass". See Weight or Mass to learn more.
More Examples
A gram is about:
• a quarter of a teaspoon of sugar
• a cubic centimeter of water
• a paperclip
• a pen cap
• a thumbtack
• a pinch of salt
• a piece of gum
• the weight of any US bill
• 0.035274 of an ounce to 6 decimal places (you need 28.349523 grams to make an ounce)
• one fifth of a piece of paper (80 gsm A4 paper weighs 4.8 g)
A kilogram is about:
• the mass of a liter bottle of water
• a little more than 2 pounds
• very close to 10% more than 2 pounds
• 2.205 pounds (accurate to 3 decimal places)
• 7 apples
• two loaves of bread
• the weight of 1 liter of water
• about 2 packs of ground beef
A tonne is about:
• the weight of a small car | {"url":"http://www.mathsisfun.com/measure/metric-mass.html","timestamp":"2014-04-18T00:14:59Z","content_type":null,"content_length":"9226","record_id":"<urn:uuid:4460dea0-1678-484a-ba3d-c1deb7a621e8>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Electronic Structure of Multi-Electron Quantum Dots
This article describes the use of the SACI package—a package for calculating the energy levels and wavefunctions of a multi-electron quantum dot modelled as a 2D harmonic well with electrons
interacting through a Coulomb potential and under the influence of a perpendicular magnetic field.
This article has not been updated for Mathematica 8.
Quantum dots are artificially fabricated atoms, in which charge carriers are confined in all three dimensions just like electrons in real atoms. Consequently, they exhibit properties normally
associated with real atoms such as quantised energy levels and shell structures. These properties are described by the electron wavefunctions whose evolution is governed by the Schrödinger equation
and the Pauli exclusion principle.
There are many methods available to solve the Schrödinger equation for multiple electrons. They roughly fall into the categories of the diagonalisation method, mean-field density-functional theory,
and the self-consistent field approach. One of the first theoretical studies of quantum dots was by Pfannkuche et al. [1], who compared the results of Hartree-Fock self-consistent calculations and
exact diagonalisation of the Hamiltonian for two electrons in a circularly symmetric parabolic potential. They found good agreement between the two methods for the triplet state but marked
differences for the singlet state, indicating important spin correlations were not included properly in their Hartree-Fock model. This suggests that the proper treatment of electron spins is crucial
for correctly obtaining the electronic structures in quantum dots.
Examples of self-consistent field approaches in the literature include Yannouleas and Landman [2, 3], who studied circularly symmetric quantum dots using an unrestricted spin-space Hartree-Fock
approach, and McCarthy et al. [4], who developed a Hartree-Fock Mathematica package. Macucci et al. [5] studied quantum dots with up to 24 electrons using a mean-field local-density-functional
approach, in which the spin exchange-correlation potential was approximated by an empirical polynomial expression. Lee et al. [6] also studied an
Diagonalisation approaches in the literature include Ezaki et al. [7, 8], Eto [9, 10], Reimann et al. [11], and Reimann and Manninen [12], who each applied a brute force approach by numerically
diagonalising the 11 employed matrices of dimensions up to 108,375 with 67,521,121 nonzero elements for a six-electron quantum dot. Calculations for any higher number of electrons were not considered
numerically viable using the conventional CI formalism, even with state-of-the-art computing facilities.
We have recently developed a spin-adapted configuration interaction (SACI) method to study the electronic structure of multi-electron quantum dots [13]. This method is based on earlier work by
quantum chemist R. Pauncz [14], which expands the multi-electron wavefunctions as linear combinations of antisymmetrised products of spatial wavefunctions and spin eigenfunctions. The SACI method has
an advantage over using Slater determinants in that a smaller basis is used. This reduces the computational resources required, allowing calculations for dots with more than six electrons on a
desktop computer.
After some theoretical background, we present results from a Mathematica package which employs the SACI method to calculate the energy levels and wavefunctions of multiple electrons confined by a 2D
harmonic well and under the influence of a perpendicular magnetic field. Mathematica enables the exact calculation of interaction integrals which greatly improves the speed and accuracy of
SACI Package
The SACI package encapsulates all the functionality needed to calculate the energy levels and wavefunctions for electrons in a 2D harmonic well potential with/without the influence of a perpendicular
magnetic field.
Ensure that SACI.m is on your $Path and load the package.
You can also choose Input to locate SACI.m on your computer. This package will be used in the Theory section to demonstrate results and produce examples, as well as in the Calculations section to
calculate energy levels and wavefunctions.
Spin Eigenfunctions
When measuring the component of an electron’s spin angular momentum along an axis, the result is either
The square of the total spin is also quantised:
One way to calculate the spin eigenfunction is to use the Dirac identity to write the
In the SACI package, we define ElectronList, in which 1 represents a spin up electron and 0 represents a spin down electron.
We can see that we have a 2D spin eigenspace as there are two basis vectors produced. Calculating the spin eigenfunctions is a prerequisite to the rest of the calculations. Henceforth, we label spin
Fock-Darwin Solutions
The solution for a single electron moving in a 2D harmonic well under the influence of a perpendicular magnetic field was solved independently by Fock [15] and Darwin [16]. With the Hamiltonian
we get the energy eigenvalues
and the wavefunctions
To coincide with experiments done on a circularly symmetric quantum, we will choose solid-state parameters appropriate for GaAs (i.e.,
When we are working in effective atomic units, the unit of length is the effective Bohr radius. This is given by the function lengthscale with the result in nanometers.
Here is a plot of the probability density for
The base scale is in nanometers and indicates the scale of the quantum dot.
Multiplying one-electron functions together (e.g.,
The Antisymmetry and Representation Matrices
The Pauli exclusion principle states that because electrons are indistinguishable fermions (half-integer spin particles), any wavefunction that describes the motion of electrons must change sign if
both the spin and spatial coordinates of any pair of electrons are interchanged. Consequently in the
Since the permutation and spin operators commute, if we apply a permutation to a spin eigenfunction, the result is another spin eigenfunction. This new spin eigenfunction is not, in general, one of
the functions already calculated but a linear combination of them. In fact, the coefficients of the linear combination form matrices which together form a representation of the symmetric group.
where 9) satisfies the condition for a representation of the symmetric group. These representation matrices and their properties are useful in simplifying the calculation of the Hamiltonian matrix.
We can see that our spin eigenfunctions are eigenfunctions of
This shows spin eigenfunction two is unchanged by the permutation and spin eigenfunction one changes sign under the permutation. In general the action of a permutation can lead to a linear
combination of the spin eigenfunctions.
This shows that under the permutation 9).
Spin-Adapted Basis
To get an 5), combine them with a spin eigenfunction of 14].
We use a five-electron quantum dot as an example, where the system wavefunction
The CalculateSpinBasis function applies a procedure to ensure the spin basis functions are also eigenfunctions of the permutations
The multi-electron Hamiltonian is given as the following:
This is simply the sum of the single-electron energies defined by equation (3) plus the pairwise Coulomb interactions. The fact that we are working in a solid state media is modelled by using the
relative effective mass
We wish to solve the following equation:
for the energy levels
Basis Choice
The SACI method uses normalised, antisymmetrised products of spin eigenfunctions and spatial wavefunctions given by products of Fock-Darwin solutions as basis elements. The elements of our
Hamiltonian matrix are then
The SACI method reduces the number of basis elements required by ensuring the wavefunction satisfies the Pauli exclusion principle from the outset and restricting the calculation to specific spin
quantum numbers
Although each of the basis elements is a sum of
We need to be careful in the selection of basis elements, however, as we need an orthonormal basis for the diagonalisation expansion to be valid. Remember, some spin and spatial combinations vanish
and thus cannot be included in the basis. We must also make sure the set of basis elements is mutually orthogonal and normalised (i.e.,
The SACI package automates the process of the orthonormal basis generation. As we can select only a finite number of basis elements out of an infinite set, we need to have some method of selecting
appropriate basis elements. As we are usually interested in the ground and first few excited states of a quantum dot, we have chosen to use basis elements in increasing order of the sum of their
component one-electron energies.
Elaborating, with no magnetic field the energy levels of the Fock-Darwin wavefunctions are given by
The variable cutoffenergy is set to a cutoff energy of
BasisCreate produces a list of basis elements with noninteracting energy less the specified cutoff. The spatial component of each basis element is given by a list of pairs of integers denoting the
quantum numbers ShowSpinStatus runs) with which it is combined.
We notice that spin eigenfunction “1″ is antisymmetric with respect to the permutation
Matrix Elements Rules
As mentioned previously, there are general rules for calculating Hamiltonian matrix elements. These rules are coded in the function InteractionHamiltonianMatrixElement, and their derivation can be
found in [13]. In brief, when the spatial wavefunctions of the two basis elements differ by more than two orbitals (i.e., after being sorted so the orbitals are in maximum correspondence), then the
matrix element is zero; when the spatial orbitals differ by two or less orbitals, a different rule applies for each of the three cases (i.e., zero, one, and two orbitals differ). These formulas are
quite general and can be used for any interaction Hamiltonian that acts pairwise.
In the Hamiltonian element formulas we must calculate the two-electron interaction integral. For the case of electrons in a 2D harmonic well interaction through a Coulomb potential under the
influence of a perpendicular magnetic field, the integral is in the following form:
We need to be able to do this integral for many combinations of the quantum numbers
However by using Mathematica and a little ingenuity, we can do this integral exactly in all cases needed in computation. The detailed derivation (breaking down and transforming the integral into a
sum of components) can be found in [13]. For example, we set the quantum numbers
We also note that if
These integrals are done with the effective harmonic well constant
Here are some examples of interaction Hamiltonian matrix elements indicating that they can be done exactly.
It can also be proved that the interaction element is zero if the sum of the orbital angular momentum quantum numbers are not the same, hence, the last calculation leads to zero.
The preceding calculations show that the spin eigenfunction with which a spatial wavefunction is combined makes a nontrivial contribution to the Hamiltonian matrix element calculation.
In this section we will demonstrate how the package can be used to do various calculations such as energy levels, wavefunctions, convergence plots, and energy level evolutions with magnetic fields.
Energy Levels
To calculate energy levels, we need to choose a basis. For larger basis sizes the calculation will take longer and require more memory (the size of the matrix is the size of the basis squared) but
the convergence will be better.
We choose a basis with a single total orbital angular momentum quantum number InteractionHamiltonian.
The calculation of this matrix can be time-consuming, so it is worthwhile to save the result, as the interaction Hamiltonian calculated for a particular number of electrons and spin quantum numbers
LoadIntegralandHamiltonianData to speed up our calculation and saved the data afterward using SaveIntegralandHamiltonianData to speed up future calculations run in other sessions.
Next we set the magnetic field strength to 0 Tesla remembering that firstenergygap was set to 10 meV previously and relativeeffectivemass and relativepermittivity are set to the parameters for GaAs.
We then can calculate the Hamiltonian for GaAs with these particular values of the harmonic well constant and magnetic field and plot the energy levels.
The plot shows the first 10 energy levels for a GaAs quantum dot system of three electrons with spin quantum numbers
Convergence Plots
As mentioned, the accuracy of our result as well as the computational time required increases with the size of the basis. We can test the convergence with the following series of functions.
As the interaction Hamiltonian for lower cutoff energies is a submatrix of interaction Hamiltonians with higher cutoff energies, we can do convergence tests using a single interaction Hamiltonian. We
will use the interaction Hamiltonian from the previous section.
The Convergence function takes submatrices of the Hamiltonian using different basis sizes as specified by the input sizelist and calculates the energy of the levels specified by wantedlevels. The
ground state is specified by 1, the first excited state by 2, and so on. This allows the convergence properties of the solution to be ascertained.
The list basis size can be specified manually, but a convenient method is to choose all the basis sizes that correspond to the different cutoff energies.
In the preceding code, we have specified wantedlevels to examine the ground state and first excited state. We set cutoffmin to be the first cutoff energy that will produce the highest energy level
requested, as we can only approximate up to
As there may be large gaps in the basis sizes, we can also add in points at intermediate intervals. Here, we add a basis size of 85.
We can then calculate the energies of the ground state and first excited state as we increase the basis size. The energies are in meV.
The following plot shows convergence of energies of the two lowest states.
We can see that the convergence is quite good for relatively small basis sizes in this case.
Electron Density Plots
We can extract electron density plots for various energy levels by calculating the normalised energy eigenvectors of the full Hamiltonian and using the function ElectronDensity. We can then plot the
electron density and use lengthscale to give the length dimensions in nanometers.
First we calculate the energy eigenvectors for the first six energy levels.
Next we use ElectronDensityFunction to convert the normalised energy eigenvector into a probability density function by combining it with the basis basisLzero. This is then simplified to cancel out
the complex components. As the density function is a measurable quantity, we are guaranteed that the complex components will cancel. Chop is used to remove any complex components that remain due to
We can see that we end up with a numerical approximation to the wavefunction. Here is an example.
The following plot shows the electron probability density functions for the first six energy levels with
Energy Levels versus Magnetic Fields
The SACI package also allows the calculation of the evolution of the ground state with a magnetic field using a single interaction Hamiltonian.
We can use LevelVsField to calculate a list of energy levels for a range of magnetic fields defined by magmin, magmax, and step given in Tesla, and for the levels defined by wantedlevels. Here we
choose the first six energy levels and plot for the magnetic field range of 0 to 4 Tesla at intervals of every 0.5 Tesla.
Calculating the energy levels for one
Note that although we are plotting six levels, two pairs of these remain degenerate as the magnetic field changes, so we only see four lines in the plot. Running the program several times, it is
possible to plot curves with different spin quantum numbers and total orbital angular momentum quantum numbers on the same graph. From this, it is possible to see the graphs for different quantum
numbers intersecting as the magnetic field increases. This corresponds to magnetic transitions in the energy levels. Magnetic transitions in the ground state of quantum dots have been measured
This article presents a Mathematica implementation of the SACI method for calculating the energies and wavefunctions of multi-electron quantum dot systems. The results obtained from this package are
highly accurate with established confidence for up to eight electrons when using a PC with a Pentium IV 2.4GHz processor, which can be readily extended by using more powerful computers. Such
numerically exact calculations provide important benchmarks against which one can test other approximate schemes developed to study more complex systems.
[1] D. Pfannkuche, V. V. Gudmundsson, and P. A. Maksym, “Comparison of a Hartree, a Hartree-Fock, and an Exact Treatment of Quantum-Dot Helium,” Physical Review B (Condensed Matter), 47(4), 1993 pp.
[2] C. Yannouleas and U. Landman, “Spontaneous Symmetry Breaking in Single and Molecular Quantum Dots,” Physical Review Letters, 82(26), 1999 pp. 5325-5328.
[3] C. Yannouleas and U. Landman, “Collective and Independent-Particle Motion in Two-Electron Artificial Atoms,” Physical Review Letters, 85(8), 2000 pp. 1726-1729.
[4] S. A. McCarthy, J. B. Wang, and P. C. Abbott, “Electronic Structure of an Computer Physics Communications, 141, 2001 pp. 175-204.
[5] M. Macucci, K. Hess, and G. J. Iafrate, “Simulation of Electronic Properties and Capacitance of Quantum Dots,” Journal of Applied Physics, 77(7), 1995 pp. 3267-3276.
[6] I.-H. Lee, V. Rao, R. M. Martin, and J.-P, Leburton, “Shell Filling of Artificial Atoms within Density-functional Theory,” Physical Review B (Condensed Matter), 57(15), 1998 pp. 9035-9041.
[7] T. Ezaki, N. Mori, and C. Hamaguchi, “Electronic Structures in Circular, Elliptic, and Triangular Quantum Dots,” Physical Review B (Condensed Matter), 56(11), 1997 pp. 6428-6431.
[8] T. Ezaki, Y. Sugimoto, N. Mori, and C. Hamaguchi, “Electronic Properties in Quantum Dots with Asymmetric Confining Potential,” Semiconductor Science and Technology, 13(8a), 1998 pp. A1-A3.
[9] M. Eto, “Electronic Structures of Few Electrons in a Quantum Dot under Magnetic Fields,” Japanese Journal of Applied Physics, 36, 1997 pp. 3924-3927.
[10] M. Eto, “Many-Body States in a Quantum Dot under High Magnetic Fields,” Japanese Journal of Applied Physics, 38, 1999 pp. 376-379.
[11] S. M. Reimann, M. Koskinen, and M. Manninen, “Formation of Wigner Molecules in Small Quantum Dots,” Physical Review B (Condensed Matter), 62(12), 2000 pp. 8108-8113.
[12] S. M. Reimann and M. Manninen, “Electronic Structure of Quantum Dots,” Reviews of Modern Physics, 74(4), 2002 pp. 1283-1343.
[13] J. B. Wang, C. Hines, and R. D. Muhandiramge, “Electronic Structure of Quantum Dots,” The Handbook of Theoretical and Computational Nanoscience (M. Rieth and W. Schommers, eds.), Stevenson
Ranch, CA: American Scientific Publishers, 2006.
[14] R. Pauncz, The Construction of Spin Eigenfunctions: An Exercise Book, New York: Kluwer Academic/Plenum Publishers, 2000.
[15] V. A. Fock, “Bemerkung zur Quantelung des Harmonischen Oszillators im Magnetfeld,” Zeitschrift fur Physics, 47(5-8), 1928 pp. 446-448.
[16] C. G. Darwin, “The Diamagnetism of the Free Electron,” Mathematical Proceedings of the Cambridge Philosophical Society, 27, 1930 pp. 86-90.
R. D Muhandiramge, and J. Wang, “Electronic Structure of Multi-Electron Quantum Dots,” The Mathematica Journal, 2012. dx.doi.org/10.3888/tmj.10.2-7.
Additional Material
Available at www.mathematica-journal.com/data/uploads/2012/05/SACI.m.
About the Authors
Ranga D. Muhandiramge is a mathematics Ph.D. student at the University of Western Australia. In 2003, Muhandiramge did his honours project jointly in mathematics and physics modelling quantum dots,
for which he received first-class honours and a prize for the most outstanding honours graduand. He currently works in the area of operations research on mine field path planning with the assistance
of scholarships from the Hackett Foundation and the Defense Science and Technology Organisation.
Jingbo Wang is an associate professor of physics at The University of Western Australia. Her research areas and interests range widely from atomic physics, molecular and chemical physics,
spectroscopy, acoustics, chaos, nanostructured electronic devices, and mesoscopic physics to quantum information and computation. Wang has published over 100 research papers in peer-reviewed journals
and international conference proceedings. Wang uses Mathematica extensively in both her teaching and research, and she is also an editorial board member of the Journal of Computational and
Theoretical Nanoscience.
Ranga D. Muhandiramge
School of Mathematics and Statistics, M019
University of Western Australia
35 Stirling Highway
Crawley, WA 6009, Australia
Jingbo Wang
School of Physics
University of Western Australia
35 Stirling Highway
Crawley, WA 6009, Australia | {"url":"http://www.mathematica-journal.com/2006/09/electronic-structure-of-multi-electron-quantum-dots/","timestamp":"2014-04-17T05:03:24Z","content_type":null,"content_length":"85261","record_id":"<urn:uuid:ebe15d89-e41b-4c3c-82aa-191ac849a423>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: August 2007 [00036]
[Date Index] [Thread Index] [Author Index]
Multi-variable Integration
• To: mathgroup at smc.vnet.net
• Subject: [mg79666] Multi-variable Integration
• From: gravmath at yahoo.com
• Date: Wed, 1 Aug 2007 05:07:34 -0400 (EDT)
Suppose one defines two expressions in Mathematica:
Q = f[x,y]
P = f[x,y] + g[y]
and then differentiates them wrt the variable 'x' as follows:
dQ = D[Q,x]
dP = D[P,x]
Subsequent use of the Integrate command wrt the variable
'x' (Integrate[dQ,x] and Integrate[dP,x]) yields, in both cases,
I find this behavior understandable from a systems point of view but
mathematically in both cases the answer should be f[x,y] +
arbitraryfunc[y], where obviously further input (as in the original
definitions of P & Q) is needed to determine arbitraryfunc[y]. Is
there a way to get Mathematica to recognize that there are two
variables in the problem and to produce the arbitrary function of the
variable 'y'?
I'm guessing that my specification of f[x,y] is not quite sufficient
to do this, even though it is sufficient when differentiating. That
is to say that dQ and dP are rendered in Mathematica as f^(1,0)[x,y],
which clearly indicates that Mathematica understands that there are
two independent variables in the expression.
Any help would be appreciated, even if it to point me to previous
posts (I found no germane ones myself).
Thanks in advance,
• Follow-Ups: | {"url":"http://forums.wolfram.com/mathgroup/archive/2007/Aug/msg00036.html","timestamp":"2014-04-16T16:25:59Z","content_type":null,"content_length":"34913","record_id":"<urn:uuid:51df7956-e373-4d96-80a8-89243fd326be>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trigonometric function with half-square graph?
If you don't restrict t, it makes sawtooth waves.
Wow, different sizes!! Nice discovery!
'Course I only did the y=stuff equations.
I don't understand combining both together yet.
Oh I think I'm getting the idea of the parametric stuff.
Neat concept. I never had heard of it before!
So for like normal functions,
x = t and y = f(x) equation.
But now it's all in terms of t, wow, really flexible.
Never would have thought of that idea.
Last edited by John E. Franklin (2005-12-24 16:40:19) | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=21429","timestamp":"2014-04-16T21:54:30Z","content_type":null,"content_length":"14013","record_id":"<urn:uuid:772aa6e8-1a22-4905-8314-e3d763e875c8>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00319-ip-10-147-4-33.ec2.internal.warc.gz"} |
Textbooks on Error Analysis
This is what was said to me:
"As far as data analysis goes, beyond the basics (mean, sigma, chi-squared fits) it would be good for you to understand maximum likelihood fits (particularly multi-dimensional ML fits). Also, we
probably will want to explore using multi-variate discriminators."
Since I took AP Stat back in high school, I can't really say I've had Stats :P because that class was really poorly put together, so I'm definitely going to recover the basics.
Now, for Bevington [3rd edition] I'm looking at covering:
Chapter 1 - Uncertainties in Measurements
Chapter 2 - Probability Distributions
Chapter 3 - Error Analysis
Chapter 4 - Estimates of Means and Errors
Should I cover Chapter 5 - Chapter 9?
I'm not sure what I will need as prereq to beginning Chapter 10 on Maximum Likelihood...and I'm guessing I'll need Chapter 11 - Testing the Fit
For Taylor [2nd Edition] I'm looking at the following:
(from Part I)
Chapter 1. Prelinary Description of Error Analysis
Chapter 2. How to Report and Use Uncertainties
Chapter 3. Propagation of Uncertainties
Chapter 4. Statistical Analysis of Random Uncertainties
Chapter 5. The Normal Distribution
(from Part II)
Chapter 6. Rejection of Data
Chapter 7. Weighted Averages
Chapter 12. The Chi-Squared Test for a Distribution
I'm not sure what parts of Chapters 8 - Chapter 11 I will need.
The work I'll be doing [though I don't know exactly what yet] will be dealing with particle data, if this helps in determining what I need to know. I'm learning to work with a various packages in
ROOT like RooFit and TVMA, but again I'm not sure if that helps for context much since I'm not very far into this yet!
I am not entirely sure that Bevington (I doubt Taylor has the material) has the last two things he mentioned: Multidimensional MLs and Multivariate Discriminators, so it would be great to know if
they do or where I can learn this material. To be honest, I don't even know what is meant by the last term and Google did not give me an immediate answer.
Any recommendations on what to cover from where? :P I've got three whole months to work on this stuff, so I should be okay double covering a bit, right? | {"url":"http://www.physicsforums.com/showthread.php?s=59d44b5f1e280168a809032a71406ef5&p=3817929","timestamp":"2014-04-24T17:17:11Z","content_type":null,"content_length":"45943","record_id":"<urn:uuid:45c93db4-89b9-4ff3-b6f6-932790f99035>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What is the x intercept of the graph of the equation....... 5x+4y=20
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4fc818c4e4b022f1e12f9735","timestamp":"2014-04-21T04:47:18Z","content_type":null,"content_length":"39477","record_id":"<urn:uuid:37596fc7-5019-4312-a859-13c4dba1d6e4>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
relative motion of two moving objects
April 1st 2010, 10:23 PM
relative motion of two moving objects
I need the ans with relevant figure :
Problem ::
Two particles P and Q, are 30m apart with Q due north of P. Particle Q is moving at 5m/s in a direction 90 deg and P is moving at 7m/s in a direction 30 deg. Find
(a) the magnitude and direction of the velocity Of Q relative to P,
(b) the time taken for Q to be due east of P,to the nearest second.
April 2nd 2010, 12:59 AM
Hello manik
I need the ans with relevant figure :
Problem ::
Two particles P and Q, are 30m apart with Q due north of P. Particle Q is moving at 5m/s in a direction 90 deg and P is moving at 7m/s in a direction 30 deg. Find
(a) the magnitude and direction of the velocity Of Q relative to P,
(b) the time taken for Q to be due east of P,to the nearest second.
Here's the diagram showing the velocities.
Use the Cosine Rule to find the magnitude of the velocity of Q relative to P.
Then find the component of this velocity in a North-South direction (indicated by the dotted line on the diagram).
From this you can work out the direction of this velocity, and how long it takes before Q is East of P.
Can you complete it now? | {"url":"http://mathhelpforum.com/math-topics/136950-relative-motion-two-moving-objects-print.html","timestamp":"2014-04-19T22:39:56Z","content_type":null,"content_length":"5020","record_id":"<urn:uuid:5a639d45-57c5-4d96-b163-8445d2a53649>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fibonacci Formula Inductive Proof
Date: 11/05/97 at 07:26:35
From: Alexander Joly
Subject: Fibonacci formula inductive proof
I am stuck on a problem about the nth number of the Fibonacci
sequence. I must prove by induction that
F(n) = (PHI^n - (1 - PHI)^n) / sqrt5
Here's what we usually do to prove something by induction:
1) Show that the formula works with n = 1.
2) Show that if it works for (n), then it will work for (n+1).
But the problem is:
We suppose F(n) is true, so F(n+1) is true.
But when I must prove it, I have to add another thing: F(n-1),
because F(n-1) + F(n) = F(n+1).
So here are the questions:
Can I suppose TWO things are true, F(n) and F(n-1), to prove that
F(n+1) is true?
How can I prove it is true for n = 1, since F(1) is DEFINED as equal
to 1?
Here's what I've already done: I've successfully converted
(PHI^(n-1) - (1 - PHI)^(n-1)) / sqrt5 + (PHI^n - (1 - PHI)^n) / sqrt5
into (PHI^(n+1) - (1 - PHI)^(n+1)) / sqrt5
Thank you very much for your much-appreciated help!
An example of the Fibonacci formula inductive proof
would be very kind.
Date: 11/05/97 at 13:25:14
From: Doctor Rob
Subject: Re: Fibonacci formula inductive proof
You have already done the hard part, which is in your next-to-last
Answer 1. Yes, you can assume two things are true, but then you have
to show that two starting values work. (Think about it.) An equivalent
formulation of the Principle of Mathematical Induction is where you
assume it is true for *all* values k with 1 <= k <= n, and use that to
show it for n+1.
Answer 2. To prove it for n = 1, you have to show that
1 = F(1) = (PHI - 1 + PHI)/sqrt(5).
You do this by using the fact that PHI = (1 + sqrt(5))/2. Substitute
it in and check that it works. To prove it for n = 2, you have to
show that
1 = F(2) = (PHI^2 - [1 - PHI]^2)/sqrt(5).
You do this the same way. By the way, this is also true for n = 0,
0 = F(0) = (PHI^0 - [1 - PHI]^0)/sqrt(5),
for which you don't even need the formula for PHI.
-Doctor Rob, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 11/05/97 at 15:31:47
From: Doctor Anthony
Subject: Re: Fibonacci formula inductive proof
Taking PHI to be -----------
we first check that the formula is true for n = 1 and n = 2
F(1) = (1/sqrt(5))[(1+sqrt(5))/2 - 1 -(1+sqrt(5))/2]
= (1/sqrt(5))[1/2 + sqrt(5)/2 - 1/2 + sqrt(5)/2]
= 1/sqrt(5)[sqrt(5)]
= 1
F(2) = (1/sqrt(5))[PHI^2 - (1-PHI)^2]
= (1/sqrt(5)[PHI^2 - 1 + 2 PHI - PHI^2]
= (1/sqrt(5))[2 PHI - 1]
= (1/sqrt(5)[1+sqrt(5) - 1]
= 1
So the formula is correct for n=1 and n=2. Now assume it is true for
all terms UP TO some other value n.
If we take
1 sqrt(5)+1 1 - sqrt(5)
F(n+1) = -------[ (--------- )^(n+1) - ( ----------- )^(n+1)]
sqrt(5) 2 2
By taking out a factor (sqrt(5)+1)^2
--------- from the first term and
from the second term and then squaring out these brackets it is easy
to show that F(n+1) = F(n) + F(n-1).
So if the expression is true for n and n-1, it is also true for n+1.
But it is true for n=1 and n=2, hence it must be true for n=3. So true
for n=2 and n=3 it is true for n=4. Thence to n=5, n=6, n=7 and so on
to all positive integer values of n.
-Doctor Anthony, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 11/10/97 at 11:55:23
From: Joly
Subject: Re: Fibonacci formula inductive proof
Dear Dr. Rob,
I give up, Doctor Rob. I must use three values (F(n), F(n-1) and
F(n+1)), but I can't find the way to justify this use of the
induction. I know I have to show that two starting values work, but
this is where I am stuck. I don't know how to show it.
Thank you very much for your time!
Date: 11/10/97 at 12:44:31
From: Doctor Rob
Subject: Re: Fibonacci formula inductive proof
I'm sorry that my previous answer was not sufficient to solve your
problem. As I understand the current state of the problem, you need
the following theorem to finish it:
Theorem: If P(n) is a statement involving the variable n, and if P(1)
is true, P(2) is true, and the implication P(k) & P(k+1) ==> P(k+2) is
true, then P(n) is true for all n >= 1.
Proof: We will prove this using the Principle of Mathematical
Let Q(n) be the statement "P(n) is true and P(n+1) is true." Then Q(1)
holds by hypothesis. Given Q(k) is true, we know that P(k) and P(k+1)
are true. Using the implication from the hypotheses of the theorem,
P(k+2) is also true, so P(k+1) and P(k+2) are both true. Thus Q(k+1)
is true. This means that Q(k) ==> Q(k+1). Thus by the Principle of
Mathematical Induction, Q(n) is true for all n >= 1. Q(n) ==> P(n),
so P(n) is true for all n >= 1. Q.E.D.
This is the justification you need to use your "double-step"
induction. See how the need for Q(1) to be true forces us to make sure
that both P(1) and P(2) are true, so we have a two-part start for the
-Doctor Rob, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 11/10/97 at 14:21:46
From: DJoly
Subject: RE: Thanks, Doctor Rob!
Thank you very much! Your answers were very complete. This is all
people could expect from Dr. Math's service! | {"url":"http://mathforum.org/library/drmath/view/51536.html","timestamp":"2014-04-17T01:45:19Z","content_type":null,"content_length":"10743","record_id":"<urn:uuid:1ccbbf1d-f083-4e18-b04a-f8009bb9bdac>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] A new matrix class
Keith Goodman kwgoodman@gmail....
Sun May 11 14:01:08 CDT 2008
The most basic, and the most contentious, design decision of a new
matrix class is matrix indexing. There seems to be two camps:
1. The matrix class should be more like the array class. In particular
x[0,:] should return a 1d array or a 1d array like object that
contains the orientation (row or column) as an attribute and x[0]
should return a 1d array. (Is x.sum(1) also a 1d array like object?)
2. A matrix is a matrix: all operations on a matrix, including
indexing, should return a matrix or a scalar.
Does that describe the two approaches to matrix indexing? Are there
other approaches?
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-May/033748.html","timestamp":"2014-04-18T05:48:35Z","content_type":null,"content_length":"3023","record_id":"<urn:uuid:ba7ddafc-a6b3-45d8-9eaf-d3eb29dd42aa>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why W?
Why W?
No, not that W. I won't be drawn into presidential politics here. The W I want to discuss is something else entirely: the Lambert W function, a mathematical contrivance that has been getting a fair
amount of attention lately. The buzz began in the world of computer-algebra systems such as Macsyma, Maple and Mathematica, but word of W has also been spreading through journal articles, preprints,
conference presentations and Internet news groups. The W function even has its own poster (see http://www.orcca.on.ca/LambertW).
The concept at the root of W can be traced back through more than two centuries of the mathematical literature, but the function itself has had a name only for the past 10 years or so. (A few years
longer if you count a name used within the Maple software but otherwise unpublished.) When it comes to mathematical objects, it turns out that names are more important than you might guess.
Without further ado, here is the definition of Lambert W: It is the inverse function associated with the equation:
We^W = x.
What does that mean? Some readers of this column will grasp it instantly, but I am not going to pretend that I am one of them. It took me a while to figure out how W works, and even longer to see why
the concept might be considered interesting or important. At the risk of inflicting severe tedium on those who are more adept at algebra and analysis, I want to retrace my own path toward
understanding what W is all about. It's a fairly long and wiggly path.
Single U and W
In trying to make sense of the expression We^W , the first question is not "why W?" but "why e?" The e in the formula is Euler's number, the second-most-famous constant in all of mathematics, often
introduced as "the base of the natural logarithms"; but then the natural logarithms are usually defined as "logarithms taken to the base e," which is not much help. Another way of defining e,
originally derived from the study of compound interest, avoids this circularity: e is the limiting value of the expression (1+1/n) ^n as n tends to infinity. Thus we can approximate the value of e by
setting n equal to some arbitrary, large number. With n=1,000,000, for example, we get six correct digits of e: 2.71828.
Now, with e in hand, consider an equation somewhat simpler than the one for W; we might call it single-U:
e^U = x.
This equation defines the exponential function, also written exp(U). The function maps each given value of U to a corresponding value of x, namely e raised to the power U. If U is a positive
integer, we can calculate the function's value by simple arithmetic: Just multiply e by itself U times. For nonintegral U, the procedure is not quite so obvious but is still well-defined; the left
part of Figure 1 shows the elegant curve generated.
The equation e^U = x also defines an inverse function; we just need to read the equation backward. Whereas the forward function maps a value of U to a value of x, the inverse function takes a value
of x as input and returns the corresponding value of U. In other words, the inverse function finds the power to which e must be raised to yield a given value of x. This is another well-known,
textbook function: the natural logarithm, written log(x) or ln(x). The log function has the same graph as the exponential function but reflected across the diagonal, as shown in the right part of
Figure 1.
Sometimes it's helpful to think of functions like these as if they were machines. The exp function works like a meat grinder: Dump a U into the input hopper, turn the crank, and out comes an x equal
to e^U . The same machine can calculate logarithms if we run numbers through it backwards and turn the crank the other way, but there is an important caveat: When the machine runs in reverse, some
inputs can jam the gears. For example, what output should the machine produce if you ask it for the logarithm of 0 or -1? (If you have a scientific calculator handy, see how a real machine answers
these questions.)
The problem is that the logarithm function works only over a limited domain. Exponentiation is defined over the entire real number line; any real value of U, whether positive or negative, produces a
value of exp(U), and that value is always a positive real number. The inverse function is not so well-mannered: As the right side of Figure 1 suggests, log(x) is defined only if x is positive. This
limitation can be sidestepped by venturing off the real number line into the wilds of the complex plane. If the value of log(x) is allowed to be a complex number, with both a real and an imaginary
part, then log(-1) has a definite value: According to a famous formula of Euler, it is equal to π i, where i is the imaginary unit, the square root of -1. The W function is also defined throughout
the complex plane, but in this article I shall confine myself to the straight and narrow path of real numbers. However, see Figure 4.
One more simple fact about logarithms will be needed below. The logarithm of the product of two numbers is equal to the sum of the logarithms of the factors: log(xy)=log(x)+log(y). Likewise for
quotients: log(x/y)=log(x)-log(y). These relations were the main reason for inventing logarithms in the first place: They convert multiplication and division into the easier tasks of addition and
W Coming and Going
There is an obvious family resemblance between single-U and W, between the equations e^U = x and We^W = x. In the case of the forward W function, if we know how to calculate e^W , then it's a trivial
matter to calculate We^W : just multiply by W. The resulting curve is shown in the left part of Figure 2. In overall shape it looks much like the exponential curve, although for large W it rises more
steeply. Where e^W and We^W really part company is to the left of W=0. Whereas e^W is always positive, We^W dips into negative territory, reaching a minimum at the point W=-1, x=-1/e. As W tends
toward negative infinity, both e^W and We^W approach 0, but one from above and the other from below.
Taking the inverse of this function—solving We^W = x for W instead of for x—finally brings us to the Lambert W function. Just how to solve for W is a matter I'll return to below, but for now it's
enough to flip the graph of the function about its diagonal, as in the right side of Figure 2; the inverse graph is drawn in more detail in Figure 3. Just as the forward function resembles the
exponential curve, the inverse function appears similar to the logarithm. The curves for log(x) and W(x) cross at x=e, where both are equal to 1. Where things get most interesting, again, is to the
left of x=0. Whereas log(x) is undefined for any x ≤ 0, W(x) continues to have a value down to x=-1/e, or about -0.37. Indeed, when x lies in the range between -1/e and 0, W(x) has not just a value
but two values. For example, W(-0.2) could be equal to either -0.26 or -2.54. Plugging either of these W values into the formula We^W yields the x value -0.2.
For a mathematical function, multiple values are an embarrassment of riches; a well-bred function is supposed to map each value in its domain to a single value in its range. But in practice multiple
values are not uncommon, particularly with inverse functions. The square root is a familiar example: Whereas squaring 2 yields the unique result 4, the square root of 4 could be either +2 or -2. Some
of the trigonometric functions are even worse. Every angle has just one sine, but the inverse function, the arc sine, wraps around to produce infinitely many values.
The problem with multivalued functions is knowing which value, or branch, to choose. Most calculators and programming languages give precedence to positive roots and to arc sine values between -90
and +90 degrees, but there is no fundamental justification for these choices. In the case of Lambert W, the part of the curve with W>-1 has been labeled the "principal branch," but again this is
mainly a matter of convention. (In the complex plane, W has infinitely many branches.)
W—What Is It Good For?
The Lambert W function may make a pretty curve, but what's it good for? Why should anyone care? By mixing up a few symbols we could generate an endless variety of function definitions. What makes
this one stand out from all the rest?
If you ask the same question of more familiar functions such as exp and log and square root, the answer is that those functions are tools useful in solving broad classes of mathematical problems.
With just the four basic operations of arithmetic, you can represent the solution of any linear equation. Adding square roots to the toolbox allows you to solve quadratic equations as well. Expanding
the kit to include the trigonometric, exponential and logarithmic functions brings still more problems within reach. All of these well-known functions, and perhaps a few more, are classified as
"elementary." The exact membership of this category is not written in stone, but it excludes more specialized tools such as Bessel functions.
A few years ago, a brief, unsigned editorial in Focus, the newsletter of the Mathematical Association of America, asked: "Time for a new elementary function?" The function proposed for promotion to
the core set was Lambert W. Whether W ultimately attains such canonical status will depend on whether the mathematical community at large finds it sufficiently useful, which won't be clear for some
years. In the meantime, I can list a few applications of W discovered so far.
One place where W turns up in pure mathematics is the "power tower," the infinitely iterated exponential
For large x, this expression soars off to infinity faster than we can follow it, but Euler showed that the tower converges to a finite value in the domain between x = e^-e (about 0.07) and x = e^1/e
(about 1.44). Within this realm, the value to which the infinite tower converges is W(-log(x))/-log(x).
W has another cameo role in the "omega constant," which is a distant of cousin of the golden ratio. The latter constant, with a value of about 1.618, is a solution of the quadratic equation 1/x= x-1.
The omega constant is the solution of an exponential variant of this equation, to wit: 1/e^x = x. And what is the value of that solution? It is W(1), equal to about 0.567143.
Of more practical import, W also appears in solutions to a large family of equations known as delay differential equations, which describe situations where the present rate of change in some quantity
depends on the value of the quantity at an earlier moment. Behavior of this kind can be found in population dynamics, in economics, in control theory and even in the bathroom shower, where the
temperature of the water now depends on the setting of the mixing valve a few moments ago. Many delay differential equations can be solved in terms of W; in some cases the two branches of the W
function correspond to distinct physical solutions.
A recent article by Edward W. Packel and David S. Yuen of Lake Forest College applies the W function to the classical problem of describing the motion of a ballistic projectile in the presence of air
resistance. In a vacuum, as Galileo knew, the ballistic path is a parabola, and the maximum range is attained when the projectile is launched at an angle of 45 degrees. Air resistance warps the
symmetry of the curve and greatly complicates its mathematical description. Packel and Yuen show that the projectile's range can be given in terms of a W function, although the expression is still
forbiddingly complex. (They remark: "Honesty compels us to admit at this point that the idea for using Lambert W to find a closed-form solution was really Mathematica's and not ours.")
Still another example comes from electrical engineering, where T. C. Banwell of Telcordia Technologies and A. Jayakumar of Anadigics show that a W function describes the relation between voltage,
current and resistance in a diode. In a simple resistor, this relation is given by Ohm's law, I=V/R, where I is the current, V the voltage and R the resistance. In a diode, however, the relation is
nonlinear: Although current still depends on voltage and resistance, the resistance in turn depends on current and voltage. Banwell and Jayakumar note that no explicit formula for the diode current
can be constructed from the elementary functions, but adding W to the repertory allows a solution.
Other applications of W have been discovered in statistical mechanics, quantum chemistry, combinatorics, enzyme kinetics, the physiology of vision, the engineering of thin films, hydrology and the
analysis of algorithms.
Evaluating W
It's all very well to express the solutions of problems in terms of W, but then how do we find the value of the resulting function? In the case of logarithms and trigonometric functions, the standard
method for many years was to look up the answer in a big printed table; now we push the appropriate button on a calculator. For W, however, there are no published tables, and so far no scientific
calculator has a built-in Lambert W key. Several computer algebra systems know how to evaluate the W function, but if you don't have access to such software, you're on your own.
Suppose we already know how to calculate exponentials and logarithms; can we then solve the equation We^W = x? As noted above, the forward version is easy: just evaluate e^W and then multiply by W.
At first glance, the inverse function looks like it might be wrestled to submission by a similar tactic. If we can solve for x by calculating an exponential and then multiplying, can't we solve for W
by dividing and then taking a logarithm?
Dividing both sides of the equation by W gives e^W = x/W. Then, taking the logarithm of both sides produces log(e^W) = log(x/W). On the left hand side, the logarithm of e^W is simply W. On the right
hand side we can rewrite the logarithm of a quotient as the difference of two logarithms, and so we wind up with this equation:
W = log(x) - log(W).
We have succeeded in getting W off by itself on the left side, but unfortunately there's still a log(W) on the right. Thus we don't have a closed-form solution, a formula that would allow us to plug
in an x and immediately get back the corresponding W. This failure is not merely a result of my ineptitude; no algebraic wizardry will yield a finite closed-form solution.
On the other hand, the equation above is not totally worthless. If we have a guess about the value of W, then we can plug it into the right hand side of the equation to get an even better guess, then
repeat the process until we're satisfied with the accuracy of the approximation. For some values of x—well away from 0—this simple iterative scheme converges quickly on the correct result. The
algorithms used in computer-algebra software are more efficient, accurate and robust, but they still rely on successive approximations.
Whence and Whither W?
The modern history of Lambert W began in the 1980s, when a version of the function was built into the Maple computer-algebra system and given the name W. Why W? An earlier publication by F. N.
Fritsch, R. E. Shafer and W. P. Crowley of the Lawrence Livermore Laboratory had written the defining equation as we^w = x. The Maple routine was written by Gaston H. Gonnet of the Institut für
Wissenschaftliches Rechnen in Zurich, who adopted the letter w but because of typographic conventions in Maple had to capitalize it.
A few years later Robert M. Corless and David J. Jeffrey of the University of Western Ontario launched a discussion of W and its applications in what has turned out to be a long series of journal
articles and less-formal publications. The most influential paper, issued as a preprint in 1993 but not published until 1996, was written by Corless and Jeffrey in collaboration with Gonnet, David E.
G. Hare of the University of Waterloo and Donald E. Knuth of Stanford University. This was the paper that named the function in honor of the 18th-century savant Johann Heinrich Lambert.
Lambert, who wrote on everything from cartography to photometry to philosophy, never published a word on the function that now bears his name. It was his eminent colleague Leonhard Euler who first
described a variant of the W function in a paper published in 1779, two years after Lambert's death. So why isn't it called the Euler W function? For one thing, Euler gave credit to Lambert for the
earliest work on the subject. Perhaps more to the point, Corless, Jeffrey and Knuth note that "naming yet another function after Euler would not be useful."
In the years between Euler and Maple, the W function did not disappear entirely. The Dutch mathematician N. G. de Bruijn analyzed the equation in 1958, and the British mathematician E. M. Wright
wrote on the subject at about the same time. In the 1970s and 80s there were several more contributions, including that of Fritsch, Shafer and Crowley. Nevertheless, the literature remained widely
scattered and obscure until the function acquired a name. In a 1993 article, Corless, Gonnet, Hare and Jeffrey remark: "For a function, getting your own name is rather like Pinocchio getting to be a
real boy."
Some of the recent publications on W go beyond mere explication of mathematics; they carry a whiff of evangelical fervor. Those for whom W is a favorite function want to see it elevated to the canon
of standard textbook functions, alongside log and sine and square root. I am reminded of another kind of canonization—a campaign for the recognition of a local saint, with testimonials to good works
and miracles performed.
The advocates of W do make a strong case. In a 2002 paper, Corless and Jeffrey argue that W is in some sense the smallest step beyond the present set of elementary functions. "The Lambert W function
is the simplest example of the root of an exponential polynomial; and exponential polynomials are the next simplest class of functions after polynomials."
But the elevation of W has not won universal assent. R. William Gosper, Jr., has suggested that a better choice might be the square of W, that is, We^W^2 = x, which eliminates the multivalued
branching on the real line. (In a play on "Lambert W," Gosper calls this the Dilbert lambda function.) And Dan Kalman of American University has suggested a formulation based on e^W/W = c, with an
inverse function he calls glog.
Woo Woo Woo
My own misgivings about Lambert W pertain not to the function itself but to the name. Again: Why W? Over the years, English-speaking people have inflicted far too many Ws on the rest of the world,
from the Wicked Witch of the West to the W boson to the World Wide Web. (Again I forgo comment on the current occupant of the White House.) We purse our lips painfully to pronounce doubleyou,
doubleyou, doubleyou. With 26 letters to choose from, why do we keep fixing upon the only letter in the English alphabet with a polysyllabic name? (I acknowledge that I have made matters worse by
writing this column, in which every sentence of the text includes at least one instance of the letter w.)
It's not too late to right the wrong. On a bus in Italy—a country that doesn't even have a w in its alfabeto—I overheard a fragment of a conversation: Someone was reading a URL and pronounced the
first part "woo woo woo." It's a shrewd accommodation to linguistic wimperialism. We should all adopt it. Let us keep the letter but change the way we say it. Whether it's Lambert W or George W. or
www, it's woo all the way.
For assistance with this article I offer warm thanks to Jonathan M. Borwein, David W. Cantrell, Robert M. Corless, David J. Jeffrey and Tim Royappa.
© Brian Hayes | {"url":"http://www.americanscientist.org/issues/id.3448,y.0,no.,content.true,page.2,css.print/issue.aspx","timestamp":"2014-04-18T01:39:58Z","content_type":null,"content_length":"123420","record_id":"<urn:uuid:4580e0a9-8140-47df-9086-e7d56f70da3b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
CIBSE Journal May 2011
This magnitude of change is not uniform across all materials. Practical measurements of structural materials in Canada[4]
indicated that some materials,
such as (carbonate aggregate) concrete exhibited a significant change in conductivity with temperature, whereas lightweight bricks did not.
Heat flow to and from the structure The heat will flow to and from a building and between its individual components by conduction, convection and radiation. And for heat to pass through the solid
structures it must first enter the structure’s surface by heat conducting from the adjacent convecting air to the surface and from the radiant input from all of the surfaces facing it. When
considering heat flow from outdoors, the radiant heat flow from the sun and the sky will also be included. For convenient incorporation into the U value calculation, the various heat transfer
coefficients are combined to produce the internal surface resistance,
Rsi, and external surface resistance Rse, (m2
K/W). So Rse = 1/ (hc + Ehr) and Rsi is = 1/
(1.2 Ehr + hc) where hc is the convective heat transfer coefficient, E is the emissivity factor (this takes account of the ability of the surface to radiate heat to its surroundings), and the
radiative heat transfer coefficient, hr =4 s Ts
3 where s is the
Stefan Boltzman constant 5.67 x 10-8 W/m2
K, and
Ts is the surface absolute temperature (in Kelvin,
if the wind speed was 1.5m/s the value of hc would be 16.7 x 1.50.5
= 20.5W/m2 K. K. If
the wind speed subsequently increased to 3.5m/s the value of hc would increase to 31.24 W/m2
Radiation heat transfer will be dependant on the temperature of the object(s), the shape and emissivity. Typically dark objects will emit more radiation than lighter ones – they will have a higher
emissivity and practically most building materials will have a high emissivity (with the maximum value
surface. The emissivity factor will also be significantly affected by the surrounding surfaces, as there will only be radiant heat flow from the surface if there is a suitable surface to absorb the
radiation. (This receiving ‘surface’ may in practice be the massive heat sink of a very cold, clear night sky). Practically the radiant surroundings may change as vegetation alters around a building
or adjacent buildings and landscaping are changed. To combine the two effects of the
variation of the values of hc and Ehr (simply to illustrate the point and not as a specific design example)
the value of Rse for a 1.5 m/s air speed passing over the surface with an emissivity of 0.3 would be 0.045 m2
K/W. Rse would be 0.028 m2
The thermal performance of a simple brick will depend on its temperature, moisture content and its age’
being 1). However the emissivity of a surface will change as a material becomes coated in particles of soot and dust in the air. The relationships that define the radiant heat transfer from a surface
are very complex (a good discussion is provided in CIBSE Guide C 2007) and the equations used have been simplified and generalised for conditions that would be typical in building constructions. The
value of the radiative heat
K). The tables (3.8 and 3.9) in CIBSE Guide A 2006 for Rse and Rsi have been developed using these relationships. Values of convective heat transfer
No matter how ‘accurate’ the thermal model may be, there must be realistic understanding of the U value
wwere determined by Jürges in 1928 for forced convection (that is, external wind or caused by air movement devices in the room) and by McAdams in 1954 for free convection, and these still form the
basis of the many tabulated values. For the external convective heat transfer coefficient CIBSE Guide C 2007 suggests the use of the relationship[5] hc = 16.7cs
0.5 W/m2 being less than 3.5m/s. So, for example
transfer coefficient hr is dependent on the relative temperature of the surface to its surroundings; however, to accurately predict variations in this is extremely complex. The emissivity factor,
E, will vary directly with the emissivity of the
K for the air velocity, cs
surface and so is more straightforward to enumerate. This can be illustrated most markedly when considering, for example, a bitumen flat roof that has been coated with aluminium paint (as a means of
reducing solar gain) that, when freshly applied, would have an emissivity of around 0.3. As the roof ages and accumulates particles from pollution, flora and fauna its emissivity will rise, depending
on the condition, towards the emissivity of the original dark bitumen roof (0.9) so increasing the radiant heat loss from the
For a 3.5 m/s air speed passing over that same surface with an emissivity of 0.8, the value of K/W. If
the wind speed had stayed the same and the surface’s emissivity had
risen to 0.8, the value of Rse would still have reduced to 0.039 m2
to the original value of 0.045 m2
K/W compared K/W.
Practical implications This article has touched on some of the variables in the thermal performance of building constructions in use. Combined with the challenges of calculating a representative U
value for building elements that are made of several layers of ‘non-homogenous’ (varying) materials means that a pragmatic approach to heat loss and energy calculations is required. No matter how
‘accurate’ the subsequent thermal model may be, there must be a realistic understanding that the underlying U value cannot be considered an absolute constant. © Tim Dwyer
1. BS EN ISO 6946:2007 Building components and building elements… Calculation method
2. Table A3.2, CIBSE Guide A, 2006
3. Abou A. Budawi,I. ‘Comparison of Thermal Conductivity Measurements of Building Insulation Materials under Various Operating Temperatures’, Journal of Building Physics, Vol. 29(2), October 2005
4. Hu, T. Lie, G et al, Thermal Properties of Building Materials at Elevated Temperatures. National Research Council, Canada, 1993
5. Loveday D L and Taki A H, ‘Outside surface resistance: proposed new value for building design’, Proc. CIBSE A: Building Serv. Eng. Res. Technol. 19(1) 23–29 (1998)
May 2011 CIBSE Journal 69 | {"url":"http://content.yudu.com/A1rxx9/CIBSEMay11/resources/69.htm","timestamp":"2014-04-16T19:21:08Z","content_type":null,"content_length":"19625","record_id":"<urn:uuid:2dfcc9b7-8478-4f64-ad54-ce319b3804ce>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
Search for Haplotype Interactions That Influence Susceptibility to Type 1 Diabetes, through Use of Unphased Genotype Data
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
Search for Haplotype Interactions That Influence Susceptibility to Type 1 Diabetes, through Use of Unphased Genotype Data
Type 1 diabetes is a T-cell–mediated chronic disease characterized by the autoimmune destruction of pancreatic insulin-producing β cells and complete insulin deficiency. It is the result of a complex
interrelation of genetic and environmental factors, most of which have yet to be identified. Simultaneous identification of these genetic factors, through use of unphased genotype data, has received
increasing attention in the past few years. Several approaches have been described, such as the modified transmission/disequilibrium test procedure, the conditional extended transmission/
disequilibrium test, and the stepwise logistic-regression procedure. These approaches are limited either by being restricted to family data or by ignoring so-called “haplotype interactions” between
alleles. To overcome this limit, the present study provides a general method to identify, on the basis of unphased genotype data, the haplotype blocks that interact to define the risk for a complex
disease. The principle underpinning the proposal is minimal entropy. The performance of our procedure is illustrated for both simulated and real data. In particular, for a set of Dutch type 1
diabetes data, our procedure suggests some novel evidence of the interactions between and within haplotype blocks that are across chromosomes 1, 2, 3, 4, 5, 6, 7, 8, 11, 12, 15, 16, 17, 19, and 21.
The results demonstrate that, by considering interactions between potential disease haplotype blocks, we may succeed in identifying disease-predisposing genetic variants that might otherwise have
remained undetected.
Insulin-dependent diabetes mellitus (IDDM [MIM 222100]), or type 1 diabetes, is a common chronic disease characterized by autoimmune destruction of pancreatic β cells and complete insulin deficiency
(Cordell and Todd ^1995; Schranz and Lernmark ^1998; Friday et al. ^1999). The importance of some genetic factors for the etiology of type 1 diabetes, such as human leukocyte antigen (HLA), has been
established unequivocally, although their precise mechanism has not been identified. Evidence that the immune system and apoptosis play a role is accumulating. Both processes contribute to the
deterioration of β cells in the islets of Langerhans in the pancreas. Despite this information, no definite genetic cause can be determined in most patients, not even in the presence of a positive
family history. In this article, we present a method, for testing the influence of haplotype interactions on developing disease, that can be used when unphased genotypes are available for a number of
cases and controls, and we apply this method to genotype data of patients with type 1 diabetes and of healthy controls. Here, as in the article by Bugawan et al. (^2003), “haplotype interaction” is
defined as the statistical dependence between alleles at different loci.
The increasing availability of polymorphic markers such as SNPs, automated genotyping technology, and large collections of family-based (or case-control–based) data have enabled the design of
genomewide screens for several populations. Such screens have led to the location of susceptibility loci for type 1 diabetes in various chromosomal regions, suggesting that type 1 diabetes is a
multigenic disorder, in the sense that onset of the disease requires the simultaneous presence of a subset of susceptibility genes. Most recent research efforts have concentrated on HLA genes (see
Cox et al. [2001] and Pugliese [^2001] for reviews). The importance of the HLA class II haplotypes was shown by Noble et al. (^2002) in families with at least two children with insulin-dependent
Once a disease-predisposing region has been localized, a number of potentially causative genetic variants may exist in the region, including a large number of SNPs. Whereas, for monogenic diseases,
one base change in the coding region of a gene very often is sufficient to cause the disease, for multigenic diseases the effect of any single genetic variant on the risk of the disease may be small,
which makes identification of these variants difficult (Drysdale et al. ^2000). Furthermore, the following questions related to identification of the multiple risk variants arise. First, it is not
clear which combination of variants has a causative role in the disease. Second, it remains unknown whether susceptibility for the disease arises because of the effects of these variants acting
independently or because of some important interactions between the variants.
These questions have received increasing attention recently (see, for example, Valdes and Thomson ^1997; Cox et al. ^1999; Dassen et al. ^2001; Cordell and Clayton ^2002; Bugawan et al. ^2003).
Cordell and Clayton (^2002) proposed a simple but powerful stepwise logistic-regression procedure that allows for testing the dominance effects of different combinations of polymorphisms, as well as
genotype interactions in the analysis of case-control data. In particular, they measured genotype interactions in terms of penetrance for developing disease. However, haplotype interactions, since
the underlying haplotype pairs of unphased genotypes may have different disease risks, so that there are disease-predisposing interactions, cannot be dealt with in their approach. To illustrate this,
for the moment, we consider two diallelic variants of interest in a region: variant 1, with one of the unphased genotypes aa, AA, and aA; and variant 2, with one of the unphased genotypes bb, BB, and
bB. There are nine possible combinations (also called “genotypes”) observed at the two variants: aa/bb, aa/bB, aa/BB, AA/bb, AA/bB, AA/BB, aA/bb, aA/bB, and aA/BB, where, for example, aA/bb means
that the alleles in variants 1 and 2 are {a,A} and {b,b}, respectively. All of these genotypes except for aA/bB can be uniquely decomposed into a pair of haplotypes. For aA/bB, there are two
compatible possible haplotype pairs, (a,b)/(A,B) and (a,B)/(A,b). The pairing described here indicates that allele a is coupled with allele b or allele a is coupled with allele B.
It is only when these two haplotype pairs have different disease risks that there may be potential disease-predisposing interactions between a and b or a and B. As pointed out by a reviewer, even
when the haplotype pairs do have different disease risks, it does not necessarily mean that the alleles interact in anything other than a statistical sense, since this phenomenon could occur if
alleles a and b, say, were in linkage disequilibrium with (and, thus, marking a haplotype containing) another predisposing variant not included in the analysis. Note that the stepwise
logistic-regression procedure takes genotypes as explanatory variables and, therefore, the possible difference between the effects of the underlying haplotypes on the disease is ignored.
An alternative test is called the “haplotype method” (Valdes and Thomson ^1997), which compares the relative frequencies of alleles at a secondary locus on haplotypes that are identical at a primary
locus (or loci). The problem with the haplotype method is that, often, the haplotypes are not known. Although one can statistically infer the haplotypes from unphased genotypes, it is unclear how to
judge the significance of the results from the haplotype method if we want to take into account the possible haplotyping errors. Several other approaches have been described for simultaneous
identification of genetic factors through use of unphased genotype data, such as the modified transmission/disequilibrium test procedure (Cucca et al. ^2001) and the conditional extended transmission
/disequilibrium test (Koeleman et al. ^2000). These approaches are also limited, by being restricted either to family data or to haplotype data. This and the fact that there are 2^m-1 possible
haplotype pairs for a genotype of m heterozygous sites, which results in a considerable number of potential haplotype interactions when m is large, motivated us to develop a special procedure for
testing such interactions. The proposed method is based on minimal entropy, reflecting the principle that a good prediction of haplotype interactions should extract a maximum amount of information
from data and, thus, most parsimoniously explain the underlying haplotype structure, given unphased genotypes. In general, the computation of the entropy statistic is very intensive. To solve this
problem, we have developed a new Markov chain Monte Carlo algorithm called the “structure-annealing algorithm.”
Two types of approaches for the investigation of interaction can be distinguished: those that consider interaction in the sense of linkage disequilibrium between closely linked loci (Wall and
Pritchard ^2003) and those that consider interaction in the sense of effects on disease risk (Cordell and Todd ^1995; Cordell et al. ^2001). In this article, we focus on the linkage disequilibrium
approach while investigating interaction between all loci and, hence, also between possibly unlinked loci. For any two haplotype blocks, let us denote by p[1a] and p[2b] the probabilities of
occurrence for allele a at block 1 and for allele b at block 2, respectively. Let p[ab] be the probability of simultaneous occurrence of a and b. We are trying to test whether, for all a and b, p[ab]
=p[1a]p[2b]. We assess the evidence for interactions between and within (possibly unlinked) haplotype blocks on different chromosomal regions by using a permutation procedure. Since the strength of a
linkage disequilibrium pattern is not, typically, a monotonic function of recombination distance when there exist selective forces that favor certain haplotypes over others, as might be the case for
type 1 diabetes (Fain and Eisenbarth ^2001), we needed to develop an approach that is independent of this distance. Naturally, we are mainly interested in identification of disease-predisposing
interactions by comparisons between cases and controls. The disease-predisposing interactions are found in a second stage, by contrasting the interaction patterns observed for patients with the
interaction patterns observed for healthy controls. These interactions could facilitate understanding of the pathological mechanisms involved in the disease, as well as the further identification of
some haplotype blocks that provide significant association with the disease only when their interactions with other blocks are taken into account.
As an illustration of our method, we present in this article a reanalysis of a set of genotypes that was obtained from a cohort of 89 Dutch patients with type 1 diabetes and 47 healthy control
individuals, with a 65-polymorphism detection assay originally designed for unraveling the multigenic cause of atherosclerosis (Dassen et al. ^2001). Since both diabetes mellitus and atherosclerosis
can be regarded as metabolic diseases with many overlapping biochemical and clinical parameters, the variants that are susceptible to atherosclerosis may also be the cause of type 1 diabetes. Dassen
et al. (^2001) examined whether certain types of combinations of SNPs confer susceptibility to type 1 diabetes in the cohort by logistic regression and self-learning neural networks. They found that
a set of four polymorphisms could predict 79.9% of the cases correctly. However, a significant number of polymorphisms could not be interpreted by their method. Note that all of these variants were
selected from the pathways of lipid and homocysteine metabolism, regulation of blood pressure and coagulation, inflammation, cellular adhesion, and matrix integrity. Therefore, we wondered whether
the variants that were unexplained in the above-mentioned study may serve as transitive (or supporting) variants, in the sense that they interact with some etiological variants within and between
these pathways.
Before we applied the proposed procedure to the above-mentioned Dutch type 1 diabetes data, we evaluated the power of our approach by conducting a simulation study in which four different
combinations of mutation and recombination rates were considered. The results are presented below. They suggest that a high accuracy can be achieved if appropriate critical values for our entropy
statistics are selected. Note that, although the coalescent model that we have used for our simulations has been shown to be very helpful in modeling haplotype populations (Stephens et al. ^2001), it
is still not easy to statistically test whether this model fits real data, such as the Dutch type 1 diabetes data. Therefore, the thresholds that were obtained from the simulations were used as a
guide to the corresponding parameters as we applied our method to the data. The results of our data analysis show some evidence for a haplotype interaction network that is potentially associated with
type 1 diabetes and that includes the up interactions between the haplotype blocks from the chromosomes pairs (1,4), (1,12), (1,19), (6, 7), and (17, 21), as well as the down interactions between
blocks from the chromosome pairs (2,7), (3,19), (5,7), (6,21), and (7,11). There are several other less significant pairs. Here, “up interaction” and “down interaction” mean that there exists a
significant increase or decrease, respectively, in interaction between two blocks for patients over that for controls. We further found some disease-predisposing intrablock interactions on
chromosomes 1, 6, 7, 8, and 11. Finally, we searched for loci interactions that may account for these block interactions. As a result, a total of 25 potential disease-predisposing interactions
between loci are predicted, which indicates 19 gene-gene interactions among 19 candidate genes. Having found four dominant variants (Dassen et al. ^2001), we predicted, from the interaction network,
19 transitive variants. Our results clearly demonstrate that, by considering interactions between haplotype blocks, we may succeed in identifying disease-predisposing genetic variants that might
otherwise have remained undetected.
Haplotype Likelihood
Let G=(G[1],…,G[n])^T denote the observed genotypes for n individuals from a population, where G[i]=(g[i1],…,g[iL])^T, g[ij] is the genotype of individual i at locus j, and L is the total number of
observed loci per individual. For simplicity, let g[ij] take values of 0, 1, or 2 for the cases in which its genetic haplotype at locus j is homozygous and identical to a prespecified reference,
homozygous but different from the reference, or heterozygous, respectively. In addition, we let g[ij]=7 if allele 0 is missing at locus j, g[ij]=8 if allele 1 is missing, and g[ij]=9 if both alleles
are missing. A genotype is called “ambiguous” if it has at least two heterozygous sites. Let H=(H[1],…,H[n])^T, where H[i]=(H[i1],H[i2]) denotes the unobserved haplotype pair of G[i], and H[i][i],
the set of all possible haplotype pairs compatible to G[i]. Given G, under the assumption of Hardy-Weinberg equilibrium (Weir ^1996, chapter 3), the “haplotype likelihood” can then be written as
where p(·) denotes the population frequency of the corresponding haplotype, and p=(p[1],…,p[m[0]]). Here we assume that, overall, there are m[0] possible haplotypes compatible with G.
Haplotype Entropy
While performing a haplotype inference, we are usually interested only in H, and, hence, p works as a nuisance parameter in equation (1). Here, we follow Zhang et al. (^2001) in eliminating the
nuisance parameter by a maximization procedure—that is, we replace p in equation (1) with its maximum likelihood estimate (MLE). Thus, we have the following profile log likelihood:
where k[0] denotes the number of different haplotypes in H, and s[1],…,s[k[0]] denote their respective frequencies. We define S(H)=-l(G|H), where S(H) is the entropy of the frequencies of different
haplotypes in H, and s(G)=min{S(H):HiscompatiblewithG}. Note that S(H) attains its minimum at H in equation (1), so that
For example, suppose that
Then, there are two possible ways to decompose these genotypes into haplotypes—namely,
where h[1]=(0,0,0)^T, h[3]=(0,1,0)^T, h[5]=(1,0,0)^T, h[7]=(1,1,0)^T, and h[8]=(1,1,1)^T. The corresponding values of the haplotype likelihood shown in equation (1) are
where the unknown population frequencies of the five different haplotypes in equation (3) satisfy the equation
and the unknown population frequencies of the four different haplotypes in equation (4) are constrained by
Given H[1], and under the constraint of equation (5), the maximum of the logarithm of the likelihood in equation (3) is given by
Analogously, given H[2] and under the constraint of equation (6), the maximum of the logarithm of the likelihood in equation (3) is equal to
Obviously, S(H[2])<S(H[1]). Hence, s(G)=S(H[2]).
In this article, we call s(G) the haplotype entropy of G. The quantity s(G) measures the diversity of the underlying haplotypes compatible with G, since the entropy is a well-known measure of
variation for a system in information theory (Jones ^1979). The stronger the interactions among the loci of G, the less diverse the underlying haplotypes, and the smaller the value of s(G). To
explain this claim intuitively, we consider only three diallelic loci, at which there are eight possible haplotypes—namely, h[1]=(0,0,0)^T, h[2]=(0,0,1)^T, h[3]=(0,1,0)^T, h[4]=(0,1,1)^T, h[5]=
(1,0,0)^T, h[6]=(1,0,1)^T, h[7]=(1,1,0)^T, and h[8]=(1,1,1)^T. Let p(h[i]) be the population frequency of h[i] for 1in—say, G—which are assumed to be generated from these haplotypes according to the
Hardy-Weinberg equilibrium. Then, the haplotype entropy 2) gives rise to an empirical version of the above population entropy. To see how the haplotype entropy changes as the strength of interaction
(i.e., dependence) increases, we first calculate this entropy when there are no interactions among the three loci. In this situation, the above eight haplotypes have the equal probability of
occurrence 1/8 in individuals. As a result, the haplotype population reaches the highest diversity as the population entropy attains the maximum value of log8 (Jones ^1979, chapter 2). Now, we
consider the situation in which there exist some dependences among the three loci. Note that these dependences are apparent as increased frequencies of specific haplotypes compared with what would be
expected if alleles at the three loci are combined at random. For example, if we set p(h[1])=1/2, p(h[5])=1/2, and p(h[i])=0, i≠1,5, then the three loci are fully determined by the first locus. With
the entropy being equal to log2, the resulting haplotype population yields a smaller diversity than the previous one. We observe that, as an empirical version of the population entropy, n is large.
Therefore, in general, the population haplotype entropy—and, thus, its empirical version,
Testing for Interaction between Two Haplotype Blocks
Let G=(G[1],…,G[n])^T be partitioned into two blocks—say,
Suppose that we are interested in testing whether there exists interaction between the two blocks G^(1) and G^(2). This problem can be stated as testing the hypotheses that the two blocks are
independent (i.e., the null hypothesis) versus the hypothesis that the two blocks are dependent (i.e., the alternative hypothesis). As pointed out in the “Haplotype Entropy” subsection, if the null
hypothesis is true, s(G) will tend to have a large value; otherwise, it will tend to be small. Hence, s(G) can be used as a test statistic for this test. Because the distribution of s(G) under the
null hypothesis is unknown, the following procedure is designed to calculate the P values of the test:
Generate n^′ random permutations of (G^(2)[1],…,G^(2)[n]), and denote them by (G^(2)[j,1],…,G^(2)[j,n]), j=1,…,n^′.
Form a random sample G^*[j], j=1,…,n^′, where G^*[j] is formed by pairing (G^(1)[1],…,G^(1)[n]) with (G^(2)[j,1],…,G^(2)[j,n]).
Calculate the haplotype entropy for each G^*[j]. An empirical P value can then be defined by the proportion of values of s(G^*[j]) that are s(G); that is, #{s(G^*[j]):s(G^*[j])s(G)}/n^′.
The number n^′ is usually set to a moderate number. For example, it is 500 and 1,000 in this study.
On the basis of the central limit theorem, an empirical Z score statistic,
can also be defined for the test, where A and V are the sample mean and variance, respectively, of the values of s(G^*[j]). The empirical P value calculated in step 3 can be used to examine whether
the between-block interaction existing in G was obtained by chance, whereas the empirical Z score statistic more sensitively measures the length of the distance between the genotypes under
investigation and the population of genotypes without block interactions.
The above procedure will be used below to test the significance of the pairwise interactions among haplotype blocks or loci. In each case, the significance of an interaction will be decided by a
threshold for P values. Assessment of the overall significance, to account for multiple testing, is not straightforward, because there are many correlations among the tests. An alternative approach
is to control the false discovery rate (FDR), which is defined by the expected proportion of false positives among those called significant: E[V^*/R^*|R^*>0]. Here, for a given threshold, V^* is the
total number of false positives, and R^* is the total number of interactions called significant according the threshold. We opt for the recent proposal of Storey and Tibshirani (^2003) to estimate
the FDR and calculate the q value, a measure of statistical significance in terms of FDR, for each individual test under dependence.
Structure Annealing Algorithm
In this section, we propose a new algorithm, the so-called “structure annealing algorithm,” to minimize S(H). The algorithm is proposed on the basis of the following observation. Let G=(G^(1),G^(2))
be a random partition of G, and let H=(H^(1),H^(2)) be the corresponding partition of H. It is easy to see that, if H is compatible with G, then H^(1) is compatible with G^(1). Furthermore, if S(H^
(1)) is a good approximation of s(G^(1)), then S(H^(1),H^(2)) should be a good approximation of s(G), provided that H is compatible with G and the number of loci in G^(2) is not large. This
observation motivated our use of the following sequential way to minimize the objective function S(H).
Suppose now that G is partitioned into z blocks, G=(G^(1),…,G^(z)), where G^(b) comprises k[b] loci and k[b] be set to a small number—for example, k[b]appendix A) is designed to simulate from the
for b=1,…,z, where t[b] is called the “temperature” of this distribution, and appendix B) is designed to extrapolate G^(1) is usually small, the iteration number of the local updating steps is also
moderate at this step. We denote this iteration number by m[1], and set m[1]=10,000 for all examples in this article. Then, the algorithm proceeds for z-1 steps. The (b+1)th step consists of two
substeps, which are described as follows.
Extrapolation: extrapolate the haplotype , which is obtained at the last iteration of the bth step, to a compatible haplotype pair of
Local updating: simulate from the distribution m[b+1] steps.
The m[b] is a monotone increasing function of b; for example, we set m[b]=m[1]×b for b=1,2,…,z-1 and m[z]=10×m[1]×z. Here, as in the article by Kirkpatrick et al. (^1983), we set a large iteration
number for the last step simulation.
Simulated Data Sets
We used a coalescent-based program called MS, by R. Hudson, to simulate haplotypes for the four different situations described by quantities (θ,R)=(4,0),(4,4), (4,20), and (16,16). Here θ=4N[e]μ, R=4
N[e]r, N[e] is the effective population size, μ is the total per-generation mutation rate across the region sequenced, and r is the length, in morgans, of the region sequenced.
For each setting of (θ,R), this generated 40 independent data sets, each containing 40 haplotypes. For each data set, the haplotypes were randomly paired to form 20 genotypes. As a result, for each
case of (θ,R), we had 40 sets of 20 genotypes. They are denoted by G[1],…,G[40], with G[i]=(G[i,1],…,G[i,20])^T. We split each G[i,j] into two parts of equal length, G^(1)[i,j] and G^(2)[i,j], for i=
1,…,40 and j=1,…,20. In total, we have 80 genotype segments. With these segments, 20 new data sets, which are denoted by G^*[1],…,G^*[20], are formed, where G^*[k] is formed by attaching the segment
G^(2)[20+k,j] to the segment G^(1)[k,j] for k=1,…,20. The above construction procedure shows that there are two independent blocks in each G^*[k].
In the following, we will regard G^*[1],…,G^*[20] as samples from a population in which the two genotype blocks are independent, whereas we will regard G[1],…,G[20] as samples from a population in
which the two genotype blocks are dependent. To evaluate the power of our procedure, we applied it to these genotype data sets. The resulting P values and Z scores are summarized in figure 1. To find
the interesting blocks, we further analyzed these P values by setting lower and upper thresholds of .01 and .15. We say two blocks are dependent if the corresponding P value is P value is F[a] and F[
n]. That is, F[a] is the proportion of false rejections of the null hypothesis when the null hypothesis is true, and F[n] is the proportion of false nonrejections of the null hypothesis when the
alternative is true. For the above simulated data, we have (F[a],F[n])=(0,2/20) when (θ,R)=(4,0), and (F[a],F[n])=(0,0) when (θ,R)=(4,4), (4,20), and (16,16). These results show that our procedure
is, indeed, an effective tool for detecting haplotype interactions. As pointed out in the Introduction, the coalescent model can capture certain main features in a haplotype population (Stephens et
al. ^2001). The above simulated coalescent models might share some common features with real haplotype data. Thus, these thresholds were used to guide our choice of the corresponding thresholds when
we applied our method to the Dutch type 1 diabetes data below.
The P values and Z scores for 40 sets of genotypes with (θ,R)=(4,0), (4,4), (4,20) and (16,16). The dotted lines are for the data sets in which there are interactions ...
Type 1 Diabetes Data
Thirty-six candidate genes, listed in table 1, were selected from pathways that are potentially implicated in the development and progression of atherosclerosis: lipid and homocysteine metabolism,
regulation of blood pressure and coagulation, inflammation, cellular adhesion, and matrix integrity (Cheng et al. ^1999; Dassen et al. ^2001). They have all been reported in the Online Mendelian
Inheritance in Man (OMIM) database. Dassen et al. (^2001) described an assay, for genotyping a panel of 65 SNPs that represent variation within these genes, that is an early version of RMS Research
Assay for Cardiovascular Disease Genetics designed by Roche Molecular Systems. Most of these SNPs have been shown to be implicated with some metabolic diseases, such as cardiovascular disease,
coronary artery disease, hypertension, asthma, obesity, atherosclerosis, myocardial infarction, hyperlipidemia, Alzheimer disease, and others (see table 1 and ^OMIM for more details). The rest of
these SNPs are either the polymorphisms at (or close to) the promoter regions that may (directly or indirectly) play certain dysregulation roles for the genes of interest or the polymorphisms at
coding regions with nonsynonymous changes (Cheng et al. ^1999; Dassen et al. ^2001; Flori et al. ^2003; Vatay et al. ^2003). For example, V67 was selected because it could have a protective role
against type 2 diabetes (NIDDM) (Vatay et al. ^2003). V66 was included because it often interfered with our ability to call V67 correctly. We had no prior functional information, other than that its
proximity to V67 could mean that it would also have impact on the function of the gene TNF. As pointed out in the Introduction, since both diabetes mellitus and atherosclerosis can be regarded as
metabolic diseases with many overlapping biochemical and clinical parameters, the variants that are susceptible to atherosclerosis may also be the cause of type 1 diabetes. Therefore, this assay was
also applied to a Dutch cohort with diabetes that includes 136 unrelated individuals (89 patients with type 1 diabetes with impaired endothelial function and 47 healthy control individuals).
Endothelial function was assessed by measuring changes in forearm blood flow after pharmacological interventions. The DNA samples from the 136 individuals were genotyped by use of PCR. This led to
136 genotypes of 65 loci. Nine loci (V58, V59, V66, V67, V5, V57, V51, V52, and V30) were not used in the following data analysis, since these loci have the so-called “heavy-missing” problem, where
table 1 for more details).
SNPs Used in the Present Study^[Note]
We started with the search for pairwise interactions among these 16 unlinked blocks. The search was performed on the cases and controls separately. The P values for the cases and controls were
compared by plotting them in graphs, as shown in figuresfigures22 and and3,3, respectively. We obtained 10 pairs of interacting blocks, located on chromosome pairs (1,4), (1,12), (1,19), (2,7),
(3,19), (5,7), (6,7), (6,21), (7,11), and (17,21) (seetable 2 for more details). These block pairs were selected by use of the following criteria: for the up interaction, we claimed that there was an
increase in haplotype interaction if the P value of the controls was >.15, the P value of the cases was Z score of the cases was P value of the cases was >.15, the P value of the controls was Z score
of the controls was P value thresholds .01 and .15 for cases and controls, respectively, the corresponding estimated FDRs of these multiple tests for cases and controls are 0.017 and 0.029. There
will be more interaction pairs if we take .035 and .2 as the thresholds for cases and controls, respectively. The FDRs will then become 0.040 and 0.048 (see table 2 for more details).
The P values of testing the interactions of blocks 1–8 with the other blocks, for the cases and controls in the Dutch type 1 diabetes data. The dotted lines are for the cases, and the lines with
small triangles are for the controls. The normal ...
The P values of testing the interactions of blocks 9–16 with the other blocks, for the cases and controls in the Dutch type 1 diabetes data. The dotted lines are for the cases, and the lines with
small triangles are for the controls. The normal ...
Haplotype Interactions that Predispose to Type 1 Diabetes
To see how these interactions modify the related pathways, we ran our procedure on the pairs of variants on these blocks. Consequently, 25 pairs of variants were found to show certain evidence of
susceptibility to the disease. Table 2 indicates that these variants are distributed on 19 genes: NPPA, SELE, ADOB, AGTR1, ADRB2, LPA, TNF, TNFb, DCP1, ADD1, SCNN1A, APOE, NOS3, LPL, LIPC, PON1, CBS,
APOA4, and APOC3. Note that APOB, ADRB2, LPA, APOE, LPL, LIPC, PON1, and APOA4 are on the pathway of lipid metabolism; CBS is on the pathway of homocysteine metabolism; NPPA, AGTR1, ADRB2, DCP1,
SCNN1A, and NOS3 are on the pathway of blood pressure; SELE is on the pathway of coagulation; SELE, TNF, and TNFb are on the pathway of inflammation; and ADD1 is on the pathway of matrix integrity.
Thus, within the pathway of lipid metabolism there are seven up or down interactions, denoted by the symbols (+) and (−) respectively, among some genes. They are V9:V22 (APOB:LPL) (+), V8:V20
(APOB:LIPC) (−), V4:V26 (LPA:PON1) (+), V26:V7 (PON1:APOA4) (−), V25:V10 (PON1:APOC3) (−), and V25:V12 (PON1:APOC3) (−). These interactions are predisposing to the disease. Similarly, within the
pathway of blood pressure, there is one down interaction: V50:V38 (ADRB2:NOS3) (−). The rest are related to interactions among the six pathways mentioned above. Here, up interaction (down
interaction) is trying to describe the biological phenomenon through which the pathways of lipid metabolism, homocysteine metabolism, blood pressure, inflammation, and matrix integrity are modified,
by creating (disrupting) interactions among some genes that lie in these pathways. Similar to what Sudbery (^1998, p. 144) has suggested, the up interactions would suggest that those interactions
lead to a susceptibility to the disease, whereas the down interactions could imply that the related interactions may have a protective effect on developing the disease. These results indicate a
complicated feature of (possibly nonmultiplicative) effects of the interactions on the risk for type 1 diabetes.
Note that Dassen et al. (^2001) have identified a set of dominant variants—V4, V15, V28, and V50—that are on chromosomes 6, 11, 19, and 5, respectively. This, combined with the above results, yields
the following transitive and disease-predisposing variants, in the sense that there are significant increases (or decreases) of interactions of these variants with some dominant variants: V26, V37,
V38, V39, V7, V8, V10, V11, V12, V13, V65, V68, V20, V25, and V47.
In the next step, we screened for interactions in linked regions. For simplicity, we adopted the following strategy. Taking block 1 as example, we sequentially tested six subblock pairs for the cases
and controls: the first pair was {1},{2,3,4,5,7}, with 1 being the splitting location; the second pair was {1,2},{3,4,5,7}, with 2 being the splitting location; and so on. Here, the numbers 1, 2, 3,
4, 5, 6, and 7 denote the seven variants in block 1. The six subblock pairs are uniquely defined by six splitting locations: 1, 2, 3, 4, 5, and 6. We compared the resulting six pairs of P values and
Z scores in table 3. It suggests that there exists some disease-predisposing interaction between subblock pairs {1,2,3,4} and {5,6,7}. Following the same argument as above, for block 6 we may
conclude that variant V64 might be a transitive disease-predisposing variant, because the dominant variant, V4, is at subblock {1,2,3}. The evidence of disease-predisposing interactions within the
other blocks is reported in table 4, which yields the transitive variant V14. Note that, in practice, we need to test the interactions for all bipartitions of seven loci, since the strength of
linkage disequilibrium patterns is not, typically, a monotonic function of genetic distance. Our procedure can be easily extended to this general setting, since it does not use any information on
genetic distances among these loci.
P Values and Z Scores for Testing Interactions within Block 1
Within-Haplotype Interactions That Predispose to Type 1 Diabetes
The logistic regression mentioned in the Introduction is a very important genotype-based tool for detecting dominant polymorphisms and epistatic effects (i.e., genotype interactions) that are
associated with disease. One disadvantage of this method over some haplotype-based methods is that it ignores the potential disease-predisposing haplotype interactions. To contend with this
disadvantage, we have presented a procedure for evaluating the contributions of these haplotype interactions to susceptibility to disease, in which the entropy is used to measure the diversity of a
haplotype population. Our procedure can be easily generalized to other measures of the haplotype diversity (Weir ^1996; Clayton ^2002). Of course, for applications, we should combine these two
methods, in order to extract more complete information from unphased genotype data, in the following steps: first, apply the logistic regression to detect dominant disease-predisposing variants and
genotype interactions; then, as a complement, use our procedure to find potential haplotype interactions; finally, predict the transitive variants by finding the variants that are interacting with
the dominant ones.
In the first step, we assume a sample of n[1] cases and n[2] controls, each of whom is genotyped at m polymorphisms. Let p[j] be the probability of individual j being a case rather than a control.
Following McCullagh and Nelder (^1989), we model p[j] as
where x[1],…,x[m] are covariates depending on the genotypes of the individual, and β[0],…,β[m] are coefficients to be estimated. To examine the effects of a set of polymorphisms, we can test whether
the data are significantly better represented when these polymorphisms are included in the model compared with when they are not in the model, through use of likelihood-ratio tests (Cordell and
Clayton ^2002). This is equivalent to testing whether the corresponding coefficients are significantly different from 0. Similarly, we can account for the genotype interactions by adding some
epistatic terms to the above model. A commonly used strategy for evaluation of the effects of the different polymorphisms is to fit these models in a stepwise fashion. Following Cordell and Clayton
(^2002), for the Dutch type 1 diabetes data, we first code x[j]=-0.5, 0.5, and 0.5 for genotypes 0, 2, and 1, respectively, and we also code 0.5 for the cases in which genotypes are missing. We set
.05 as a nominal significance level for all these tests involved in the stepwise logistic-regression procedure. This yields seven dominant disease-susceptibility alleles on chromosomes 3, 6, 7, 6,
11, 19, and 2, respectively: V41(AA), V4(TT), V26(GG), V64(GG), V15(GG), V28(−), V9(missing), and one genotype interaction between V41(AA) and V64(GG), where, for example, in the notation “V41(AA),”
V41 is the name of the variant and (AA) is one of its alleles (see table 5). The result is slightly different from the prediction-based logistic-regression procedure of Dassen et al. (^2001); this
might be because of different criteria being used.
Results of the Stepwise Logistic Regression for the Dutch Data
In the second step, we start with a search for the haplotype interaction between blocks located on different chromosomes, followed by testing of the interactions within each block. If two blocks are
found interacting, we can further narrow the search area to identify which variants in the blocks are involved in this interaction. For the Dutch type 1 diabetes data, in the “Results” section we
have shown nine pairs of interacting blocks that are predisposed to type 1 diabetes. Combining with the result from the first step, we can infer some transitive disease-predisposing variants, as
shown in the “Results” section. The results demonstrate a complicated gene-gene interaction network, which might predispose to type 1 diabetes through modifying the pathways of lipid metabolism,
blood pressure, inflammation, coagulation, and matrix integrity.
Use of interaction between unlinked genomic regions has been suggested for improving power to detect loci of small effect on the disease phenotype; for example, in type 1 diabetes (Cordell et al. ^
1995^, ^2000; Bugawan et al. ^2003), type 2 diabetes (Cox et al. ^1999), and inflammatory bowel disease (Cho et al. ^1998). Cordell et al. (^1995) reported that there are interactions between the
loci IDDM1 (chromosome 6p21) and IDDM2 (chromosome 11p15) and between the loci IDDM1 and IDDM4 (on chromosome 11q13.3), in the context of the logistic regression model. Cox et al. (^1999) showed that
the loci on chromosomes 2 and 15 interact to increase susceptibility to type 2 diabetes, in the context of a nonparametric LOD score. Cox et al. (^2001) performed a systematic screen for correlation
between family-specific nonparametric LOD scores to evaluate evidence of interactions between some unlinked regions on chromosomes 1, 2, 3, 4, 6, 11, and 19. These methods are usually restricted to
family data. Unlike these authors, we focus here on interactions between genetic variants in a list of potential candidate genes across a number of chromosomes, where some of these variants have
already been shown to be associated with some metabolic diseases. Moreover, the proposed approach is specified for unphased genotype data (possibly with missing problems) from case-control studies.
Thus, our method could be a valuable contribution to a genomewide association study of a complex disease, especially when direct determination of the molecular haplotypes from experiment or family
data is not feasible.
Although significant and consistent linkage evidence was reported for the susceptibility intervals IDDM8 (on chromosome 6q27), IDDM4 (on 11q), and IDDM5 (on 6q25), evidence for most other intervals
varies in different data sets, probably because of a weak effect of the disease genes, genetic heterogeneity, random variation, or inappropriate correction for multiple tests (see Pugliese ^2001). To
reduce the possible effect of genetic heterogeneity, we need to confirm our initial finding by analyzing other populations in future studies. Since we compared correlated variants, it is important to
take into account the potential effects of multiple tests on the power of our procedure. For our case, there are 120 pairwise tests among 16 haplotype blocks. A simple Bonferroni (or Dunn-Sidak)
correction leads to the adjusted threshold of 4.17×10^-4 for P values if we want to achieve the significance level of .05; there are only seven block pairs in table 2 that remained nominally
significant after this correction. Such a correction seems too conservative, because of high dependences among these tests. This has been confirmed by Bugawan et al. (^2003) on the basis of a
permutation procedure. Unfortunately, using resampling methods such as permutation can be computationally prohibitive in our case. However, we have shown that the recently developed procedure of
Storey and Tibshirani (^2003) is applicable to our setting.
We are grateful to the Editor, the Deputy Editor, and other members of the Editorial Board, as well as to two anonymous reviewers, for their very constructive comments that have led to improvement of
the presentation and results of the present article. We thank Roche Molecular Systems (Alameda, CA) for providing the multiplex genotyping assays, under a research collaboration. We thank Drs.
Suzanne Cheng and Paul Schiffers for kindly providing some information on the SNPs used in this paper. We also thank Professors W. van Zwet, W. J. M. Senden, and M. Vingron and Dr. P. Lindsey for
useful discussions. This work was partially performed when the first author was working for EURANDOM, in The Netherlands. The work was supported, in part, by the Programme of Computational Molecular
Biology of EURANDOM, by the Institute for Mathematical Sciences, National University of Singapore, and by grant No. 01/1/21/19/217 from the Biomedical Research Council of Singapore.
Appendix A : Local Updating
The local updating algorithm includes two operators, ν-mutation and peer learning. In every iteration, they are selected to perform with probability 0.2 and 0.8, respectively. Of course, the
probabilities can be tuned by the user, but a large performing probability is usually assigned to the peer learning operator, since it tends to force the haplotypes to coalesce. The two operators are
described below.
ν-Mutation Operator
In the ν-mutation operator, a total of max{1,γ[b]ν} haplotype pairs at the heterozygous (g[ij]=2) or missing (g[ij]=9,8,7) loci are randomly selected to undergo changes, where γ[b] is the total
number of heterozygous and missing loci in ^1953; Hastings ^1970)—that is, the new haplotypes min(1,r[m]), where
where T(·→·) denotes the transition probability between the current and new haplotypes. The transition proceeds as follows. If the pair (h[b,ij,1],h[b,ij,2]) is selected to undergo a change and if g[
ij]=2, then the values of h[b,ij,1] and h[b,ij,2] will be simply swapped by setting h[b,ij,1]=1-h[b,ij,1] and h[b,ij,2]=1-h[b,ij,2]. If g[ij]=9, one of the pairs (0,0), (0,1), (1,0), and (1,1) are
equally likely to be reassigned to (h[b,ij,1],h[b,ij,2]). Similarly, if g[ij]=8 or 7, one of the possible haplotype pairs is also equally likely to be reassigned to (h[b,ij,1],h[b,ij,2]). The other
selected haplotype pairs will be mutated in the same way, but independently. It is easy to see that the transition is symmetric, in the sense that
Peer Learning Operator
The peer learning operator works as follows.
1.h[b,u,v]—from the set {h[b,1,1],h[b,1,2];…;h[b,n,1],h[b,n,2]}.
2.h[b,s,t]—from the set
with probability w[b,i,j]=exp{-d(h[b,u,v],h[b,i,j])/t[sel]}, d(h[b,u,v],h[b,i,j]) is the number of different haplotypes at the first h[b,u,v] and h[b,i,j], and t[sel] is the so-called “selection
3.g[uj], if g[uj]=0 or 1, we keep h[b,uj,v] unchanged; if g[uj]=2, 9, 8, or 7 and h[b,uj,v]=h[b,sj,t], we keep h[b,uj,v] unchanged with probability p[l] and change h[b,uj,v] to h[b,sj,t] with
probability 1-p[l]; if g[uj]=2, 9, 8, or 7 and h[b,uj,(v)]≠h[b,sj,t], we keep h[b,uj,v] unchanged with probability 1-p[l] and change h[b,uj,v] to h[b,sj,t] with probability p[l]. We update the
complementary pair of h[b,u,v] accordingly, such that they are compatible with g[u].
4.min{1,r[l]}, where
Here, the transition probability equals
where α[1] is the total number of the common haplotypes of [2] is the total number of the different haplotypes of [3] counts the total number of times that the haplotype values in the complementary
haplotype pair of h[b,u,v] are randomly assigned. The p[l] is a user-specified parameter. We set p[l]=0.9 for all examples in this article. The transition probability
This operator makes it possible for haplotypes to coalesce together very fast if it is feasible.
Appendix B : Extrapolation
The extrapolation operator extrapolates G^(b+1). We call a haplotype “original” if it first appears in h[b,1,1],h[b,1,2];…; h[b,n,1],h[b,n,2]) used in this article, where (h[b,i,1],h[b,i,2]) is the
haplotype pair of the ith genotype in g[ij] is a heterozygous or missing allele, then (h[b+1,ij,1],h[b+1,ij,2]) is equally likely to be set to one of the possible haplotype pairs. If a haplotype is
“duplicate,” then it will be extrapolated according to the corresponding original copy. Note that, in this case, the extrapolation for the corresponding original copy has been finished. For example,
if h[b,u,v] is a duplicate of h[b,s,t], and if g[uj] is a heterozygous or missing allele, then h[b+1,uj,v] will be set to the same value as h[b+1,sj,t] with probability p[e] and will be set to a
value that differs from h[b+1,sj,t] with probability 1-p[e]. The complementary pair of h[b+1,uj,v] will be set accordingly, such that the pair is compatible with g[u]. We usually set p[e] to a large
value—say, 0.95—for all examples in this article. Obviously, the extrapolation operator will provide a good starting point for the simulation from the distribution
Electronic-Database Information
The URLs for data presented herein are as follows:
Bugawan TL, Mirel DB, Valdes AM, Panelo A, Pozzilli P, Erlich HA (2003) Association and interaction of the IL4R, IL4, and IL13 loci with type 1 diabetes among Filipinos. Am J Hum Genet 72:1505–1514.
[PMC free article] [PubMed] [Cross Ref]10.1086/375655
Cheng S, Grow MA, Pallaud C, Klitz W, Erlich HA, Visvikis S, Chen JJ, Pullinger CR, Malloy MJ, Siest G, Kane JP (1999) A multilocus genotyping assay for candidate markers of cardiovascular disease
risk. Genome Res 9:936–949. [PMC free article] [PubMed] [Cross Ref]10.1101/gr.9.10.936
Cho JH, Nicolae DL, Gold CT, Fields MC, Labuda MC, Rohal PM, Pickles MR, Qin L, Fu Y, Mann JS, Kirschner BS, Jabs EW, Weber J, Hanauer SB, Bayless TM, Brant SR (1998) Identification of novel
susceptibility loci for inflammatory bowel disease on chromosomes 1p, 3q, and 4q: evidence for epistasis is between 1p and IBD1. Proc Natl Acad Sci USA 95:7502–7507. [PMC free article] [PubMed] [
Cross Ref]10.1073/pnas.95.13.7502
Cordell HJ, Clayton DG (2002) A unified stepwise regression procedure for evaluating the relative effects of polymorphisms within a gene using case/control or family data: application to HLA in type
1 diabetes. Am J Hum Genet 70:124–141. [PMC free article] [PubMed] [Cross Ref]10.1086/338007
Cordell HJ, Todd JA, Bennett ST, Kawagushi Y, Farrall M (1995) Two-locus maximum lod score analysis of a multifactorial trait: joint consideration of IDDM2 and IDDM4 with IDDM1 in type 1 diabetes. Am
J Hum Genet 57:920–934. [PMC free article] [PubMed]
Cordell J, Todd JA (1995) Multifactorial inheritance in type 1 diabetes. TIG 11:499–504. [PubMed]
Cordell J, Todd JA, Hill NJ, Lord CJ, Lyons PA, Peterson LB, Wicker LS, Clayton DG (2001) Statistical modelling of interlocus interactions in a complex disease: rejection of the multiplicative model
of epistasis in type 1 diabetes. Genetics 158:357–367. [PMC free article] [PubMed]
Cordell HJ, Wedig GC, Jacobs KB, Elston RC (2000) Multilocus linkage tests based on affected relative pairs. Am J Hum Genet 66:1273–1286. [PMC free article] [PubMed] [Cross Ref]10.1086/302847
Cox NJ, Frigge M, Nicolae DL, Concannon P, Hanis CL, Bell GI, Kong A (1999) Loci on chromosomes 2 (NIDDM1) and 15 interact to increase susceptibility to diabetes in Mexican Americans. Nat Genet
21:213–215. [PubMed] [Cross Ref]10.1038/6002
Cox NJ, Wapelhorst B, Morrison VA, Johnson L, Pinchuk L, Spielman RS, Todd JA, Concannon P (2001) Seven regions of the genome show evidence of linkage to type 1 diabetes in a consensus analysis of
767 multiplex families. Am J Hum Genet 69:820–830. [PMC free article] [PubMed] [Cross Ref]10.1086/323501
Cucca F, Dudbridge F, Loddo M, Mulargia AP, Lampis R, Angius E, De Virgiliis, Koeleman BP, Bain SC, Barnett AH, Gilchrist F, Cordell H, Welsh K, Todd JA (2001) The HLA-DPB1-associated component of
the IDDM1 and its relationship to the major loci HLA-DQB1, -DQA1, and -DRB1. Diabetes 50:1200–1205. [PubMed]
Dassen W, Spiering W, de Leeuw P, Smits P, Dijk WA, Spruijt H, Gommer E, Bonnemayer C, Doevendans PA (2001) Unravelling gene interactions to find the cause of artherosclerosis, a multigenic disease,
using an artificial neural network. Comput Cardiol 28:373–376.
Drysdale CM, McGraw DW, Stack CB, Stephens JC Judson RS, Nadabalan K, Arnold K, Ruano G, Liggett SB (2000) Complex promoter and coding region β[2]-adrenergic receptor haplotypes alter receptor
expression and predict in vivo responsiveness. Proc Natl Acad Sci USA 97:10483–10488. [PMC free article] [PubMed] [Cross Ref]10.1073/pnas.97.19.10483
Fain PR, Eisenbarth GS (2001) Type 1 diabetes, autoimmunity and the MHC. In: Lowe WL Jr (ed) Genetics of diabetes mellitus. Kluwer Academic Publishers, Boston, pp 43–64.
Flori L, Sawadogo S, Esnault C, Delahaye NF, Fumoux F, Rihet P (2003) Linkage of mild malaria to the major histocompatibility complex in families living in Burkina Faso. Hum Mol Genet 12:375–378. [
PubMed] [Cross Ref]10.1093/hmg/ddg033
Friday RP, Trucco M, Pietropaolo M (1999) Genetics of type 1 diabetes mellitus. Diabetes Nutr Metab 12:3–26. [PubMed]
Hastings WK (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57:97–109.
Jones DS (1979) Elementary information theory. Clarendon Press, Oxford.
Kirkpatrick S, Gelatt CD Jr, Vecchi MP (1983) Optimization by simulated annealing. Science 220:671–680. [PubMed]
Koeleman BP, Dudbridge F, Cordell HJ, Todd JA (2000) Adaptation of the extended transmission/disequilibrium test to distinguish disease associations of multiple loci: the conditional extended
transmission/disequilibrium test. Ann Hum Genet 64:207–213. [PubMed] [Cross Ref]10.1046/j.1469-1809.2000.6430207.x
McCullagh P, Nelder JA (1989) Generalized linear models, 2nd ed. Chapman & Hall, London.
Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E (1953) Equations of state calculations by fast computing machine. J Chem Phys 21:1087–1091.
Noble JA, Valdes AM, Bugawan TL, Apple RJ, Thomson G, Erlich HA (2002) The HLA class I A locus affects susceptibility to type 1 diabetes. Hum Immunol 63:657–664. [PubMed] [Cross Ref]10.1016/
Pugliese A (2001) Genetic factors in type 1 diabetes. In: Lowe WL Jr (ed) Genetics of diabetes mellitus. Kluwer Academic Publishers, Boston, pp 25–42.
Schranz DB, Lernmark A (1998) Immunology in diabetes: an update. Diabetes Metab Rev 14:3–29. [PubMed] [Cross Ref]10.1002/(SICI)1099-0895(199803)14:1<3::AID-DMR206>3.0.CO;2-T
Stephens M, Smith NJ, Donnelly P (2001) A new statistical method for haplotype reconstruction from population data. Am J Hum Genet 68:978–989. [PMC free article] [PubMed] [Cross Ref]10.1086/319501
Storey JD, Tibshirani R (2003) Statistical significance for genome-wide experiments. Proc Natl Acad Sci USA 100:9440–9445. [PMC free article] [PubMed] [Cross Ref]10.1073/pnas.1530509100
Sudbery P (1998) Human molecular genetics. Addison Wesley Longman, Harlow.
Valdes AM, Thomson G (1997) Detecting disease-predisposing variants: the haplotype method. Am J Hum Genet 60:703–716. [PMC free article] [PubMed]
Vatay A, Yang Y, Chung EK, Zhou B, Blanchong CA, Kovács M, Karádi I, Füst G, Romics L, Varga L, Yu Y, Szalai C (2003) Relationship between complement components C4A and C4B diversities and two TNFA
promoter polymorphisms in two healthy Caucasian populations. Hum Immunol 64:543–552. [PubMed] [Cross Ref]10.1016/S0198-8859(03)00036-3
Wall JD, Pritchard JK (2003) Haplotype blocks and linkage disequilibrium in the human genome. Nat Rev Genet 4:587–597. [PubMed] [Cross Ref]10.1038/nrg1123
Weir BS (1996) Genetic data analysis II. Sinauer Associates, Massachusetts.
Zhang J, Liang F, Hoehe M, Vingron M (2001) On haplotype reconstruction for diploid organisms. EURANDOM Report-2001-026.
Articles from American Journal of Human Genetics are provided here courtesy of American Society of Human Genetics
Your browsing activity is empty.
Activity recording is turned off.
See more... | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1180402/?tool=pubmed","timestamp":"2014-04-20T21:25:21Z","content_type":null,"content_length":"150282","record_id":"<urn:uuid:665fa373-137f-45ee-a114-b59729bff398>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
• hadronic transport approach (1) (remove)
In this thesis the first fully integrated Boltzmann+hydrodynamics approach to relativistic heavy ion reactions has been developed. After a short introduction that motivates the study of heavy ion
reactions as the tool to get insights about the QCD phase diagram, the most important theoretical approaches to describe the system are reviewed. To model the dynamical evolution of the
collective system assuming local thermal equilibrium ideal hydrodynamics seems to be a good tool. Nowadays, the development of either viscous hydrodynamic codes or hybrid approaches is favoured.
For the microscopic description of the hadronic as well as the partonic stage of the evolution transport approaches have beeen successfully applied, since they generate the full phse-space
dynamics of all the particles. The hadron-string transport approach that this work is based on is the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) approach. It constitutes an effective
solution of the relativistic Boltzmann equation and is restricted to binary collisions of the propagated hadrons. Therefore, the Boltzmann equation and the basic assumptions of this model are
introduced. Furthermore, predictions for the charged particle multiplicities at LHC energies are made. The next step is the development of a new framework to calculate the baryon number density
in a transport approach. Time evolutions of the net baryon number and the quark density have been calculated at AGS, SPS and RHIC energies and the new approach leads to reasonable results over
the whole energy range. Studies of phase diagram trajectories using hydrodynamics are performed as a first move into the direction of the development of the hybrid approach. The hybrid approach
that has been developed as the main part of this thesis is based on the UrQMD transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision. The
initial energy and baryon number density distributions are not smooth and not symmetric in any direction and the initial velocity profiles are non-trivial since they are generated by the
non-equilibrium transport approach. The fulll (3+1) dimensional ideal relativistic one fluid dynamics evolution is solved using the SHASTA algorithm. For the present work, three different
equations of state have been used, namely a hadron gas equation of state without a QGP phase transition, a chiral EoS and a bag model EoS including a strong first order phase transition. For the
freeze-out transition from hydrodynamics to the cascade calculation two different set-ups are employed. Either an in the computational frame isochronous freeze-out or an gradual freeze-out that
mimics an iso-eigentime criterion. The particle vectors are generated by Monte Carlo methods according to the Cooper-Frye formula and UrQMD takes care of the final decoupling procedure of the
particles. The parameter dependences of the model are investigated and the time evolution of different quantities is explored. The final pion and proton multiplicities are lower in the hybrid
model calculation due to the isentropic hydrodynamic expansion while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic flow
values at SPS energies are shown to be in line with an ideal hydrodynamic evolution if a proper initial state is used and the final freeze-out proceeds gradually. The hybrid model calculation is
able to reproduce the experimentally measured integrated as well as transverse momentum dependent $v_2$ values for charged particles. The multiplicity and mean transverse mass excitation function
is calculated for pions, protons and kaons in the energy range from $E_{\rm lab}=2-160A~$GeV. It is observed that the different freeze-out procedures have almost as much influence on the mean
transverse mass excitation function as the equation of state. The experimentally observed step-like behaviour of the mean transverse mass excitation function is only reproduced, if a first order
phase transition with a large latent heat is applied or the EoS is effectively softened due to non-equilibrium effects in the hadronic transport calculation. The HBT correlation of the negatively
charged pion source created in central Pb+Pb collisions at SPS energies are investigated with the hybrid model. It has been found that the latent heat influences the emission of particles visibly
and hence the HBT radii of the pion source. The final hadronic interactions after the hydrodynamic freeze-out are very important for the HBT correlation since a large amount of collisions and
decays still takes place during this period. | {"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Hannah+Petersen%22/start/0/rows/10/subjectfq/hadronic+transport+approach+","timestamp":"2014-04-16T13:15:33Z","content_type":null,"content_length":"20566","record_id":"<urn:uuid:e10fa797-8ea4-4df3-b97c-1332c174dd7a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
Primal-Dual Approximation Algorithms for Feedback Problems in Planar Graphs
Primal-Dual Approximation Algorithms for Feedback Problems in Planar Graphs (1996)
Download Links
Other Repositories/Bibliography
by Michel X. Goemans , David P. Williamson
Venue: IPCO '96
Citations: 22 - 3 self
author = {Michel X. Goemans and David P. Williamson},
title = {Primal-Dual Approximation Algorithms for Feedback Problems in Planar Graphs},
year = {1996}
Given a subset of cycles of a graph, we consider the problem of finding a minimum-weight set of vertices that meets all cycles in the subset. This problem generalizes a number of problems, including
the minimum-weight feedback vertex set problem in both directed and undirected graphs, the subset feedback vertex set problem, and the graph bipartization problem, in which one must remove a
minimum-weight set of vertices so that the remaining graph is bipartite. We give a 9/4-approximation algorithm for the general problem in planar graphs, given that the subset of cycles obeys certain
properties. This results in 9/4-approximation algorithms for the aforementioned feedback and bipartization problems in planar graphs. Our algorithms use the primal-dual method for approximation
algorithms as given in Goemans and Williamson [16]. We also show that our results have an interesting bearing on a conjecture of Akiyama and Watanabe [2] on the cardinality of feedback vertex sets in
planar graphs.
1148 Geometric algorithms and combinatorial optimization - Grötschel, Lovász, et al. - 1993
347 A general approximation technique for constrainedforestproblems - Goemans, Williamson - 1995
295 Graph coloring problems - Jensen, Toft - 1995
244 An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms - Leighton, Rao
213 When trees collide: An approximation algorithm for the generalized Steiner tree problem on networks - Klein, Ravi - 1991
143 Approximate max-flow min-(multi)cut theorems and their applications - Garg, Vazirani, et al. - 1993
123 Primal-dual approximation algorithms for integral flow and multicut in trees - Garg, Vazirani, et al. - 1997
119 The primal-dual method for approximation algorithms and its applications to network design problems - Goemans, Williamson - 1997
97 Approximating minimum feedback sets and multicuts in directed graphs. Algorithmica 20:151–174 - Even, Naor, et al. - 1998
96 On acyclic colorings of planar graphs - Borodin - 1979
81 Random algorithms for the loop-cutset problem - Becker, Geiger - 1999
80 A primal-dual approximation algorithm for generalized Steiner network problems - Williamson, Goemans, et al. - 1995
78 Packing directed circuits fractionally - Seymour - 1995
72 Finding a maximum cut of a planar graph in polynomial time - Hadlock - 1975
69 Node-and edge-deletion NP-complete problems - Yannakakis - 1978
31 When cycles collapse: A general approximation technique for constrainedtwo-connectivity problems - Klein, Ravi - 1993
26 Approximation algorithms for the vertex feedback set problem with applications to constraint satisfaction and Bayesian inference - Bar-Yehuda, Geiger, et al. - 1994
21 Constant ratio approximation of the weighted feedback vertex set problem for undirected graphs - Bafna, Berman, et al. - 1995
17 Zosin: Approximating Minimum Subset Feedback Sets in Undirected Graphs with Applications - Even, Naor, et al.
10 L.: An 8-approximation algorithm for the subset feedback vertex set problem - Even, Naor, et al.
9 A primal-dual interpretation of recent 2-approximation algorithms for the feedback vertex set problem in undirected graphs - Chudak, Goemans, et al. - 1998
7 An approximate max-flow min-cut relation for undirected multicommodity flow, with applications - Klein, Rao, et al. - 1995
4 Trotter Jr. Vertex packings: Structural properties and algorithms - Nemhauser, Leslie - 1975
3 On feedback problems in planar digraphs - Stamm - 1991
2 A conjecture on planar graphs - Albertson, Berman - 1979
2 Finding the maximal cut in a graph. Engineering Cybernetics - Orlova, Dorfman - 1972
2 better than best approximation algorithms - Good, best - 1996
2 An approximate max-flow rain-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms - Leighton, Rao - 1988
1 problem - Research - 1986
1 Approximate max-flow rain(multi)cut theorems and their applications - Garg, Vazirani, et al. - 1993
1 A general approximation tedmique for constrained forest problems - Goemans, Wiltiamson - 1995
1 An approximate max-flow rain-cut relation for undirected multicommodity flow, with applications - Klein, Rao, et al. - 1995
1 On feedback problems in planar digraphs - Stature - 1990
1 Node and edge-deletion NP-complete problems - YanDakakis | {"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.1254","timestamp":"2014-04-23T19:12:21Z","content_type":null,"content_length":"35018","record_id":"<urn:uuid:88280cc9-bd78-46f9-8c1d-3354a863cf9d>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chapter 3. Pairing
An application should first initialize a pairing object. This causes PBC to setup curves, groups and other mathematical miscellany. After that, elements can be initialized and manipulated for
cryptographic operations.
Parameters for various pairings are included with the PBC library distribution in the param subdirectory, and some are suitable for cryptographic use. Some programs in the gen subdirectory may be
used to generate parameters (see Chapter 7, Bundled programs). Also, see the PBC website for many more pairing parameters.
Pairings involve three groups of prime order. The PBC library calls them G1, G2, and GT, and calls the order r. The pairing is a bilinear map that takes two elements as input, one from G1 and one
from G2, and outputs an element of GT.
The elements of G2 are at least as long as G1; G1 is guaranteed to be the shorter of the two. Sometimes G1 and G2 are the same group (i.e. the pairing is symmetric) so their elements can be mixed
freely. In this case the pairing_is_symmetric function returns 1.
Bilinear pairings are stored in the data type pairing_t. Functions that operate on them start with pairing_. | {"url":"http://crypto.stanford.edu/pbc/manual/ch03.html","timestamp":"2014-04-16T22:02:52Z","content_type":null,"content_length":"9243","record_id":"<urn:uuid:0a476cfc-828a-49c3-af6a-2135cc2a8743>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sammamish Science Tutor
Find a Sammamish Science Tutor
...I did quite well in all my middle and high school math classes, including precalculus (advanced algebra, trigonometry, etc.). I was often the go-to girl for helping my friends understand math
concepts they were struggling with. I was able to figure out how to explain the concepts or problems in ...
35 Subjects: including astronomy, philosophy, physical science, algebra 1
...I'm proficient in Microsoft Office Suite (Word, Excel, PowerPoint, Outlook, Access). I've also done projects with Publisher and PageMaker. When I worked in an office and taught classes, I was
responsible for keeping the network running and teaching people how to use software and new technology. I replaced routers, switches, and wireless modems as well.
39 Subjects: including nutrition, reading, English, GRE
...I have personally completed and excelled in mathematics courses through university level Calculus Courses. I have tutored Algebra I for over five years now as an independent contractor through
a private tutoring company. I have tutored high school level Algebra I for both Public and Private School courses.
27 Subjects: including biology, reading, algebra 2, algebra 1
...Outside of school, current events, video games, and the financial markets catches most of my attention--with the exception of my 5 month old dog, Misha. Tutoring: I currently tutor 9 students
on a regular basis in a variety of subjects including but not limited to: chemistry, physics, mathematic...
17 Subjects: including organic chemistry, physics, chemistry, calculus
...I have an electrical engineering and computer science degree from MIT specializing in hardware engineering. For the past twelve years I have been designing and coding consumer hardware for
Fortune 500 companies including IBM and Microsoft, specializing in digital logic and design. My best classes were in Computer science and it continues to be my best asset in my work.
43 Subjects: including physical science, reading, English, electrical engineering | {"url":"http://www.purplemath.com/Sammamish_Science_tutors.php","timestamp":"2014-04-16T19:37:06Z","content_type":null,"content_length":"24062","record_id":"<urn:uuid:e656c0c3-cb31-4c47-a7ae-121c2849811b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Rhombohedron with Variable Faces
You can create various rhombohedra by tilting the four parallel edges of a cube or varying the face diagonals. This Demonstration focuses on the special case where the rhombohedron has four faces
with diagonal ratio and two faces with diagonal ratio , where is the golden ratio. Let the edge of a cube be . Tilt four parallel edges by degrees, then reduce the diagonal of two faces to . One
interesting feature of this polyhedron is that the distance between the two thin rhombi is the golden ratio. The half-diagonals of the thin rhombus are and . This illustrates an interesting
relationship: , that is, .
This rhombohedron could be considered as a building block in structures where rhombi with diagonal ratios or occur. The rhombic enneacontahedron is an example of a rhombus and a rhombus; the rhombic
dodecahedron is a rhombus and the median rhombic triacontahedron is a rhombus. | {"url":"http://demonstrations.wolfram.com/RhombohedronWithVariableFaces/","timestamp":"2014-04-21T09:41:58Z","content_type":null,"content_length":"44058","record_id":"<urn:uuid:831e1288-8dec-48ea-b914-d79aefa8ae96>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00279-ip-10-147-4-33.ec2.internal.warc.gz"} |
DIFFERENTIAL EQUATIONS WITH APPLICATIONS AND HISTORICAL NOTES - George Simmons - McGraw-Hill Education
About the book
A revision of a much-admired text distinguished by the exceptional prose and historical/mathematical context that have made Simmons' books classics. The Second Edition includes expanded coverage of
Laplace transforms and partial differential equations as well as a new chapter on numerical methods.
About the author
George Simmons
GEORGE F. SIMMONS has academic degree from the CAlifornia Institute of Technology, the university of chicago, and Yale University. He taught at several colleges and universities before joining the
faculty of Colorado college in 1962, where he is a professor of mathematics. He is also the author of introduction to topology and Modern Analysis, Precalculus Mathematics in a Nutshell and calculus
with Analytic Geometry.
Table of contents
Preface to the Second Edition
Preface to the First Edition
Suggestions for the Instructor
Chapter 1. Introduction
Chapter 2. Gemeral Remarks on Solutions
Chapter 3. Families of Curves. Orthogonal Trajectories
Chapter 4. Growth, Decay, Chemical Reactions, and Mixing
Chapter 5. Falling Bodies and Other Motion Problems
Chapter 6. The Brachistochrone. Fermat and the Bernoullis
Chapter 7. Homogeneous Equations
Chapter 8. Exact Equations
Chapter 9. Integrating Factors
Chapter 10. Linear Equations
Chapter 11. Reduction of Order
Chapter 12. The Hanging Chain. Pursuit Curves
Chapter 13. Simple Electric Circuits
Chapter 14. Introduction
Chapter 15. The General Solution of the Homogeneous Equation
Chapter 16. The Use of a Known Solution to Find Another
Chapter 17. The Homogeneous Equation with Constant Coefficients
Chapter 18. The Method of Undetermined Coefficients
Chapter 19. The Method of Variation and Parameters
Chapter 20. Vibrations in Mechanical and Electrical Systems
Chapter 21. Newton's Law of Gravitation and the Motions of the Planets
Chapter 22. Higher Order Linear Equations. Coupled Harmonic Oscillators
Chapter 23. Operator Methods for Finding Particular Solutions
Appendix A. Euler
Appendix B. Newton
Chapter 24. Oscillations and the Sturm Separation Theorem
Chapter 25. The Sturm Comparison Theorem
Chapter 26. Introduction. A Review of Power Series
Chapter 27. Series Solutions of First Order Equations
Chapter 28. Second Order Linear Equations. Ordinary Points
Chapter 29. Regular Singular Points
Chapter 30. Regular Singular Points (Continued)
Chapter 31. Gauss's Hypergeometric Equation
Chapter 32. The Point at Infinity
Appendix A. Two Convergence Proofs
Appendix B. Hermite Polynomials and Quantum Mechanics
Appendix C. Gauss
Appendix D. Chebyshev Polynomials and the Minimax Property
Appendix E. Riemann's Equation
Chapter 33. The Fourier Coefficients
Chapter 34. The Problem of Convergence
Chapter 35. Even and Odd Functions. Cosine and Sine Series
Chapter 36. Extension to Arbitrary Intervals
Chapter 37. Orthogonal Functions
Chapter 38. The Mean Convergence of Fourier Series
Appendix A. A Pointwise Convergence Theorem
Chapter 39. Introduction. Historical Remarks
Chapter 40. Eigenvalues, Eigenfunctions, and the Vibrating String
Chapter 41. The Heat Equation
Chapter 42. The Dirichlet Problem for a Circle. Poisson's Integral
Chapter 43. Sturm-Liouville Problems
Appendix A. The Existence of Eigenvalues and Eigenfunctions
Chapter 44. Legendre Polynomials
Chapter 45. Properties of Legendre Polynomials
Chapter 46. Bessel Functions. The Gamma Function
Chapter 47. Properties of Bessel functions
Appendix A. Legendre Polynomials and Potential Theory
Appendix B. Bessel Functions and the Vibrating Membrane
Appendix C. Additional Properties of Bessel Functions
Chapter 48. Introduction
Chapter 49. A Few Remarks on the Theory
Chapter 50. Applications to Differential Equations
Chapter 51. Derivatives and Integrals of Laplace Transforms
Chapter 52. Convolutions and Abel's Mechanical Problem
Chapter 53. More about Convolutions. The Unit Step and Impulse Functions
Appendix A. Laplace
Appendix B. Abel
Chapter 54. General Remarks on Systems
Chapter 55. Linear Systems
Chapter 56. Homogeneous Linear Systems with Constant Coefficients
Chapter 57. Nonlinear Systems. Volterra's Prey-Predator Equations
Chapter 58. Autonomous Systems. The Phase Plane and Its Phenomena
Chapter 59. Types of Critical Points. Stability
Chapter 60. Critical Points and Stability for Linear Systems
Chapter 61. Stability by Liapunov's Direct Method
Chapter 62. Simple Critical Points of Nonlinear Systems
Chapter 63. Nonlinear Mechanics. Conservative Systems
Chapter 64. Periodic Solutions. The Poincaré-Bendixson Theorem
Appendix A. Poincare
Appendix B. Proof of Lienard’s Theorem
PART 12 THE CALCULUS OF VARIATIONS
Chapter 65. Introduction. Some Typical Problems of the Subject
Chapter 66. Euler's Differential Equation for an Extremal
Chapter 67. Isoperimetric problems
Appendix A. Lagrange
Appendix B. Hamilton's Principle and Its Implications
Chapter 68. The Method of Successive Approximations
Chapter 69. Picard's Theorem
Chapter 70. Systems. The Second Order Linear Equation
Chapter 71. Introduction
Chapter 72. The Method of Euler
Chapter 73. Errors
Chapter 74. An Improvement to Euler
Chapter 75. Higher-Order Methods
Chapter 76. Systems
Numerical Tables | {"url":"http://mheducation.co.in/html/9780070530713.html","timestamp":"2014-04-21T07:38:59Z","content_type":null,"content_length":"32468","record_id":"<urn:uuid:4859c1b3-3096-4f43-a310-94c366381396>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00513-ip-10-147-4-33.ec2.internal.warc.gz"} |
Not sure if output is right
Pages: 1 2
Post reply
Not sure if output is right
I have written two programs using matlab,in which I implement the Jacobi and Gauss Seidel method.
Both of the programs should terminate either if the number of iterations surpass the maximum number of iterations MAXITERATIONS or if one of these conditions/or both of them:
|| x_{k}-x_{k-1} ||<ε , || b-Ax_{k} ||<ε
are valid.
Could you give me the results of an example with an initial value [tex] x_{0} [/tex],an array A ,a specific b,a specific MAXITERATIONS and a specific small number ε,so I can check my output?For
example,if :
A=[4 5 1 5;6 3 1 2;9 9 6 1;1 2 3 5]
x0 =[0;0;0;0]
which should be the solution of x,at the first code,and which at the second one?
Last edited by evinda (2013-11-25 05:47:27)
Re: Not sure if output is right
There are lots of different algorithms that have those names. Please point me to the exact ones or describe them further.
I do not program in Matlab but I can easily program the relevant algorithms in Mathematica and iterate 100 times.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
bobbym wrote:
There are lots of different algorithms that have those names. Please point me to the exact ones or describe them further.
I do not program in Matlab but I can easily program the relevant algorithms in Mathematica and iterate 100 times.
The formula I have to use for the Jacobi method is this: D*x_{k+1}=-(U+L)*x_{k}+b,
for the Gauss Seidel method the formula I have to use is this:(D+L)*x_{k+1}=b-U*x_{k}.
( L is the lower triangular and U the upper triangular matrix)
Both of the programs should terminate either if the number of iterations surpass the maximum number of iterations MAXITERATIONS or if one of these conditions/or both of them:
|| x_{k}-x_{k-1} ||<ε , || b-Ax_{k} ||<ε
are valid.
Could you tell me the solution of x of both methods and also the number of iterations each method needed to converge?
Re: Not sure if output is right
Okay, I give up. What is D?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
D is the diagonal component of the matrix A, i.e., it is a matrix whose diagonal elements are equal to the elements on the diagonals of A, while the rest are 0.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Not sure if output is right
Now why on earth would anyone use all that linear algebra jargon for such a simple process as iteration and SOR?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Not sure if output is right
More jargon, it means successive over relaxation.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
The word jargon always reminds me of the 'e'-less game.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Not sure if output is right
It should be jargone.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
The Jacobi is not converging for some reason.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: Not sure if output is right
By 30 iterations with Gauss Seidel I am not getting convergence to the answers of the system. Same thing for Jacobi.
Am I correct in assuming this is A
There is an easy trick that will make Gauss Seidel converge but the convergence will be slow. The Jacobi has imaginary eigenvalues so I do not think it will converge.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
I checked if it converges using the Gauss-Sheidel method,but it doesn't...
Re: Not sure if output is right
That is not exactly correct, it can be made to converge using Gauss Seidel.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
bobbym wrote:
That is not exactly correct, it can be made to converge using Gauss Seidel.
How can I do this??
Re: Not sure if output is right
You can rearrange the matrix until it is diagonally dominant or close to it.
Can you do that or do you require help?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
bobbym wrote:
You can rearrange the matrix until it is diagonally dominant or close to it.
Can you do that or do you require help?
I don't know how to do this
Re: Not sure if output is right
Use this matrix here:
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
bobbym wrote:
Use this matrix here:
How did you rearrange the matrix??Because I have to do this for the general case,and not for a special matrix..
Re: Not sure if output is right
You can not do it for a general case. Each one is different and it may not work next time. Numerical work is a hands on skill. You must experiment.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
bobbym wrote:
You can not do it for a general case. Each one is different and it may not work next time. Numerical work is a hands on skill. You must experiment.
How can I rearrange,for example,a 250x250 Hilbert Matrix??
Re: Not sure if output is right
You are not following. Nothing on this earth will ever get the answer to that linear system using Matlab's precision.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
bobbym wrote:
You are not following. Nothing on this earth will ever get the answer to that linear system using Matlab's precision.
And what if I want to apply the methods at a 250x250 tridiagonal matrix with the number 2 at the main diagonal,-1 at the first diagonal below this and also -1 at the diagonal above this?Because both
of the methods do not converge for this matrix..
Last edited by evinda (2013-12-02 05:43:44)
Re: Not sure if output is right
The matrix is already diagonally dominant. There is nothing else to be done with it. Numerical methods are not like Algebra. They do not always work!
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Not sure if output is right
bobbym wrote:
The matrix is already diagonally dominant. There is nothing else to be done with it. Numerical methods are not like Algebra. They do not always work!
So,why do the methods not converge,although the matrix is diagonally dominant??
Post reply
Pages: 1 2 | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=291976","timestamp":"2014-04-19T12:58:50Z","content_type":null,"content_length":"41425","record_id":"<urn:uuid:16828652-a174-4e76-9bba-ca8c0a3e56d6>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-Dev] QR-decomposition with Q but not Q
Martin Teichmann lkb.teichmann@gmail....
Thu Aug 11 13:01:08 CDT 2011
Hi list,
Hi Nathaniel,
> I was a little confused by your removing the scipy tril/triu
> functions. Certainly they're redundant with the ones in numpy, but
> does this have any consequences for compatibility? Do they need to be
> deprecated for a cycle first?
Well, yes. But I guess it's much simpler to just import them from
numpy, than repeating the code. Some "from numpy import triu, tril"
should do the job, I am just not sure where to put it.
> Does the "economic" versus "full" distinction still make sense when
> working with the reflectors?
No. The reflectors always have one fixed size. (Let me double
check that I got the right one...)
> IIUC, the two options you add are (1) get out the elementary
> reflections themselves, (2) get out the product of those elementary
> reflections with some matrix c. Wouldn't it make more sense to have
> just one option, for getting out the elementary reflections, and then
> have a separate function that let you compute the product of some
> elementary reflections with some matrix?
I thought about that too, and I am not sure.
I opted for the way I did it because I think it gives more readable code,
especially for beginners. You don't need to explain what elementary
reflectors are, and what to do with them. In theory, one could make
an entire object-oriented approach with a class that automatically does
the right job, but I think that's just overkill.
The problem that one might want to multiply with Q several times is
normally not an issue: in this case it is more efficient to calculate Q
and just use dot to do the multiplication. Only if Q is large and c is
small the internal multiplication is an advantage.
More information about the SciPy-Dev mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-dev/2011-August/016401.html","timestamp":"2014-04-16T17:13:59Z","content_type":null,"content_length":"4434","record_id":"<urn:uuid:0de915a6-f177-42f8-80b4-78aba6dd8539>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00230-ip-10-147-4-33.ec2.internal.warc.gz"} |
Evaporation of Bourbon from a Wooden Barrel
Flow through the wall of a permeable cylindrical object can be modeled as:
[tex] v_{R}=\frac{K}{\mu} \frac{\Delta p}{d}[/tex]
Where v is the velocity of the radial flow (i.e. out of the container), K the permeability, μ the viscosity of the fluid, and Δp/d the pressure gradient between inside and outside. Using the vapor
pressure of EtOH (6 kPa), a wall thickness of 2 cm, and a permeability of wood of 0.1 milliDarcy (a crude estimate based on
). Then, convert v to a volume flow Q = v* area of barrel- I estimated 0.3 liters/hr, which is *way* too high.
Alternatively, you can use the measured losses as a measurement of K. A barrel holds about 180 liters and is aged for (say) 7 years- a 30% loss is 9*10^-4 liters/hr, so the permeability is likely
closer to 0.1 microDarcy. | {"url":"http://www.physicsforums.com/showthread.php?p=3903596","timestamp":"2014-04-20T03:21:23Z","content_type":null,"content_length":"31738","record_id":"<urn:uuid:37074eaf-1e3f-48ee-8ce9-b4668df8659b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
S.: Combinators for logic programming
Results 1 - 10 of 16
- In Proceedings Haskell Workshop , 2000
"... We describe how to embed a simple typed functional logic programming language in Haskell. The embedding is a natural extension of the Prolog embedding by Seres and Spivey [16]. To get full
static typing we need to use the Haskell extensions of quantified types and the ST-monad. 1 Introduction O ..."
Cited by 18 (0 self)
Add to MetaCart
We describe how to embed a simple typed functional logic programming language in Haskell. The embedding is a natural extension of the Prolog embedding by Seres and Spivey [16]. To get full static
typing we need to use the Haskell extensions of quantified types and the ST-monad. 1 Introduction Over the last ten to twenty years, there have been many attempts to combine the flavours of logic and
functional programming [3]. Among these, the most well-known ones are the programming languages Curry [4], Escher [13], and Mercury [14]. Curry and Escher can be seen as variations on Haskell, where
logic programming features are added. Mercury can be seen as an improvement of Prolog, where types and functional programming features are added. All three are completely new and autonomous
languages. Defining a new programming language has as a drawback for the developer to build a new compiler, and for the user to learn a new language. A different approach which has gained a lot of
popularity ...
, 2003
"... In this article we introduce a comprehensive set of algebraic laws for rool, a language similar to sequential Java but with a copy semantics. We present a few laws of commands, but focus on the
object-oriented features of the language. We show that this set of laws is complete in the sense that ..."
Cited by 16 (3 self)
Add to MetaCart
In this article we introduce a comprehensive set of algebraic laws for rool, a language similar to sequential Java but with a copy semantics. We present a few laws of commands, but focus on the
object-oriented features of the language. We show that this set of laws is complete in the sense that it is sufficient to reduce an arbitrary rool program to a normal form expressed in a restricted
subset of the rool operators. We also
- Department of Computer Science, University of Utrecht , 1999
"... The distinctive merit of the declarative reading of logic programs is the validity ofallthelaws of reasoning supplied by the predicate calculus with equality. Surprisingly many of these laws are
still valid for the procedural reading � they can therefore be used safely for algebraic manipulation, pr ..."
Cited by 16 (4 self)
Add to MetaCart
The distinctive merit of the declarative reading of logic programs is the validity ofallthelaws of reasoning supplied by the predicate calculus with equality. Surprisingly many of these laws are
still valid for the procedural reading � they can therefore be used safely for algebraic manipulation, program transformation and optimisation of executable logic programs. This paper lists a number
of common laws, and proves their validity for the standard (depth- rst search) procedural reading of Prolog. They also hold for alternative search strategies, e.g. breadth- rst search. Our proofs of
the laws are based on the standard algebra of functional programming, after the strategies have been given a rather simple implementation in Haskell. 1
"... this article is devoted to nding a suitable denition for this operator, and verifying that it is associative, as it must be if we are to write expressions like test n ^ ..."
Cited by 10 (0 self)
Add to MetaCart
this article is devoted to nding a suitable denition for this operator, and verifying that it is associative, as it must be if we are to write expressions like test n ^
- In Proceedings of the 2nd Asian Workshop on Programming Languages and Systems , 2001
"... It has been shown that non-determinism, both angelic and demonic, can be encoded in a functional language in di#erent representation of sets. In this paper we see quantum programming as a
special kind of non-deterministic programming where negative probabilities are allowed. ..."
Cited by 9 (0 self)
Add to MetaCart
It has been shown that non-determinism, both angelic and demonic, can be encoded in a functional language in di#erent representation of sets. In this paper we see quantum programming as a special
kind of non-deterministic programming where negative probabilities are allowed.
- IN PROC. 9TH INT. CONF. ON FUNCTIONAL PROGRAMMING , 2004
"... Past attempts to relate two well-known models of backtracking computataion have met with only limited success. We relate these two models using logical relations. We accommodate higher-order
values and in nite computations. We also provide an operational semantics, and we prove it adequate for both ..."
Cited by 8 (0 self)
Add to MetaCart
Past attempts to relate two well-known models of backtracking computataion have met with only limited success. We relate these two models using logical relations. We accommodate higher-order values
and in nite computations. We also provide an operational semantics, and we prove it adequate for both models.
, 2003
"... We give a deterministic, big-step operational semantics for the essential core of the Curry language, including higher-order functions, call-by-need evaluation, nondeterminism, narrowing, and
residuation. The semantics is structured in modular monadic style, and is presented in the form of an execut ..."
Cited by 7 (2 self)
Add to MetaCart
We give a deterministic, big-step operational semantics for the essential core of the Curry language, including higher-order functions, call-by-need evaluation, nondeterminism, narrowing, and
residuation. The semantics is structured in modular monadic style, and is presented in the form of an executable interpreter written in Haskell. It uses monadic formulations of state,
non-determinism, and resumptionbased concurrency.
- In Proc. of 2nd International ACM-SIGPLAN Conference on Principles and practice of Declarative Programming (PPDP’00 , 2000
"... This paper gives denotational models for three logic programming languages of progressive complexity, adopting the \logic programming without logic" approach. The rst language is the control ow
kernel of sequential Prolog, featuring sequential composition and backtracking. A committedchoice concurre ..."
Cited by 4 (3 self)
Add to MetaCart
This paper gives denotational models for three logic programming languages of progressive complexity, adopting the \logic programming without logic" approach. The rst language is the control ow
kernel of sequential Prolog, featuring sequential composition and backtracking. A committedchoice concurrent logic language with parallel composition (parallel AND) and don't care nondeterminism is
studied next. The third language is the core of Warren's basic Andorra model, combining parallel composition and don't care nondeterminism with two forms of don't know nondeterminism (interpreted as
sequential and parallel OR) and favoring deterministic over nondeterministic computation. We show that continuations are a valuable tool in the analysis and design of semantic models for both
sequential and parallel logic programming. Instead of using mathematical notation, we use the functional programming language Haskell as a metalanguage for our denotational semantics, and employ
monads in order to facilitate the transition from one language under study to another. Keywords Parallel logic programming, basic Andorra model, denotational semantics, continuations, monads,
Haskell. 1.
- PROCEEDINGS OF A SYMPOSIUM IN CELEBRATION OF THE WORK OF , 1999
"... this paper, we seek to develop a logic for logic programs that takes into account this procedural aspect of their behaviour under dierent search strategies, emphasizing algebraic properties that
are common to all search strategies. ..."
Cited by 4 (2 self)
Add to MetaCart
this paper, we seek to develop a logic for logic programs that takes into account this procedural aspect of their behaviour under dierent search strategies, emphasizing algebraic properties that are
common to all search strategies. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=535660","timestamp":"2014-04-19T02:30:04Z","content_type":null,"content_length":"34209","record_id":"<urn:uuid:5902e939-30de-4ba0-813e-a71eecc3d8be>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00224-ip-10-147-4-33.ec2.internal.warc.gz"} |
Denotational Semantics - Imperative Languages
Chapter 5 ________________________________________________________
Imperative Languages
Most sequential programming languages use a data structure that exists independently of any program in the language. The data structure isn’t explicitly mentioned in the language’s syntax, but it is possible to build phrases that access it and update it. This data structure is called the store, and languages that utilize stores are called imperative. The fundamental example of a store is a computer’s primary memory, but file stores and data bases are also examples. The store and a computer program share an intimate relationship: 1. The store is critical to the evaluation of a phrase in a program. A phrase is understood in terms of how it handles the store, and the absence of a proper store makes the phrase nonexecutable. The store serves as a means of communication between the different phrases in the program. Values computed by one phrase are deposited in the store so that another phrase may use them. The language’s sequencing mechanism establishes the order of communication. The store is an inherently ‘‘large’’ argument. Only one copy of store exists at any point during the evaluation.
In this chapter, we study the store concept by examining three imperative languages. You may wish to study any subset of the three languages. The final section of the chapter presents some variants on the store and how it can be used.
5.1 A LANGUAGE WITH ASSIGNMENT _____________________________________ The first example language is a declaration-free Pascal subset. A program in the language is a sequence of commands. Stores belong to the domain Store and serve as arguments to the valuation function: C: Command → Store_ → Store_ | | The purpose of a command is to produce a new store from its store argument. However, a command might not terminate its actions upon the store— it can ‘‘loop.’’ The looping of a command [[C]] with store s has semantics C[[C]]s = −. (This explains why the Store domain is | lifted: − is a possible answer.) The primary property of nontermination is that it creates a | nonrecoverable situation. Any commands [[C']] following [[C]] in the evaluation sequence will not evaluate. This suggests that the function C[[C']]: Store_ → Store_ be strict; that is, given a | | nonrecoverable situation, C[[C']] can do nothing at all. Thus, command composition is C[[C1 ;C2 ]] = C[[C2 ]] ° C[[C1 ]]. Figure 5.1 presents the semantic algebras for the imperative language. The Store domain models a computer store as a mapping from the identifiers of the language to their values. The
5.1 A Language with Assignment
Figure 5.1 ____________________________________________________________________________ ________________________________________________________________________ I. Truth Values Domain t∈ Tr= I B Operations true, false : Tr not : Tr→ Tr II. Identifiers Domain i∈ Id= Identifier III. Natural Numbers Domain n∈ Nat= IN Operations zero, one, . . . : Nat plus : Nat× Nat→ Nat equals : Nat× Nat→ Tr IV. Store Domain s∈ Store= Id→ Nat Operations newstore: Store newstore= λi. zero access: Id→ Store→ Nat access= λi. λs. s(i) update: Id→ Nat→ Store→ Store update= λi.λn.λs. [ i |→ n]s ____________________________________________________________________________
operations upon the store include a constant for creating a new store, an operation for accessing a store, and an operation for placing a new value into a store. These operations are exactly those described in Example 3.11 of Chapter 3. The language’s definition appears in Figure 5.2. The valuation function P states that the meaning of a program is a map from an input number to an answer number. Since nontermination is possible, − is also a possible | ‘‘answer,’’ hence the rightmost codomain of P is Nat_ rather than just Nat. The equation for P | says that the input number is associated with identifier [[A]] in a new store. Then the program body is evaluated, and the answer is extracted from the store at [[Z]]. The clauses of the C function are all strict in their use of the store. Command composition works as described earlier. The conditional commands are choice functions. Since the
Imperative Languages
Figure 5.2 ____________________________________________________________________________ ________________________________________________________________________ Abstract syntax: P∈ Program C∈ Command E∈ Expression B∈ Boolean-expr I ∈ Identifier N∈ Numeral P ::= C. C ::= C1 ;C2 | if B then C | if B then C1 else C2 | I:=E | diverge E ::= E1 +E2 | I | N B ::= E1 =E2 | ¬B Semantic algebras: (defined in Figure 5.1) Valuation functions: P: Program→ Nat→ Nat_ | P[[C.]] = λn. let s= (update[[A]] n newstore) in let s = C[[C]]s in (access[[Z]] s ) ' ' C: Command→ Store_ → Store_ | | C[[C1 ;C2 ]] = λs. C[[C2 ]] (C[[C1 ]]s) _ C[[if B then C]] = λs. B[[B]]s→ C[[C]]s[] s _ C[[if B then C1 else C2 ]] = λs. B[[B]]s→ C[[C1 ]]s[] C[[C2 ]]s _ C[[I:=E]] = λs. update[[I]] (E[[E]]s) s _ C[[diverge]] = λs. − _ | E: Expression→ Store→ Nat E[[E1 +E2 ]] = λs. E[[E1 ]]s plus E[[E2 ]]s E[[I]] = λs. access [[I]] s E[[N]] = λs. N[[N]] B: Boolean-expr→ Store→ Tr B[[E1 =E2 ]] = λs. E[[E1 ]]s equals E[[E2 ]]s B[[¬B]] = λs. not(B[[B]]s) N: Numeral→ Nat (omitted) ____________________________________________________________________________
5.1 A Language with Assignment
expression (e1 → e2 [] e3 ) is nonstrict in arguments e2 and e3 , the value of C[[if B then C]]s is s when B[[B]]s is false, even if C[[C]]s=−. The assignment statement performs the expected | update; the [[diverge]] command causes nontermination. The E function also needs a store argument, but the store is used in a ‘‘read only’’ mode. E’s functionality shows that an expression produces a number, not a new version of store; the store is not updated by an expression. The equation for addition is stated so that the order of evaluation of [[E1 ]] and [[E2 ]] is not important to the final answer. Indeed, the two expressions might even be evaluated in parallel. A strictness check of the store is not needed, because C has already verified that the store is proper prior to passing it to E. Here is the denotation of a sample program with the input two: P[[Z:=1; if A=0 then diverge; Z:=3.]](two) = let s= (update[[A]] two newstore) in let s = C[[Z:=1; if A=0 then diverge; Z:=3]]s ' in access[[Z]] s
Since (update[[A]] two newstore) is ( [ [[A]]|→ two] newstore), that is, the store that maps [[A]] to two and all other identifiers to zero, the above expression simplifies to: let s = C[[Z:=1; if A=0 then diverge; Z:=3]] ( [ [[A]]|→ two] newstore) ' in access[[Z]] s
From here on, we use s1 to stand for ( [ [[A]]→ two] newstore). Working on the value bound to s' leads us to derive: C[[Z:=1; if A=0 then diverge; Z:=3]]s1 = (λs. C[[if A=0 then diverge; Z:=3]] (C[[Z:=1]]s))s1 _ The store s1 is a proper value, so it can be bound to s, giving: C[[if A=0 then diverge; Z:=3]] (C[[Z:=1]]s1 ) We next work on C[[Z:=1]]s1 : C[[Z:=1]]s1 = (λs. update[[Z]] (E[[1]]s) s) s1 _ = update[[Z]] (E[[1]]s1 ) s1 = update[[Z]] (N[[1]]) s1 = update[[Z]] one s1 = [ [[Z]]|→ one] [ [[A]]|→ two] newstore which we call s2 . Now: C[[if A=0 then diverge; Z:=3]]s2 = (λs. C[[Z:=3]] ((λs. B[[A=0]]s→ C[[diverge]]s[] s)s))s2 _ _ = C[[Z:=3]] ((λs. B[[A=0]]s→ C[[diverge]]s[] s)s2 ) _ = C[[Z:=3]] (B[[A=0]]s2 → C[[diverge]]s2 [] s2 ) Note that C[[diverge]]s2 = (λs. −)s2 = −, so nontermination is the result if the test has value | _ |
Imperative Languages
true. Simplifying the test, we obtain: B[[A=0]]s2 = (λs. E[[A]]s equals E[[0]]s)s2 = E[[A]]s2 equals E[[0]]s2 = (access[[A]] s2 ) equals zero Examining the left operand, we see that: access[[A]] s2 = s2 [[A]] = ( [ [[Z]]|→ one] [ [[A]]|→ two] newstore) [[A]] = ( [ [[A]]|→ two] newstore) [[A]] (why?) = two Thus, B[[A=0]]s2 = false, implying that C[[if A=0 then diverge]]s2 = s2 . Now: C[[Z:=3]]s2 = [ [[Z]]|→ three]s2 The denotation of the entire program is: let s = [ [[Z]]|→ three]s2 in access[[Z]] s ' ' = access[[Z]] [ [[Z]]|→ three]s2 = ( [ [[Z]]|→ three]s2 ) [[Z]] = three We obtain a much different denotation when the input number is zero: P[[Z:=1; if A=0 then diverge; Z:=3.]](zero) = let s = C[[Z:=1; if A=0 then diverge; Z:=3]]s3 in access[[Z]] s ' ' where s3 = [ [[A]]|→ zero] newstore. Simplifying the value bound to s' leads to: C[[Z:=1; if A=0 then diverge; Z:=3]]s3 = C[[if A=0 then diverge; Z:=3]]s4 where s4 = [ [[Z]]|→ one]s3 . As for the conditional, we see that: B[[A=0]]s4 → C[[diverge]]s4 [] s4 = true→ C[[diverge]]s4 [] s4 = C[[diverge]]s4 = (λs. −)s4 _ | =− | So the value bound to s' is C[[Z:=3]]−. But C[[Z:=3]]− = (λs. update[[Z]] (E[[3]]s) s)− = −. | | | | _ Because of the strict abstraction, the assignment isn’t performed. The denotation of the program is: let s = − in access[[Z]] s ' | '
5.1 A Language with Assignment
which simplifies directly to −. (Recall that the form (let x= e1 in e2 ) represents (λx. e2 )e1 .) The | _ undefined store forces the value of the entire program to be undefined. The denotational definition is also valuable for proving properties such as program equivalence. As a simple example, we show for distinct identifiers [[X]] and [[Y]] that the command C[[X:=0; Y:=X+1]] has the same denotation as C[[Y:=1; X:=0]]. The proof strategy goes as follows: since both commands are functions in the domain Store_ → Store_ , it suffices to | | prove that the two functions are equal by showing that both produce same answers from same arguments. (This is because of the principle of extensionality mentioned in Section 3.2.3.) First, it is easy to see that if the store argument is −, both commands produce the answer −. If | | the argument is a proper value, let us call it s and simplify: C[[X:=0; Y:=X+1]]s = C[[Y:=X+1]] (C[[X:=0]]s) = C[[Y:=X+1]] ( [ [[X]]|→ zero]s) = update[[Y]] (E[[X+1]] ( [ [[X]]|→ zero]s)) ( [ [[X]]|→ zero]s) = update[[Y]] one [ [[X]]|→ zero]s = [ [[Y]]|→ one] [ [[X]]|→ zero]s Call this result s1 . Next: C[[Y:=1; X:=0]]s = C[[X:=0]] (C[[Y:=1]]s) = C[[X:=0]] ( [ [[Y]]|→ one]s) = [ [[X]]|→ zero] [ [[Y]]|→ one]s Call this result s2 . The two values are defined stores. Are they the same store? It is not possible to simplify s1 into s2 with the simplification rules. But, recall that stores are themselves functions from the domain Id→ Nat. To prove that the two stores are the same, we must show that each produces the same number answer from the same identifier argument. There are three cases to consider: 1. 2. 3. The argument is [[X]]: then s1 [[X]] = ( [ [[Y]]|→ one] [ [[X]]|→ zero]s) [[X]] = ( [ [[X]]|→ zero]s) [[X]] = zero; and s2 [[X]] = ( [ [[X]]|→ zero] [ [[Y]]|→ one]s) [[X]] = zero. The argument is [[Y]]: then s1 [[Y]] = ( [ [[Y]]|→ one] [ [[X]]|→ zero]s) [[Y]] = one; and s2 [[Y]] = ( [ [[X]]|→ zero] [ [[Y]]|→ one]s) [[Y]] = ( [ [[Y]]|→ one]s) [[Y]] = one. The argument is some identifier [[I]] other than [[X]] or [[Y]]: then s1 [[I]] = s[[I]] and s2 [[I]] = s[[I]].
Since s1 and s2 behave the same for all arguments, they are the same function. This implies that C[[X:=0; Y:=X+1]] and C[[Y:=1; X:=0]] are the same function, so the two commands are equivalent. Many proofs of program properties require this style of reasoning.
Imperative Languages
5.1.1 Programs Are Functions ________________________________________________ The two sample simplification sequences in the previous section were operational-like: a program and its input were computed to an answer. This makes the denotational definition behave like an operational semantics, and it is easy to forget that functions and domains are even involved. Nonetheless, it is possible to study the denotation of a program without supplying sample input, a feature that is not available to operational semantics. This broader view emphasizes that the denotation of a program is a function. Consider again the example [[Z:=1; if A=0 then diverge; Z:=3]]. What is its meaning? It’s a function from Nat to Nat_ : | P[[Z:=1; if A=0 then diverge; Z:=3.]] = λn. let s = update[[A]] n newstore in let s = C[[Z:=1; if A=0 then diverge; Z:=3]]s ' in access[[Z]] s ' = λn. let s= update[[A]] n newstore in let s = (λs. (λs. C[[Z:=3]] (C[[if A=0 then diverge]]s))s)(C[[Z:=1]]s) ' _ _ in access[[Z]] s ' = λn. let s= update[[A]] n newstore in let s = (λs. (λs. update[[Z]] three s) ' _ _ ((λs. (access[[A]] s) equals zero→ (λs. −)s [] s)s)) _ _ | ((λs. update[[Z]] one s)s) _ in access[[Z]] s
which can be restated as: λn. let s= update[[A]] n newstore in let s = (let s 1 = update[[Z]] one s in ' ' let s 2 = (access[[A]] s 1 ) equals zero→ (λs. −)s 1 [] s 1 _ | ' ' ' ' in update[[Z]] three s'2 ) in access[[Z]] s
The simplifications taken so far have systematically replaced syntax constructs by their function denotations; all syntax pieces are removed (less the identifiers). The resulting expression denotes the meaning of the program. (A comment: it is proper to be concerned why a phrase such as E[[0]]s was simplified to zero even though the value of the store argument s is unknown. The simplification works because s is an argument bound to λs. Any undefined _ stores are ‘‘trapped’’ by λs. Thus, within the scope of the λs, all occurrences of s represent _ _ defined values.) The systematic mapping of syntax to function expressions resembles compiling. The function expression certainly does resemble compiled code, with its occurrences of tests, accesses, and updates. But it is still a function, mapping an input number to an output number. As it stands, the expression does not appear very attractive, and the intuitive meaning of the original program does not stand out. The simplifications shall proceed further. Let s0 be (update[[A]] n newstore). We simplify to:
5.1.1 Programs Are Functions
λn. let s' = (let s'1 = update[[Z]] one s0 in let s 2 = (access[[A]] s 1 ) equals zero→ (λs. −)s 1 [] s 1 _ | ' ' ' ' in update[[Z]] three s'2 ) in access[[Z]] s
We use s1 for (update[[Z]] one s0 ); the conditional in the value bound to s'2 is: (access[[A]] s1 ) equals zero→ − [] s1 | = n equals zero→ − [] s1 | The conditional can be simplified no further. We can make use of the following property; ‘‘for e2 ∈ Store_ such that e2 ≠ −, let s = (e1 → − [] e2 ) in e3 equals e1 → − [] [e2 /s]e3 .’’ (The | | | | proof is left as an exercise.) It allows us to state that: let s 2 = (n equals zero → − [] s1 ) in update[[Z]] three s 2 | ' ' = n equals zero→ − [] update[[Z]] three s1 | This reduces the program’s denotation to: λn. let s = (n equals zero→ − [] update[[Z]] three s1 ) in access[[Z]] s | ' ' The property used above can be applied a second time to show that this expression is just: λn. n equals zero→ − [] access[[Z]] (update[[Z]] three s1 ) | which is: λn. n equals zero→ − [] three | which is the intuitive meaning of the program! This example points out the beauty in the denotational semantics method. It extracts the essence of a program. What is startling about the example is that the primary semantic argument, the store, disappears completely, because it does not figure in the input-output relation that the program describes. This program does indeed denote a function from Nat to Nat_ . | Just as the replacement of syntax by function expressions resembles compilation, the internal simplification resembles compile-time code optimization. When more realistic languages are studied, such ‘‘optimizations’’ will be useful for understanding the nature of semantic arguments.
5.2 AN INTERACTIVE FILE EDITOR _ ______________________________________ _ The second example language is an interactive file editor. We define a file to be a list of records, where the domain of records is taken as primitive. The file editor makes use of two levels of store: the primary store is a component holding the file edited upon by the user, and the secondary store is a system of text files indexed by their names. The domains are listed in Figure 5.3. The edited files are values from the Openfile domain. An opened file r1 , r2 , . . . , rlast is
Imperative Languages
Figure 5.3 ____________________________________________________________________________ ________________________________________________________________________ IV. Text file Domain f ∈ File= Record ∗
V. File system Domain s∈ File-system = Id→ File Operations access: Id× File-system → File access= λ(i,s). s(i) update: Id× File× File-system → File-system update= λ(i,f,s). [ i |→ f ]s VI. Open file Domain p∈ Openfile = Record ∗ × Record ∗ Operations newfile: Openfile newfile= (nil,nil) copyin: File→ Openfile copyin= λf. (nil,f) copyout: Openfile→ File copyout= λp. ‘‘appends fst(p) to snd(p)— defined later’’ forwards: Openfile→ Openfile forwards= λ(front, back). null back→ (front, back) [] ((hd back) cons front, (tl back)) backwards: Openfile→ Openfile backwards= λ(front, back). null front→ (front, back) [] (tl front, (hd front) cons back) insert: Record× Openfile→ Openfile insert= λ(r, (front, back)). null back→ (front, r cons back) [] ((hd back) cons front), r cons (tl back)) delete: Openfile→ Openfile delete= λ(front, back). (front, (null back→ back [] tl back)) at-first-record: Openfile→ Tr at-first-record= λ(front, back). null front at-last-record : Openfile→ Tr at-last-record = λ(front, back). null back→ true [] (null (tl back)→ true [] false) isempty: Openfile→ Tr isempty= λ(front, back). (null front) and (null back) ____________________________________________________________________________
5.2 An Interactive File Editor
represented by two lists of text records; the lists break the file open in the middle: ri −1 . . . r2 r1 ri ri +1 . . . rlast
ri is the ‘‘current’’ record of the opened file. Of course, this is not the only representation of an opened file, so it is important that all operations that depend on this representation be grouped with the domain definition. There are a good number of them. Newfile represents a file with no records. Copyin takes a file from the file system and organizes it as: r1 r2 . . . rlast Record r1 is the current record of the file. Operation copyout appends the two lists back together. A definition of the operation appears in the next chapter. The forwards operation makes the record following the current record the new current record. Pictorially, for: ri −1 . . . r2 r1 a forwards move produces: ri ri −1 . . . r2 r1 ri +1 . . . rlast ri ri +1 rlast
Backwards performs the reverse operation. Insert places a record r behind the current record; an insertion of record r' produces: ri . . . r2 r1 r' ri +1 . . . rlast
The newly inserted record becomes current. Delete removes the current record. The final three operations test whether the first record in the file is current, the last record in the file is current, or if the file is empty. Figure 5.4 gives the semantics of the text editor. Since all of the file manipulations are done by the operations for the Openfile domain, the semantic equations are mainly concerned with trapping unreasonable user requests. They also model the editor’s output log, which echoes the input commands and reports errors. The C function produces a line of terminal output and a new open file from its open file argument. For user commands such as [[newfile]], the action is quite simple. Others, such as [[moveforward]], can generate error messages, which are appended to the output log. For example: C[[delete]](newfile) = let (k ,p ) = isempty (newfile) → (''error: file is empty'', newfile) ' '
Imperative Languages
Figure 5.4 ____________________________________________________________________________ ________________________________________________________________________ Abstract syntax: P∈ Program-session S∈ Command-sequence C∈ Command R∈ Record I ∈ Identifier P ::= edit I cr S S ::= C cr S | quit C ::= newfile | moveforward | moveback | insert R | delete Semantic algebras: I. Truth values Domain t∈ Tr Operations true, false : Tr and : Tr× Tr→ Tr II. Identifiers Domain i∈ Id= Identifier III. Text records Domain r∈ Record IV. - VI. defined in Figure 5.3 VII. Character Strings (defined in Example 3.3 of Chapter 3) VIII. Output terminal log Domain l∈ Log= String∗ Valuation functions: P: Program-session → File-system → (Log× File-system) P[[edit I cr S]] = λs. let p= copyin(access( [[I]], s)) in (''edit I'' cons fst(S[[S]]p), update( [[I]], copyout(snd(S[[S]]p)), s)) S: Command-sequence → Openfile→ (Log× Openfile) S[[C cr S]] = λp. let (l ,p ) = C[[C]]p in ((l cons fst(S[[S]]p )), snd(S[[S]]p )) ' ' ' ' ' S[[quit]] = λp. (''quit'' cons nil, p)
5.2 An Interactive File Editor
Figure 5.4 (continued) ____________________________________________________________________________ ________________________________________________________________________ C: Command → Openfile→ (String× Openfile) C[[newfile]] = λp. (''newfile'', newfile) C[[moveforward]] = λp. let (k ,p ) = isempty(p) → (''error: file is empty'', p) ' ' [] ( at-last-record(p) → (''error: at back already'', p) [] ('''', forwards(p)) ) in (''moveforward'' concat k , p )) ' ' C[[moveback]] = λp. let (k ,p ) = isempty (p) → (''error: file is empty'', p) ' ' [] ( at-first-record(p) → (''error: at front already'', p) ) [] ('''', backwards(p)) in (''moveback'' concat k , p ) ' ' C[[insert R]] = λp. (''insert R'', insert(R[[R]], p)) C[[delete]] = λp. let (k ,p ) = isempty(p) → (''error: file is empty'', p) ' ' [] ('''', delete(p)) in (''delete'' concat k , p )
' '
[] ('''', delete(newfile)) in (''delete'' concat k , p )) ' ' = let (k ,p ) = (''error: file is empty'', newfile) ' ' in (''delete'' concat k , p ) ' ' = (''delete'' concat ''error: file is empty'', newfile) = (''delete error: file is empty'', newfile) The S function collects the log messages into a list. S[[quit]] builds the very end of this list. The equation for S[[C cr S]] deserves a bit of study. It says to: 1. 2. 3. Evaluate C[[C]]p to obtain the next log entry l plus the updated open file p . ' ' Cons l' to the log list and pass p' onto S[[S]]. Evaluate S[[S]]p' to obtain the meaning of the remainder of the program, which is the rest of the log output plus the final version of the updated open file.
The two occurrences of S[[S]]p' may be a bit confusing. They do not mean to ‘‘execute’’ [[S]] twice— semantic definitions are functions, and the operational analogies are not always exact. The expression has the same meaning as: let (l , p ) = C[[C]]p in let (l , p ) = S[[S]]p in (l cons l , p ) ' ' '' '' ' ' '' '' The P function is similar in spirit to S. (One last note: there is a bit of cheating in writing ''edit I'' as a token, because [[I]] is actually a piece of abstract syntax tree. A coercion function should be used to convert abstract syntax forms to string forms. This is of little importance
Imperative Languages
and is omitted.) A small example shows how the log successfully collects terminal output. Let [[A]] be the name of a nonempty file in the file system s0 . P[[edit A cr moveback cr delete cr quit]]s0 = (''edit A'' cons fst(S[[moveback cr delete cr quit]]p0 ), update([[A]], copyout(snd(S[[moveback cr delete cr quit]]p0 ), s0 )) where p0 = copyin(access( [[A]],s0 )) Already, the first line of terminal output is evident, and the remainder of the program can be simplified. After a number of simplifications, we obtain: (''edit A'' cons ''moveback error: at front already'' cons fst(S[[delete cr quit]]p0 )), update([[A]], copyout(snd(S[[delete cr quit]]p0 ))) ) as the second command was incorrect. S[[delete cr quit]]p0 simplifies to a pair (''delete quit'', p1 ), for p1 = delete(p0 ), and the final result is: (''edit A moveback error: at front already delete quit'', update([[A]], copyout(p1 ), s0 ))
5.2.1 Interactive Input and Partial Syntax ______________________________________ A user of a file editor may validly complain that the above definition still isn’t realistic enough, for interactive programs like text editors do not collect all their input into a single program before parsing and processing it. Instead, the input is processed incrementally— one line at a time. We might model incremental output by a series of abstract syntax trees. Consider again the sample program [[edit A cr moveback cr delete cr quit]]. When the first line [[edit A cr]] is typed at the terminal, the file editor’s parser can build an abstract syntax tree that looks like Diagram 5.1: (5.1) P edit A cr Ω (5.2) edit A cr P S C moveback cr Ω
The parser knows that the first line of input is correct, but the remainder, the command sequence part, is unknown. It uses [[Ω ]] to stand in place of the command sequence that follows. The tree in Diagram 5.1 can be pushed through the P function, giving P[[edit A cr Ω ]]s0 = (''edit A'' cons fst(S[[Ω ]]p0 ), update([[A]], copyout(snd(S[[Ω ]]p0 ), s0 )))
5.2.1 Interactive Input and Partial Syntax
The processing has started, but the entire log and final file system are unknown. When the user types the next command, the better-defined tree in Diagram 5.2 is built, and the meaning of the new tree is: P[[edit A cr moveback cr Ω]] = (''edit A'' cons ''moveback error: at front already'' cons fst(S[[Ω ]]p0 ), update([[A]], copyout(snd(S[[Ω ]]p0 )), s0 )) This denotation includes more information than the one for Diagram 5.1; it is ‘‘better defined.’’ The next tree is Diagram 5.3: (5.3) edit A cr C moveback cr C Ω P S S
delete cr
The corresponding semantics can be worked out in a similar fashion. An implementation strategy is suggested by the sequence: an implementation of the valuation function executes under the control of the editor’s parser. Whenever the parser obtains a line of input, it inserts it into a partial abstract syntax tree and calls the semantic processor, which continues its logging and file manipulation from the point where it left off, using the new piece of abstract syntax. This idea can be formalized in an interesting way. Each of the abstract syntax trees was better defined than its predecessor. Let’s use the symbol |− to describe this relationship. − − Thus, (5.1) |− (5.2) |− (5.3) |− . . . holds for the example. Similarly, we expect that P[[(5.3)]]s0 − − − contains more answer information than P[[(5.2)]]s0 , which itself has more information than P[[(5.1)]]s0 . If we say that the undefined value − has the least answer information possible, we | can define S[[Ω ]]p=− for all arguments p. The − value stands for undetermined semantic | | information. Then we have that: (''edit A'' cons −, −) | | |− (''edit A'' cons ''moveback error: at front already'' cons | , | ) − − − |− (''edit A'' cons ''moveback error: at front already'' cons ''delete'' cons | , | ) − − − − |− − ... Each better-defined partial tree gives better-defined semantic information. We use these ideas in the next chapter for dealing with recursively defined functions.
Imperative Languages
5.3 A DYNAMICALLY TYPED LANGUAGE WITH INPUT AND OUTPUT ________________________________________________ The third example language is an extension of the one in Section 5.1. Languages like SNOBOL allow variables to take on values from different data types during the course of evaluation. This provides flexibility to the user but requires that type checking be performed at runtime. The semantics of the language gives us insight into the type checking. Input and output are also included in the example. Figure 5.5 gives the new semantic algebras needed for the language. The value domains that the language uses are the truth values Tr and the natural numbers Nat. Since these values can be assigned to identifiers, a domain: Storable-value = Tr + Nat is created. The + domain builder attaches a ‘‘type tag’’ to a value. The Store domain becomes: Store= Id→ Storable-value The type tags are stored with the truth values and numbers for later reference. Since storable values are used in arithmetic and logical expressions, type errors are possible, as in an attempt to add a truth value to a number. Thus, the values that expressions denote come from the domain: Figure 5.5 ____________________________________________________________________________ ________________________________________________________________________ V. Values that may be stored Domain v ∈ Storable-value = Tr+ Nat VI. Values that expressions may denote Domain x ∈ Expressible-value = Storable-value + Errvalue where Errvalue= Unit Operations check-expr : (Store→ Expressible-value ) × (Storable-value → Store→ Expressible-value) → (Store→ Expressible-value) f1 check-expr f2 = λs. cases (f1 s) of isStorable-value(v)→ (f2 v s) [] isErrvalue()→ inErrvalue() end VII. Input buffer Domain i ∈ Input= Expressible-value ∗ Operations get-value : Input→ (Expressible-value × Input) get-value = λi. null i→ (inErrvalue(), i) [] (hd i, tl i)
5.3 A Dynamically Typed Language with Input and Output
Figure 5.5 (continued) ____________________________________________________________________________ ________________________________________________________________________ VIII. Output buffer Domain o ∈ Output= (Storable-value + String)∗ Operations empty: Output empty= nil put-value : Storable-value × Output→ Output put-value = λ(v,o). inStorable-value(v) cons o put-message : String× Output→ Output put-message = λ(t,o). inString(t) cons o IX. Store Domain s ∈ Store=Id → Storable-value Operations newstore : Store access: Id→ Store→ Storable-value update: Id→ Storable-value → Store→ Store X. Program State Domain a ∈ State= Store× Input× Output XI. Post program state Domain z ∈ Post-state = OK+ Err where OK= State and Err= State Operations check-result : (Store→ Expressible-value) × (Storable-value → State→ Post-state_ ) | → (State→ Post-state_ ) | f check-result g= λ(s,i,o). cases (f s) of isStorable-value(v)→ (g v (s,i,o)) [] isErrvalue()→ inErr(s, i, put-message(''type error'', o)) end check-cmd : (State→ Post-state_ ) × (State→ Post-state_ )→ (State→ Post-state_ ) | | | h1 check-cmd h2 = λa. let z = (h1 a) in cases z of isOK(s,i,o)→ h2 (s,i,o) [] isErr(s,i,o)→ z end ____________________________________________________________________________
Imperative Languages
Expressible-value = Storable-value + Errvalue where the domain Errvalue = Unit is used to denote the result of a type error. Of interest is the program state, which is a triple of the store and the input and output buffers. The Post-state domain is used to signal when an evaluation is completed successfully and when a type error occurs. The tag attached to the state is utilized by the checkcmd operation. This operation is the sequencing operation for the language and is represented in infix form. The expression (C[[C1 ]] check-cmd C[[C2 ]]) does the following: 1. 2. It gives the current state a to C[[C1 ]], producing a post-state z = C[[C1 ]]a. If z is a proper state a', and then, if the state component is OK, it produces C[[C2 ]]a'. If z is erroneous, C[[C2 ]] is ignored (it is ‘‘branched over’’), and z is the result.
A similar sequencing operation, check-result, sequences an expression with a command. For example, in an assignment [[I:=E]], [[E]]’s value must be determined before a store update can occur. Since [[E]]’s evaluation may cause a type error, the error must be detected before the update is attempted. Operation check-result performs this action. Finally, check-expr performs error trapping at the expression level. Figure 5.6 shows the valuation functions for the language. You are encouraged to write several programs in the language and derive their denotations. Notice how the algebra operations abort normal evaluation when type errors occur. The intuition behind the operations is that they represent low-level (even hardware-level) fault detection and branching mechanisms. When a fault is detected, the usual machine action is a single branch out of the program. The operations defined here can only ‘‘branch’’ out of a subpart of the function expression, but since all type errors are propagated, these little branches chain together to form a branch out of the entire program. The implementor of the language would take note of this property and produce full jumps on error detection. Similarly, the inOK and inErr tags would not be physically implemented, as any running program has an OK state, and any error branch causes a change to the Err state.
5.4 ALTERING THE PROPERTIES OF STORES _ _____________________________ _ The uses of the store argument in this chapter maintain properties 1-3 noted in the introduction to this chapter. These properties limit the use of stores. Of course, the properties are limiting in the sense that they describe typical features of a store in a sequential programming language. It is instructive to relax each of restrictions 1, 3, and 2 in turn and see what character of programming languages result.
5.4.1 Delayed Evaluation _ ___________________________________________________ _ Call-by-value (argument first) simplification is the safe method for rewriting operator, argument combinations when strict functions are used. This point is important, for it suggests that an implementation of the strict function needs an evaluated argument to proceed. Similarly,
5.4.1 Delayed Evaluation
Figure 5.6 ____________________________________________________________________________ ________________________________________________________________________ Abstract syntax: P∈ Program C∈ Command E∈ Expression I ∈ Id N∈ Numeral P ::= C. C ::= C1 ;C2 | I:=E | if E then C1 else C2 | read I | write E | diverge E ::= E1 +E2 | E1 =E2 | ¬E | (E) | I | N | true Semantic algebras: I. Truth values (defined in Figure 5.1) II. Natural numbers (defined in Figure 5.1) III. Identifiers (defined in Figure 5.1) IV. Character strings (defined in Example 3.5 of Chapter 3) V. - XI. (defined in Figure 5.5) Valuation functions: P: Program→ Store→ Input→ Post-state_ | P[[C.]] = λs.λi. C[[C]] (s, i, empty) C: Command→ State→ Post-state_ | C[[C1 ;C2 ]] = C[[C1 ]] check-cmd C[[C2 ]] C[[I:=E]] = E[[E]] check-result (λv.λ(s,i,o). inOK((update[[I]] v s), i, o)) C[[if E then C1 else C2 ]] = E[[E]] check-result (λv.λ(s,i,o). cases v of isTr(t)→ (t→ C[[C1 ]] [] C[[C2 ]] )(s,i,o) [] isNat(n)→ inErr(s,i, put-message(''bad test'', o)) end) C[[read I]] = λ(s,i,o). let (x,i ) = get-value(i) in ' cases x of isStorable-value(v) → inOK((update[[I]] v s), i , o) ' [] isErrvalue() → inErr(s, i , put-message(''bad input'', o)) end ' C[[write E]] = E[[E]] check-result (λv.λ(s,i,o). inOK(s, i, put-value(v,o))) C[[diverge]] = λa. − |
Imperative Languages
Figure 5.6 (continued) ____________________________________________________________________________ ________________________________________________________________________ E: Expression→ Store→ Expressible-value E[[E1 +E2 ]] = E[[E1 ]] check-expr (λv. cases v of isTr(t)→ λs. inErrvalue() [] isNat(n)→ E[[E2 ]] check-expr (λv .λs. cases v of ' ' isTr(t )→ inErrvalue() ' [] isNat(n )→ inStorable-value(inNat(n plus n )) end) ' ' end) E[[E1 =E2 ]] = ‘‘similar to above equation’’ E[[¬E]] = E[[E]] check-expr (λv.λs. cases v of isTr(t)→ inStorable-value(inTr(not t)) [] isNat(n)→ inErrvalue() end) E[[(E)]] = E[[E]] E[[I]] = λs. inStorable-value(access [[I]] s) E[[N]] = λs. inStorable-value(inNat(N[[N]])) E[[true]] = λs. inStorable-value(inTr(true)) N:Numeral→ Nat (omitted) ____________________________________________________________________________
call-by-name (argument last) simplification is the safe method for handling arguments to nonstrict functions. Here is an example: consider the nonstrict function f= (λx. zero) of domain Nat_ → Nat_ . If f is given an argument e whose meaning is −, then f(e) is zero. Argument e’s | | | simplification may require an infinite number of steps, for it represents a nonterminating evaluation. Clearly, e should not be simplified if given to a nonstrict f. The Store-based operations use only proper arguments and a store can only hold values that are proper. Let’s consider how stores might operate with improper values. First, say that expression evaluation can produce both proper and improper values. Alter the Store domain to be Store= Id→ Nat_ . Now improper values may be stored. Next, adjust the update operation | to be: update : Id→ Nat_ → Store→ Store, update= λi.λn.λs. [ i |→ n]s. An assignment state| ment uses update to store the value of an expression [[E]] into the store. If [[E]] represents a ‘‘loop forever’’ situation, then E[[E]]s=−. But, since update is nonstrict in its second argu| ment, (update [[I]] (E[[E]]s) s) is defined. From the operational viewpoint, unevaluated or partially evaluated expressions may be stored into s. The form E[[E]]s need not be evaluated until it is used; the arrangement is called delayed (or lazy) evaluation. Delayed evaluation provides the advantage that the only expressions evaluated are the ones that are actually needed for
5.4.1 Delayed Evaluation
computing answers. But, once E[[E]]’s value is needed, it must be determined with respect to the store that was active when [[E]] was saved. To understand this point, consider this code: begin X:=0; Y:=X+1; X:=4 resultis Y where the block construct is defined as: K: Block→ Store_ → Nat_ | | K[[begin C resultis E]] = λs. E[[E]] (C[[C]]s) _ (Note: E now has functionality E : Expression → Store_ → Nat_ , and it is strict in its store | | argument.) At the final line of the example, the value of [[Y]] must be determined. The semantics of the example, with some proper store s0 , is: K[[begin X:=0; Y:=X+1; X:=4 resultis Y]]s0 = E[[Y]] (C[[X:=0; Y:=X+1; X:=4]]s0 ) = E[[Y]] (C[[Y:=X+1; X:=4]] (C[[X:=0]]s0 )) = E[[Y]] (C[[Y:=X+1; X:=4]] (update[[X]] (E[[0]]s0 ) s0 )) At this point, (E[[0]]s0 ) need not be simplified; a new, proper store, s1 = (update[[X]] E[[0]]s0 s0 ) is defined regardless. Continuing through the other two commands, we obtain: s3 = update[[X]] (E[[4]]s2 ) s2 where s2 = update[[Y]] (E[[X+1]]s1 ) s1 and the meaning of the block is: E[[Y]]s3 = access[[Y]] s3 = E[[X+1]]s1 = E[[X]]s1 plus one = (access[[X]] s1 ) plus one) = E[[0]]s0 plus one = zero plus one= one The old version of the store, version s1 , must be retained to obtain the proper value for [[X]] in [[X+1]]. If s3 was used instead, the answer would have been the incorrect five. Delayed evaluation can be carried up to the command level by making the C, E, and K functions nonstrict in their store arguments. The surprising result is that only those commands that have an effect on the output of a program need be evaluated. Convert all strict abstractions (λs. e) in the equations for C in Figure 5.2 to the nonstrict forms (λs. e). Redefine access and _ update to be:
Imperative Languages
access : Identifier → Store_ → Nat_ | | access = λi.λ s. s(i) _ update : Identifier → Nat_ → Store_ → Store_ | | | update = λi.λm.λp. (λi'. i' equals i → m [] (access i' p)) Then, regardless of the input store s, the program: begin X:=0; diverge; X:=2 resultis X+1 has the value three! This is because C[[X:=0; diverge]]s = −, and: | E[[X+1]] (C[[X:=2]]−) | = E[[X+1]] (update[[X]] (E[[2]]−)−), as C is nonstrict | | = E[[X+1]] ( [ [[X]]|→ E[[2]]− ]−), as update is nonstrict | | = E[[X]] ( [ [[X]]|→ E[[2]]− ]−) plus one | | |→ E[[2]] | ] | )) plus one = (access[[X]] ( [ [[X]] − − = E[[2]]− plus one | = two plus one, as E is nonstrict = three The derivation suggests that only the last command in the block need be evaluated to obtain the answer. Of course, this goes against the normal left-to-right, top-to-bottom sequentiality of command evaluation, so the nonstrict handling of stores requires a new implementation strategy.
5.4.2 Retaining Multiple Stores _______________________________________________ Relaxing the strictness condition upon stores means that multiple values of stores must be present in an evaluation. Must an implementation of any of the languages defined earlier in this chapter use multiple stores? At first glance, the definition of addition: E[[E1 +E2 ]] = λs. E[[E1 ]]s plus E[[E2 ]]s apparently does need two copies of the store to evaluate. Actually, the format is a bit deceiving. An implementation of this clause need only retain one copy of the store s because both E[[E1 ]] and E[[E2 ]] use s in a ‘‘read only’’ mode. Since s is not updated by either, the equation should be interpreted as saying that the order of evaluation of the two operands to the addition is unimportant. They may even be evaluated in parallel. The obvious implementation of the store is a global variable that both operands may access. This situation changes when side effects occur within expression evaluation. If we add
5.4.2 Retaining Multiple Stores
the block construct to the Expression syntax domain and define its semantics to be: E[[begin C resultis E]] = λs. let s = C[[C]]s in E[[E]]s _
then expressions are no longer ‘‘read only’’ objects. An implementation faithful to the semantic equation must allow an expression to own a local copy of store. The local store and its values disappear upon completion of expression evaluation. To see this, you should perform the simplification of C[[X:=(begin Y:=Y+1 resultis Y)+Y]]. The incrementation of [[Y]] in the left operand is unknown to the right operand. Further, the store that gets the new value of [[X]] is exactly the one that existed prior to the right-hand side’s evaluation. The more conventional method of integrating expression-level updates into a language forces any local update to remain in the global store and thus affect later evaluation. A more conventional semantics for the block construct is: K[[begin C resultis E]] = λs. let s = C[[C]]s in (E[[E]]s , s ) _ ' ' ' The expressible value and the updated store form a pair that is the result of the block.
5.4.3 Noncommunicating Commands __________________________________________ The form of communication that a store facilitates is the building up of side effects that lead to some final value. The purpose of a command is to advance a computation a bit further by drawing upon the values left in the store by previous commands. When a command is no longer allowed to draw upon the values, the communication breaks down, and the language no longer has a sequential flavor. Let’s consider an example that makes use of multiple stores. Assume there exists some domain D with an operation combine : D× D→ D. If combine builds a ‘‘higher-quality’’ Dvalue from its two D-valued arguments, a useful store-based, noncommunicating semantics might read: Domain s ∈ Store= Id→ D C: Command→ Store_ → Store_ | | C[[C1 ;C2 ]] = λs. join (C[[C1 ]]s) (C[[C2 ]]s) _ where join : Store_ → Store_ → Store_ | | | join = λs1 .λs2 . (λi. s1 (i) combine s2 (i)) _ _ These clauses suggest parallel but noninterfering execution of commands. Computing is divided between [[C1 ]] and [[C2 ]] and the partial results are joined using combine. This is a nontraditional use of parallelism on stores; the traditional form of parallelism allows interference and uses the single-store model. Nonetheless, the above example is interesting because it suggests that noncommunicating commands can work together to build answers rather than deleting each other’s updates.
Imperative Languages
SUGGESTED READINGS _ _________________________________________________ _ Semantics of the store and assignment: Barron 1977; Donohue 1977; Friedman et al. 1984; Landin 1965; Strachey 1966, 1968 Interactive systems: Bjorner and Jones 1982; Cleaveland 1980 / Dynamic typing: Tennent 1973 Delayed evaluation: Augustsson 1984; Friedman & Wise 1976; Henderson 1980; Henderson & Morris 1976
EXERCISES ______________________________________________________________ 1. Determine the denotations of the following programs in Nat_ when they are used with the | input data value one: a. P[[Z:=A.]] b. P[[(if A=0 then diverge else Y:=A+1);Z:=Y.]] c. P[[diverge; Z:=0.]] 2. Determine the denotations of the programs in the previous exercise without any input; that is, give their meanings in the domain Nat→Nat_ . | 3. Give an example of a program whose semantics with respect to Figure 5.2, is the denotation (λn. one). Does an algorithmic method exist for listing all the programs with exactly this denotation? 4. Show that the following properties hold with respect to the semantic definition of Figure 5.2: a. b. c. d. P[[Z:=0; if A=0 then Z:=A.]] = P[[Z:=0.]] For any C∈ Command, C[[diverge; C]] = C[[diverge]] For all E1 , E2 ∈ Expression, E[[E1 +E2 ]] = E[[E2 +E1 ]] For any B∈ Boolean-expr, C1 , C2 ∈ Command, C[[if B then C1 else C2 ]] = C[[if¬B then C2 else C1 ]]. e. There exist some B∈ Boolean-expr and C1 , C2 ∈ Command such that C[[if B then C1 ; if ¬B thenC2 ]] ≠ C[[if B then C1 else C2 ]] (Hint: many of the proofs will rely on the extensionality of functions.) 5. a. Using structural induction, prove the following: for every E∈ Expression in the language of Figure 5.2, for any I∈ Identifier, E'∈ Expression, and s ∈ Store, E[[[E'/I]E]]s = E[[E]](update [[I]] E[[E']]s s). b. Use the result of part a to prove: for every B∈ Boolean-expr in the language of Figure 5.2, for every I∈ Identifier, E'∈ Expression, and s ∈ Store, B[[[E'/I]B]]s = B[[B]](update [[I]] E[[E']]s s).
6. Say that the Store algebra in Figure 5.1 is redefined so that the domain is s∈ Store' = (Id × Nat)∗ . a. Define the operations newstore', access', and update' to operate upon the new domain. (For this exercise, you are allowed to use a recursive definition for access'. The definition must satisfy the properties stated in the solution to Exercise 14, part b, of Chapter 3.) Must the semantic equations in Figure 5.2 be adjusted to work with the new algebra? b. Prove that the definitions created in part a satisfy the properties: for all i∈ Id, n∈ Nat, and s∈ Store': access i newstore = zero ' ' access i (update i n s) = n ' ' access i (update j n s) = (access i s), for j ≠ i ' ' ' How do these proofs relate the new Store algebra to the original? Try to define a notion of ‘‘equivalence of definitions’’ for the class of all Store algebras. 7. Augment the Command syntax domain in Figure 5.2 with a swap command: C ::= . . . | swap I1 , I2 The action of swap is to interchange the values of its two identifier variables. Define the semantic equation for swap and prove that the following property holds for any J ∈ Id and s∈ Store: C[[swap J, J]]s = s. (Hint: appeal to the extensionality of store functions.) 8. a. Consider the addition of a Pascal-like cases command to the language of Figure 5.2. The syntax goes as follows: C∈ Command G∈ Guard E∈ Expression C ::= . . . | case E of G end G ::= N:C; G | N:C Define the semantic equation for C[[case E of G end]] and the equations for the valuation function G : Guard → (Nat × Store) → Store_ . List the design decisions that must | be made. b. Repeat part a with the rule G ::= N:C | G1 ; G2 9. Say that the command [[test E on C]] is proposed as an extension to the langauge of Figure 5.2. The semantics is: C[[test E on C]] = λs. let s = C[[C]]s in E[[E]]s equals zero → s [] s _ ' ' ' What problems do you see with implementing this construct on a conventional machine? 10. Someone proposes a version of ‘‘parallel assignment’’ with semantics: C[[I1 , I2 := E1 , E2 ]] = λs. let s = (update[[I1 ]] E[[E1 ]]s s) _ '
Imperative Languages
in update [[I2 ]] E[[E2 ]]s' s' Show, via a counterexample, that the semantics does not define a true parallel assignment. Propose an improvement. What is the denotation of [[J, J := 0, 1]] in your semantics? 11. In a LUCID-like language, a family of parallel assignments are performed in a construct known as a block. The syntax of a block B is: B ::= begin A end A ::= Inew :=E | A1 §A2 The block is further restricted so that all identifiers on the left hand sides of assignments in a block must be distinct. Define the semantics of the block construct. 12. Add the diverge construction to the syntax of Expression in Figure 5.2 and say that E[[diverge]] = λs. −. How does this addition impact: | a. The functionalities and semantic equations for C, E, and B? b. The definition and use of the operations update, plus, equals, and not? What is your opinion about allowing the possibility of nontermination in expression evaluation? What general purpose imperative languages do you know of that guarantee termination of expression evaluation? 13. The document defining the semantics of Pascal claims that the order of evaluation of operands in an (arithmetic) expression is left unspecified; that is, a machine may evaluate the operands in whatever order it pleases. Is this concept expressed in the semantics of expressions in Figure 5.2? However, recall that Pascal expressions may contain side effects. Let’s study this situation by adding the construct [[C in E]]. Its evaluation first evaluates [[C]] and then evaluates [[E]] using the store that was updated by [[C]]. The store (with the updates) is passed on for later use. Define E[[C in E]]. How must the functionality of E change to accommodate the new construct? Rewrite all the other semantic equations for E as needed. What order of evaluation of operands does your semantics describe? Is it possible to specify a truly nondeterminate order of evaluation? 14. For some defined store s0 , give the denotations of each of the following file editor programs, using the semantics in Figure 5.4: a. P[[edit A cr newfile cr insert R0 cr insert R1 quit]]s0 . Call the result (log1 , s1 ). b. P[[edit A cr moveforward cr delete cr insert R2 quit]]s1 , where s1 is from part a. Call the new result (log2 , s2 ). c. P[[edit A cr insert R3 cr quit]]s2 , where s2 is from part b. 15. Redo part a of the previous question in the style described in Section 5.2.1, showing the partial syntax trees and the partial denotations produced at each step. 16. Extend the file editor of Figure 5.4 to be a text editor: define the internal structure of the
Record semantic domain in Figure 5.3 and devise operations for manipulating the words in a record. Augment the syntax of the language so that a user may do manipulations on the words within individual records. 17. Design a programming language for performing character string manipulation. The language should support fundamental operations for pattern matching and string manipulation and possess assignment and control structure constructs for imperative programming. Define the semantic algebras first and then define the abstract syntax and valuation functions. 18. Design a semantics for the grocery store data base language that you defined in Exercise 6 of Chapter 1. What problems arise because the abstract syntax was defined before the semantic algebras? What changes would you make to the language’s syntax after this exercise? 19. In the example in Section 5.3, the Storable-value domain is a subdomain of the Expressible-value domain; that is, every storable value is expressible. What problems arise when this isn’t the case? What problems/situations arise when an expressible value isn’t storable? Give examples. 20. In the language of Figure 5.6, what is P[[write 2; diverge.]]? Is this a satisfactory denotation for the program? If not, suggest some revisions to the semantics. 21. Alter the semantics of the language of Figure 5.6 so that an expressible value error causes an error message to be placed into the output buffer immediately (rather than letting the command in which the expressible value is embedded report the message later). 22. Extend the Storable-value algebra of Figure 5.5 so that arithmetic can be performed on the (numeric portion of) storable values. In particular, define operations: plus : Storable-value × Storable-value → Expressible-value ' not : Storable-value → Expressible-value ' equals : Storable-value × Storable-value → Expressible-value ' so that the equations in the E valuation function can be written more simply, e.g., E[[E1 +E2 ]] = E[[E1 ]]s check-expr (λv1 . E[[E2 ]]s check-expr (λv2 . v1 plus v2 )) ' Rewrite the other equations of E in this fashion. How would the new versions of the storable value operations be implemented on a computer? 23. Alter the semantics of the language of Figure 5.6 so that a variable retains the type of the first identifier that is assigned to it. 24. a. Alter the Store algebra in Figure 5.5 so that: Store= Index→ Storable-value∗
Imperative Languages
where Index= Id+ Input+ Output Input= Unit Output= Unit that is, the input and output buffers are kept in the store and indexed by tags. Define the appropriate operations. Do the semantic equations require alterations? b. Take advantage of the new definition of storage by mapping a variable to a history of all its updates that have occurred since the program has been running. 25. Remove the command [[read I]] from the language of Figure 5.6 and place the construct [[read]] into the syntax of expressions. a. Give the semantic equation for E[[read]]. b. Prove that C[[read I]] = C[[I:= read]]. c. What are the pragmatic advantages and disadvantages of the new construct? 26. Suppose that the Store domain is defined to be Store= Id→ (Store→ Nat) and the semantic equation for assignment is: C[[I:=E]] = λs. update [[I]] (E[[E]]) s _ a. Define the semantic equations for the E valuation function. b. How does this view of expression evaluation differ from that given in Figures 5.1 and 5.2? How is the new version like a macroprocessor? How is it different? 27. If you are familiar with data flow and demand-driven languages, comment on the resemblance of the nonstrict version of the C valuation function in Section 5.4.1 to these forms of computation. 28. Say that a vendor has asked you to design a simple, general purpose, imperative programming language. The language will include concepts of expression and command. Commands update the store; expressions do not. The control structures for commands include sequencing and conditional choice. a. What questions should you ask the vendor about the language’s design? Which design decisions should you make without consulting the vendor first? b. Say that you decide to use denotational semantics to define the semantics of the language. How does its use direct and restrict your view of: i. ii. iii. iv. What the store should be? How stores are accessed and updated? What the order of evaluation of command and expression subparts should be? How the control structures order command evaluation?
29. Programming language design has traditionally worked from a ‘‘bottom up’’ perspective; that is, given a physical computer, a machine language is defined for giving instructions to the computer. Then, a second language is designed that is ‘‘higher level’’ (more concise or easier for humans to use) than the first, and a translator program is written to
translate from the second language to the first. Why does this approach limit our view as to what a programming language should be? How might we break out of this approach by using denotational semantics to design new languages? What biases do we acquire when we use denotational semantics? | {"url":"http://www.docstoc.com/docs/12343882/Denotational-Semantics---Imperative-Languages","timestamp":"2014-04-16T06:02:01Z","content_type":null,"content_length":"114840","record_id":"<urn:uuid:102d0593-1317-42b9-b9a4-7224fa94719a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] How to remove any row or column of a numpy matrix whose sum is 3?
[Numpy-discussion] How to remove any row or column of a numpy matrix whose sum is 3?
bob tnur bobtnur78@gmail....
Mon Jun 4 11:21:02 CDT 2012
Hello every body. I am new to python.
How to remove any row or column of a numpy matrix whose sum is 3.
To obtain and save new matrix P with (sum(anyrow)!=3 and sum(anycolumn)!=3
I tried like this:
P = M[np.logical_not( (M[n,:].sum()==3) & (M[:,n].sum()==3))]
P = M[np.logical_not( (np.sum(M[n,:])==3) & (np.sum(M[:,n])==3))]
M is the nxn numpy matrix.
But I got indexerror. So can anyone correct this or any other elegant way
of doing this?
Thanks for your help
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.scipy.org/pipermail/numpy-discussion/attachments/20120604/a8816e24/attachment.html
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-June/062620.html","timestamp":"2014-04-21T04:39:44Z","content_type":null,"content_length":"3613","record_id":"<urn:uuid:9a67280b-5e58-4278-ac80-aa24513c06f7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
One-Step Equations
3.1: One-Step Equations
Created by: CK-12
Learning Objectives
At the end of this lesson, students will be able to:
• Solve an equation using addition.
• Solve an equation using subtraction.
• Solve an equation using multiplication.
• Solve an equation using division.
Terms introduced in this lesson:
linear equation
Teaching Strategies and Tips
Use the introductory problem to motivate equivalent equations, since it can be solved two ways:
• The cost plus the change received is equal to the amount paid, $x + 22 = 100$
• The cost is equal to the difference between the amount paid and the change, $x = 100 - 22$
Examples 1 and 2 are essentially the same; constant terms must be added to both sides to isolate $x$
Additional Example.
Solve $12 = -4 + x$
Solution. To isolate $x$$4$
$12 & = -\cancel{4} + x\\+4 & = +\cancel{4}\\16 & = x$
The variable in Example 3 is not the usual $x$
Solve $-21=n+14$
Hint: To isolate $n$$14$
In Examples 4-6, teachers may opt to solve the equations by adding the opposite in lieu of subtracting.
Solve $-17=x+8$
Hint: To isolate $x$add $-8$
$-17 & = x+\cancel{8}\\-8 & = -\cancel{8}\\-25 & = x$
Use Example 6 as an example of an equation with fractions. Remind students to find common denominators.
Point out in Example 8 that in general,
which will help students isolate $x$in one step, multiplying by the reciprocal of $a/b$
Note that the equation in Example 10 can be written in dollars or in cents:
• $5x=3.25$$x$dollars)
• $5x=325$$x$cents)
Although Examples 13 and 15 can be solved by making a table and Example 14 by guessing and checking, teachers are encouraged to help students setup and solve an equation of the type presented in the
Error Troubleshooting
General Tip: After the constant term is canceled in a one-step equation, the variable must be carried down onto the next line. Remind students to write the $x =$
General Tip: Students forget to perform the same operation on both sides of an equation. Have students use a colored pencil to write what they are doing to both sides of the equation.
Files can only be attached to the latest version of None | {"url":"http://www.ck12.org/tebook/Algebra-I-Teacher%2527s-Edition/r1/section/3.1/","timestamp":"2014-04-18T06:49:54Z","content_type":null,"content_length":"117440","record_id":"<urn:uuid:78f6f1fe-611a-420f-a8a1-e1d631e49c65>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: derivative of a matrix
Replies: 4 Last Post: Mar 25, 2013 12:57 PM
Messages: [ Previous | Next ]
derivative of a matrix
Posted: Mar 24, 2013 3:59 PM
Hello, I need help about differentiation. Let D be such an nxn matrix
where both A and B are nxn matrices and c is a scalar. I need to derive
namely derivative of "D" respect to "c". I will be very glad, if you help me. Thanks a lot.
Date Subject Author
3/24/13 oercim@yahoo.com
3/24/13 Re: derivative of a matrix Ken.Pledger@vuw.ac.nz
3/24/13 Re: derivative of a matrix Herman Rubin
3/24/13 Re: derivative of a matrix oercim@yahoo.com
3/25/13 Re: derivative of a matrix Ray Koopman | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2443222","timestamp":"2014-04-18T03:10:52Z","content_type":null,"content_length":"20714","record_id":"<urn:uuid:d7f015f6-f727-41ba-8325-bc70b7aef7af>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiplicative Inverse
Multiplicative Inverse
A selection of articles related to multiplicative inverse.
Original articles from our library related to the Multiplicative Inverse. See Table of Contents for further available material (downloadable resources) on Multiplicative Inverse.
Question the ultimate validity of the basic premises of society, in particular, scientific theory. Do not accept concepts or statements as truthful simply due to weight of tradition or learned
opinion. The synthesis of these statements is a healthy skepticism...
Multiplicative Inverse is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Multiplicative Inverse books and related discussion.
Suggested Pdf Resources
Suggested News Resources
For example, consider the following C# code, which multiples a number by its multiplicative inverse and then compares the calculated result with the expected result of one.
Suggested Web Resources
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site.
Multiplicative Inverse Topics | {"url":"http://www.realmagick.com/multiplicative-inverse/","timestamp":"2014-04-19T23:19:05Z","content_type":null,"content_length":"22915","record_id":"<urn:uuid:ebb12c5f-f71a-4481-81db-917a6e6081e0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00542-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brazilian Journal of Physics
Services on Demand
Related links
Print version ISSN 0103-9733
Braz. J. Phys. vol.37 no.1a São Paulo Mar. 2007
Diffraction and an infrared finite gluon propagator
E. G. S. Luna
Instituto de Física Teórica, UNESP, São Paulo State University, 01405-900, São Paulo, SP, Brazil
Instituto de Física Gleb Wataghin, Universidade Estadual de Campinas, 13083-970, Campinas, SP, Brazil
We discuss some phenomenological applications of an infrared finite gluon propagator characterized by a dynamically generated gluon mass. In particular we compute the effect of the dynamical gluon
mass on pp and ${\bar{p}}p$ diffractive scattering. We also show how the data on gp photoproduction and hadronic gg reactions can be derived from the pp and ${\bar{p}}p$ forward scattering amplitudes
by assuming vector meson dominance and the additive quark model.
Keywords: Diffractive dissociation; Dynamical gluon mass; Photoproduction
Nowadays, several studies support the hypothesis that the gluon may develop a dynamical mass [1, 2]. This dynamical gluon mass, intrinsically related to an infrared finite gluon propagator [3], and
whose existence is strongly supported by recent QCD lattice simulations [4]), has been adopted in many phenomenological studies [5-7]. Hence it is natural to correlate the arbitrary mass scale that
appears in QCD-inspired models with the dynamical gluon one, obtained by Cornwall [1] by means of the pinch technique in order to derive a gauge invariant Schwinger-Dyson equation for the gluon
propagator. This connection can be done building a QCD-based eikonal model where the onset of the dominance of gluons in the interaction of high-energy hadrons is managed by the dynamical gluon mass
A consistent calculation of high-energy hadron-hadron cross sections compatible with unitarity constraints can be automatically satisfied by use of an eikonalized treatment of the semihard parton
processes. In an eikonal representation, the total cross sections is given by
where s is the square of the total center-of-mass energy and c(b,s) is a complex eikonal function: c(b,s) = c[R](b,s) + ic[I](b,s). In terms of the proton-proton (pp) and antiproton-proton (p)
scatterings, this combination reads b,s) = c^+(b,s) ± c^- (b,s). Following the Ref. [8], we write the even eikonal as the sum of gluon-gluon, quark-gluon, and quark-quark contributions:
Here W(b;µ) is the overlap function at impact parameter space and s[ij](s) are the elementary subprocess cross sections of colliding quarks and gluons (i,j = q,g). The overlap function is associated
with the Fourier transform of a dipole form factor, W(b;µ) = (µ^2/96p) (µb)^3 K[3](µb), where K[3](x) is the modified Bessel function of second kind. The odd eikonal c^-(b,s), that accounts for the
difference between pp and p channels, is parametrized as
where m[g] is the dynamical gluon mass and the parameters C^- and µ^- are constants to be fitted. The factor S is defined as S º 9p [s] set at its frozen infrared value. The eikonal functions c[qq](
b,s) and c[qg](b,s), needed to describe the lower-energy forward data, are simply parametrized with terms dictated by the Regge phenomenology [8]; the gluon term c[gg](b,s), that dominates the
asymptotic behavior of hadron-hadron total cross sections, is written as c[gg](b,s) º s[gg](s)W(b; µ[gg]), where
Here F[gg](t) º [g Ä g] (t) is the convoluted structure function for pair gg, C¢ is a normalization constant and [gg](µ exp(-1/4pa[s]) are retained at lowest order. Only recently the physical meaning
of the parameter C¢ has become fully [7]: it is a normalization factor that appears in the gluon distribution function (at small x and low Q^2) after the resummation of soft emission in the leading
ln(1/x) approximation of QCD,
where J controls the asymptotic behavior of s[tot](s). The results of global fits to all high-energy forward pp and p scattering data above p at Figs. 1 e 2. The Figure 1 enables us to estimate a
dynamical gluon mass m[g] » s[tot] for both pp and p channels are displayed in Fig. 2 in the case of a dynamical gluon mass m[g] = 400 MeV, which is the preferred value for pp and p scattering. The s
[gg] cross section, calculated via expression (4), is shown in Fig. 3.
Early modeling of hadron-hadron, photon-hadron and photon-photon cross sections within Regge theory shows a energy dependence similar to the ones of nucleon-nucleon [10-12]. This universal behavior,
appropriately scaled in order to take into account the differences between hadrons and the photon, can be understood as follows: at high center-of-mass energies the total photoproduction s^g^p and
the total hadronic cross section s^gg for the production of hadrons in the interaction of one and two real photons, respectively, are expected to be dominated by interactions where the photon has
fluctuated into a hadronic state. Therefore measuring the energy dependence of photon-induced processes should improve our understanding of the hadronic nature of the photon as well as the universal
high energy behavior of total hadronic cross sections.
However the comparison of the experimental data and the theoretical prediction may present some subtleties depending on the Monte Carlo model used to analyze the data. For example, the gg cross
sections are extracted from a measurement of hadron production in e^+e^- processes and are strongly dependent upon the acceptance corrections to be employed. These corrections are in turn sensitive
to the Monte Carlo models used in the simulation of the different components of an event, and this general procedure produces uncertainties in the determination of s^gg [13]. This clearly implies
that any phenomenological analysis has to take properly into account the discrepancies among s^gg data obtained from different Monte Carlo generators. Therefore we performed global fits considering
separately data of the L3 [14] and OPAL [15] collaborations obtained through the PYTHIA [16] and PHOJET [17] codes, defining two data sets as
SET I: s^g^p and [gp], W[gg] > 10 GeV),
SET II: s^g^p and [gp], W[gg] > 10 GeV),
where gg total hadronic cross section obtained via the PYTHIA (PHOJET) generator.
The even and odd amplitudes for gp scattering can be obtained after the substitutions s[ij] ® (2/3) s[ij] and µ[ij] ® [ij] in the eikonals (2) and (3) [13], where
where O(a[em]):
where r, w and f are vector mesons. However, there are expected contributions to r, w, f, as for example, of heavier vector mesons and continuum states. Moreover, the probability
To extend the model to the gg channel we just perform the substitutions s[ij] ® (4/9) s[ij] and µ[ij] ® (3/2) µ[ij] in the even part of the eikonal (2). The calculation leads to the following
eikonalized total gg hadronic cross section
where N is a normalization factor which takes into account the uncertainty in the extrapolation to real photons (Q[1] = Q[2] = 0) of the hadronic cross section s[gg](W[gg], pp and p data, we have
performed all calculations of photoproduction and photon-photon scattering [13]. We have assumed a phenomenological expression for a + b ln(s). The total cross section curves are depicted in Figure 3
, where Figs. 3(a) and 3(b) [3(c) and 3(d)] are related to the SET I [SET II]. The results depicted in the Figures 3(c) and 3(d) show that the shape and normalization of the curves are in good
agreement with the data deconvoluted with PHOJET [13]. The calculations using a constant value of s) are represented by the dashed curves. These global results indicate that a energy dependence of
In this work we have investigated the influence of an infrared dynamical gluon mass scale in the calculation of pp, p, gp and gg total cross sections through a QCD-inspired eikonal model. By means of
the dynamical perturbation theory (DPT) we have computed the tree level gg ® gg cross section taking into account the dynamical gluon mass. The connection between the subprocess cross section [gg](pp
and p scattering data and to ddt data at m[g] » m[g] » 500 ±200 MeV, obtained in other calculations of strongly interacting processes. This result corroborates theoretical analysis taking into
account the possibility of dynamical mass generation and show that, in principle, a dynamical nonperturbative gluon propagator may be used in calculations as if it were a usual (derived from Feynman
rules) gluon propagator.
With the help of vector meson dominance and the additive quark model, the QCD model can successfully describe the data of the total photoproduction gp and total hadronic gg cross sections. We have
assumed that [had] with s is quite favored by the data. Notice that the data of s^gg data above
Acknowledgments: I am pleased to dedicate this paper to Prof. Yogiro Hama, on the occasion of his 70th birthday. I am grateful to the editors of the Braz. J. Phys. who gave me the opportunity of
contributing to the volume in his honor, and to M.J. Menon and A.A. Natale for useful comments. This research was supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico-CNPq
under contract 151360/2004-9.
[1] J.M. Cornwall, Phys. Rev. D 26, 1453 (1982); [ Links ]J.M. Cornwall and J. Papavassiliou, Phys. Rev. D 40, 3474 (1989); [ Links ]J. Papavassiliou and J.M. Cornwall D 44, 1285 (1991).
[2] R. Alkofer and L. von Smekal, Phys. Rept. 353, 281 (2001). [ Links ]
[3] A.C. Aguilar, A.A. Natale, and P.S. Rodrigues da Silva, Phys. Rev. Lett. 90, 152001 (2003). [ Links ]
[4] F.D.R. Bonnet et al., Phys. Rev. D 64, 034501 (2001); [ Links ]A. Cucchieri, T. Mendes, and A. Taurines, Phys. Rev. D 67, 091502 (2003); [ Links ]P.O. Bowman et al., Phys. Rev. D 70, 034509
(2004); [ Links ]A. Sternbeck, E.-M. Ilgenfritz, M. Muller-Preussker, and A. Schiller, Phys. Rev. D 72, 014507 (2005); [ Links ]73, 014502 (2006); A. Cucchieri and T. Mendes, Phys. Rev. D 73, 071502
(2006); [ Links ]Ph. Boucaud et al., JHEP 0606, 001 (2006). [ Links ]
[5] M.B. Gay Ducati, F. Halzen, and A.A. Natale, Phys. Rev. D 48, 2324 (1993); [ Links ]F. Halzen, G. Krein, and A.A. Natale, Phys. Rev. D 47, 295 (1993). [ Links ]
[6] A. Mihara and A.A. Natale, Phys. Lett. B 482, 378 (2000); [ Links ]A.C. Aguilar, A. Mihara, and A.A. Natale, Int. J. Mod. Phys. A 19, 249 (2004); [ Links ]E.G.S. Luna, Phys. Lett. B 641, 171
(2006); [ Links ]F. Carvalho, A.A. Natale, and C.M. Zanetti, Mod. Phys. Lett. A 21, 3021 (2006). [ Links ]
[7] E.G.S. Luna, A.A. Natale, and C.M. Zanetti, hep-ph/0605338. [ Links ]
[8] E.G.S. Luna et al., Phys. Rev. D 72, 034019 (2005). [ Links ]
[9] H. Pagels and S. Stokar, Phys. Rev. D 20, 2947 (1979). [ Links ]
[10] A. Donnachie and P.V. Landshoff, Phys. Lett. B 296, 227 (1992). [ Links ]
[11] R.F. Ávila, E.G.S. Luna, and M.J. Menon, Phys. Rev. D 67, 054020 (2003); [ Links ]Braz. J. Phys. 31, 567 (2001); [ Links ]E.G.S. Luna and M.J. Menon, Phys. Lett. B 565, 123 (2003). [ Links ]
[12] E.G.S. Luna, M.J. Menon, and J. Montanha, Nucl. Phys. A 745, 104 (2004); [ Links ]Braz. J. Phys. 34, 268 (2004). [ Links ]
[13] E.G.S. Luna and A.A. Natale, Phys. Rev. D 73, 074019 (2006). [ Links ]
[14] M. Acciarri et al., Phys. Lett. B 519, 33 (2001). [ Links ]
[15] G. Abbiendi et al., Eur. Phys. J. C 14, 199 (2000). [ Links ]
[16] T. Sjöstrand, Comput. Phys. Commun. 82, 74 (1994). [ Links ]
[17] R. Engel, Z. Phys. C 66, 203 (1995); [ Links ]R. Engel and J. Ranft, Phys. Rev. D 54, 4246 (1996). [ Links ]
[18] M.M. Block and A.B. Kaidalov, Phys. Rev. D 64, 076002 (2001). [ Links ]
Received on 29 September, 2006 | {"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-97332007000100025&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-18T03:32:44Z","content_type":null,"content_length":"51186","record_id":"<urn:uuid:1f8dad16-9405-4ede-8e66-0ff9d53b405d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
Soil Temperature
Soil heat flux. Combining the heat flux equation with the equation for conservat . Temperature and Heat Flow. The previous expression may be simplified by using the soil's .
Sensible Heat Flux
Clausius-Clapeyron Relation The saturation vapor pressure of water increases exponentially with temperature –At higher T, faster water molecules in liquid Clausius–Clapeyron relation
Measurement of High Heat Flux Heat Transfer Coefficient
become familiar with FEHT, it is suggested that the reader stop and go through the tutorial Widely used programs such ANSYS, COSMOS, and COMSOL can handle 3-D problems and provide Comsol Tutorial
Soil Gas Flux Exploration at the Rotokawa Geothermal Field and
geothermal reservoir, Rotokawa A (34MW) and Nga Awa Purua (140 MW). The maximum fluid temperature is ~320°C, recorded within the Rotokawa Nga Awa Purua
Transverse Thermoelectric Effects for Cooling and Heat Flux Sensing
Transverse Thermoelectric Effects for Cooling and Heat Flux Sensing By Brooks Samuel Mann Thesis
Review on Critical Heat Flux in Water Cooled Reactors Karlsruhe
paper no. 982119 an asae meeting presentation latent heat flux of irrigated alfalfa measured
Estimation of the thin ice thickness and heat flux for the Chukchi
Cavalieri thin ice algorithm include the following. For the 1992 winter, Weingartner et al. [1998] find a maximum open water area of 26,000 km2, yielding a maximum Cavalieri
in the MARIA research reactor. The experiments were performed in 1977 at the WIW stand in the Institute of Atomic Research in Swierk (Poland). Maria reactor
Monitoring a Supervolcano in Repose: Heat and Volatile Flux at the
Caldera as our primary example and show how a simple analysis of gas and heat flux informs our
Ethylene furnace heat flux correlations - John Zink
Ethylene furnace heat flux correlations Equations are presented that correlate and predict heat
Effective elastic thickness and heat flux estimates on Ganymede
Effective elastic thickness and heat flux estimates on Ganymede Francis Nimmo Department
ii ACKNOWLEDGEMENTS I express my profound sense of gratitude to my technical supervisors Prof. Madya Dr. Mohd. Zulkifly Abdullah and Prof. Ahmad Yussof Hassan for Microchannel
FUNDAMENTAL STUDY OF HEAT PIPE DESIGN FOR HIGH HEAT FLUX S…
FUNDAMENTAL STUDY OF HEAT PIPE DESIGN FOR HIGH HEAT FLUX SOURCE Ryoji Oinuma, Frederick R. Best
Inverse Heat Transfer Solution of the Heat Flux Due to Induction
modeling, and inverse heat transfer solution of heat flux generated in induction heating
EXAMPLE 2.7-1: Measurement of High Heat Flux Heat Transfer Coefficient
EXAMPLE 2.7-1: Measurement of High Heat Flux Heat Transfer Coefficient Figure 1 illustrates
Flux Removal Guide
, and heat. Cleaning methodology is, of course, critical to a Flux
Soil CO2 flux from three ecosystems in tropical peatland of
from oil palm and sago were similar to other common crops on peat such as paddy in Borneo and grassland in the Amazon. Tree regression analysis showed that the Oil palm
FLAMMABILITY CHARACTERISTICS AT HEAT FLUX LEVELS UP TO 200 kW/m
Mr. Blair Swinnerton, Mr. Daniel Zelinski, Mr. Jeffrey Chaffee and Mr. Steve D’Aniello. I also thank my colleagues in the Measurements and Models Group who Swinnerton
New Concept of Critical Heat Flux Correlations for Safety Analysis
severe accident, such as a loss-of-coolant accident, occurs and results in core exposure. However, even with a sufficient amount of coolant in the reactor core, if a Loss of coolant accident | {"url":"http://pdf-world.net/pdf/48724/Soil-Temperature-and-Heat-Flow-pdf.php","timestamp":"2014-04-18T00:39:00Z","content_type":null,"content_length":"35583","record_id":"<urn:uuid:86105619-4573-4ea4-966d-4a6f1c152ecd>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
solving exponents
January 23rd 2009, 07:53 AM #1
Junior Member
Jul 2008
solving exponents
I have been working on this problem for a week now and I cant solve it and Im sure its easy.
The expression e^(x-8)/e^(x-2) can be written as e^f(x), where f(x) is a function of x. Find f(x).
I tried using laws of exponents and converting it to e^(x-8)*e^(-x+2) to get e^(-6) which is still wrong. Thanks to anyone who can help.
I have been working on this problem for a week now and I cant solve it and Im sure its easy.
The expression e^(x-8)/e^(x-2) can be written as e^f(x), where f(x) is a function of x. Find f(x).
I tried using laws of exponents and converting it to e^(x-8)*e^(-x+2) to get e^(-6) which is still wrong. Thanks to anyone who can help.
$\frac{e^{(x-8)}}{e^{(x-2)}} = e^{(x-8)}\times (e^{(x-2)})^{-1}$
$= e^{(x-8)}\times e^{(2-x)}$
$= e^{(x-8) + (2-x)}$
$= e^{x-8 + 2-x}$
$= e^{-6}$
You are correct.
January 23rd 2009, 08:12 AM #2
Super Member
Dec 2008 | {"url":"http://mathhelpforum.com/algebra/69567-solving-exponents.html","timestamp":"2014-04-20T09:12:01Z","content_type":null,"content_length":"32608","record_id":"<urn:uuid:dcdb5325-1486-475e-b585-6bb63c4b2ad3>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Everett, WA Science Tutor
Find an Everett, WA Science Tutor
...I have a Master's in Math Education and would love to demonstrate my ability to help you improve your skills and confidence. I currently teach at a College in Everett, but live in Oak Harbor.
Contact Steve, your answer to math anxiety.I've taught Statistics at Lightworks Institute and ITT Tech in Everett.
19 Subjects: including electrical engineering, physics, calculus, public speaking
...I have experience with chemistry, too. I also like math particularly algebra.I have a BS in Biochemistry from Iowa State University and a MS in Molecular Cell Biology from Washington
University in St. Louis.
6 Subjects: including chemistry, biology, microbiology, prealgebra
...I have tutored Algebra I for over five years now as an independent contractor through a private tutoring company. I have tutored high school level Algebra I for both Public and Private School
courses. I also volunteer my time in the Seattle area assisting at-risk students on their mathematics homework.
27 Subjects: including biology, reading, algebra 2, algebra 1
...Students taking my lessons will learn the material for their course and study strategies that will help them in future classes. My approach is to teach the student how to identify the nature
of the problem and to recognize the appropriate way to solve it. We work through the process several times together, identifying simple, logical steps for solving each type of problem.
26 Subjects: including chemistry, ACT Science, physiology, physics
...In South Korea I taught four years (one year in an institute or hagwon, one year at a university, and two years in a women's community college, teaching speaking & listening, reading, and
writing). In Seattle area and at Bellevue College, I taught intensive ESL writing, reading, etc. In 1999 I w...
13 Subjects: including biology, vocabulary, grammar, reading
Related Everett, WA Tutors
Everett, WA Accounting Tutors
Everett, WA ACT Tutors
Everett, WA Algebra Tutors
Everett, WA Algebra 2 Tutors
Everett, WA Calculus Tutors
Everett, WA Geometry Tutors
Everett, WA Math Tutors
Everett, WA Prealgebra Tutors
Everett, WA Precalculus Tutors
Everett, WA SAT Tutors
Everett, WA SAT Math Tutors
Everett, WA Science Tutors
Everett, WA Statistics Tutors
Everett, WA Trigonometry Tutors | {"url":"http://www.purplemath.com/everett_wa_science_tutors.php","timestamp":"2014-04-21T13:20:39Z","content_type":null,"content_length":"23893","record_id":"<urn:uuid:42c34dce-7232-4574-93ab-7cf3e15b5ced>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00263-ip-10-147-4-33.ec2.internal.warc.gz"} |
they are fresh, eating oranges
Number of results: 2,086
I think #2 should read as follows: When oranges are fresh, eating them can often keep away the cold virus. If you keep it as, "When oranges are fresh they can often keep away a cold virus," it makes
it sound as if the oranges themselves keep away the cold virus. Eating the ...
Monday, September 24, 2007 at 7:20pm by CulAdam
Dangling Modifiers
Could some one check and see ifi got these right If your baby does not like cold apple juice, it should be heated. 1. If your baby does not like cold apple juice, you should heat it. When they are
fresh, eating oranges can often keep away the cold virus. 2.Eating fresh Oranges...
Wednesday, August 30, 2006 at 10:03pm by Toni
please check
And here are links to other responses for these very sentences in this very exercise in previous Jiskha posts: http://www.jiskha.com/search/search.cgi?query=they+are+fresh%2C+eating+oranges =)
Thursday, August 23, 2007 at 11:08pm by Writeacher
4th grade
nick's diner advertise fresh squeezed orange juice every day of the year. he serves twenty-four 8-ounce glasses of juice a day. nick knows it takes three 6-ounce oranges to make one 8-ounce glass of
juice. nick orders oranges by cases that have 24 oranges and weigh a total of ...
Monday, January 24, 2011 at 5:59pm by dk
PLS HELP!!! Some children were sharing oranges. If each child took 3 oranges, there would be 2 oranges left over. But if each child took 4 oranges, there would be 2 oranges short. How many oranges
were there?
Saturday, September 1, 2007 at 4:50am by Ben
PLS HELP!!!! Some children were sharing oranges. If each child took 3 oranges, there would be 2 oranges left over. But if each child took 4 oranges, there would be 2 oranges short. How many oranges
were there?
Saturday, September 1, 2007 at 7:41am by Ben
No. How many oranges can you get for 3 peaches? If oranges are half the price of peaches, then 3 peaches are worth 6 oranges. 3 bananas cost the same as 2 oranges, So, you can trade 2 oranges for 3
bananas. How many times can you trade 2 oranges for 3 bananas if you start with...
Wednesday, November 4, 2009 at 11:56pm by Quidditch
Some children were sharing oranges. If each child took 3 oranges, there would be 2 oranges left over. But if each child took 4 oranges, there would be 2 oranges short. How many oranges were there?
Saturday, September 1, 2007 at 4:50am by Ben
Your answer: Eating fresh organges , can often help keep away the cold virus. The only thing wrong with this one is - do not use a comma. "Eating fresh oranges" is the subject of the sentence. This
one is correct as it is. Punctuation-Parentheses: The blouse did not fit ...
Friday, August 31, 2007 at 5:06pm by GuruBlue
2 When oranges are fresh they can often keep away a cold virus.
Sunday, September 23, 2007 at 12:39pm by Dianna
A cartel profits by limiting supply, and thus driving up the price. For a cartel to survive, It must 1) keep out new entries of producers, and 2) keep its own members from cheating and
over-producing. The problem, however with fresh oranges is that there are too many close ...
Thursday, July 16, 2009 at 12:44pm by economyst
a box contain 60 oranges.3/5 of the oranges were equally shared among 4 boys.how many oranges did each boy received.
Saturday, February 9, 2013 at 2:53pm by simon
Oranges are on sale at $0.99 per pound. Either bought 3.25 pounds of oranges. About how much did she pay for the oranges?
Sunday, April 10, 2011 at 9:05pm by Chole
Part 1- Mr. Santiago bought 10 crates of oranges. There were 50 oranges in each crate. He sold %65 of them. How much did Mr. Santiago earn if he sold the oranges 5 for $1.20? Part 2- A week after the
first sale %8 of the remaining oranges (from part 1) went bad and he threw ...
Wednesday, November 13, 2013 at 8:19pm by Gwen
The question is: A Woman is selling an (X) amount of Oranges The first time she sold half of the total Oranges and half of an Orange. The second time she sold half of what is left of the Oranges and
also half of an Orange. The third time she sold half of what's left of the ...
Wednesday, February 4, 2009 at 2:15am by Keith
Which of the following contain water that is easily available for human use? (more than one answer) Icecaps & glaciers - 76% of fresh water Shallow Groundwater- 12% of fresh water Deep Groundwater -
11% of fresh water Lakes & Rivers - 0.34% of fresh water Water Vapor - 0.037% ...
Thursday, January 14, 2010 at 7:44pm by Anya
in a store all the oranges are stacked in triangular pyramids, each layer of oranges is in the shape of an equilateral triangle and the top layer is an single orange. How many oranges are are in a
stack ten layers high
Sunday, December 8, 2013 at 5:44pm by Jennifer
bananas cost twice as much as oranges.sue buys 10 bananas and 3 oranges.with the same amout of money,she could have bought 4 bananas and how many oranges?
Sunday, September 2, 2012 at 8:09pm by wan
Sophia buys an eqaul number of oranges and pears for a party. The oranges are bought at a price of 7 for $2 and the pears are bought at a price of 5 for $3. She pays $33 more for the pears than the
oranges. How much does Sophia pay in all? How many oranges and pears does she ...
Tuesday, October 22, 2013 at 6:01am by Tam
Q: Although heated in the microwave, I found the center of the pizza cold. A: No Change Were you heated in the microwave? Q: When they are fresh, eating oranges can often keep away the cold virus. A:
When fresh, eating oranges can often keep away the cold virus. Eating fresh ...
Friday, August 29, 2008 at 4:24pm by Ms. Sue
She is eating a noodle. (Does this mean she is eating a piece of noodle? When she is eating "noodle' for breakfast, what expression do we have to use? 1. She is eating noodles. 2. She is eating a
Monday, March 29, 2010 at 9:32pm by rfvv
Mr.Santiago brought 10 crates of oranges. There were 50 oranges in each crates. He sold 65% of them. 8% of the remainder went bad, and he threw them away. A) How much money did Mr.Santiago earn if he
sold the oranges 5 for $1.20 B) If he made $24.15 by selling the rest of the ...
Friday, October 28, 2011 at 10:23pm by Carrie
pick a fruit from the box labeled Apples and Oranges. Because all the labels are wrong, you know the box is either all apples or all oranges. If you draw an apple, you know that the box is all
apples. So, the box labeled Oranges must be not oranges, but apples and oranges, ...
Thursday, May 9, 2013 at 2:39pm by Steve
Mr. Davis wants to buy some oranges. He visits a local grocery store and finds that 6 oranges cost $2. Write an equation that represents the relationship between the number of oranges, n, and their
total cost, c. Express the rate of change in simplest form, if necessary.
Thursday, January 23, 2014 at 6:00pm by Anonymous
Treat the < as an equal sign for a minute to solve the equation. The first thing you want to do is get "y" by itself. 5y-3 < 13 Let's just imagine the left side of the equation first. Let's say you
have 5 bags of oranges, each containing a certain number of oranges. You ...
Thursday, October 1, 2009 at 12:59am by MattsRiceBowl
If you bought 6 pears and 20 oranges and then bought 11 pears and 12 oranges and spent the same each time, how much do oranges cost if the pears are 80 cents each?
Wednesday, February 27, 2013 at 3:24pm by Jenn
Math (for PsyDag)
This is the example i was given: The oranges in a box have a mass of 4680 gm.If the average mass of an orange is 97.5gm, find the number of oranges in the box. Total mass = average mass of an orange
x number of oranges in the box. Therefore the number of oranges in the box= ...
Monday, July 20, 2009 at 2:41pm by keisha
jasmine bought 26 pieces of fruit.she bought 6 oranges.she bought twice as many apples as oranges,and twice as many oranges as bananas.she also bought some pears.
Wednesday, February 22, 2012 at 12:08pm by john
I need help figuring out these two math word porblem: 1) A sales tax of 6% of the cost of a car was added to the purchase price of $29,500. a. how much was the sales tax? b. What is the total cost of
the car, including sales tax? 2) During the packaging process for oranges, ...
Saturday, November 15, 2008 at 12:42pm by Tati
if the mealybugs keep eating and eating and eating the plants, that certain plant might completely disappear from that region
Monday, February 15, 2010 at 7:33pm by mike
Mr. R has some money to buy oranges, if he will buy 15 oranges he will need $90 more, if he will buy 10 oranges he will have $60 left. how much money has mr. R?
Thursday, August 9, 2012 at 8:27am by Kaloy
What are we supposed to eat fresh? Worms? Grass? What does it mean by fresh? How fresh are the bananas we buy at the grocery store? Should we go to the countries where bananas are grown? Yes, I think
that's an example of a glittering generality.
Saturday, January 8, 2011 at 7:51pm by Ms. Sue
Mr. R has some money to buy oranges, if he will buy 15 oranges he will need $90 more, if he will buy 10 oranges he will have $60 left. how much money has mr. R? Please kindly solve this using George
Polya method.. guess and check...
Thursday, August 9, 2012 at 7:30pm by Kaloy
Pick a piece of fruit from the "apples and oranges" box. If you get an apple, then you know this box is the apple box. That leaves two boxes: one labeled oranges and the other apples and oranges. You
know that both of these labels are wrong -- so you now know what to label ...
Saturday, August 20, 2011 at 5:25pm by Ms. Sue
Jimmy had a total of 163 apples and oranges. After selling 50% of all the apples and 20% of all the oranges, he had 110 fruits left. How many oranges did Jimmy have at first?
Friday, January 20, 2012 at 5:10am by widyan
out of 250 oranges 50 oranges distributed in class what is the percentage left
Tuesday, January 25, 2011 at 12:08am by sahil
A bowl of fruit contains 4 apples, 5 oranges, and 3 bananas. In how many ways can 3 oranges be selected?
Tuesday, November 27, 2012 at 6:36pm by Anonymous
A boy was sent with $2.10 to buy oranges. He found the price 3 cents higher per dozen than he had expected to pay, and so he bought 6 fewer oranges than he had intended to buy and received 1 cent in
change. How much did he pay for a dozen oranges?
Thursday, April 17, 2014 at 9:46am by Summer
Quick Math
If you have 12 apples, 7 oranges, 8 pears and 9 kiwi, what percent of the fruit do the oranges represent?
Tuesday, December 5, 2006 at 6:21pm by Loria
math stuff
suppose to arrage the names of the ppl in descending order of their share of oranges: Eddy's share of oranges is half the sum of dennis and carols share bob has more oranges then dennis the total
share of anna and bob is equal to that of carol and dennis eddy has fewer oranges...
Saturday, March 31, 2007 at 10:21pm by jake p
A grocer sell 84 oranges in one day. At the end of the day, the grocer has 1/3 as many oranges as he did at the start of the day. If n=the number of oranges the grocer has left at the end of the day,
which of the equations could be used to determine the number of oranges left ...
Friday, February 22, 2013 at 9:21am by Tonia
3 apples and a pineapple equally balances 10 oranges. Also,6 oranges and an apple equally balances a pineapple. How many oranges balance a pineapple?
Sunday, February 16, 2014 at 2:26pm by Anonymous
Health Repost
One of the articles I posted for you included many fruits -- such as grapefruit, oranges, and apples. That article pointed out that acidic foods soften the enamel on teeth. Brushing after eating them
wears down the enamel even more because its softer then.
Tuesday, January 27, 2009 at 6:48pm by Ms. Sue
The yearly production of a 5 foot orange tree is 34 pounds of oranges. A 14 foot tree produces 86 pounds of oranges. Let h represent the height of an orange tree and p the number of pounds of oranges
produced. List the ordered pairs.
Thursday, November 29, 2012 at 9:38pm by Antonio
math - word problem
sorry it is a typo. The problem is restated 2/3 of the pieces of Fruit in Lurene's basket are oranges. Of the fruit that are not oranges, 2/3 are apples, If 2/3 of the apples are green and there are
6 pieces of fruit that are neither apples or oranges, how many more oranges ...
Friday, September 20, 2013 at 7:56pm by Anonymous
Suppose 15 peaches and 20 oranges need to be divided between fred and ethel. Fred's utility function is U=Max(#of peaches, #of oranges) ethels utility function is U=Max(2# peaches, # of oranges).
Carefully draw a graph showing the set of Pareto efficient allocations.
Thursday, September 20, 2007 at 10:14pm by jon
A fruit seller has an equal number of apples and oranges.He sold 109 oranges and 45 apples.The number of apples left is 5 times the number of oranges left.What is the total number of fruit he had at
Monday, February 4, 2013 at 7:29pm by May
math - word problem
2/3 of the pieces of Fruit in Lurene's basket are oranges. Of the fruit that are not oranges, 2/3 are apples, IF 2/2 of the apples are green and there are 6 pieces of fruit that are neither apples or
oranges, how many more oranges than green apples are in Lurene's basket?
Friday, September 20, 2013 at 7:56pm by Anonymous
A fruit seller has an equal number of apples and oranges. He sold 109 oranges and 45 apples.The number of apples left is 5 times the number of oranges left. What is the total number of fruit he had
Sunday, February 3, 2013 at 4:11pm by May
Although I will not attempt to answer your questions, here are some things that she might do: 1. If she MUST eat that way, she cuts the portions way down. (portion control) If she has a much smaller
plate, she only eats what arranges on there. 2. Eating only 1 meal day is a ...
Monday, June 6, 2011 at 12:09pm by SraJMcGin
A farm sells boxes of oranges for $8 and boxes of grapefruits for $12. The farm wants to ear at least 4,000. If he sells 210 boxes of oranges and 150 boxes of grapefruit will he earn at least $4,000?
If not how many more boxes of oranges must he sell to make up the difference...
Thursday, October 2, 2008 at 6:06pm by Katy
The ratio of oranges and bananas is 3:6. There are 54 pieces of fruit altogether, how many of them are oranges? Do I set it up 3/6 times x/54?
Thursday, February 4, 2010 at 2:57am by Jenny
You have 8 apples(8x) and 5 oranges(5y), then take away one apple(-x) and 2 oranges(-2y). What do you have ?
Monday, February 6, 2012 at 3:25am by Reiny
Find the ratio of bananas to oranges. WRite the ratio as a fraction in simplest form. Then explain its meaning. 1/3? 1 banana to 3 oranges?
Friday, November 16, 2012 at 9:38pm by Jerald
a crate contains apples, pears, and oranges.There are 184 apples and pears and 248 apples and oranges. there are 3 times as many oranges as pears. how many apples are in the crate? How do I show my
work to solve this problem using Singapore math - Math in Focus curriculum?
Thursday, November 29, 2012 at 8:43pm by Braden
ms.Perez has 24 pieces of fruit 1/2 are oranges,1/4 are mangos,2/8 are pineapple. How many oranges are there. How many mangos are there? how many pineapples are there?
Monday, January 28, 2013 at 7:45pm by keivin
joe and jim are picking oranges in an orange grove.joe can fill his sack with oranges in 20 minutes.Jim can fill his sack with oranges in 30 minutes They need one more sack at the end of the
day.Working together,how long will it take them to fill the sack?
Monday, April 22, 2013 at 11:53pm by loulou
A basket of fruit has 10 oranges,12 apples, and 18 peaches.Express in simplest form the fraction of fruit that are oranges.
Monday, December 14, 2009 at 7:25pm by elvis
Describe the picture using the following expressions. (There are some English expressions in a colored box.) 1. How much juice is there in the glass? There is a little juice. 2. How much water is
there in the cup? There is a little water in the cup. 3. How many books are there...
Monday, March 29, 2010 at 5:11pm by rfvv
Which of the following sentence contain a fragment?A)To prepare for the 10K race B)Eric eliminated greasy foods from his diet and began eating fresh fruits and vegetables.C)He also began getting at
least eight hours of sleep every night.D)Correct
Monday, September 27, 2010 at 2:53pm by Anonymous
three boxes are in a room. You know that one contains apples, one centains oranges, and one contains a mixture of apples and oranges, but you don't know the contents of the specific boxes. The boxes
are labeled "apples","oranges", and "apples and oranges", but each box has the...
Saturday, August 20, 2011 at 5:25pm by Audrey
3rd grade
There are 32 more apples than oranges at a fruit stand. How many apples and oranges could there be?
Wednesday, November 10, 2010 at 8:56pm by JT
Math - Fractions
2/3 of the pieces of Fruit in Lurene's basket are oranges. Of the fruit that are not oranges, 2/3 are apples, IF 2/2 of the apples are green and there are 6 pieces of fruit that are neither apples or
oranges, how many more oranges than green apples are in Lurene's basket?
Tuesday, September 10, 2013 at 1:16pm by Venus
algebra 2
a grocer sold a total of 126 apples, oranges, and melons one day. she sold 16 fewer oranges than three times as many apples as melons. write a system of three equations that represents how many
apples, oranges, and melons the grocer sold
Friday, December 17, 2010 at 9:08pm by lupita
3 boxes are in a room. You know that one contains apples, one contains oranges, and one contains both apples and oranges, nut you don't know the contents of the specific boxes. The boxes are labeled,
"apples," "oranges," and "apples and oranges," but each box has the wrong ...
Sunday, August 21, 2011 at 12:21pm by Audrey
Let A = appples and B = oranges 1/2 A = half the number of apples 3B - 15 = fifteen less than thrice as much the number of oranges Do you think you can work it out from there? I hope this helps.
Thanks for asking.
Monday, August 20, 2007 at 6:14pm by PsyDAG
5th grade word problems
5 3 -- * -- 100 x 5x=300 x=60 1. put 5 over 100, oranges over oranges. And 3 over X. glasses over glasses 2. cross multiply, 5*x and 100*3 3. divide both sides of the equation by 5 to get x by it's
self. 4. X=60, you can make 60 glasses with 100 oranges
Wednesday, October 24, 2012 at 2:41pm by Maddie
Peartree Elementary
There are 50 oranges in a box 3/10 of them are rotten. How many of the oranges are not rotten?
Sunday, February 6, 2011 at 8:04pm by Drake
In a box of fruit there were 11 oranges and 9 apples. What fraction of the fruit were the oranges?
Wednesday, May 2, 2012 at 4:09am by Brad
Singular: The fresh cake tastes . . . Plural: The fresh cakes taste . . .
Wednesday, March 6, 2013 at 9:28pm by Ms. Sue
determine by how much the demand for Florida Indian River oranges would change as a result of a 10 percent increase in the price of Florida interior oranges, and vice versa.
Sunday, May 30, 2010 at 9:26pm by Mary
The fresh cake tastes ... OR The fresh cakes taste ...
Wednesday, March 6, 2013 at 7:12pm by Writeacher
From Table 4-1 in the text, determine how much the demand for Florida Indian River oranges would change as a result of a10 percent increase in the price of Florida interior oranges, and vice versa.
Monday, February 8, 2010 at 1:49pm by Anonymous
There are 50 oranges in a box 3/10 of them are rotten. How many oranges are not rotten?
Sunday, April 7, 2013 at 2:49pm by Armando
the cost of 10 oranges is $2.50. What is the cost of 5 dozen oranges?
Thursday, February 13, 2014 at 3:49pm by mon
College Level Calculus
Each orange tree grown in California produces 720 oranges per year if not more than 20 trees are planted per acre. For each additional tree planted per acre, the yield per tree decreases by 15
oranges. How many trees per acre should be planted to obtain the greatest number of ...
Tuesday, November 12, 2013 at 10:31pm by Timofey
What are the three types of eating disorders discussed in your reading materials? A. Anorexia, bulimia, binge-eating B. Anorexia, bulimia, starvation C. Anorexia, binge-eating, starvation D.
Anorexia, binge-eating, chronic nervosa A?
Wednesday, August 1, 2012 at 2:49pm by Aya
3rd grade
Apples = 32 + oranges if there is one orange, there are 33 apples if there are 2 oranges, there are 34 apples etc
Wednesday, November 10, 2010 at 8:56pm by Damon
donna brings 32 slices of apples and oranges to the party. there are 3 times as many oranges than apples. how many apples are there
Thursday, February 6, 2014 at 7:14pm by ann
Try this and then re-post: Put a capital letter at the beginning of each item's first word, and then add a verb and complete each thought as a sentence. improving health 1. exercise a. aerobics b.
stretching and strengh training c. variation of routine 2 better eating habits a...
Monday, January 10, 2011 at 6:25am by Writeacher
1. They are eating cotton candy. 2. They are eating cotton candies. 3. They are eating a cotton candy. 4. The boy is eating two cotton candies. (Which one is correct?)
Monday, October 11, 2010 at 9:21pm by rfvv
math help
Six hundred ninety four oranges is how much more than fifty seven oranges? Whatever you're measuring, how much more is 694 than 57?
Friday, January 25, 2013 at 2:11pm by Steve
1. Yes, A is the producer. 2. B and C are eating the producer, so they are the primary consumers, by definition. 3. Species E is a primary consumer (eating A), a secondary consumer ( eating C and B),
and an omnivore (eats plants A, and animals B,C). Answer C
Sunday, August 30, 2009 at 6:37pm by bobpursley
Four large oranges are squeezed to make six glasses of juice. How much juice can be made from six oranges.
Friday, January 30, 2009 at 11:15pm by Ryan
Actually the question was wrong> 24 pieces of fruit. 1/2 are oranges, 1/4 are mangos, and 2/8 are pineapples. How many oranges, how many mangos and how many pineapples are there?
Tuesday, April 12, 2011 at 7:51am by Tom
Calc and Vectors
The owner of an orange grove estimates that 150 orange trees per acre will each yield on average 800 oranges per year. For each additional tree planted per acre the number of oranges produced by each
tree decreases by 10 per year. How many trees should be planted per acre in ...
Wednesday, January 22, 2014 at 10:20pm by Annie
supose you bought eight oranges and one grapefruit for a total of $4.60. later day, you bought six oranges and three grapefruits for a total of $4.80. what is the price of each type of fruit?
Tuesday, January 8, 2013 at 5:08pm by alex
I forgot to include the following statements I'm not sure of. Thank you very much for your help. 1) For lunch I usually have pasta with tomato sauce, then a breaded cutlet with chips or fresh
vegetables. For my dessert I generally have some fresh season fruit (such as oranges...
Tuesday, December 14, 2010 at 3:09pm by Franco
4th grade
I need to know I good way to do long division. The problem is: It takes 14 oranges to make a small pitcher of juice. Annette has 112 oranges. How many pitchers of juice can she make?
Wednesday, January 20, 2010 at 6:13pm by Katie
Ms.Perez has a crate of 24 pieces of fruit. 1/2 are oranges, 1/4 are mangos, and 2/3 are pineapples. How many or oranges? How many or mangos? How many are pineapples?
Tuesday, April 12, 2011 at 7:51am by Tom
Ms. Perez has a crate has 24 pieces of fruit . 1/2 are oranges, 1/4 are mango's and 2/8 are pineapples. How many oranges are there, how many mango's and how many pineapples?
Monday, January 30, 2012 at 11:57pm by Gabe
Math Mania
A fruit stand carries apples, oranges, lemons, and pears. There are twice as many apples as oranges and 3 times as many lemons as pears. If there are 3 more oranges than pears, and p represents the
number of pears, which of the following expressions represents the total number...
Tuesday, November 9, 2010 at 10:01pm by Anonymous
1) A sales tax of 6% of the cost of a car was added to the purchase price of $29,500. a. how much was the sales tax? multiply 29500 x 0.06 b. What is the total cost of the car, including sales tax?
add 29500 + the answer to 1a above 2) During the packaging process for oranges...
Saturday, November 15, 2008 at 12:42pm by Writeacher
if she's only eating one piece of a noodle, then it would be She is eating a noodle. If she is eating a bowl of noodles-- she has more than one piece of noodle on her spoon, fork, etc, then it would
be She is eating noodles.
Monday, March 29, 2010 at 9:32pm by anonymous
The Science of Nutrition
What is the cycle of malnutrition? What I have read it is the eating habits of one person and then they pass these eating habits onto their offspring, who passes these eating habits onto their
offspring and so on. Until it just keeps going on and on. I am right?
Thursday, March 12, 2009 at 2:38pm by A.W.
1) For lunch I usually have pasta with tomato sauce, then a breaded cutlet with chips or fresh vegetables. For dessert I generally have some fresh fruit (such as oranges, mandarine oranges, kiwi, or
khaki) or chocolate pudding. 2) In the evening I always have vegetable soup ...
Tuesday, December 14, 2010 at 3:09pm by Writeacher
com 150 body paragraphs
My thesis statment is Healthy Eating means Healthy Living. My supporting is watching your diet, knowing how many calories are in your food, Eating less fat or fried food, avoid greasy food, eating
healthy food.
Sunday, December 13, 2009 at 3:38pm by kase
Think of this example of an ecosystem: A large tree with grasshoppers, caterpillars and other animals eating the leaves, birds eating the leaf eaters, snakes eating birds and a hawk eating snakes. An
energy pyramid that counts the number of organisms at each trophic level ...
Monday, September 5, 2011 at 3:32pm by Joshue
Wednesday, April 10, 2013 at 8:51pm by Anonymous
I dont understand how you can tell that E is eating A. I thought A would be eating E?
Sunday, August 30, 2009 at 6:37pm by casey
Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>> | {"url":"http://www.jiskha.com/search/index.cgi?query=they+are+fresh%2C+eating+oranges","timestamp":"2014-04-20T22:12:29Z","content_type":null,"content_length":"40026","record_id":"<urn:uuid:56386678-bdd7-44ac-a9d3-068debe209bc>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum - Ask Dr. Math Archives: High School Equations/Graphs/Translations
This page:
equations, graphs
Dr. Math
See also the
Internet Library:
graphing equations
About Math
basic algebra
linear algebra
linear equations
Complex Numbers
Discrete Math
Fibonacci Sequence/
Golden Ratio
conic sections/
coordinate plane
practical geometry
Negative Numbers
Number Theory
Square/Cube Roots
Browse High School Equations, Graphs, Translations
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Direct and indirect variation.
Slope, slope-intercept, standard form.
A student seeks integer solutions to (x^2 + y)(x + y^2) = (x - y)^3. To oblige, Doctor Vogler parameterizes the equation of this twisted curve.
Show that the graph of every cubic polynomial has a point of symmetry.
I need help writing an algorithm that fits a curve to a given set of data points.
I need to know how to convert from rectangular coordinates back to polar.
I don't see how a right circular cone cut parallel to the axis of symmetry reveals a hyperbola. Shouldn't it be a parabola?
Suppose that (x, y) is to be on the parabola. Suppose that the line mentioned in the definition is given by x = -p. Find the distance between (x, y) and the line...
Can you give me definitions for: Pythagorean Triplets, Principle of Duality, Euclid's Elements, Cycloid, Fermat's Last Theorem?
Identifying the degenerate cases for the graphs of equations in conic form.
When speaking of hyperbolas, why does C^2 = A^2 + B^2?
How do you derive the quadratic formula from ax^2 + bx + c = 0?
Can you help me find the equation of the tangent of y^2 = 2x using Descartes' method...?
Which of the following is true of the graph of the equation: y = 2x^2 - 5x + 3 ?...
Given a set of points on a plane, how can you tell if the relation between them is linear, quadratic, cubic, or exponential?
How do I find the slope of a line given the coordinates of two points?
Find the equation of the tangent to the ellipse that forms, with the coordinate axes, the triangle of smallest possible area.
How can you tell whether or not a function is continuous?
Find the equation of the line tangent to a curve f(x) = x^3 - 7x^2 + 12x + 2 at the point (3,2) without using calculus.
If y = 2f(x) stretches y = f(x) vertically by a factor of 2, why does y = f(2x) shrink it horizontally by a factor of 2? The notation seems inconsistent.
Can you explain the difference between direct and indirect variation? How would you interpret them in a word problem?
Please explain direct and indirect variation.
What is the domain of f(x)= x/sqrt(x^2-4)?
How do you find the domain, range, and asymptote of a function?
Given the graph of y = f(x), domain [-4,3], range [-2,3], sketch other functions, such as y = |f(x)| and y = f(x - 2), and find the domain and range of each.
How do you find the domain and range of the function f(x) = 2x^2-3x+1? (Both with and without calculus.)
How do I draw a line on a graph when I know the slope and the location of only one point on the line?
I want to draw spirals with the computer. I don't know how. Could you send me formula(s) I can use to do it?
How do I get the equation of an ellipse, given four points and the inclination of the major axis?
Is this parametric equation elliptical or a circle?... And how do I compute the slopes at points 0, pi/4, pi/2, 3pi/2,and 2pi?
I found the right formula for a circle but can't find it for an ellipse.
What is equation for a vertical line passing through (7, -10)?
What sort of equation has a graph that is shaped like a 'W'?
Find an equation of the line with slope -4/3 and y-intercept 3. Leave the equation in slope intercept form....
I would like to know the equation of a cone. Could you help me?
How do I find the equation of a line through origin perpendicular to 2x + 3y = 5?
What is the equation of the line that contains the point (4, -4) and is perpendicular to the line defined by 2x-5y+3 = 0?
Determine the equation of a parabola given its vertex and either the equation of the directrix or the coordinates of the focus.
Given several points that appear to be a parabola, how do you approximate the equation that would give a similar graph?
What is the equation for a sphere on a graph with the dimensions x, y, and z?
How do I find the equations for all parabolas containing the origin whose vertex is at (2,1)?
Graph d = 30t and d = 28t+12 (d is distance and t is time).
Page: [<prev] 1 2 3 4 5 6 7 [next>] | {"url":"http://mathforum.org/library/drmath/sets/high_equations.html?start_at=41&num_to_see=40&s_keyid=39607126&f_keyid=39607128","timestamp":"2014-04-21T12:47:08Z","content_type":null,"content_length":"23105","record_id":"<urn:uuid:3091f09c-d22c-4664-8fb2-e424e08fec02>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00130-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Sines
y = Asin(ωt + Φ)
There is a fourth constant you can use:
y = Asin(ωt + Φ) + C
It doens't mean anything when talking about scientific waves (such as modeling sound), but it can be used to help make the sin graph go through a certain point.
Last edited by Ricky (2006-01-21 09:18:56)
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..." | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=25059","timestamp":"2014-04-19T02:09:19Z","content_type":null,"content_length":"13847","record_id":"<urn:uuid:c9cadf66-9571-40af-98e8-7e3f7676425c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
verpaying a mortgage
Overpaying a Mortgage
How it works - Why it's important
Last update : November 2013
Want to improve your credit rating? Consider using a low-limit Credit Card as a strategy of monthly borrowing and repaying. It works wonders. Find out more.
If you take out a repayment mortgage your lender will split the monthly payment into 2 -
1. Part goes towards repaying the original loan, and
2. Part towards paying the interest
But the lender doesn't split the payment right down the middle. So assume you pay £1,000 a month, this does not mean that £500 will be allocated towards repaying the loan and £500 for the interest.
Instead, during the early years the majority of the monthly payment will go towards paying the interest bill with a much smaller part repaying the original loan. Then, as the years progress more
money is allocated towards repaying the loan and less towards the interest.
This makes sense because as the total loan amount gets smaller so will the interest levied.
What is 'Overpaying' a mortgage
● Assume you take out a repayment mortgage of £100,000 over 25 years
● The monthly payment is £600 and the interest rate is fixed for the length of the deal (this is to make this example easier to understand)
● Theoretically if you continue to pay £600 every month the total loan will be repaid after 25 years and you will owe nothing
An overpayment is where you pay more than the specified amount per month. The amount of overpayment is up to you, and you can also designate when you wish to pay. For example -
● In Jan-Feb-Mar you pay £600
● But in April you up the amount to £675 and then £725 in May
● Then for the rest of the year you go back to paying £600 a month
● So in April the overpayment was £75 and it was £125 in May
Now, an extra £75 or £125 doesn't sound much does it? But look at the examples below to see just how effective even a small overpayment of a mortgage can be.
The effects of overpaying a mortgage
Assume the following -
● £100,000 mortgage at 5.75% (fixed) for 25 years
● Monthly repayments would be around £630
● Over the lifetime of the mortgage (25 years) you would pay back a total of £188,732 (the £100,000 initial loan + £88,732 of interest)
But what would happen if every month you paid an extra £50 to make a payment of £679?
● Total repayment is now £173,713
● The mortgage would be fully paid off almost 4 years earlier (21 versus 25 years), and
● A total interest saving of just over £15,000
● All these benefits for just an extra £50 a month, or £11.53 a week, or £1.64 a day
And what if you overpay by £100 a month - the monthly amount is now £729?
● Total repayment would be £163,294
● The mortgage would be fully repaid in just under 19 years, ie 6 years early
What if an extra £250 was paid every month making the payment £879?
● Total repayment would be £144,842
● The mortgage would be fully paid off in just under 14 years
How to work out the effect of overpayments on your mortgage deal
What you need is an overpayment mortgage calculator and here are 2 -
□ Excel based spread sheet - The following (see link below) is great, it's not complex but at the same time it requires more information to be entered than the simple option below
□ Simple Overpayment calculator - This calculator gives you a quick back-of-the-fag-packet overview of the figures
So which calculator would I use? Both, I'd use the simple one to first give me a rough idea and then dig deeper into the figures with the Excel spreadsheet.
Points to look out for
Restrictions on overpayments
● Most mortgages have restrictions on overpayments
● For example, only a maximum of 10%-20% can be overpaid each month so if your normal payment is £1,000 you can only overpay a maximum of £100 - £200 per month
● Check with your lender to see if there are any restrictions
Lump sum overpayments
● A lump sum overpayment is where a larger amount of money is used to repay a chunk of the mortgage in one go
● For example your job might pay a bonus of £4,000 and you make a one-off lump sum overpayment of £2,000 or £3,000
● Again, lenders often put restrictions on the amount one can repay, perhaps 10% - 20% of the total size of the loan per year
Important - How does your mortgage lender calculate the interest
Making a monthly overpayment is a complete waste of time if your lender bases their interest calculation on a yearly basis.
Some mortgages will charge interest for that year based on the total amount of the loan at the beginning of the year even though every month the loan is being reduced. For example -
● On January 1st the total mortgage loan is £100,000
● Your monthly payment is £1,000 and for this example assume that £500 goes towards repaying the loan and £500 towards the interest bill
● So on February 1st the total size of the loan is now £99,500 and therefore interest should only be charged on that amount
● But if the bank calculates interest yearly then you're still being charged as if the loan was £100,000
● And of course it gets even worse as the year progresses as in December the loan would be £94,000 but the lender charges interest on £100,000
Truly a great racket for the banks if ever there was one - And they can get away with this because they're often selling to an uneducated clientele (uneducated in this sense means in the mortgage/
personal finance sense).
So if you add in an overpayment of any amount at any time during the year it's pointless because of how the interest is charged. It's therefore critical to find out how interest is charged on the
total loan - Monthly is fine but daily interest calculations are the best.
So what should you do if your mortgage interest is charged on a yearly basis?
Simple, remortgage as soon as possible to a deal that either charges interest monthly or preferably daily - See How to remortgage for more information.
The importance of flexibility when choosing a mortgage
The above points - Restrictions on overpayments - Lump sum overpayments and how your mortgage company calculates its interest are why it's critical to spend time on research when deciding which
mortgage to get (or where to remortgage if you're changing your loan).
Basically you want to get the most flexible deal available, even if you don't necessarily need the flexibility at that time. For example -
● Your budget might be so stretched when you take out a mortgage that overpaying is a dream
● But in a few years your salary might have increased or perhaps a rich aunt dies leaving you £30,000 and then overpaying or making a lump sum payment is possible
● So make sure you read our guide on how to get the right mortgage at the best price for more details on the ever important aspect of flexibility
● See Secret 3 - Buy simple and flexible - you can't read the future
Anyone can overpay a mortgage if they start thinking small
If you like the idea of overpaying your mortgage then start to think small.
As you've seen in the working examples above even relatively tiny amounts (in relation to the overall size of the loan) can have a tremendous effect on reducing the number of years on your mortgage.
For example, if you look at the total size of your mortgage, perhaps £100,000 or £150,000+, it seems unlikely that £50 or £100 would make any difference at all.
But this is the wrong way to approach the problem. Instead start thinking of small amounts of money like £5, £10 and £20. Then work out how to save this sort of cash everyday, for example -
8 easy ways to save money - Then use the savings to overpay your mortgage
1. Not buying a Starbucks Coffee 3+ times a week
2. Having breakfast at home instead of buying it at work
3. Going cheaper at the supermarket, either buying some own brand products or looking out for special offers, or just stop making those impulse buys
4. Take the Tube (if you live in London) or Bus once in a while rather than a taxi
5. Dump your gym membership if you have one and are not making good use of it by going at least 2 times a week
6. Think about dumping Sky TV if you have it and go Freeview instead
7. Be more economical with your utility bills, turn lights off when not needed, turn your central heating on for 1-hour less per day (and/or down 1-2 degrees)
8. Drive slower to reduce your monthly petrol bill
£5 a day is £150 a month. Or £100 a month is £3.34 a day. Now go back and look at the examples above and see the difference that an overpayment of £100 - £150 a month can make.
Thinking small money is therefore the secret to generating easy funds to overpay a mortgage on a regular basis.
A different mortgage overpayment strategy
A mistake that's easily made with overpaying is to use all your excess cash to pay down the mortgage as quickly as possible.
But this can often cause a problem because suddenly you may lose your job or need access to some extra money in a hurry and unfortunately all of your spare cash will have been used to overpay.
Some people therefore argue that a better strategy is to not overpay until you've funded an emergency account which as its name suggests is a savings account that has money to use in case of an
emergency such as loss of job.
Then when this account has been properly funded you can then start overpaying your mortgage. For example -
● Pay £150 a month for 1-2 years into a Cash ISA (the Emergency Account) - a total of £1,800 - £3,600 (plus accrued interest)
● Then after 1-2 years when the emergency type account is fully funded start overpaying your mortgage
● Cash ISAs - What are they - How to use them
Yes, the mortgage won't get repaid as quickly as if the £150 a month was used to overpay 1-2 years back but you'll have built up an important nest egg.
So if you gave me the choice which strategy would I use -
● Start overpaying straight away with £150 a month, or
● Fund the Cash ISA and start overpaying in 1-2 years time?
When it comes to personal finance prudence normally wins so I'd go the Emergency route first.
Mortgage Overpayment Summary
Put simply - overpaying a mortgage, whether each month or just a for a few months of the year is one of the simplest and most effective personal finance strategy out there.
Anyone can do it, all you've got to do is work out how to save at least £50 from your present monthly budget and most readers should be able to achieve that goal easily and with little effort.
FREE Report: How to get the right mortgage at the best price
It takes just 4 easy steps How to make sure you don't overpay on charges
Learn to quickly sort through the market maze How to get a flexible deal - why this is so important
Download the Free Report
Read more in the Mortgage section:
Looking for something? Then search this site: | {"url":"http://www.learnmoney.co.uk/mortgages/repayment.html","timestamp":"2014-04-17T07:45:47Z","content_type":null,"content_length":"38487","record_id":"<urn:uuid:180a622e-9308-4324-bb29-1c70f85f2a18>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00415-ip-10-147-4-33.ec2.internal.warc.gz"} |
On Cantor’s Actual Infinity
Authors: Andrew Banks
This paper will demonstrate a diagonal argument by listing all non-empty finite ordinals in a table according to their ε order using their subset representation, meaning {0,1,2…n-1} is listed for the
ordinal n. Next, the axiom of choice is applied to all of these ordinals and selects the maximal element. This selection process forms a diagonal which satisfies the axiom of infinity, hence, the
diagonal is a limit ordinal. However, it will also be shown for the nth choice made by the choice function, the diagonal is the successor ordinal number n = {0,1,2…n-1} and this is true for all n.
So, at the n+1 choice, the diagonal is the ordinal n+1 and so on. Therefore, based on all the actions of the choice function, it is provable from ZFC on one hand that this diagonal cannot ever be
anything other than a successor ordinal and on the other hand, the diagonal is a limit ordinal.
Comments: 8 Pages.
Download: PDF
Submission history
[v1] 2012-06-09 09:13:06
Unique-IP document downloads: 117 times
Add your own feedback and questions here:
You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted
as unhelpful.
comments powered by | {"url":"http://vixra.org/abs/1206.0030","timestamp":"2014-04-16T16:56:17Z","content_type":null,"content_length":"7458","record_id":"<urn:uuid:55ee9938-b8cb-4404-bbdf-5e7bcbefb0c9>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00449-ip-10-147-4-33.ec2.internal.warc.gz"} |
urgent Home Work.Statistics & Probability
November 7th 2007, 09:52 AM #1
Oct 2007
on the planet earth
urgent Home Work.Statistics & Probability
I need help in this problem......please pay your kind attention to this ......thanks
1: Why we use correlation Analysis Technique?
2: A computer while computing the correlation coefficient between two variables x and y from 25 pairs of observations, obtained the following results:
It was, however discovered at the time of checking that he had copied down two pairs of observations as:
x y x y
10 9 instead of 15 7
Obtain the correct value of correlation coefficient between x and y.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/statistics/22210-urgent-home-work-statistics-probability.html","timestamp":"2014-04-19T18:49:38Z","content_type":null,"content_length":"30518","record_id":"<urn:uuid:1e2f714e-cde7-4994-9e6c-ed14ee912e64>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00015-ip-10-147-4-33.ec2.internal.warc.gz"} |
College Algebra: Linear Function Models Video | MindBites
College Algebra: Linear Function Models
About this Lesson
• Type: Video Tutorial
• Length: 6:17
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 67 MB
• Posted: 06/26/2009
This lesson is part of the following series:
College Algebra: Full Course (258 lessons, $198.00)
College Algebra: Relations and Functions (57 lessons, $74.25)
College Algebra: Applications of Linear Concepts (4 lessons, $4.95)
Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, College Algebra. This course and others are available from Thinkwell, Inc. The full course can be
found athttp://www.thinkwell.com/student/product/collegealgebra. The full course covers equations and inequalities, relations and functions, polynomial and rational functions, exponential and
logarithmic functions, systems of equations, conic sections and a variety of other AP algebra, advanced algebra and Algebra II topics.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart
of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals,
including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the
theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
This lesson has not been reviewed.
Please purchase the lesson to review.
This lesson has not been reviewed.
Please purchase the lesson to review.
You know, some of the real power of mathematics is that it allows us to analytically analyze real-world events and real-world situations. In fact, I want to share one with you now, so you can see the
power of just using the straight-line things that we talked about, to at least understand and to appreciate better a situation that's actually going on. So, I want to now take a look at the situation
with respect to the AIDS epidemic, which is really quite astounding.
If you look here at this graph, you can see that the number - and these are in thousands of recorded AIDS cases from 1986 up through 1994. This is from US News & World Reports from a few years ago.
In 1986, there were a reported number of cases of 11,900, and then all the way to 1994, you can see, eight years later, there was a staggering 62,500. That's way up here. Now, if you look at this
incredible trend, you might wonder how many AIDS cases should we expect in the future, and how should that dictate policy decisions, and so forth? Mathematics can actually be a powerful tool used to
understand that.
Let's just do something very, very simple here to try to capture this. One thing that seems evident by this graph, if you look at the data points, is that if I just put a line down, it seems to
approximate, reasonably accurately, the trend in the number of these cases. Just for a very simple point of view, let's find the equation of the line that passes through that first point, that first
data point we had, and that last data point. I'm not saying that's the best fit. Maybe, in fact, the best fit of the line might be something like that. Just for simplicity, let's find the equation of
the line that passes through those two points and then use that line to try to predict how many cases we would expect to see, for example, in the year 2000.
Well, to do this problem, what we want to do is find the equation of the line. So, what do we know? Well, we have two points. We have the point - these are quite large numbers. Let's call this, by
the way, zero - zero, meaning the first year of this experiment. This is year one, year two, year three, year four, year five, year six, year seven, year eight - this is eight years later, eight
years after the first time we measured this. So, in fact, I would graph that first point as 0, because it's the start of this counting and the number of AIDS cases back in 1986, where they're 11,900.
Then, the second data point we have is from 1994, which was eight years later, so I have 8 in the x direction, and I have 62,500. We had that many cases then. The question that we now want to find is
what is the equation of the line that contains those two points? So, I'm going to use the slope intercept form. You'll notice that, actually, I first need the slopes. Let me compute the slope. The
slope is going to be the rise over run. So, I take the change in the y values, divided by the change in the x values. That would be 62,500 - 11,900, all divided by - that's the change in the y,
divided by the change in x, which is actually pretty easy. It's 8 - 0, which is just 8. So, what does that equal? Well, that equals - if you subtract these two things out, I think we see 50,600, all
divided by 8. In fact, if you divide that out, it's just 6,325. That's the slope. So, the slope = 6,325.
Now, what's the y intercept? Well, actually, that's where it crosses the y axis, and we're given that. That's this point right here. So, in fact, we see that y equals the slope, which is 6325x, plus
the y intercept, which is 11,900. So, I see that as the equation for the line that passes through those two points. That line, we see, reasonably accurately - maybe not exactly emulates the data that
we have here. So, given that, let's see if we can guess, through this line, how many AIDS cases there were in 2000.
What I would do is plug in 2000 here for x. Actually, plugging in 2000 for x wouldn't be quite right, because remember, we're counting our values as how many years they are from 1986. We have - '94,
for example, was 8. So, 1995 would be 9, 1996 would be 10, 1997 would be 11, 1998 would be 12, and so forth. So, what would we have here? How many years away is 2000 from '86? It would be 14 years.
So, in fact, I want to plug in 14 for x. If I plug in 14 for x, I would see that the y value would be 6,325 x 14 + 11,900. What is that number? That would be 6325 x 14, and then I add to that,
11,900. So, the estimate that we're coming up with is 100,400 cases, and that's just a staggering amount.
So, in fact, mathematics allows us to model, even in a very simple way, phenomena that are going on around us, and it actually gives us the power to make predictions and to make estimates as so how
many of something is going on or how much is happening. This, I think, in turn, will maybe help dictate public policy and help to actually one day find a cure for this or, in other applications, to
resolve those issues.
So, this is a simple application, where a straight line allows us to make a prediction, with reasonable accuracy.
Relations and Functions
Linear Functions- Applications
Constructing Linear Function Models of a Set of Data Page [1 of 2]
Get it Now and Start Learning
Embed this video on your site
Copy and paste the following snippet:
Link to this page
Copy and paste the following snippet: | {"url":"http://www.mindbites.com/lesson/3104-college-algebra-linear-function-models","timestamp":"2014-04-19T09:24:35Z","content_type":null,"content_length":"58233","record_id":"<urn:uuid:75bf1d92-adb1-42d5-bbbd-8e660146e5c4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
- -
Please install Math Player to see the Math Symbols properly
Click on a 'View Solution' below for other questions:
DD What is the standard form of the equation y = - 8x + 7 with integer coefficients? View Solution
DD What is the standard form of the equation y = 7x - 5 with integer coefficients? View Solution
DD What is the standard form of the equation y = (54)x + 5 with integer coefficients? DD View Solution
DD What is the standard form of the equation y = 9x + 23 with integer coefficients? DD View Solution
DD What is the standard form of the equation of the line that passes through the points (3, 2) and (5, 6) with integer coefficients? View Solution
DD What is the standard form of the equation of the line that passes through the points (4, 3) and (- 2, 4) with integer coefficients? DD View Solution
DD What is the equation of the line that passes through the point (4, 6) and has a slope of 2? View Solution
DD Marissa went to a bakery to buy a few burgers and pizzas. The cost of each burger is $4 and each pizza costs $8. Marissa has $21 to buy burgers and pizzas. Identify an equation that View
models the number of burgers and pizzas that Marissa can buy. [Let x represent the number of burgers and y represent the number of pizzas.] Solution
DD What is the equation of the line in standard form that passes through the point (2, 2) and has a slope of - 23? View Solution
DD From 1980 to 2000, the price of crude oil increased by $4 per barrel per decade(10 years.) In the year 2000, the price of crude oil was $60. Write a linear model for the price per View
gallon of crude oil, p over the number of decades t. Solution
DD Write the equation of the line that passes through the point (- 34, 2) with a slope of 45 in standard form with integer coefficients. View Solution
DD Brad & Co. organized a fundraiser for the cyclone victims. Each child ticket costs $3 and an adult ticket costs $4. Write an equation that models the total amount collected, if the View
company collected $703. [Let the total number of child tickets sold be x and the total number of adult tickets sold be y. ] Solution
DD What is the equation of the line that passes through the point (5, - 65) with a slope of 56 in standard form with integer coefficients? View Solution
DD What is the equation of the line (in standard form with integer coefficients) which passes through the point (- 6, - 3) with a slope of 56? DD View Solution
DD What is the standard form of the equation y = - (34)x + 3 with integer coefficients? DD View Solution
DD What is the standard form of the equation y = - ( 25 )x - 35 with integer coefficients? View Solution
DD Write an equation of the line which passes through the points (4, - 6) and (- 7, 8) in standard form using integer coefficients. View Solution
DD What is the equation of the line which passes through the point (- 5, 7) with a slope of - 6 in standard form with integer coefficients? DD View Solution
DD What is the standard form of the equation y = 7x + 6 with integer coefficients? DD View Solution
DD John bought textbooks and workbooks for $130. Cost of one textbook is $20 and cost of one workbook is $18. Identify an equation that models the number of textbooks and workbooks John View
bought. [Let x represent the number of textbooks and y represent the number of workbooks.] Solution
DD What is the standard form of the equation y = 6x + 2 with integer coefficients? DD View Solution
DD What is the standard form of the equation y = - 6x + 8 with integer coefficients? View Solution
DD What is the standard form of the equation y = 4x - 9 with integer coefficients? View Solution
DD What is the standard form of the equation y = (43)x + 2 with integer coefficients? DD View Solution
DD What is the standard form of the equation y = 4x + 12 with integer coefficients? DD View Solution
DD William bought textbooks and workbooks for $192. Cost of one textbook is $50 and cost of one workbook is $46. Identify an equation that models the number of textbooks and workbooks View
William bought. [Let x represent the number of textbooks and y represent the number of workbooks.] Solution
DD What is the equation of the line that passes through the point (4, 9) and has a slope of 5? View Solution
DD What is the equation of the line in standard form that passes through the point (4, 2) and has a slope of - 35? View Solution
DD Write the equation of the line that passes through the point (- 45, 5) with a slope of 56 in standard form with integer coefficients. View Solution
DD What is the equation of the line that passes through the point (3, - 54) with a slope of 45 in standard form with integer coefficients? View Solution
DD What is the equation of the line (in standard form with integer coefficients) which passes through the point (- 8, - 5) with a slope of 78? DD View Solution
DD What is the standard form of the equation y = - (45)x + 2 with integer coefficients? DD View Solution
DD What is the standard form of the equation y = - ( 49 )x - 79 with integer coefficients? View Solution
DD Write an equation of the line which passes through the points (2, - 4) and (- 5, 6) in standard form using integer coefficients. View Solution
DD Which of the following represents the standard form of the equation of a line? View Solution
DD Which is an equation of the line in standard form? View Solution
DD What is the equation of the line which passes through the point (- 4, 6) with a slope of - 5 in standard form with integer coefficients? DD View Solution
DD What is the standard form of the equation y = 2x + 9 with integer coefficients? DD View Solution
DD What is the standard form of the equation y = - 7x + 7 with integer coefficients? View Solution
DD What is the standard form of the equation y = 8x - 5 with integer coefficients? View Solution
DD What is the standard form of the equation y = (76)x + 2 with integer coefficients? DD View Solution
DD What is the standard form of the equation y = 7x + 56 with integer coefficients? DD View Solution
DD What is the standard form of the equation of the line that passes through the points (2, 4) and (4, 12) with integer coefficients? View Solution
DD What is the standard form of the equation of the line that passes through the points (4, 4) and (- 4, 5) with integer coefficients? DD View Solution
DD What is the standard form of the equation of the line? DD View Solution
DD What is the standard form of the equation of the line? DD View Solution
DD Which is an equation of the line in standard form? View Solution
DD Which is an equation of the line in standard form? View Solution
DD Marissa went to a bakery to buy a few burgers and pizzas. The cost of each burger is $5 and each pizza costs $9. Marissa has $25 to buy burgers and pizzas. Identify an equation that View
models the number of burgers and pizzas that Marissa can buy. [Let x represent the number of burgers and y represent the number of pizzas.] Solution
DD From 1980 to 2000, the price of crude oil increased by $3 per barrel per decade(10 years.) In the year 2000, the price of crude oil was $40. Write a linear model for the price per View
gallon of crude oil, p over the number of decades t. Solution
DD Jimmy & Co. organized a fundraiser for the cyclone victims. Each child ticket costs $4 and an adult ticket costs $6. Write an equation that models the total amount collected, if the View
company collected $829. [Let the total number of child tickets sold be x and the total number of adult tickets sold be y. ] Solution | {"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgehmxkjjbj&.html","timestamp":"2014-04-18T23:17:20Z","content_type":null,"content_length":"101804","record_id":"<urn:uuid:e7e61278-c8d5-49b5-a542-8824e7c3975a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00303-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matrix-Vector Algebra
Multiplying A Matrix And A Vector
You are at: Basic Concepts - Circuits - Mathematical Background - Matrix-Vector Multiplication
Return to Table of Contents Multiplying A Matrix And A Vector
Multiplying a matrix and a vector is a special case of matrix multiplication. Circuit equations and state equations representing linear system dynamics contain products of a matrix and a vector. In
the first lesson on circuit analysis, equations that come about by writing node equations can be put into a vector-matrix representation that includes a term that is a matrix - the conductance matrix
- multiplied by a vector - the vector of node voltages. (Click here to go to that point in the lessons where that is presented.)
Since vector-matrix representations are encountered often in electrical engineering, you need to be very familiar with basic operations. In this lesson, we will examine mutiplying a matrix and a
vector. In the basic lesson on circuits, we encountered this vector-matrix representation for the circuit below.
The form we are interested in is this. We want to be able to evaluate a matrix vector product of this form whenever we encounter one.
The algorithm for computing the product is best presented visually. Here it is.
There are some things to remember about matrix-vector multiplication.
• The matrix is assumed to be N x M. In other words:
□ The matrix has N rows.
□ The matrix has M columns.
□ For example, a 2 x 3 matrix has 2 rows and 3 columns.
• In matrix-vector multiplication, if the matrix is N x M, then the vector must have a dimension, M.
□ In other words, the vector will have M entries.
□ If the matrix is 2 x 3, then the vector must be 3 dimensional.
• This is usually stated as saying the matrix and vector must be conformable.
• Then, if the matrix and vector are conformable, the product of the matrix and the vector is a resultant vector that has a dimension of N. (So, the result could be a different size than the
original vector!)
□ For example, if the matrix is 2 x 3, and the vector is 3 dimensional, the result of the multiplication would be a vector of 2 dimensions.
It is possible to express the calculations mathematically.
• Let the matrix be represented by A.
□ The elements of A are a[ij], where,
□ i is the row index and takes on values from 1 to M.
□ j is the column index and takes on values from 1 to N.
• Let the vector be represented by b.
□ The elements of b are b[j], where,
□ j is the index and takes on values from 1 to N.
• The product is c = A*b,
□ The product is a vector of length M.
Then, the calculation for the the terms in the product vector are given by:
This expression just puts the process for calculating the product into standard mathematical form. What it says to do is the following.
• To calculate the j^th entry in the product vector.
□ Multiply entries in the j^th row of the matrix, A, by the corresponding entries in the vector, b, and sum all of the terms.
So, now you should be able to perform these calculations. Let's look at some example problems. Questions
Q1. For this matrix and vector, is the product defined?
Q2. For this matrix and vector, is the product defined?
P1 In this matrix-vector product, the result is a vector, c, with two components, c[1] and c[2]. Calculate the components of the product and enter your answer in the spaces below.
First, calculate the value of c[1].
Your grade is:
P1 Next, calculate the value of c[2].
Your grade is:
Finally, we have a calculator you can use to avoid doing these kinds of problems by hand. Here is the calculator. Here's how to use the calculator.
• Determine the size of the matrix and enter N (#rows) and M (#columns).
□ The maximum matrix size is 5 x 5.
• Create a matrix and a vector by clicking the button. You will automatically get a conformable matrix and vector.
• Enter data into the matrix and vector.
• Click the button to multiply.
Problems Send your comments on these lessons. | {"url":"http://www.facstaff.bucknell.edu/mastascu/elessonshtml/Circuit/MatVecMultiply.htm","timestamp":"2014-04-18T08:41:14Z","content_type":null,"content_length":"13929","record_id":"<urn:uuid:3274f8af-1ffa-490e-b8b7-72c54e1be1a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00000-ip-10-147-4-33.ec2.internal.warc.gz"} |
Milton, WA Prealgebra Tutor
Find a Milton, WA Prealgebra Tutor
...My unique mix of industry (research scientist) and academia (PhD in electrical engineering) not only enables me to teach students how to solve problems but most importantly why they need to
learn the material. With the associated costs with tutoring, I concentrate on helping students obtain self...
45 Subjects: including prealgebra, chemistry, physics, calculus
...I have over four years of experience as a tutor, working with students from the elementary through the college level. When I work with students, I am aiming for more than just good test scores
- I will build confidence so that my students know that they know the material. Math is my passion, not just what I majored in.
8 Subjects: including prealgebra, calculus, geometry, algebra 1
...I use various theatre games and warm ups, and I strongly emphasize scene by scene analysis. I consider voice lessons to be an extension of acting lessons, and I primarily focus on musical
theatre. My approach to teaching voice revolves around getting the student to convey the emotional content and the dramatic meaning of the song.
22 Subjects: including prealgebra, reading, English, chemistry
I have an Associate's degree from Tacoma Community College, and I have been tutoring at TCC for the past twelve years. Most of my tutoring experience has been with adults, but I also help high
school students who are in the Running Start program. I can tutor any level of math from arithmetic to intermediate algebra, and even a little bit of precalculus.
8 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I am a 1st generation Latin American female, my parents were born in Mexico and Costa Rica. As a result, I am bilingual, having had to learn Spanish as a second language since most of my
extended family does not speak English. Thus, I have been speaking Spanish since an early age, have lived in Spanish-speaking countries and have some experience acting as a translator.
25 Subjects: including prealgebra, Spanish, chemistry, writing
Related Milton, WA Tutors
Milton, WA Accounting Tutors
Milton, WA ACT Tutors
Milton, WA Algebra Tutors
Milton, WA Algebra 2 Tutors
Milton, WA Calculus Tutors
Milton, WA Geometry Tutors
Milton, WA Math Tutors
Milton, WA Prealgebra Tutors
Milton, WA Precalculus Tutors
Milton, WA SAT Tutors
Milton, WA SAT Math Tutors
Milton, WA Science Tutors
Milton, WA Statistics Tutors
Milton, WA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Algona, WA prealgebra Tutors
Auburn, WA prealgebra Tutors
Covington, WA prealgebra Tutors
Dupont, WA prealgebra Tutors
Edgewood, WA prealgebra Tutors
Federal Way prealgebra Tutors
Fife, WA prealgebra Tutors
Fircrest, WA prealgebra Tutors
Graham, WA prealgebra Tutors
Jovita, WA prealgebra Tutors
Maple Valley prealgebra Tutors
Pacific, WA prealgebra Tutors
Puy, WA prealgebra Tutors
Puyallup prealgebra Tutors
Sumner, WA prealgebra Tutors | {"url":"http://www.purplemath.com/Milton_WA_prealgebra_tutors.php","timestamp":"2014-04-20T00:00:15Z","content_type":null,"content_length":"24185","record_id":"<urn:uuid:9353e284-6072-4666-bfd9-0003f55d7454>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
r Logic
What will be the best algorithm or logic that can be used for game like greedy spiders. As far as I can think it is based on A* algorithm ? where the shortest path has to be search between the spider
and the flies....
is there any better logic to implement that game ?
I would like to learn the concept behind it ? How they will be managing nodes and edges concept ?
I am planning to make such game but do not know from where should I start | {"url":"http://www.dreamincode.net/forums/topic/283948-how-to-go-about-game-like-geedy-spiders-or-logic-behind-it/page__pid__1673191__st__0","timestamp":"2014-04-20T07:41:42Z","content_type":null,"content_length":"103285","record_id":"<urn:uuid:b8052e70-dc56-4329-b7e6-393625cdfb9b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
Preliminary discussion of the logical design of an electronic
computing instrument
Preliminary discussion of the logical design of an electronic computing instrument^1
Arthur W. Burks / Herman H. Goldstine / John von Neumann
1. Principal components of the machine
1.1. Inasmuch as the completed device will be a general-purpose computing machine it should contain certain main organs relating to arithmetic, memory-storage, control and connection with the human
operator. It is intended that the machine be fully automatic in character, i.e. independent of the human operator after the computation starts. A fuller discussion of the implications of this remark
will be given in Sec. 3 below.
1.2. It is evident that the machine must be capable of storing in some manner not only the digital information needed in a given computation such as boundary values, tables of functions (such as the
equation of state of a fluid) and also the intermediate results of the computation (which may be wanted for varying lengths of time), but also the instructions which govern the actual routine to be
performed on the numerical data. In a special-purpose machine these instructions are an integral part of the device and constitute a part of its design structure. For an all-purpose machine it must
be possible to instruct the device to carry out any computation that can be formulated in numerical terms. Hence there must be some organ capable of storing these program orders. There must,
moreover, be a unit which can understand these instructions and order their execution.
1.3. Conceptually we have discussed above two different forms of memory: storage of numbers and storage of orders. If, however, the orders to the machine are reduced to a numerical code and if the
machine can in some fashion distinguish a number from an order, the memory organ can be used to store both numbers and orders. The coding of orders into numeric form is discussed in 6.3 below.
1.4. If the memory for orders is merely a storage organ there must exist an organ which can automatically execute the orders stored in the memory. We shall call this organ the Control.
1.5. Inasmuch as the device is to be a computing machine there must be an arithmetic organ in it which can perform certain of the elementary arithmetic operations. There will be, therefore, a unit
capable of adding, subtracting, multiplying and dividing. It will be seen in 6.6 below that it can also perform additional operations that occur quite frequently.
The operations that the machine will view as elementary are clearly those which are wired into the machine. To illustrate, the operation of multiplication could be eliminated from the device as an
elementary process if one were willing to view it as a properly ordered series of additions. Similar remarks apply to division. In general, the inner economy of the arithmetic unit is determined by a
compromise between the desire for speed of operation-a non-elementary operation will generally take a long time to perform since it is constituted of a series of orders given by the control-and the
desire for simplicity, or cheapness, of the machine.
1.6. Lastly there must exist devices, the input and output organ, whereby the human operator and the machine can communicate with each other. This organ will be seen below in 4.5, where it is
discussed, to constitute a secondary form of automatic memory.
2. First remarks on the memory
2.1. It is clear that the size of the memory is a critical consideration in the design of a satisfactory general-purpose computing
From A. H. Taub (ed.), "Collected Works of John von Neumann," vol. 5, pp. 34-79, The Macmillan Company, New York, 1963. Taken from report to U. S. Army Ordnance Department, 1946. See also
Bibliography Burks, Goldstine and von Neumann, 1962a, 1962b, 1963; and Goldstine and von Neumann 1963a, 1963b, 1963c, 1963d.
machine. We proceed to discuss what quantities the memory should store for various types of computations.
2.2. In the solution of partial differential equations the storage requirements are likely to be quite extensive, In general, one must remember not only the initial and boundary conditions and any
arbitrary functions that enter the problem but also an extensive number of intermediate results.
a For equations of parabolic or hyperbolic type in two independent variables the integration process is essentially a double induction. To find the values of the dependent variables at time t
+ D t one integrates with respect to x from one boundary to the other by utilizing the data at time t as if they were coefficients which contribute to defining the problem of this
Not only must the memory have sufficient room to store these intermediate data but there must be provisions whereby these data can later be removed, i.e. at the end of the (t + D t) cycle,
and replaced by the corresponding data for the (t + 2D t) cycle. This process of removing data from the memory and of replacing them with new information must, of course, be done quite
automatically under the direction of the control.
b For total differential equations the memory requirements are clearly similar to, but smaller than, those discussed in (a) above.
c Problems that are solved by iterative procedures such as systems of linear equations or elliptic partial differential equations, treated by relaxation techniques, may be expected to require
quite extensive memory capacity. The memory requirement for such problems is apparently much greater than for those problems in (a) above in which one needs only to store information
corresponding to the instantaneous value of one variable [t in (a) above], while now entire solutions (covering all values of all variables) must be stored. This apparent discrepancy in
magnitudes can, however, be somewhat overcome by the use of techniques which permit the use of much coarser integration meshes in this case, than in the cases under (a).
2.3. It is reasonable at this time to build a machine that can conveniently handle problems several orders of magnitude more complex than are now handled by existing machines, electronic or
electro-mechanical. We consequently plan on a fully automatic electronic storage facility of about 4,000 numbers of 40 binary digits each. This corresponds to a precision of 2- ^40 ~ 0.9 x 10- ^12,
i.e. of about 12 decimals. We believe that this memory capacity exceeds the capacities required for most problems that one deals with at present by a factor of about 10. The precision is also safely
higher than what is required for the great majority of present day problems. In addition, we propose that we have a subsidiary memory of much larger capacity, which is also fully automatic, on some
medium such as magnetic wire or tape.
3. First remarks on the control and code
3.1. It is easy to see by formal-logical methods that there exist codes that are in abstracto adequate to control and cause the execution of any sequence of operations which are individually
available in the machine and which are, in their entirety, conceivable by the problem planner. The really decisive considerations from the present point of view, in selecting a code, are more of a
practical nature: simplicity of the equipment demanded by the code, and the clarity of its application to the actually important problems together with the speed of its handling of those problems. It
would take us much too far afield to discuss these questions at all generally or from first principles. We will therefore restrict ourselves to analyzing only the type of code which we now envisage
for our machine.
3.2. There must certainly be instructions for performing the fundamental arithmetic operations. The specifications for these orders will not be completely given until the arithmetic unit is described
in a little more detail.
3.3. It must be possible to transfer data from the memory to the arithmetic organ and back again. In transferring information from the arithmetic organ back into the memory there are two types we
must distinguish: Transfers of numbers as such and transfers of numbers which are parts of orders. The first case is quite obvious and needs no further explication. The second case is more subtle and
serves to illustrate the generality and simplicity of the system. Consider, by way of illustration, the problem of interpolation in the system. Let us suppose that we have formulated the necessary
instructions for performing an interpolation of order n in a sequence of data. The exact location in the memory of the (n + 1) quantities that bracket the desired functional value is, of course, a
function of the argument. This argument probably is found as the result of a computation in the machine. We thus need an order which can substitute a number into a given order-in the case of
interpolation the location of the argument or the group of arguments that is nearest in our table to the desired value. By means of such an order the results of a computation can be introduced into
the instructions governing that or a different computation. This makes it possible for a sequence of instructions to be used with different sets of numbers located in different parts of the memory.
Section 1 Processors with one address per instruction
To summarize, transfers into the memory will be of two sorts:
Total substitutions, whereby the quantity previously stored is cleared out and replaced by a new number. Partial substitutions in which that part of an order containing a memory location-number-we
assume the various positions in the memory are enumerated serially by memory location-numbers-is replaced by a new memory location-number.
3.4. It is clear that one must be able to get numbers from any part of the memory at any time. The treatment in the case of orders can, however, b more methodical since one can at least partially
arrange the control instructions in a linear sequence. Consequently the control will be so constructed that it will normally proceed from place n in the memory to place (n + 1) for its next
3.5. The utility of an automatic computer lies in the possibility of using a given sequence of instructions repeatedly, the number of times it is iterated being either preassigned or dependent upon
the results of the computation. When the iteration is completed a different sequence of orders is to be followed, so we must, in most cases, give two parallel trains of orders preceded by an
instruction as to which routine is to be followed. This choice can be made to depend upon the sign of a number (zero being reckoned as plus for machine purposes). Consequently, we introduce an order
(the conditional transfer order) which will, depending on the sign of a given number, cause the proper one of two routines to be executed.
Frequently two parallel trains of orders terminate in a common routine. It is desirable, therefore, to order the control in either case to proceed to the beginning point of the common routine. This
unconditional transfer can be achieved either by the artificial use of a conditional transfer or by the introduction of an explicit order for such a transfer.
3.6. Finally we need orders which will integrate the input-output devices with the machine. These are discussed briefly in 6.8.
3.7. We proceed now to a more detailed discussion of the machine. Inasmuch as our experience has shown that the moment one chooses a given component as the elementary memory unit, one has also more
or less determined upon much of the balance of the machine, we start by a consideration of the memory organ. In attempting an exposition of a highly integrated device like a computing machine we do
not find it possible, however, to give an exhaustive discussion of each organ before completing its description. It is only in the final block diagrams that anything approaching a complete unit can
be achieved.
The time units to be used in what follows will be:
1 m sec = 1 microsecond = 10- ^6 seconds
1 msec = 1 millisecond = 10- ^3 seconds
4. The memory organ
4.1. Ideally one would desire an indefinitely large memory capacity such that any particular aggregate of 40 binary digits, or word (cf. 2.3), would be immediately available-i.e. in a time which is
somewhat or considerably shorter than the operation time of a fast electronic multiplier. This may be assumed to be practical at the level of about 100 m sec. Hence the availability time for a word
in the memory should be 5 to 50 m sec. It is equally desirable that words may be replaced with new words at about the same rate. It does not seem possible physically to achieve such a capacity. We
are therefore forced to recognize the possibility of constructing a hierarchy of memories, each of which has greater capacity than the preceding but which is less quickly accessible.
The most common forms of storage in electrical circuits are the flip-flop or trigger circuit, the gas tube, and the electromechanical relay. To achieve a memory of n words would, of course, require
about 40n such elements, exclusive of the switching elements. We saw earlier (cf. 2.2) that a fast memory of several thousand words is not at all unreasonable for an all-purpose instrument. Hence,
about 10^5 flip-flops or analogous elements would be required! This would, of course, be entirely impractical.
We must therefore seek out some more fundamental method of storing electrical information than has been suggested above. One criterion for such a storage medium is that the individual storage organs,
which accommodate only one binary digit each, should not be macroscopic components, but rather microscopic elements of some suitable organ. They would then, of course, not be identified and switched
to by the usual macroscopic wire connections, but by some functional procedure in manipulating that organ.
One device which displays this property to a marked degree is the iconoscope tube. In its conventional form it possesses a linear resolution of about one part in 500. This would correspond to a
(two-dimensional) memory capacity of 500 x 500 = 2.5 x 10^5. One is accordingly led to consider the possibility of storing electrical charges on a dielectric plate inside a cathode-ray tube.
Effectively such a tube is nothing more than a myriad of electrical capacitors which can be connected into the circuit by means of an electron beam.
Actually the above mentioned high resolution and concomitant memory capacity are only realistic under the conditions of television-image storage, which are much less exigent in respect to
the reliability of individual markings than what one can accept in the storage for a computer. In this latter case resolutions of one part in 20 to 100, i.e. memory capacities of 400 to 10,000, would
seem to be more reasonable in terms of equipment built essentially along familiar lines.
At the present time the Princeton Laboratories of the Radio Corporation of America are engaged in the development of a storage tube, the Selectron, of the type we have mentioned above. This tube is
also planned to have a non-amplitude-sensitive switching system whereby the electron beam can be directed to a given spot on the plate within a quite small fraction of a millisecond. Inasmuch as the
storage tube is the key component of the machine envisaged in this report we are extremely fortunate in having secured the cooperation of the RCA group in this as well as in various other
An alternate form of rapid memory organ is the acoustic feedback delay line described in various reports on the EDVAC. (This is an electronic computing machine being developed for the Ordnance
Department, U.S. Army, by the University of Pennsylvania, Moore School of Electrical Engineering.) Inasmuch as that device has been so clearly reported in those papers we give no further discussion.
There are still other physical and chemical properties of matter in the presence of electrons or photons that might be considered, but since none is yet beyond the early discussion stage we shall not
make further mention of them.
4.2. We shall accordingly assume throughout the balance of this report that the Selectron is the modus for storage of words at electronic speeds. As now planned, this tube will have a capacity of 2^
12 = 4,096 » 4,000 binary digits. To achieve a total electronic storage of about 4,000 words we propose to use 40 Selectrons, thereby achieving a memory of 212 words of 40 binary digits each. (Cf.
again 2.3.)
4.3. There are two possible means for storing a particular word in the Selectron memory-or, in fact, in either a delay line memory or in a storage tube with amplitude-sensitive deflection. One method
is to store the entire word in a given tube and then to get the word out by picking out its respective digits in a serial fashion. The other method is to store in corresponding places in each of the
40 tubes one digit of the word. To get a word from the memory in this scheme requires, then, one switching mechanism to which all 40 tubes are connected in parallel. Such a switching scheme seems to
us to be simpler than the technique needed in the serial system and is, of course, 40 times faster. We accordingly adopt the parallel procedure and thus are led to consider a so-called parallel
machine, as contrasted with the serial principles being considered for the EDVAC. (In the EDVAC the peculiar characteristics of the acoustic delay line, as well as various other considerations, seem
to justify a serial procedure. For more details, cf. the reports referred to in 4.1.) The essential difference between these two systems lies in the method of performing an addition; in a parallel
machine all corresponding pairs of digits are added simultaneously, whereas in a serial one these pairs are added serially in time.
4.4. To summarize, we assume that the fast electronic memory consists of 40 Selectrons which are switched in parallel by a common switching arrangement. The inputs of the switch are controlled by the
4.5. Inasmuch as a great many highly important classes of problems require a far greater total memory than 2^12 words, we now consider the next stage in our storage hierarchy. Although the solution
of partial differential equations frequently involves the manipulation of many thousands of words, these data are generally required only in blocks which are well within the 2^12 capacity of the
electronic memory. Our second form of storage must therefore be a medium which feeds these blocks of words to the electronic memory. It should be controlled by the control of the computer and is thus
an integral part of the system, not requiring human intervention.
There are evidently two distinct problems raised above. One can choose a given medium for storage such as teletype tapes, magnetic wire or tapes, movie film or similar media. There still remains the
problem of automatic integration of this storage medium with the machine. This integration is achieved logically by introducing appropriate orders into the code which can instruct the machine to read
or write on the medium, or to move it by a given amount or to a place with given characteristics. We discuss this question a little more fully in 6.8.
Let us return now to the question of what properties the secondary storage medium should have. It clearly should be able to store information for periods of time long enough so that only a few per
cent of the total computing time is spent in re-registering information that is "fading off." It is certainly desirable, although not imperative, that information can be erased and replaced by new
data. The medium should be such that it can be controlled, i.e. moved forward and backward, automatically. This consideration makes certain media, such as punched cards, undesirable. While cards can,
of course, be printed or read by appropriate orders from some machine, they are not well adapted to problems in which the output data are fed directly back into the machine, and are required in a
sequence which is non-monotone with respect to the order of the cards. The medium should be capable of remembering very large numbers of data at a much smaller price
96 Part 2 ÷ The instruction-set processor: main-line computers
Section 1 ÷ Processors with one address per instruction
than electronic devices. It must be fast enough so that, even when it has to be used frequently in a problem, a large percentage of the total solution time is not spent in getting data into and out
of this medium and achieving the desired positioning on it. If this condition is not reasonably well met, the advantages of the high electronic speeds of the machine will be largely lost.
Both light- or electron-sensitive film and magnetic wires or tapes, whose motions are controlled by servo-mechanisms integrated with the control, would seem to fulfil our needs reasonably well. We
have tentatively decided to use magnetic wires since we have achieved reliable performance with them at pulse rates of the order of 25,000/sec and beyond.
4.6. Lastly our memory hierarchy requires a vast quantity of dead storage, i.e. storage not integrated with the machine. This storage requirement may be satisfied by a library of wires that can be
introduced into the machine when desired and at that time become automatically controlled. Thus our dead storage is really nothing but an extension of our secondary storage medium. It differs from
the latter only in its availability to the machine.
47. We impose one additional requirement on our secondary memory. It must be possible for a human to put words on to the wire or other substance used and to read the words put on by the machine. In
this manner the human can control the machine's functions. It is now clear that the secondary storage medium is really nothing other than a part of our input-output system, cf. 6.8.4 for a
description of a mechanism for achieving this.
4.8. There is another highly important part of the input-output which we merely mention at this time, namely, some mechanism for viewing graphically the results of a given computation. This can, of
course, be achieved by a Selectron-like tube which causes its screen to fluoresce when data are put on it by an electron beam.
4.9. For definiteness in the subsequent discussions we assume that associated with the output of each Selectron is a flip-flop. This assemblage of 40 flip-flops we term the Selectron Register.
5. The arithmetic organ
5.1. In this section we discuss the features we now consider desirable for the arithmetic part of our machine. We give our tentative conclusions as to which of the arithmetic operations should be
built into the machine and which should be programmed. Finally, a schematic of the arithmetic unit is described.
5.2. In a discussion of the arithmetical organs of a computing machine one is naturally led to a consideration of the number system to be adopted. In spite of the longstanding tradition of building
digital machines in the decimal system, we feel strongly in favor of the binary system for our device. Our fundamental unit of memory is naturally adapted to the binary system since we do not attempt
to measure gradations of charge at a particular point in the Selectron but are content to distinguish two states. The flip-flop again is truly a binary device. On magnetic wires or tapes and in
acoustic delay line memories one is also content to recognize the presence or absence of a pulse or (if a carrier frequency is used) of a pulse train, or of the sign of a pulse. (We will not discuss
here the ternary possibilities of a positive-or-negative-or-no-pulse system and their relationship to questions of reliability and checking, nor the very interesting possibilities of carrier
frequency modulation.) Hence if one contemplates using a decimal system with either the iconoscope or delay-line memory one is forced into a binary coding of the decimal system-each decimal digit
being represented by at least a tetrad of binary digits. Thus an accuracy of ten decimal digits requires at least 40 binary digits. In a true binary representation of numbers, however, about 33
digits suffice to achieve a precision of 10^10. The use of the binary system is therefore somewhat more economical of equipment than is the decimal.
The main virtue of the binary system as against the decimal is, however, the greater simplicity and speed with which the elementary operations can be performed. To illustrate, consider multiplication
by repeated addition. In binary multiplication the product of a particular digit of the multiplier by the multiplicand is either the multiplicand or null according as the multiplier digit is 1 or 0.
In the decimal system, however, this product has ten possible values between null and nine times the multiplicand, inclusive. Of course, a decimal number has only 1og[10]2 ~ 0.3 times as many digits
as a binary number of the same accuracy, but even so multiplication in the decimal system is considerably longer than in the binary system. One can accelerate decimal multiplication by complicating
the circuits, but this fact is irrelevant to the point just made since binary multiplication can likewise be accelerated by adding to the equipment. Similar remarks may be made about the other
An additional point that deserves emphasis is this: An important part of the machine is not arithmetical, but logical in nature. Now logics, being a yes-no system, is fundamentally binary. Therefore
a binary arrangement of the arithmetical organs contributes very significantly towards producing a more homogeneous machine, which can be better integrated and is more efficient.
The one disadvantage of the binary system from the human point of view is the conversion problem. Since, however, it is completely known how to convert numbers from one base to
another and since this conversion can be effected solely by the use of the usual arithmetic processes there is no reason why the computer itself cannot carry out this conversion. It might b argued
that this is a time consuming operation. This, however is not the case. (Cf. 9.6 and 9.7 of Part II. Part II is a report issued under the title Planning and Coding of Problems for an Electronic
Computing Instrument.^1) Indeed a general-purpose computer, used as a scientific research tool, is called upon to do a very great number of multiplications upon a relatively small amount of input
data, and hence the time consumed in the decimal to binary conversion is only a trivial percentage of the total computing time. A similar remark is applicable to the output data.
In the preceding discussion we have tacitly assumed the desirability of introducing and withdrawing data in the decimal system. We feel, however, that the base 10 may not even be a permanent feature
in a scientific instrument and consequently will probably attempt to train ourselves to use numbers base 2 or 8 or 16. The reason for the bases 8 or 16 is this: Since 8 and 16 are powers of 2 the
conversion to binary is trivial; since both are about the size of 10, they violate many of our habits less badly than base 2. (Cf. Part II, 9.4.)
5.3. Several of the digital computers being built or planned in this country and England are to contain a so-called "floating decimal point". This is a mechanism for expressing each word as a
characteristic and a mantissa-e.g. 123.45 would be carried in the machine as (0.12345,03), where the 3 is the exponent of 10 associated with the number. There appear to be two major purposes in a
"floating" decimal point system both of which arise from the fact that the number of digits in a word is a constant, fixed by design considerations for each particular machine. The first of these
purposes is to retain in a sum or product as many significant digits as possible and the second of these is to free the human operator from the burden of estimating and inserting into a problem
"scale factors"- multiplicative constants which serve to keep numbers within the limits of the machine.
There is, of course, no denying the fact that human time is consumed in arranging for the introduction of suitable scale factors. We only argue that the time so consumed is a very small percentage of
the total time we will spend in preparing an interesting problem for our machine. The first advantage of the floating point is, we feel, somewhat illusory. In order to have such a floating point one
must waste memory capacity which could otherwise be used for carrying more digits per word. It would therefore seem to us not at all clear whether the modest advantages of a floating binary point
offset the loss of memory capacity and the increased complexity of the arithmetic and control circuits.
There are certainly some problems within the scope of our device which really require more than 2- ^40 precision. To handle such problems we wish to plan in terms of words whose lengths are some
fixed integral multiple of 40, and program the machine in such a manner as to give the corresponding aggregates of 40 digit words the proper treatment. We must then consider an addition or
multiplication as a complex operation programmed from a number of primitive additions or multiplications (cf. ò ò 9, Part II). There would seem to be considerable extra difficulties in the way of
such a procedure in an instrument with a floating binary point.
The reader may remark upon our alternate spells of radicalism and conservatism in deciding upon various possible features for our mechanism. We hope, however, that he will agree, on closer
inspection, that we are guided by a consistent and sound principle in judging the merits of any idea. We wish to incorporate into the machine-in the form of circuits-only such logical concepts as are
either necessary to have a complete system or highly convenient because of the frequency with which they occur and the influence they exert in the relevant mathematical situations.
5.4. On the basis of this criterion we definitely wish to build into the machine circuits which will enable it to form the binary sum of two 40 digit numbers. We make this decision not because
addition is a logically basic notion but rather because it would slow the mechanism as well as the operator down enormously if each addition were programmed out of the more simple operations of
"and", "or", and "not". The same is true for the subtraction. Similarly we reject the desire to form products by programming them out of additions, the detailed motivation being very much the same as
in the case of addition and subtraction. The cases for division and square-rooting are much less clear.
It is well known that the reciprocal of a number a can be formed to any desired accuracy by iterative schemes. One such scheme consists of improving an estimate X by forming X' = 2X - aX^2. Thus the
new error 1 - aX' is (1 - aX)^2, which is the square of the error in the preceding estimate. We notice that in the formation of X', there are two bona fide multiplications-we do not consider
multiplication by 2 as a true product since we will have a facility for shifting right or left in one or two pulse times. If then we somehow could guess 1/a to a precision of 2^-5, 6
multiplications-3 iterations-would suffice to give a final result good to 2^-40. Accordingly a small table of 2^4 entries could be used to get the initial estimate of 1/a. In this way a reciprocal 1/
See Bibliography [Goldstine and von Neumann, 1963b, 1963c, 1963d]. References in this chapter are all to this report.
Section 1 Processors with one address per instruction
could be formed in 6 multiplication times, and hence a quotient b/a in 7 multiplication times. Accordingly we see that the question of building a divider s really a function of how fast it can be
made to operate compared to the iterative method sketched above: In order to justify its existence, a divider must perform a division in a good deal less than 7 multiplication times. We have,
however, conceived a divider which is much faster than these 7 multiplication times and therefore feel justified in building it, especially since the amount of equipment needed above the requirements
of the multiplier is not important.
It is, of course, also possible to handle square roots by iterative techniques. In fact, if X is our estimate of a^1/2, then X' = 1/2(X + a/X) is a better estimate. We se that this scheme involves
one division per iteration. As will be seen below in our more detailed examination of the arithmetic organ we do not include a square- rooter in our plans because such a device would involve more
equipment than we feel is desirable in a first model. (Concerning the iterative method of square-rooting, cf. 8.10 in Part II)
55. The first part of our arithmetic organ requires little discussion at this point. It should be a parallel storage organ which can receive a number and add it to the one already in it, which is
also able to clear its contents and which can transmit what it contains. We will call such an organ an Accumulator. It is quite conventional in principle in past and present computing machines of the
most varied types, e.g. desk multipliers, standard IBM counters, more modem relay machines, the ENIAC. There are of, course, numerous ways to build such a binary accumulator. We distinguish two broad
types of such devices: static, and dynamic or pulse-type accumulators. These will be discussed in 5.11, but it is first necessary to make a few remarks concerning the arithmetic of binary addition.
In a parallel accumulator, the first step in an addition is to add each digit of the addend to the corresponding digit of the augend. The second step is to perform the carries, and this must be done
in sequence since a carry may produce a carry. In the worst case, 39 carries will occur. Clearly it is inefficient to allow 39 times as much time for the second step (performing the carries) as for
the first step (adding the digits). Hence either the carries must be accelerated, or use must be made of the average number of carries or both.
5.6. We shall show that for a sum of binary words, each of length n, the length of the largest carry sequence is on the average not in excess of ^2log n. Let p[n](v) designate the probability that a
carry sequence is of length v or greater in the sum of two binary words of length n. Then clearly p[n](v) - p[n](v + 1) is the probability that the largest carry sequence is of length exactly v and
the weighted average
Indeed, p[n](v) is the probability that the sum of two n-digit numbers contains a carry sequence of length ³ v. This probability obtains by adding the probabilities of two mutually exclusive
alternatives: First: Either the n - 1 first digits of the two numbers by themselves contain a carry sequence of length ³ v. This has the probability p[n-1](v). Second: The n - 1 first digits of the
two numbers by themselves do not contain a carry sequence of length ³ v. In this case any carry sequence of length ³ v in the total numbers (of length n) must end with the last digits of the total
sequence. Hence these must form the combination 1, 1. The next v - 1 digits must propagate the carry, hence each of these must form the combination 1, 0 or 0, 1. (The combinations 1, 1 and 0, 0 do
not propagate a carry.) The probability of the combination 1, 1 is 1/4, that one of the alternative combinations 1, 0 or 0, 1 is 1/2 . The total probability of this sequence is therefore 1/4(1/2)^v-1
= (1/2)^v+1. The remaining n - v digits must not contain a carry sequence of length ³ v. This has the probability 1 - p[n-v](v). Thus the probability of the second case is [1 - p[n-v](v)]/2 ^v+l.
Combining these two cases, the desired relation
obtains. The observation that p[n](v) = 0 if v >n is trivial.
We see with the help of the formulas proved above that p[n](v) - p[n-1](v) is always £ 1/2 ^v+1, and hence that the sum
is not in excess of (n - v + l)/2 ^v+1 since there are n - v + 1 terms in the sum; since, moreover, each p[n](v) is a probability, it is not greater than 1. Hence we have
This last expression is clearly linear in n in the interval 2^K £ n £ 2 ^K+1, and it is =K for n = 2^K and =K+ 1 for n = 2 ^K+1 , i.e it is = ^2log n at both ends of this interval. Since the function
^2log n is everywhere concave from below, it follows that our expression is £ ^2log n throughout this interval. Thus a[n] £ ^2log n. This holds for all K, i.e. for all n, and it is the in equality
which we wanted to prove.
For our case n = 40 we have a[n] £ log[2]40 ~5.3, i.e. an average length of about 5 for the longest carry sequence. (The actual value of a[40 ]is 4.62.)
5.7. Having discussed the addition, we can now go on to the subtraction. It is convenient to discuss at this point our treatment of negative numbers, and in order to do that right, it is desirable to
make some observations about the treatment of numbers in general.
Our numbers are 40 digit aggregates, the left-most digit being the sign digit, and the other digits genuine binary digits, with positional values 2^-1, 2^-2 , . . . , 2^-39 (going from left to
right). Our accumulator will, however, treat the sign digit, too, as a binary digit with the positional value 2^0-at least when it functions as an adder. For numbers between 0 and 1 this is clearly
all right: The left-most digit will then be 0, and if 0 at this place is taken to represent a + sign, then the number is correctly expressed with its sign and 39 binary digits.
Let us now consider one or more unrestricted 40 binary digit numbers. The accumulator will add them, with the digit-adding and the carrying mechanisms functioning normally and identically in all 40
positions. There is one reservation, however: If a carry originates in the left-most position, then it has nowhere to go from there (there being no further positions to the left) and is "lost". This
means, of course, that the addend and the augend, both numbers between 0 and 2, produced a sum exceeding 2, and the accumulator, being unable to express a digit with a positional value 21, which
would now be necessary, omitted 2. That is, the sum was formed correctly, excepting a possible error 2. If several such additions are performed in succession, then the ultimate error may be any
integer multiple of 2. That is, the accumulator is an adder which allows errors that are integer multiples of 2-it is an adder modulo 2.
It should be noted that our convention of placing the binary point immediately to the right of the left-most digit has nothing to do with the structure of the adder. In order to make this point
clearer we proceed to discuss the possibilities of positioning the binary point in somewhat more detail.
We begin by enumerating the 40 digits of our numbers (words) from left to right. In doing this we use an index h = 1, . . . , 40. Now we might have placed the binary point just as well between digits
j and j + 1, j = 0, . . . , 40. Note, that j = .0 corresponds to the position at the extreme left (there is no digit h = j = 0); j = 40 corresponds to the position at the extreme right (there is no
position h = j + 1 = 41); and j = 1 corresponds to our above choice. Whatever our choice of /, it does not affect the correctness of the accumulator's addition. (This is equally true for subtraction,
cf. below, but not for multiplication and division, cf. 5.8.) Indeed, we have merely multiplied all numbers by 2^j-1 (as against our previous convention), and such a "change of scale" has no effect
on addition (and subtraction). However, now the accumulator is an adder which allows errors that are integer multiples of 2^j it is an adder modulo 2^j. We mention this because it is occasionally
convenient to think in terms of a convention which places the binary point at the right end of the digital aggregate. Then j = 40, our numbers are integers, and the accumulator is an adder modulo 2^
40. We must emphasize, however, that all of this, i.e. all attributions of values to j, are purely convention-i.e. it is solely the mathematician's interpretation of the functioning of the machine
and not a physical feature of the machine. This convention will necessitate measures that have to be made effective by actual physical features of the machine-i.e. the convention will become a
physical and engineering reality only when we come to the organs of multiplication.
We will use the convention j = 1, i.e. our numbers lie in 0 and 2 and the accumulator adds modulo 2.
This being so, these numbers between 0 and 2 can be used to represent all numbers modulo 2. Any real number x agrees modulo 2 with one and only one number x between 0 and 2-or, to be quite precise: 0
£ x < 2. Since our addition functions modulo 2, we see that the accumulator may be used to represent and to add numbers modulo 2.
This determines the representation of negative numbers: If x < 0, then we have to find the unique integer multiple of 2, 2s
Section 1 Processors with one address per instruction
(s = 1, 2, . . . ) such that 0 £ ~ < 2 for ~ = x + 2s (i.e. - 2s £ x < 2(1 - s)), and represent x by the digitalization of ~.
In this way, however, the sign digit character of the left-most digit is lost: It can be 0 or 1 for both x ³ 0 and x < 0, hence 0 in the left-most position can no longer be associated with the + sign
of x. This may seem a bad deficiency of the system, but it is easy to remedy-at least to an extent which suffices for our purposes. This is done as follows:
We usually work with numbers x between -1 and 1-or, to be quite precise: -1 £ x < 1. Now the ~ with 0 £ ~ < 2, which differs from x by an integer multiple of 2, behaves as follows: If x ³ 0. then 0 £
x < 1. hence ~ = x. and so 0 £ ~ < 1. the left-most-digit of ~ is 0. If x <0, then -1£ x< 0, hence ~ = x + 2, and so 1 £ ~ < 2, the left-most digit of ~ is 1. Thus the left-most digit (of ~) is now a
precise equivalent of the sign (of x): 0 corresponds to + and 1 to - .
Summing up:
The accumulator may be taken to represent all real numbers modulo 2, and it adds them modulo 2. If x lies between -1 and 1 (precisely: - 1 £ x < 1)-as it will in almost all of our uses of the
machine-then the left-most digit represents the sign: 0 is + and 1 is - .
Consider now a negative number x with -1 £ x < 0. Put x = - y, 0 < y £ 1. Then we digitalize x by representing it as x + 2 = 2 - y = 1 + (1 - y). That is, the left-most (sign) digit of x = - y is, as
it should be, 1; and the remaining 39 digits are those of the complement of y = - x = ÷ x÷ , ie. those of 1 - y. Thus we have been led to the familiar representation of negative numbers by
The connection between the digits of x and those of - x is now easily formulated, for any x ³ 0. Indeed, - x is equivalent to
(This digit index i = 1, . . . , 39 is related to our previous digit index h = 1 , . . . , 40 by i = h - 1. Actually it is best to treat as if its domain included the additional value i = 0-indeed i
= 0 then corresponds to h = 1, i.e. to the sign digit. In any case expresses the positional value of the digit to which it refers more simply than h does: This positional value is 2^-i = 2^-(h-1) .
Note that if we had positioned the binary point more generally between j and j + 1, as discussed further above, this positional value would have been 2^-(h-j) . We now have, as pointed out
previously, j = 1.) Hence its digits obtain by subtracting every digit of x from 1-by complementing each digit, i.e. by replacing 0 by 1 and 1 by 0-and then adding 1 in the right-most position (and
effecting all the carries that this may cause). (Note how the left-most digit, interpreted as a sign digit, gets inverted by this procedure as it should be.)
A subtraction x - y is therefore performed by the accumulator, Ac, as follows: Form x + y', where y' has a digit 0 or 1 where y has a digit 1 or 0, respectively, and then add 1 in the right-most
position. The last operation can be performed by injecting a carry into the right-most stage of Ac-since this stage can never receive a carry from any other source (there being no further positions
to the right).
5.8. In the light of 5.7 multiplication requires special care, because here the entire modulo 2 procedure breaks down. Indeed, assume that we want to compute a product xy, and that we had to change
one of the factors, say x, by an integer multiple of 2, say by 2. Then the product (x + 2)y obtains, and this differs from the desired xy by 2y. 2y, however, will not in general be an integer
multiple of 2, since y is not in general an integer.
We will therefore begin our discussion of the multiplication by eliminating all such difficulties, and assume that both factors x, y lie between 0 and 1. Or, to be quite precise: 0 £ x < 1, 0 £ y <
To effect such a multiplication we first send the multiplier x into a register AR, the Arithmetic Register, which is essentially just a set of 40 flip-flops whose characteristics will be discussed
below. We place the multiplicand y in the Selectron Register, SR (cf. 4.9) and use the accumulator, Ac, to form and store the partial products. We propose to multiply the entire multiplicand by the
successive digits of the multiplier in a serial fashion. There are, of course, two possible ways this can be done: We can either start with the digit in the lowest position-position 2^-39-or in the
highest position-position 2^-1-and proceed successively to the left or right, respectively. There are a few advantages from our point of view in starting with the right-most digit of the multiplier.
We therefore describe that scheme.
The multiplication takes place in 39 steps, which correspond to the 39 (non-sign) digits of the multiplier x = 0, x [1],x [2], . . . , x [39] = (0. x [1x 2], . . . ,x [39]), enumerated backwards: x
[39], . . . ,x [2],x [1]. Assume that the k - 1 first steps (k = 1,..., 39) have already taken place, involving multiplication of the multiplicand y with the k - 1 last digits of the multiplier: x
[39], . . . , x [41-k] ; and that we are now at the kth step, involving multiplication with the kth last digit: x [40-k] . Assume furthermore, that Ac now contains the quantity P[k-1], the result of
the k - 1 first steps. [This is the (k - 1)st partial product. For k = 1 clearly P[0] = 0.1 We now form 2p[k] = P[k-1] + x [40-k]y, i.e.
That is, we do nothing or add y, according to whether x [40-k] = 0 or 1. We can then form p[k] by halving 2p[k].
Note that the addition of (1) produces no carry beyond the 2^0 position, i.e. the sign digit: 0 £ p[h] < 1 is true for h = 0, and if it is true for h = k - 1 , then (1) extends it to h = k also,
since 0 £ y[k] < 1. Hence the sum in (1) is ³ 0 and <2, and no carries beyond the 2^0 position arise.
Hence p[k] obtains from 2p[k] by a simple right shift, which is combined with filling in the sign digit (that is freed by this shift) with a 0. This right shift is effected by an electronic shifter
that is part of Ac.
Thus this process produces the product xy, as desired. Note that this xy is the exact product of x and y.
Since x and y are 39 digit binaries, their exact product xy is a 78 digit binary (we disregard the sign digit throughout). However, Ac will only hold 39 of these. These are clearly the left 39 digits
of xy. The right 39 digits of xy are dropped from Ac one by one in the course of the 39 steps, or to be more specific, of the 39 right shifts. We will see later that these right 39 digits of xy
should and will also be conserved (cf. the end of this section and the end of 5.12, as well as 6.6.3). The left 39 digits, which remain in Ac, should also be rounded off, but we will not discuss this
matter here. (cf. loc. cit. above and 9.9, Part II).
To complete the general picture of our multiplication technique we must consider how we sense the respective digits of our multiplier. There are two schemes which come to one's mind in this
connection. One is to have a gate tube associated with each flip-flop of AR in such a fashion that this gate is open if a digit is 1 and closed if it is null. We would then need a 39-stage counter to
act as a switch which would successively stimulate these gate tubes to react. A more efficient scheme is to build into AR a shifter circuit which enables AR to be shifted one stage to the right each
time Ac is shifted and to sense the value of the digit in the right-most flip-flop of AR. The shifter itself requires one gate tube per stage. We need in addition a counter to count out the 39 steps
of the multiplication, but this can be achieved by a six stage binary counter. Thus the latter is more economical of tubes and has one additional virtue from our point of view which we discuss in the
next paragraph.
The choice of 40 digits to a word (including the sign) is probably adequate for most computational problems but situations certainly might arise when we desire higher precision, i.e. words of greater
length. A trivial illustration of this would be the computation of p to more places than are now known (about 700 decimals, i.e. about 2,300 binaries). More important instances are the solutions of N
linear equations in N variables for large values of N. The extra precision becomes probably necessary when N exceeds a limit somewhere between 20 and 40. A justification of this estimate has to be
based on a detailed theory of numerical matrix inversion which will be given in a subsequent report. It is therefore desirable to be able to handle numbers of 39k digits and signs by means of program
instructions. One way to achieve this end is to use k words to represent a 39k digit number with signs. (In this way 39 digits in each 40 digit word are used, but all sign digits excepting the first
one, are apparently wasted; cf. however the treatment of double precision numbers in Chapter 9, Part II.) It is, of course, necessary in this case to instruct the machine to perform the elementary
operations of arithmetic in a manner that conforms with this interpretation of k-word complexes as single numbers. (Cf. 9.8-9.10, Part II.) In order to be able to treat numbers in this manner, it is
desirable to keep not 39 digits in a product, but 78; this is discussed in more detail in 6.6.3 below. To accomplish this end (conserving 78 product digits) we connect, via our shifter circuit, the
right-most digit of Ac with the left-most non-sign digit of AR. Thus, when in the process of multiplication a shift is ordered, the last digit of Ac is transferred into the place in AR made vacant
when the multiplier was shifted.
5.9. To conclude our discussion of the multiplication of positive numbers, we note this:
As described thus far, the multiplier forms the 78 digit product, xy, for a 39 digit multipler x and a 39 digit multiplicand y. We assumed x ³ 0, y ³ 0 and therefore had xy ³ 0 and we will only
depart from these assumptions in 5.10. In addition to these, however, we also assumed x <1, y <1, i.e. the x, y have their binary points both immediately right of the sign digit, which implied the
same for xy. One might question the necessity of these additional assumptions.
Prima facie they may seem mere conventions, which affect only the mathematician's interpretation of the functioning of the machine, and not a physical feature of the machine. (Cf. the corresponding
situation in addition and subtraction, in 5.7.) Indeed, if x had its binary point between digits j and j + 1 from the left (cf. the discussion of 5.7 dealing with this j; it also applies to k below),
and y between k and k + 1, then our above method of multiplication would still give the correct result xy, provided that
Section 1 Processors with one address per instruction
the position of the binary point in xy is appropriately assigned. Specifically: Let the binary point of xy be between digits l and l + 1. x has the binary point between digits j and j + 1, and its
sign digit is 0, hence its range is 0 £ x < 2^j-1. Similarly y has the range 0 £ y < 2^k-1, and xy has the range 0 £ xy <2^l-1. Now the ranges of x and y imply that the range of xy is necessarily 0 £
xy < 2^j-1 2^k-1 = 2[ ]^j+k-2 . Hence l = j + k - 1. Thus it might seem that our actual positioning of the binary point-immediately right of the sign digit, i.e. j = k = 1-is still a mere convention.
It is therefore important to realize that this is not so: The choices of j and k actually correspond to very real, physical, engineering decisions. The reason for this is as follows: It is desirable
to base the running of the machine on a sole, consistent mathematical interpretation. It is therefore desirable that all arithmetical operations be performed with an identically conceived positioning
of the binary point in Ac. Applying this principle to x and y gives j = k. Hence the position of the binary point for xy is given by j + k - 1 = 2j- 1. If this is to be the same as for x, and y, then
2j - 1 = j, i.e. j = 1 ensues-that is, our above positioning of the binary point immediately right of the sign digit.
There is one possible escape: To place into Ac not the left 39 digits of xy (not counting the sign digit 0), but the digits j to j + 38 from the left. Indeed, in this way the position of the binary
point of xy will be (2j - 1) - (j - 1) = j, the same as for x and y.
This procedure means that we drop the left j - 1 and right 40 + j digits of xy and hold the middle 39 in Ac. Note that positioning of the binary point-means that x < 2^j-1, y <2 ^j-1 and xy can only
be used if xy <2 ^j-1. Now the assumptions secure only xy <2^2j-2. Hence xy must be 2^j-1 times smaller than it might be. This is just the thing which would be secured by the vanishing of the left j
- 1 digits that we had to drop from Ac, as shown above.
If we wanted to use such a procedure, with those dropped left j - 1 digits really existing, i.e. with j ¹ 1, then we would have to make physical arrangements for their conservation elsewhere. Also
the general mathematical planning for the machine would be definitely complicated, due to the physical fact that Ac now holds a rather arbitrarily picked middle stretch of 39 digits from among the 78
digits of xy. Alternatively, we might fail to make such arrangements, but this would necessitate to see to it in the mathematical planning of each problem, that all products turn out to be 2^j-1
times smaller than their a priori maxima. Such an observance is not at all impossible; indeed similar things are unavoidable for the other operations. [For example, with a factor 2 in addition (of
positives) or subtraction (of opposite sign quantities). Cf. also the remarks in the first part of 5.12, dealing with keeping "within range".] However, it involves a loss of significant digits, and
the choice j = 1 makes it unnecessary in multiplication.
We will therefore make our choice j = 1, i.e. the positioning of the binary point immediately right of the sign digit, binding for all that follows.
5.10. We now pass to the case where the multiplier x and the multiplicand y may have either sign + or -, i.e. any combination of these signs.
It would not do simply to extend the method of 5.8 to include the sign digits of x and y also. Indeed, we assume -1 £ x < 1, -1 £ y < 1, and the multiplication procedure in question is definitely
based on the ³ 0 interpretations of x and y. Hence if x < 0, then it is really using x + 2, and if y < 0, then it is really using y + 2. Hence for x < 0, y ³ 0 it forms
(x + 2)y = xy + 2y
for x ³ 0, y <0 it forms
x(y + 2) = xy + 2x
for x < 0, x < 0, it forms
(x + 2)(y + 2) = xy + 2x + 2y + 4
or since things may be taken modulo 2, xy + 2x + 2y. Hence correction terms -2y, -2x would be needed for x < 0, y < 0, respectively (either or both).
This would be a possible procedure, but there is one difficulty: As xy is formed, the 39 digits of the multiplier x are gradually lost from AR, to be replaced by the right 39 digits of xy. (Cf. the
discussion at the end of 5.8.) Unless we are willing to build an additional 40 stage register to hold x, therefore, x will not be available at the end of the multiplication. Hence we cannot use it in
the correction 2x of xy, which becomes necessary for y < 0.
Thus the case x < 0 can be handled along the above lines, but not the case y <0.
It is nevertheless possible to develop an adequate procedure, and we now proceed to do this. Throughout this procedure we will maintain the assumptions -1 £ x < 1, -1 £ y < 1. We proceed in several
successive steps.
First: Assume that the corrections necessitated by the possibility of y <0 have been taken care of. We permit therefore y ³ 0. We will consider the corrections necessitated by the possibility of x <
Let us disregard the sign digit of x, which is 1, i.e. replace it by 0. Then x goes over into x' = x - 1 and as -1 £ x< 0, this x' will actually behave like (x - 1) + 2 = x + 1. Hence our
multiplication procedure will produce x'y = (x + l)y = xy + y,
and therefore a correction - y is needed at the end. (Note that we did not use the sign digit of x in the conventional way. Had we done so, then a correction -2y would have been necessary, as seen
We see therefore: Consider x ~0. Perform first all necessary steps for forming x'y(y ~ 0), without yet reaching the sign digit of x (i.e. treating x as if it were ³ 0). When the time arrives at which
the digit x [0] of x has to become effective-i.e. immediately after x [1] became effective, after 39 shifts (cf. the discussion near the end of 5.8)-at which time Ac contains, say, ~ (this
corresponds to the p[39 ]of 5.8). then form
This ~ is xy. (Note the difference between this last step, forming ~, and the 39 preceding steps in 5.8, forming p[1], p[2] , . . . , p[39].)
Second: Having disposed of the possibility x < 0, we may now assume x ³ 0. With this assumption we have to treat all y ~ 0. Since y ³ 0 brings us back entirely to the familiar case of 5.8, we need to
consider the case y <0 only.
Let y' be the number that obtains by disregarding the sign digit of y' which is 1, i.e. by replacing it by 0. Again y' acts not like y - 1, but like (y - 1) + 2 = y + 1. Hence the multiplication
procedure of 5.8 will produce xy' = x(y + 1) = xy + x, and therefore a correction x is needed. (Note that, quite similarly to what we saw in the first case above, the suppression of the sign digit of
y replaced the previously recognized correction - 2x by the present one - x.) As we observed earlier, this correction - x cannot be applied at the end to the completed xy' since at that time x is no
longer available. Hence we must apply the correction - x digitwise, subtracting every digit at the time when it is last found in AR, and in a way that makes it effective with the proper positional
Third: Consider then x = 0, x [1], x [2] , . . . , x [39] = x [1], x [2] . . .x [39]). The 39 digits x [1] . . . x [39] of x are lost in the course of the 39 shifts of the multiplication procedure of
5.8, going from right to left. Thus the operation No. k + 1 (k = 0, 1 38, cf. 5.8) finds x [39-k] in the right-most stage of AR, uses it, and then loses it through its concluding right shift (of both
Ac and AR). After this step 39 - (k + 1) = 38 - k further steps, i.e. shifts follow, hence before its own concluding shift there are still 39 - k shifts to come. Hence the positional values are 2^
39-k times higher than they will be at the end. x [39- k] should appear at the end, in the correcting term - x, with the sign - and the positional value 2^-(39-k) . Hence we may inject it during the
step k + 1 (before its shift) with the sign - and the positional value 1. That is to say, - x [39-k] in the sign digit.
This, however, is inadmissible. Indeed, x [39-k] might cause carries (if x [39-k]= 1), which would have nowhere to go from the sign digit (there being no further positions to the left). This error is
at its origin an integer multiple of 2, but the 39 - k subsequent shifts reduce its positional value 2^39-k times. Hence it might contribute to the end result any integer multiple of 2^-(38-k) ¾ and
this is a genuine error.
Let us therefore add 1 - x [39-k] to the sign digit, i.e. 0 or 1 if x [39-k] is 1 or 0, respectively. We will show further below, that with this procedure there arise no carries of the inadmissible
kind. Taking this momentarily for granted, let us see what the total effect is. We are correcting not by -x but by å i^39=1 2^-i - x = 1 - 2^-39 - x. Hence a final correction by -1 + 2^-39 is needed.
Since this is done at the end (after all shifts), it may be taken modulo 2. That is to say, we must add 1 + 2^-39 i.e. 1 in each of the two extreme positions. Adding 1 in the right-most position has
the same effect as in the discussion at the end of 5.7 (dealing with the subtraction). It is equivalent to injecting a carry into the right-most stage of Ac. Adding 1 in the left-most position, i.e.
to the sign digit, produces a 1, since that digit was necessarily 0. (Indeed, the last operation ended in a shift, thus freeing the sign digit, cf. below.)
Fourth: Let us now consider the question of the carries that may arise in the 39 steps of the process described above. In order to do this, let us describe the kth step (k = 1, . . . , 39), which is
a variant of the kth step described for a positive multiplication in 5.8, in the same way in which we described the original kth step bc. cit. That is to say, let us see what the formula (1) of 5.8
has become. It is clearly 2pk = p[k-1] + (1 - x [40-k]) + x [40-k]y' i.e.
That is, we add 1 (y's sign digit) or y' (y without its sign digit), according to whether x [40-k] = 0 or 1. Then p[k] should obtain from 2p[k] again by halving.
Now the addition of (2) produces no carries beyond the 2^0 position, as we asserted earlier, for the same reason as the addition of (1) in 5.8. We can argue in the same way as there: 0 £ p[h] <1 is
true for h = 0, and if it is true for h = k - 1, then (1) extends it to h = k also, since 0 £ y'[k] £ 1. Hence the sum in (2) is ³ 0 and <2, and no carries beyond the 2^0 position arise.
Fifth: In the three last observations we assumed y < 0. Let us now restore the full generality of y ~ 0. We can then describe
Section 1 Processors with one address per instruction
the equations (1) of 5.8 (valid for y ³ 0) and (2) above (valid for y <0) by a single formula,
Thus our verbal formulation of (2) applies here, too: We add y's sign digit or y without its sign, according to whether x [40-k] = 0 or 1. All p[k] are ³ 0 and <1, and the addition of (3) never
originates a carry beyond the 2^0 position. p[k] obtains from 2p[k] by a right shift, filling the sign digit with a 0. (Cf. however, Part II, Table 2 for another sort of right shift that is desirable
in explicit form, i.e. as an order.)
For y ³ 0, xy is p[39], for y <0, xy obtains from p[39 ]by injecting a carry into the right-most stage of Ac and by placing a 1 into the sign digit in Ac.
Sixth: This procedure applies for x ³ 0. For x <0 it should also be applied, since it makes use of x's non-sign digits only, but at the end y must be subtracted from the result.
This method of binary multiplication will be illustrated in some examples in 5.15.
5.11. To complete our discussion of the multiplicative organs of our machine we must return to a consideration of the types of accumulators mentioned in 5.5. The static accumulator operates as an
adder by simultaneously applying static voltages to its two inputs-one for each of the two numbers being added. When steady-state operation is reached the total sum is formed complete with all
carries. For such an accumulator the above discussion is substantially complete, except that it should be remarked that such a circuit requires at most 39 rise times to complete a carry. Actually it
is possible that the duration of these successive rises is proportional to a lower power of 39 than the first one.
Each stage of a dynamic accumulator consists of a binary counter for registering the digit and a flip-flop for temporary storage of the carry. The counter receives a pulse if a 1 is to be added in at
that place; if this causes the counter to go from 1 to 0 a carry has occurred and hence the carry flip-flop will be set. It then remains to perform the carries. Each flip-flop has associated with it
a gate, the output of which is connected to the next binary counter to the left. The carry is begun by pulsing all carry gates. Now a carry may produce a carry, so that the process needs to be
repeated until all carry flip-flops register 0. This can be detected by means of a circuit involving a sensing tube connected to each carry flip-flop. It was shown in 5.6 that, on the average, five
pulse times (flip-flop reaction times) are required for the complete carry. An alternative scheme is to connect a gate tube to each binary counter which will detect whether an incoming carry pulse
would produce a carry and will, under this circumstance, pass the incoming carry pulse directly to the next stage. This circuit would require at most 39 rise times for the completion of the carry.
(Actually less, cf. above.)
At the present time the development of a static accumulator is being concluded. From preliminary tests it seems that it will add two numbers in about 5 m sec and will shift right or left in about 1 m
We return now to the multiplication operation. In a static accumulator we order simultaneously an addition of the multiplicand with sign deleted or the sign f the multiplicand (cf. 5.10) and a
complete carry and then a shift for each of the 39 steps. In a dynamic accumulator of the second kind just described we order in succession an addition of the multiplicand with sign deleted or the
sign of the multiplicand, a complete carry, and a shift for each of the 39 steps. In a dynamic accumulator of the first kind we can avoid losing the time required for completing the carry (in this
case an average of 5 pulse times, cf. above) at each of the 39 steps. We order an addition by the multiplicand with sign deleted or the sign of the multiplicand, then order one pulsing of the carry
gates, and finally shift the contents of both the digit counters and the carry flip-flops. This process is repeated 39 times. A simple arithmetical analysis which may be carried out in a later
report, shows that at each one of these intermediate stages a single carry is adequate, and that a complete set of carries is needed at the end only. We then carry out the complement corrections,
still without ever ordering a complete set of carry operations. When all these corrections are completed and after round-off, described below, we then order the complete carry mentioned above.
5.12. It is desirable at this point in the discussion to consider rules for rounding-off to n-digits. In order to assess the characteristics of alternative possibilities for such properly, and in
particular the role of the concept of "unbiasedness", it is necessary to visualize the conditions under which rounding-off is needed.
Every number x that appears in the computing machine is an approximation of another number x', which would have appeared if the calculation had been performed absolutely rigorously. The
approximations to which we refer here are not those that are caused by the explicitly introduced approximations of the numerical-mathematical set-up, e.g. the replacement of a (continuous)
differential equation by a (discrete) difference equation. The effect of such approximations should be evaluated mathematically by the person who plans the problem for the machine, and should not be
a direct concern of the machine. Indeed, it has to be handled
by a mathematician and cannot be handled by the machine, since its nature, complexity, and difficulty may be of any kind, depending upon the problem under consideration. The approximations which
concern us here are these: Even the elementary operations of arithmetic, to which the mathematical approximation-formulation for the machine has to reduce the true (possibly transcendental) problem,
are not rigorously executed by the machine. The machine deals with numbers of n digits, where n, no matter how large, has to be a fixed quantity. (We assumed for our machine 40 digits, including the
sign, i.e. n = 39.) Now the sum and difference of two n-digit numbers are again n-digit numbers, but their product and quotient (in general) are not. (They have, in general, 2n or ¥ -digits,
respectively.) Consequently, multiplication and division must unavoidably be replaced by the machine by two different operations which must produce n-digits under all conditions, and which, subject
to this limitation, should lie as close as possible to the results of the true multiplication and division. One might call them pseudo-multiplication and pseudo-division; however, the accepted
nomenclature terms them as multiplication and division with round-off. (We are now creating the impression that addition and subtraction are entirely free of such shortcomings. This is only true
inasmuch as they do not create new digits to the right, as multiplication and division do. However, they can create new digits to the left, i.e. cause the numbers to "grow out of range". This
complication, which is, of course, well known, is normally met by the planner, by mathematical arrangements and estimates to keep the numbers "within range". Since we propose to have our machine deal
with numbers between -1 and 1, multiplication can never cause them to "grow out of range". Division, of course, might cause this complication, too. The planner must therefore see to it that in every
division the absolute value of the divisor exceeds that of the dividend.)
Thus the round-off is intended to produce satisfactory n-digit approximations for the product xy and the quotient x/y of two n-digit numbers. Two things are wanted of the round-off: (1) The
approximation should be good, i.e. its variance from the "true" xy or x/y should be as small as practicable; (2) The approximation should be unbiased, i.e. its mean should be equal to the "true" xy
or x/y.
These desiderata must, however, be considered in conjunction with some further comments. Specifically: (a) x and y themselves are likely to be the results of similar round-offs, directly or in
directly inherent, i.e. x and y themselves should be viewed as unbiased n-digit approximations of "true" x' and y' values; (b) by talking of "variances" and "means" we are introducing statistical
concepts. Now the approximations which we are here considering are not really of a statistical nature, but are due to the peculiarities (from our point of view, inadequacies) of arithmetic and of
digital representation, and are therefore actually rigorously and uniquely determined. It seems, however, in the present state of mathematical science, rather hopeless to try to deal with these
matters rigorously. Furthermore, a certain statistical approach, while not truly justified, has always given adequate practical results. This consists of treating those digits which one does not wish
to use individually in subsequent calculations as random variables with equiprobable digital values, and of treating any two such digits as statistically independent (unless this is patently false).
These things being understood, we can now undertake to discuss round-off procedures, realizing that we will have to apply them to the multiplication and to the division.
Let x = (.x [1] . . . x [n]) and y = (.h [1] . . . h [n]) be unbiased approximations of x' and y'. Then the "true" xy = (.x [1] . . . x [nx n+1] . . . x [2n]) and the "true" x/y = (.w [1] . . . w [nw
n+1w n+2] . . .) (this goes on ad infinitum!) are approximations of x'y' and x'/y'. Before we discuss how to round them off, we must know whether the "true" xy and x/y are themselves unbiased
approximations of x'y' and x'/y'. xy is indeed an unbiased approximation of x'y', i.e. the mean of xy is the mean of x( = x') times the mean of y( = y'), owing to the independence assumption which we
made above. However, if x and y are closely correlated, e.g. for x = y, i.e. for squaring, there is a bias. It is of the order of the mean square of x - x', i.e. of the variance of x. Since x has n
digits, this variance is about l/2^2n (If the digits of x', beyond n are entirely unknown, then our original assumptions give the variance l/l2.2^2n.) Next, x/y can be written as x.y^-1, and since we
have already discussed the bias of the product, it suffices now to consider the reciprocal y^-1. Now if y is an unbiased estimate of y', then y^-1 is not an unbiased estimate of y'^-1, i.e. the mean
of y's reciprocal is not the reciprocal of y's mean. The difference is ~y^-3 times the variance of y, i.e. it is of essentially the same order as the bias found above in the case of squaring.
It follows from all this that it is futile to attempt to avoid biases of the order of magnitude l/2^2n or less. (The factor 1/12 above may seem to be changing the order of magnitude in question.
However, it is really the square root of the variance which matters and Ö (1/12 ~ 0.3 is a moderate factor.) Since we propose to use n = 39, therefore l/2^78(~3 ´ l0^-24) is the critical case. Note
that this possible bias level is 1/2^39(~2 x 10^-12) times our last significant digit. Hence we will look for round-off rules to n digits for the "true" xy = (.x [1] . . . x [nx n+1] . . . x [2n])
and x/y = (.w [1 ]. . . w [nw n+1w n+2] . . . ). The desideratum (1) which we formulated previously, that the variance should be small, is still valid. The
Section 1 Processors with one address per instruction
desideratum (2), however, that the bias should be zero, need, according to the above, only be enforced up to terms of the order 1/2^2n.
The round-off procedures, which we can use in this connection, fall into two broad classes. The first class is characterized by its ignoring all digits beyond the nth, and even the nth digit itself,
which it replaces by a 1. The second class is characterized by the procedure of adding one unit in the (n + 1)st digit, performing the carries which this may induce, and then keeping only the n first
When applied to a number of the form (.v[1 ]. . . v[n]v [n+1]v [n+2] . . . ) (ad infinitum!), the effects of either procedure are easily estimated. In the first case we may say we are dealing with
(.v[1], . . ., v [n-1]) plus a random number of the form (.0 . . . , 0v[n]v [n+1]v [n+2] . . .), i.e. random in the interval 0, 1/2^n-1 . Comparing with the rounded off (.v[1]v[2 ]. . . v[n-1]l), we
therefore have a difference random in the interval - l/2^n, l/2^n. Hence its mean is 0 and its variance 1/3 ^. 2^2n. In the second case we are dealing with (.v[1] . . . v[n]) plus a random number of
the form (.0 . . . 00v[n+1 ]v[n+2] . . . ), i . e . random in the interval 0, 1/2^n. The "rounded-off" value will be (.v[1 ]. . . v[n]) increased by 0 or by 1/2^n, according to whether the random
number in question lies in the interval 0, 1/2^n+1, or in the interval 1/2^n+1, 1/2^n. Hence comparing with the "rounded-off" value, we have a difference random in the intervals 0, 1/2^n+1, and 0, -
l/2^n+1, i.e. in the interval -1/2^n+1, 1/2^n+1. Hence its mean is 0 and its variance (1/12)2^2n.
If the number to be rounded-off has the form (.v[1 ]. . . v[n ]v[n+1] v[n+2] . . . v[n+p]) (p finite), then these results are somewhat affected. The order of magnitude of the variance remains the
same; indeed for large p even its relative change is negligible. The mean difference may deviate from 0 by amounts which are easily estimated to be of the order 1/2^n ^.1/2^p = 1/2 ^n+p.
In division we have the first situation, x/y = (.w [1 ]. . . w [nw ][n+1w n+2] . . . ), i.e. p is infinite. In multiplication we have the second one, xy = (.x [1] . . . x [nx n+1] . . . x [2n]), i.e.
p = n. Hence for the division both methods are applicable without modification. In multiplication a bias of the order of 1/2^2n may be introduced. We have seen that it is pointless to insist on
removing biases of this size. We will therefore use the unmodified methods in this case, too.
It should be noted that the bias in the case of multiplication can be removed in various ways. However, for the reasons set forth above, we shall not complicate the machine by introducing such
Thus we have two standard "round-off" methods, both unbiased to the extent to which we need this, and with the variances 1/3 ^. 2^2n, and (1/12)2^2n that is, with the dispersions (1/Ö 3)(1/2^n) =
0.58 times the last digit and (1/2 Ö 3)(1/2^n) = 0.29 times the last digit. The first one requires no carry facilities, the second one requires them.
Inasmuch as we propose to form the product x'y' in the accumulator, which has carry facilities, there is no reason why we should not adopt the rounding scheme described above which has the smaller
dispersion, i.e. the one which may induce carries. In the case, however, of division we wish to avoid schemes leading to carries since we expect to form the quotient in the arithmetic register, which
does not permit of carry operations. The scheme which we accordingly adopt is the one in which w [n] is replaced by 1. This method has the decided advantage that it enables us to write down the
approximate quotient as soon as we know its first (n - 1) digits. It will be seen in 5.14 and 6.6.4 below that our procedure for forming the quotient of two numbers will always lead to a result that
is correctly rounded in accordance with the decisions just made. We do not consider as serious the fact that our rounding scheme in the case of division has a dispersion twice as large as that in
multiplication since division is a far less frequent operation.
A final remark should be made in connection with the possible, occasional need of carrying more than n = 39 digits. Our logical control is sufficiently flexible to permit treating k (=2, 3, . . .
words as one number, and thus effecting n = 39k. In this case the round-off has to be handled differently, cf. Chapter 9, Part II. The multiplier produces all 78 digits of the basic 39 by 39 digit
multiplication: The first 39 in the Ac, the last 39 in the AR. These must then be manipulated in an appropriate manner. (For details, cf. 6.6.3 and 9.9-9.10, Part II.) The divider works for 39 digits
only: In forming x/y, it is necessary, even if x and y are available to 39k digits, to use only 39 digits of each, and a 39 digit result will appear. It seems most convenient to use this result as
the first step of a series of successive approximations. The successive improvements can then be obtained by various means. One way consists of using the well known iteration formula (cf. 5.4). For k
= 2 one such step will be needed, for k = 3, 4, two steps, for k = 5, 6, 7, 8 three steps, etc. An alternative procedure is this: Calculate the remainder, using the approximate, 39 digit, quotient
and the complete, 39k digit, divisor and dividend. Divide this again by the approximate, 39 digit, divisor, thus obtaining essentially the next 39 digits of the quotient. Repeat this procedure until
the full 39k desired digits of the quotient have been obtained.
5.13. We might mention at this time a complication which arises when a floating binary point is introduced into the machine. The operation of addition which usually takes at most 1/10 of a
multiplication time becomes much longer in a machine with floating binary since one must perform shifts and round-offs as well as additions. It would seem reasonable in this case to place the time of
an addition as about 1/3 to 1/2 of a multiplication. At this rate it is clear that the number of additions in a problem is as important a factor in the total solution time as are the number of
multiplications. (For further details concerning the floating binary point, cf. 6.6.7.)
5.14. We conclude our discussion of the arithmetic unit with a description of our method for handling the division operation. To perform a division we wish to store the dividend in SR, the partial
remainder in Ac and the partial quotient in AR. Before proceeding further let us consider the so-called restoring and non-restoring methods of division. In order to be able to make certain
comparisons, we will do this for a general base m = 2, 3
Assume for the moment that divisor and dividend are both positive. The ordinary process of division consists of subtracting from the partial remainder (at the very beginning of the process this is,
of course, the dividend) the divisor, repeating this until the former becomes smaller than the latter. For any fixed positional value in the quotient in a well-conducted division this need be done at
most m - 1 times. If, after precisely k = 0, 1, . . . , m - 1 repetitions of this step, the partial remainder has indeed become less than the divisor, then the digit k is put in the quotient (at the
position under consideration), the partial remainder is shifted one place to the left, and the whole process is repeated for the next position, etc. Note that the above comparison of sizes is only
needed at k = 0, 1, . . . , m - 2, i.e. before step 1 and after steps 1, . . . , m- 2. If the value k = m-1, i.e. the point after step m - 1, is at all reached in a well-conducted division, then it
may be taken for granted without any test, that the partial remainder has become smaller than the divisor, and the operations on the position under consideration can therefore be concluded. (In the
binary system, m = 2, there is thus only one step, and only one comparison of sizes, before this step.) In this way this scheme, known as the restoring scheme, requires a maximum of m - 1 comparisons
and utilizes the digits 0,1,..., m - 1 in each place in the quotient. The difficulty of this scheme for machine purposes is that usually the only economical method for comparing two numbers as to
size is to subtract one from the other. If the partial remainder r[n] were less than the dividend d, one would then have to add d back into r[~] - d in order to restore the remainder. Thus at every
stage an unnecessary operation would be performed. A more symmetrical scheme is obtained by not restoring. In this method (from here on we need not assume the positivity of divisor and dividend) one
compares the signs of r[n] and d; if they are of the same sign, the dividend is repeatedly subtracted from the remainder until the signs become opposite; if they are opposite, the dividend is
repeatedly added to the remainder until the signs again become like. In this scheme the digits that may occur in a given place in the quotient are evidently +1, +2 , . . . ,+(m - 1) the positive
digits corresponding to subtractions and the negative ones to additions of the dividend to the remainder.
Thus we have 2(m - 1) digits instead of the usual m digits. In the decimal system this would mean 18 digits instead of 10. This is a redundant notation. The standard form of the quotient must
therefore be restored by subtracting from the aggregate of its positive digits the aggregate of its negative digits. This requires carry facilities in the place where the quotient is stored.
We propose to store the quotient in AR, which has no carry facilities. Hence we could not use this scheme if we were to operate in the decimal system.
The same objection applies to any base m for which the digital representation in question is redundant-i.e. when 2(m - 1) > m. Now 2(m - 1) > m whenever m > 2, but 2(m - 1) = m for m = 2. Hence, with
the use of a register which we have so far contemplated, this division scheme is certainly excluded from the start unless the binary system is used.
Let us now investigate the situation in the binary system. We inquire if it is possible to obtain a quasi-quotient by using the non-restoring scheme and by using the digits 1, 0 instead of 1, -1. Or
rather we have to ask this question: Does this quasi-quotient bear a simple relationship to the true quotient?
Let us momentarily assume this question can be answered affirmatively and describe the division procedure. We store the divisor initially in Ac, the dividend in SR and wish to form the quotient in
AR. We now either add or subtract the contents of SR into Ac, according to whether the signs in Ac and SR are opposite or the same, and insert correspondingly a 0 or 1 in the right-hand place of AR.
We then shift both Ac and AR one place left, with electronic shifters that are parts of these two aggregates.
At this point we interrupt the discussion to note this: multiplication required an ability to shift right in both Ac and AR (cf. 5.8). We have now found that division similarly requires an ability to
shift left in both Ac and AR. Hence both organs must be able to shift both ways electronically. Since these abilities have to be present for the implicit needs of multiplication and division, it is
just as well to make use of them explicitly in the form of explicit orders. These are the orders 20,21 of Table 1, and of Table 2, Part II. It will, however, turn out to be convenient to arrange some
details in the shifts, when they occur explicitly under the control of those orders,
Section 1 Processors with one address per instruction
differently from when they occur implicitly under the control of a multiplication or a division. (For these things. cf. the discussion of the shifts near the end of 5.8 and in the third remark below
on one hand, and in the third remark in 7.2, Part II, on the other hand.)
Let us now resume the discussion of the division. The process described above will have to be repeated as many times as the number of quotient digits that we consider appropriate to produce in this
way. This is likely to be 39 or 40; we will determine the exact number further below.
In this process we formed digits x '[i] = 0 or 1 for the quotient, when the digit should actually have been x [i ]= -1 or 1, with x '[i] = 2x '[i] - 1. Thus we have a difference between the true
quotient z (based on the digits x [i]) and the quasi-quotient z' (based on the digits x '[i]), but at the same time a one-to-one connection. It would be easy to establish the algebraical expression
for this connection between z' and z directly, but it seems better to do this as part of a discussion which clarifies all other questions connected with the process of division at the same time.
We first make some general remarks:
First: Let x be the dividend and y the divisor. We assume, of course, -1 £ x < 1, -1 £ y < 1. It will be found that our present process of division is entirely unaffected by the signs of x and y,
hence no further restrictions on that score are required.
On the other hand, the quotient z = x/y must also fulfil -1 £ z < 1. It seems somewhat simpler although this is by no means necessary, to exclude for the purposes of this discussion z = -1, and to
demand ÷ z÷ <1. This means in terms of the dividend x and the divisor y that we exclude x = - y and assume ÷ x÷ <y.
Second: The division takes place in n steps, which correspond to the n digits x '[i] , . . .,x '[n] of the pseudo-quotient z', n being yet to be determined (presumably 39 or 40). Assume that the k -
1 first steps (k = 1, . . . , n) have already taken place, having produced the k - 1 first digits: x '[i] , . . . , x '[k-1]; and that we are now at the kth step, involving production of the kth
digit; x '[k]. Assume furthermore, that Ac now contains the quantity r[k-1], the result of the k - 1 first steps. (This is the (k - 1)st partial remainder. For k = 1 clearly r[0 ]= x.) We then form r
[k] = 2r[k-1 ]+ y, according to whether the signs of r[k-1] and y do or do not agree, i.e.
Let us now see what carries may originate in this procedure. We can argue as follows: ÷ r[h÷ ]< ÷ y÷ is true for h = 0(÷ r[0 ÷ ]= ÷ x÷ <÷ y÷ ), and if it is true for h = k - 1, then (4) extends it to
h = k also, since r[k-1] and Å y have opposite signs. The last point may be elaborated a little further: because of the opposite signs
ô rô =2ô r[k-1ô ]- ô yô <2ô yô - ô yô = ô yô
Hence we have always ô r[kô ]<ô yô , and therefore afortiori ôr[kô ]< 1, i.e. -l< r[k] < l.
Consequently in equation (4) one summand is necessarily > -2, <2, the other is ³ 1, <1, and the sum is >-1, <1. Hence we may carry out the operations of (4) modulo 2, disregarding any possibilities
of carries beyond the 20 position, and the resulting r[k] will be automatically correct (in the range > - 1. <1).
Third: Note however that the sign of r[k-1], which plays an important role in (4) above, is only then correctly determinable from the sign digit, if the number from which it is derived is ³- 1 <1.
(Cf. the discussion in 5.7.) This requirement however is met, as we saw above, by r[k-1], but not necessarily by 2r[k-1]. Hence the sign of r[k-1] (i.e. its sign digit) as required by (4), must be
sensed before r[k-1] is doubled.
This being understood, the doubling of r[k-1] may be performed as a simple left shift, in which the left-most digit (the sign digit) is allowed to be lost-this corresponds to the disregarding of
carries beyond the 2^0 position, which we recognized above as being permissible in (4). (Cf. however, Part II, Table 2, for another sort of left shift that is desirable in explicit form, i.e. as an
Fourth: Consider now the precise implication of (4) above. x '[i ]= 1 or 0 corresponds to Å = - or +, respectively. Hence (4) may be written
Fifth: If we do not wish to get involved in more complicated round-off procedure which exceed the immediate capacity of the only available adder Ac, then the above result suggests that we should put
n + 1 = 40, n = 39. The x '[1], . . . , x '[39] are then 39 digits of the quotient, including the sign digit, but not including the right-most digit.
The right-most digit is taken care of by placing a 1 into the right-most stage of Ac.
At this point an additional argument in favor of the procedure that we have adopted here becomes apparent. The procedure coincides (without a need for any further corrections) with the second
round-off procedure that we discussed in 5.12.
There remains the term -1. Since this applies to the final result, and no right shifts are to follow, carries which might go beyond the 2^0 position may be disregarded. Hence this amounts simply to
changing the sign digit of the quotient ~: replacing 0 or 1 by 1 or 0, respectively.
This concludes our discussion of the division scheme. We wish, however, to re-emphasize two very distinctive features which it possesses:
First: This division scheme applies equally for any combinations of signs of divisor and dividend. This is a characteristic of the non-restoring division schemes, but it is not the case for any
simple known multiplication scheme. It will be remembered, in particular, that our multiplication procedure of 5.9 had to contain special correcting steps for the cases where either or both factors
are negative.
Second: This division scheme is practicable in the binary system only; it has no analog for any other base.
This method of binary division will be illustrated on some examples in 5.15.
5.15. We give below some illustrative examples of the operations of binary arithmetic which were discussed in the preceding sections.
Although it presented no difficulties or ambiguities, it seems best to begin with an example of addition.
Note that this deviates by 1/64, i.e. by one unit of the right-most position, from the correct result - 3/8. This is a consequence of our round-off rule, which forces the right-most digit to be 1
under all conditions. This occasionally produces results with unfamiliar and even annoying aspects (e.g. when quotients like 0:y or y:y are formed), but it is nevertheless unobjectionable and
self-consistent on the basis of our general principles.
6. The control
6.1. It has already been stated that the computer will contain an organ. called the control, which can automatically execute the orders stored in the Selectrons. Actually, for a reason stated in 6.3,
the orders for this computer are less than half as long as a forty binary digit number, and hence the orders are stored in the Selectron memory in pairs.
Let us consider the routine that the control performs in directing a computation. The control must know the location in the Selectron memory of the pair of orders to be executed. It must direct the
Selectrons to transmit this pair of orders to the Selectron register and then to itself. It must then direct the execution of the operation specified in the first of the two orders. Among these
orders we can immediately describe two major types: An order of the first type begins by causing the transfer of the number, which is stored at a specified memory location, from the Selectrons to the
Selectron register. Next, it causes the arithmetical unit to perform some arithmetical operations on this number (usually in conjunction with another number which is already in the arithmetical
unit), and to retain the resulting number in the arithmetical unit. The second type order causes the transfer of the number, which is held in the arithmetical unit, into the Selectron register, and
from there to a specified memory location in the Selectrons. (It may also be that this latter operation will permit a direct transfer from the arithmetical unit into the Selectrons.) An additional
type of order consists of the transfer orders of 3.5. Further orders control the inputs and the outputs of the machine. The process described at the beginning of this paragraph must then be repeated
with the second order of the order pair. This entire routine is repeated until the end of the problem.
6.2. It is clear from what has just been stated that the control must have a means of switching to a specified location in the Selectron memory, for withdrawing both numbers for the computation and
pairs of orders. Since the Selectron memory (as tentatively planned) will hold 2^12 = 4,096 forty-digit words (a word is either a number or a pair of orders), a twelve-digit binary number suffices to
identify a memory location. Hence a switching mechanism is required which will, on receiving a twelve-digit binary number, select the corresponding memory location.
The type of circuit we propose to use for this purpose is known as a decoding or many-one function table. It has been developed in various forms independently by J. Rajchman [Rajchman, 1943] and P.
Crawford [Crawford, 19??]. It consists of n flip-flops which register an n-digit binary number. It also has a maximum of 2^n output wires. The flip-flops activate a matrix in which the
interconnections between input and output wires are made in such a way that one and only one of 2^n output wires is selected (i.e. has a positive voltage applied to it). These interconnections may be
established by means of resistors or by means of non-linear elements (such as diodes or rectifiers); all these various methods are under investigation. The Selectron is so designed that four such
function table switches are required, each with a three digit entry and eight (2^3) outputs. Four sets of eight wires each are brought out of the Selectron for switching purposes, and a particular
location is selected by making one wire positive with respect to the remainder. Since all forty Selectrons are switched in parallel, these four sets of wires may be connected directly to the four
function table outputs.
6.3. Since most computer operations involve at least one number located in the Selectron memory, it is reasonable to adopt a code in which twelve binary digits of every order are assigned to the
specification of a Selectron location. In those orders which do not require a number to be taken out of or into the Selectrons these digit positions will not be used.
Though it has not been definitely decided how many operations will be built into the computer (i.e. how many different orders the control must be able to understand), it will be seen presently that
there will probably be more than 2^5 but certainly less than 2^6. For this reason it is feasible to assign 6 binary digits for the order code. It thus turns out that each order must contain eighteen
binary digits, the first twelve identifying a memory location and the remaining six specifying an operation. It can now be explained why orders are stored in the memory in pairs. Since the same
memory organ is to be used in this computer for both orders and numbers, it is efficient to make the length of each about equivalent. But numbers of eighteen binary digits would not be sufficiently
accurate for problems which this machine will solve. Rather, an accuracy of at least 10^-10 or 2^-33 is required. Hence it is preferable to make the numbers long enough to accommodate two orders.
As we pointed out in 2.3, and used in 4.2 et seq. and 5.7 et seq., our numbers will actually have 40 binary digits each. This allows 20 binary digits for each order, i.e. the 12 digits that specify a
memory location, and 8 more digits specifying the nature of the
Section 1 Processors with one address per instruction
operation (instead of the minimum of 6 referred to above). It is convenient, as will be seen in 6.8.2. and Chapter 9, Part II to group these binary digits into tetrads, groups of 4 binary digits.
Hence a whole word consists of 10 tetrads, a half word or order of 5 tetrads, and of these 3 specify a memory location and the remaining 2 specify the nature of the operation. Outside the machine
each tetrad can be expressed by a base 16 digit. (The base 16 digits are best designated by symbols of the 10 decimal digits 0 to 9, and 6 additional symbols, e.g. the letters a to f. Cf. Chapter 9,
Part II.) These 16 characters should appear in the typing for and the printing from the machine. (For further details of these arrangements, cf. loc. cit. above.)
The specification of the nature of the operation that is involved in an order occurs in binary form, so that another many-one or decoding function is required to decode the order. This function table
will have six input flip-flops (the two remaining digits of the order are not needed). Since there will not be 64 different orders, not all 64 outputs need be provided. However, it is perhaps
worthwhile to connect the outputs corresponding to unused order possibilities to a checking circuit which will give an indication whenever a code word unintelligible to the control is received in the
input flip-flops.
The function table just described energizes a different output wire for each different code operation. As will be shown later, many of the steps involved in executing different orders overlap. (For
example, addition, multiplication, division, and going from the Selectrons to the register all include transferring a number from the Selectrons to the Selectron register.) For this reason it is
perhaps desirable to have an additional set of control wires, each of which is activated by any particular combination of different code digits. These may be obtained by taking the output wires of
the many-one function table and using them to operate tubes which will in turn operate a one-many (or coding) function table. Such a function table consists of a matrix as before, but in this case
only one of the input wires are activated. This particular table may be referred to as the recoding function table.
The twelve flip-flops operating the four function tables used in selecting a Selectron position, and the six flip-flops operating the function table used for decoding the order, are referred to as
the Function Table Register, FR.
6.4. Let us consider next the process of transferring a pair of orders from the Selectrons to the control. These orders first go into SR. The order which is to be used next may be transferred
directly into FR. The second order of the pair must be removed from SR (since SR may be used when the first order is executed), but cannot as yet be placed in FR. Hence a temporary storage is
provided for it. The storage means is called the Control Register, CR, and consists of 20 (or possibly 18) flip-flops, capable of receiving a number from SR and transmitting a number to FR.
As already stated (6.1), the control must know the location of the pair of orders it is to get from the Selectron memory. Normally this location will be the one following the location of the two
orders just executed. That is, until it receives an order to do otherwise, the control will take its orders from the Selectrons in sequence. Hence the order location may be remembered in a twelve
stage binary counter (one capable of counting 2^12) to which one unit is added whenever a pair of orders is executed. This counter is called the Control Counter, CC.
The details of the process of obtaining a pair of orders from the Selectron are thus as follows: The contents of CC are copied into FR, the proper Selectron location is selected, and the contents of
the Selectrons are transferred to SR. FR is then cleared, and the contents of SR are transferred to it and CR. CC is advanced by one unit so the control will be prepared to select the next pair of
orders from the memory. (There is, however, an exception from this last rule for the so-called transfer orders, cf. 3.5. This may feed CC in a different manner, cf. the next paragraph below.) First
the order in FR is executed and then the order in CR is transferred to FR and executed. It should be noted that all these operations are directed by the control itself-not only the operations
specified in the control words sent to FR, but also the automatic operations required to get the correct orders there.
Since the method by means of which the control takes order pairs in sequence from the memory has been described, it only remains to consider how the control shifts itself from one sequence of control
orders to another in accordance with the operations described in 3.5. The execution of these operations is relatively simple. An order calling for one of these operations contains the twelve digit
specification of the position to which the control is to be switched, and these digits will appear in the left-hand twelve flip-flops of FR. All that is required to shift the control is to transfer
the contents of these flip-flops to CC. When the control goes to the Selectrons for the next pair of orders it will then go to the location specified by the number so transferred. In the case of the
unconditional transfer, the transfer is made automatically; in the case of the conditional transfer it is made only if the sign counter of the Accumulator registers zero.
6.5. In this report we will discuss only the general method by means of which the control will execute specific orders, leaving the details until later. It has already been explained (5.5) that when
a circuit is to be designed to accomplish a particular elementary operation (such as addition), a choice must be made between a
static type and a dynamic type circuit. When the design of the control is considered, this same choice arises. The function of the control is to direct a sequence of operations which take place in
the various circuits of the computer (including the circuits of the control itself). Consider what is involved in directing an operation. The control must signal for the operation to begin, it must
supply whatever signals are required to specify that particular operation, and it must in some way know when the operation has been completed so that it may start the succeeding operation. Hence the
control circuits must be capable of timing the operations. It should be noted that timing is required whether the circuit performing the operation is static or dynamic. In the case of a static type
circuit the control must supply static control signals for a period of time sufficient to allow the output voltages to reach the steady-state condition. In the case of a dynamic type circuit the
control must send various pulses at proper intervals to this circuit.
If all circuits of a computer are static in character, the control timing circuits may likewise be static, and no pulses are needed in the system. However, though some of the circuits of the computer
we are planning will be static, they will probably not all be so, and hence pulses as well as static signals must be supplied by the control to the rest of the computer. There are many advantages in
deriving these pulses from a central source, called the clock. The timing may then be done either by means of counters counting clock pulses or by means of electrical delay lines (an RC circuit is
here regarded as a simple delay line). Since the timing of the entire computer is governed by a single pulse source, the computer circuits will be said to operate as a synchronized system.
The clock plays an important role both in detecting and in localizing the errors made by the computer. One method of checking which is under consideration is that of having two identical computers
which operate in parallel and automatically compare each other's results. Both machines would be controlled by the same clock, so they would operate in absolute synchronism. It is not necessary to
compare every flip-flop of one machine with the corresponding flip-flop of the other. Since all numbers and control words pass through either the Selectron register or the accumulator soon before or
soon after they are used, it suffices to check the flip-flops of the Selectron register and the flip-flops of the accumulator which hold the number registered there; in fact, it seems possible to
check the accumulator only (cf. the end of 6.6.2). The checking circuit would stop the clock whenever a difference appeared, or stop the machine in a more direct manner if an asynchronous system is
used. Every flip-flop of each computer will be located at a convenient place. In fact, all neons will be located on one panel, the corresponding neons of the two machines being placed in parallel
rows so that one can tell at a glance (after the machine has been stopped) where the discrepancies are.
The merits of any checking system must be weighed against its cost. Building two machines may appear to be expensive, but since most of the cost of a scientific computer lies in development rather
than production, this consideration is not so important as it might seem. Experience may show that for most problems the two machines need not be operated in parallel. Indeed, in most cases purely
mathematical, external checks are possible: Smoothness of the results, behavior of differences of various types, validity of suitable identities, redundant calculations, etc. All of these methods are
usually adequate to disclose the presence or absence of error in toto; their drawback is only that they may not allow the detailed diagnosing and locating of errors at all or with ease. When a
problem is run for the first time, so that it requires special care, or when an error is known to be present, and has to be located-only then will it be necessary as a rule, to use both machines in
parallel. Thus they can be used as separate machines most of the time. The essential feature of such a method of checking lies in the fact that it checks the computation at every point (and hence
detects transient errors as well as steady-state ones) and stops the machine when the error occurs so that the process of localizing the fault is greatly simplified. These advantages are only
partially gained by duplicating the arithmetic part of the computer, or by following one operation with the complement operation (multiplication by division, etc.), since this fails to check either
the memory or the control (which is the most complicated, though not the largest, part of the machine).
The method of localizing errors, either with or without a duplicate machine, needs further discussion. It is planned to design all the circuits (including those of the control) of the computer so
that if the clock is stopped between pulses the computer will retain all its information in flip-flops so that the computation may proceed unaltered when the clock is started again. This principle
has already demonstrated its usefulness in the ENIAC. This makes it possible for the machine to compute with the clock operating at any speed below a certain maximum, as long as the clock gives out
pulses of constant shape regardless of the spacing between pulses. In particular, the spacing between pulses may be made indefinitely large. The clock will be provided with a mode of operation in
which it will emit a single pulse whenever instructed to do so by the operator. By means of this, the operator can cause the machine to go through an operation step by step, checking the results by
means of the indicating-lamps connected to the flip-flops. It will be noted that this design principle does not exclude the use of delay lines to obtain delays as long as these
Section 1 Processors with one address per instruction
are only used to time the constituent operations of a single step, and have no part in determining the machine's operating repetition rate. Timing coincidences by means of delay lines is excluded
since this requires a constant pulse rate.
6.6. The orders which the control understands may be divided into two groups: Those that specify operations which are performed within the computer and those that specify operations involved in
getting data into and out of the computer. At the present time the internal operations are more completely planned than the input and output operations, and hence they will be discussed more in
detail than the latter (which are treated briefly in 6.8). The internal operations which have been tentatively adopted are listed in Table 1. It has already been pointed out that not all of these
operations are logically basic, but that many can be programmed by means of others. In the case of some of these operations the reasons for building them into the control have already been given. In
this section we will give reasons for building the other operations into the control and will explain in the case of each operation what the control must do in order to execute it.
In order to have the precise mathematical meaning of the symbols which are introduced in what follows clearly in mind, the reader should consult the table at the end of the report for each new
symbol, in addition to the explanations given in the text.
6.61. Throughout what follows S(x) will denote the memory location No. x in the Selectron. Accordingly the x which appears in S(x) is a 12-digit binary, in the sense of 6.2. The eight addition
operations [S(x) ® Ac +, S(x) ® Ac-, S(x) ® Ah +, S(x) ® Ah -, S(x)® Ac + M, S(x)® Ac- M, S(x)® Ah + M, S(x)® Ah - M] involves the following possible four steps:
First: Clear SR and transfer into it the number at S(x).
Second: Clear Ac if the order contains the symbol C; do not clear Ac if the order contains the symbol h.
Third: Add the number in SR or its negative (i.e. in our present system its complement with respect to 2^1) into Ac. If the order does not contain the symbol M, use the number in SR or its negative
according to whether the order contains the symbol + or - . If the order contains the symbol M, use the number in SR or its negative according to whether the sign of the number in SR and the symbol +
or - in the order do or do not agree.
Fourth: Perform a complete carry. Building the last four addition operations (those containing the symbol M) into the control is fairly simple: It calls only for one extra comparison (of the sign in
SR and the + or - in the order, cf. the third step above), and it requires, therefore, only a few tubes more than required for the first four addition operations (those not containing the symbol M).
These facts would seem of themselves to justify adding the operations in question: plus and minus the absolute value. But it should be noted that these operations can be programmed out of the other
operations of Table 1 with correspondingly few orders (three for absolute value and five for minus absolute value), so that some further justification for building them in is required. The absolute
value order is frequently in connection with the orders L and R (see 6.6.7), while the minus absolute value order makes the detection of a zero very simple by merely detecting the sign of - ô Nô .
(If - ô Nô ³ 0 then N = 0.)
6.6.2. The operation of S(x) ® R involves the following two steps:
First: Clear SR, and transfer S(x) to it.
Second: Clear AR and add the number in the Selectron register into it. The operation of R ® Ac merits more detailed discussion, since there are alternative ways of removing numbers from AR. Such
numbers could be taken directly to the Selectrons as well as into Ac, and they could be transferred to Ac in parallel, in sequence, or in sequence parallel. It should be recalled that while most of
the numbers that go into AR have come from the Selectrons and thus need not be returned to them, the result of a division and the right-hand 39 digits of a product appear in AR. Hence while an
operation for withdrawing a number from AR is required, it is relatively infrequent and therefore need not be particularly fast. We are therefore considering the possibility of transferring at least
partially in sequence and of using the shifting properties of Ac and of AR for this. Transferring the number to the Selectron via the accumulator is also desirable if the dual machine method of
checking is employed, for it means that even if numbers are only checked in their transit through the accumulator, nevertheless every number going into the Selectron is checked before being placed
6.6.3. The operation S(x) ´ R ® Ac involves the following six steps:
First: Clear SR and transfer S(x) (the multiplicand) into it.
Second: Thirty-nine steps, each of which consist of the two following parts: (a) Add (or rather shift) the sign digit of SR into the partial product in Ac, or add all but the sign digit of SR into
the partial product in Ac-depending upon whether the right-most digit in AR is 0 or 1-and effect the appropriate carries. (b) Shift Ac and AR to the right, fill the sign digit of Ac with a 0 and the
digit of AR immediately right of the sign digit (positional value 2') with the previously right-most digit of Ac. (There are ways to save time by merging these two operations when the right-most
digit in Ar is 0, but we will not discuss them here more fully.)
Third: If the sign digit in SR is 1 (i.e. -), then inject a carry
into the right-most stage of Ac and place a 1 into the sign digit of Ac.
Fourth: If the original sign digit of AR is 1 (i.e. -), then subtract the contents of SR from Ac.
Fifth: If a partial carry system was employed in the main process, then a complete carry is necessary at the end.
Sixth: The appropriate round-off must be effected. (Cf. Chapter 9, Part II, for details, where it is also explained how the sign digit of the Arithmetic register is treated as part of the round-off
It will be noted that since any number held in Ac at the beginning of the process is gradually shifted into AR, it is impossible to accumulate sums of products in Ac without storing the various
products temporarily in the Selectrons. While this is undoubtedly a disadvantage, it cannot be eliminated without constructing an extra register, and this does not at this moment seem worthwhile.
On the other hand, saving the right-hand 39 digits of the answer is accomplished with very little extra equipment, since it means connecting the 2^-39 stage of Ac to the 2^-1 stage of AR during the
shift operation. The advantage of saving these digits is that it simplifies the handling of numbers of any number of digits in the computer (cf. the last part of 5.12). Any number of 39k binary
digits (where k is an integer) and sign can be divided into k parts, each part being placed in a separate Selectron position. Addition and subtraction of such numbers may be programmed out of a
series of additions or subtractions of the 39-digit parts, the carry-over being programmed by means of Cc ® S(x) and Cc' ® S(x) operations. (If the 2^0 stage of Ac registers negative after the
addition of two 39-digit parts, a carry-over has taken place and hence 2^-39 must be added to the sum of the next parts.) A similar procedure may be followed in multiplication if all 78 digits of the
product of the two 39-digit parts are kept, as is planned. (For the details, cf. Chapter 9, Part II.) Since it would greatly complicate the computer to make provision for holding and using a 78 digit
dividend, it is planned to program 39k digit division in one of the ways described at the end of 5.12.
6.6.4. The operation of division Ac ¸ S(x) ® R involves the following four steps:
First: Clear SR and transfer S(x) (the divisor) into it.
Second: Clear AR.
Third: Thirty-nine steps, each of which consists of the following three parts: (a) Sense the signs of the contents of Ac (the partial remainder) and of SR, and sense whether they agree or not. (b)
Shift Ac and AR left. In this process the previous sign digit of Ac is lost. Fill the right-most digit of Ac (after the shift) with a 0, and the right-most digit of AR (before the shift) with 0 or 1,
depending on whether there was disagreement or agreement in (a). (c) Add or subtract the contents of SR into Ac, depending on the same ^. alternative as above.
Fourth: Fill the right-most digit of AR with a 1, and change its sign digit.
For the purpose of timing the 39 steps involved in division a six-stage counter (capable of counting to 2^6 = 64) will be built into the control. This same counter will also be used for timing the 39
steps of multiplication, and possibly for controlling Ac when a number is being transferred between it and a tape in either direction (see 6.8.).
6.6.5. The three substitution operations [At ® S(x), Ap ® S(x), and Ap' ® S(x)] involve transferring all or part of the number held in Ac into the Selectrons. This will be done by means of gate tubes
connected to the registering flip-flops of Ac. Forty such tubes are needed for the total substitutions, At ® S(x). The partial substitution Ap ® S(x) and Ap' ® S(x) requires that the left-hand twelve
digits of the number held in Ac be substituted in the proper places in the left-hand and right-hand orders, respectively. This may be done by means of extra gate tubes, or by shifting the number in
Ac and using the gate tubes required for At ® S(x). (This scheme needs some additional elaboration, when the order directing and the order suffering the substitution are the two successive halves of
the same word; i.e. when the latter is already in FR at the time when the former becomes operative in CR, so that the substitution effected in the Selectrons comes too late to alter the order which
has already reached CR, to become operative at the next step in FR. There are various ways to take care of this complication, either by some additional equipment or by appropriate prescriptions in
coding. We will not discuss them here in more detail, since the decisions in this respect are still open.)
The importance of the partial substitution operations can hardly be overestimated. It has already been pointed out (3.3) that they allow the computer to perform operations it could not other wise
conveniently perform, such as making use of a function table stored in the Selectron memory. Furthermore, these operations remove a very sizeable burden from the person coding problems, for they make
possible the coding of classes of problems in contrast to coding each individual problem separately. Because Ap ® S(x) and Ap' ® S(x) are available, any program sequence may be stated in general form
(that is, without Selectron location designations for the numbers being operated on) and the Selectron locations of the numbers to be operated on substituted whenever that sequence is used. As an
example, consider a general code for nth order integration of m total differential equations for p steps of independent variable t, formulated in advance. Whenever a prob-
Section 1 Processors with one address per instruction
lem requiring this rule is coded for the computer, the general integration sequence can be inserted into the statement of the problem along with coded instructions for telling the sequence where it
will be located in the memory [so that the proper S(x) designations will be inserted into such orders as Cu ® S(x), etc.]. Whenever this sequence is to be used by the computer it will automatically
substitute the correct values of m, n, p and D t, as well as the locations of the boundary conditions and the descriptions of the differential equations, into the general sequence. (For the details
of this particular procedure, cf. Chapter 13, Part II.) A library of such general sequences will be built up, and facilities provided for convenient insertion of any of these into the coded statement
of a problem (cf. 6.8.4). When such a scheme is used, only the distinctive features of a problem need be coded.
6.6.6. The manner in which the control shift operations [Cu® S(x), Cu' ® S(x), Cc ®S(x), and Cc' ® S(x)] are realized has been discussed in 6.4 and needs no further comment.
6.6.7. One basic question which must be decided before a computer is built is whether the machine is to have a so-called floating binary (or decimal) point. While a floating binary point is
undoubtedly very convenient in coding problems, building it into the computer adds greatly to its complexity and hence a choice in this matter should receive very careful attention. However, it
should first be noted that the alternatives ordinarily considered (building a machine with a floating binary point vs. doing all computation with a fixed binary point) are not exhaustive and hence
that the arguments generally advanced for the floating binary point are only of limited validity. Such arguments overlook the fact that the choice with respect to any particular operation (except for
certain basic ones) is not between building it into the computer and not using it at all, but rather between building it into the computer and programming it out of operations built into the
computer. (One short reference to the floating binary point was made in 5.13.)
Building a floating binary point into the computer will not only complicate the control but will also increase the length of a number and hence increase the size of the memory and the arithmetic
unit. Every number is effectively increased in size, even though the floating binary point is not needed in many instances. Furthermore, there is considerable redundancy in a floating binary point
type of notation, for each number carries with it a scale factor, while generally speaking a single scale factor will suffice for a possibly extensive set of numbers. By means of the operations
already described in the report a floating binary point can be programmed. While additional memory capacity is needed for this, it is probably less than that required by a built-in floating binary
point since a different scale factor does not need to be remembered for each number.
To program a floating binary point involves detecting where the first zero occurs in a number in Ac. Since Ac has shifting facilities this can best be done by means of them. In terms of the
operations previously described this would require taking the given number out of Ac and performing a suitable arithmetical operation on it: For a (multiple) right shift a multiplication, for a
(multiple) left shift either one division, or as many doublings (i.e. additions) as the shift has stages. However, these operations are inconvenient and time-consuming, so we propose to introduce two
operations (L and R) in order that this (i.e. the single left and right shift) can be accomplished directly. These operations make use of facilities already present in Ac and hence add very little
equipment to the computer. It should be noted that in many instances a single use of L and possibly of R will suffice in programming a floating binary point. For if the two factors in a
multiplication have no superfluous zeros, the product will have at most one superfluous zero (if 1/2£ X < 1 and 1/2 £ Y < 1, then 1/4 £ XY < 1). This is similarly true in division (if 1/4 £ X < 1/2
and 1/2 £ Y < 1, then 1/4 <X/Y < 1). In addition and subtraction any numbers growing out of range can be treated similarly. Numbers which decrease in these cases, i.e. develop a sequence of zeros at
the beginning, are really (mathematically) losing precision. Hence it is perfectly proper to omit formal readjustments in this event. (Indeed, such a true loss of precision cannot be obviated by any
formal procedure, but, if at all, only by a different mathematical formulation of the problem.)
6.7. Table 1 shows that many of the operations which the control is to execute have common elements. Thus addition, subtraction, multiplication and division all involve transferring a number from the
Selectrons to SR. Hence the control may be simplified by breaking some of the operations up into more basic ones. A timing circuit will be provided for each basic operation, and one or more such
circuits will be involved in the execution of an order. The exact choice of basic operations will depend upon how the arithmetic unit is built.
In addition to the timing circuits needed for executing the orders of Table 1, two such circuits are needed for the automatic operations of transferring orders from the Selectron register to CR and
FR, and for transferring an order from CR to FR. In normal computer operation these two circuits are used alternately, so a binary counter is needed to remember which is to be used next. In the
operations Cu' ®S(x) and Cc® S(x) the first order of a pair is ignored, so the binary counter must be altered accordingly.
The execution of a sequence of orders involves using the various
timing circuits in sequence. When a given timing circuit has completed its operation, it emits a pulse which should go to the timing circuit to be used next. Since this depends upon the particular
operation being executed, these pulses are routed according to the signals received from the decoding and recoding function tables activated by the six binary digits specifying an order.
6.8. In this section we will consider what must be added to the control so that it can direct the mechanisms for getting data into and out of the computer and also describe the mechanisms themselves.
Three different kinds of input-output mechanisms are planned.
First: Several magnetic wire storage units operated by servo-mechanisms controlled by the computer.
Second: Some viewing tubes for graphical portrayal of results.
Third: A typewriter for feeding data directly into the computer, not to be confused with the equipment used for preparing and printing from magnetic wires. As presently planned the latter will
consist of modified Teletypewriter equipment, cf. 6.8.2 and 6.8.4.
6.8.1. Since there already exists a way of transferring numbers between the Selectrons and Ac, therefore Ac may be used for transferring numbers from and to a wire. The latter transfer will be done
serially and will make use of the shifting facilities of Ac. Using Ac for this purpose eliminates the possibility of computing and reading from or writing on the wires simultaneously. However,
simultaneous operation of the computer and the input-output
Section 1 Processors with one address per instruction
organ requires additional temporary storage and introduces a synchronizing problem, and hence it is not being considered for the first model.
Since, at the beginning of the problem, the computer is empty, facilities must be built into the control for reading a set of numbers from a wire when the operator presses a manual switch. As each
number is read from a wire into Ac, the control must transfer it to its proper location in the Selectrons. The CC may be used to count off these positions in sequence, since it is capable of
transmitting its contents to FR. A detection circuit on CC will stop the process when the specified number of numbers has been placed n the memory, and the control will then be shifted to the orders
located in the first position of the Selectron memory.
It has already been stated that the entire memory facilities of the wires should be available to the computer without human intervention. This means that the control must be able to select the proper
set of numbers from those going by. Hence additional orders are required for the code. Here, as before, we are faced with two alternatives. We can make the control capable of executing an order of
the form: Take numbers from positions p to p + s on wire No. k and place them in Selectron locations v to v + s. Or we can make the control capable of executing some less complicated operations
which, together with the already given control orders, are sufficient for programming the transfer operation of the first alternative. Since the latter scheme is simpler we adopt it tentatively.
The computer must have some way of finding a particular number on a wire. One method of arranging for this is to have each number carry with it its own location designation. A method more economical
of wire memory capacity is to use the Selectron memory facilities to remember the position of each wire. For example, the computer would hold the number t[1 ]specifying which number on the wire is in
position to be read. If the control is instructed to read the number at position p[1] on this wire, it will compare p[1] with t[1 ]and if they differ, cause the wire to move in the proper direction.
As each number on the wire passes by, one unit is added or subtracted to t[1 ]and the comparison repeated. When p[1] = t[1 ]numbers will be transferred from the wire to the accumulator and then to
the proper location in the memory. Then both t[1 ]and p[1] will be increased by 1, and the transfer from the wire to accumulator to memory repeated. This will be iterated, until t[1 ]+ s and p[1] + s
are reached, at which time the control will direct the wire to stop.
Under this system the control must be able to execute the following orders with regard to each wire: Start the wire forward, start the wire in reverse, stop the wire, transfer from wire to Ac, and
transfer from Ac to wire. In addition, the wire must signal the control as each digit is read and when the end of a number has been reached. Conversely, when recording is done the control must have a
means of timing the signals sent from Ac to the wire, and of counting off the digits. The 2^6 counter used for multiplication and division may be used for the latter purpose, but other timing
circuits will be required for the former.
If the method of checking by means of two computers operating simultaneously is adopted, and each machine is built so that it can operate independently of the other, then each will have a separate
input-output mechanism. The process of making wires for the computer must then be duplicated, and in this way the work of the person making a wire can be checked. Since the wire servomechanisms
cannot be synchronized by the central clock, a problem of synchronizing the two computers when the wires are being used arises. It is probably not practical to synchronize the wire feeds to within a
given digit, but this is unnecessary since the numbers coming into the two organs Ac need not be checked as the individual digits arrive, but only prior to being deposited in the Selectron memory.
6.8.2. Since the computer operates in the binary system, some means of decimal-binary and binary-decimal conversions is highly desirable. Various alternative ways of handling this problem have been
considered. In general we recognize two broad classes of solutions to this problem.
First: The conversion problems can be regarded as simple arithmetic processes and programmed as sub-routines out of the orders already incorporated in the machine. The details of these programs
together with a more complete discussion are given fully in Chapter 9, Part II, where it is shown, among other things, that the conversion of a word takes about 5 msec. Thus the conversion time is
comparable to the reading or withdrawing time for a word- about 2 msec-and is trivial as compared to the solution time for problems to be handled by the computer. It should be noted that the
treatment proposed there presupposes only that the decimal data presented to or received from the computer are in tetrads, each tetrad being the binary coding of a decimal digit-the information
(precision) represented by a decimal digit being actually equivalent to that represented by 3.3 binary digits. The coding of decimal digits into tetrads of binary digits and the printing of decimal
digits from such tetrads can be accomplished quite simply and automatically by slightly modified Teletype equipment, cf. 6.8.4 below.
Second: The conversion problems can be regarded as unique problems and handled by separate conversion equipment incorporated either in the computer proper or associated with the
mechanisms for preparing and printing from magnetic wires. Such converters are really nothing other than special purpose digital computers. They would seem to be justified only for those computers
which are primarily intended for solving problems in which the computation time is small compared to the input-output time, to which class our computer does not belong.
6.8.3. It is possible to use various types of cathode ray tubes, and in particular Selectrons for the viewing tubes, in which case programming the viewing operation is quite simple. The viewing
Selectrons can be switched by the same function tables that switch the memory Selectrons. By means of the substitution operation Ap ® S(x) and Ap' ® S(x). six-digit numbers specifying the abscissa
and ordinate of the point (six binary digits represent a precision of one part in 2^6 = 64, i.e. of about 1.5 per cent which seems reasonable in such a component) can be substituted in this order,
which will specify that a particular one of the viewing Selectrons is to be activated.
6.8.4. As was mentioned above, the mechanisms used for preparing and printing from wire for the first model, at least, will be modified Teletype equipment. We are quite fortunate in having secured
the full cooperation of the Ordnance Development Division of the National Bureau of Standards in making these modifications and in designing and building some associated equipment.
By means of this modified Teletype equipment an operator first prepares a checked paper tape and then directs the equipment to transfer the information from the paper tape to the magnetic wire.
Similarly a magnetic wire can transfer its contents to a paper tape which can be used to operate a teletypewriter. (Studies are being undertaken to design equipment that will eliminate the necessity
for using paper tapes.)
As was shown in 6.6.5, the statement of a new problem on a wire involves data unique to that problem interspersed with data found on previously prepared paper tapes or magnetic wires. The equipment
discussed in the previous paragraph makes it possible for the operator to combine conveniently these data on to a single magnetic wire ready for insertion into the computer.
It is frequently very convenient to introduce data into a computation without producing a new wire. Hence it is planned to build one simple typewriter as an integral part of the computer. By means of
this typewriter the operator can stop the computation, type in a memory location (which will go to the FR), type in a number (which will go to Ac and then be placed in the first mentioned location),
and start the computation again.
6.8.5. There is one further order that the control needs to execute. There should be some means by which the computer can signal to the operator when a computation has been concluded, or when the
computation has reached a previously determined point. Hence an order is needed which will tell the computer to stop and to flash a light or ring a bell.
BurkA62a, BurkA62b; CrawP??; GoldH63a, b, c, d; RajcJ43. | {"url":"http://www.cs.unc.edu/~adyilie/comp265/vonNeumann.html","timestamp":"2014-04-16T15:58:48Z","content_type":null,"content_length":"163522","record_id":"<urn:uuid:d9622b53-338e-4ffe-bf81-54bcf843f3aa>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Neuroscience] How do you deal with equations when the denominator
tends to 0?
[Neuroscience] How do you deal with equations when the denominator tends to 0?
Bill via neur-sci%40net.bio.net (by connelly.bill from gmail.com)
Tue Aug 18 21:38:05 EST 2009
This is more of a maths question than a neuroscience question, but
I've come across it twice when dealing with neuroscience problems
1) I was trying to solve the Goldman Hodgkin Katz field equation,
which I shan't type out here in full, but it has a denonimator term of
1-e^(-z.V.F/RT) so when V=0, the denominator is 0. Obviously, I could
calculate the value fractionally above 0, and fractionally below 0,
and average the result to get the value for 0; but I was wondering if
there was a smarter way
2) Now here is the real problem. I've got some voltage ramp data, I
wanted to convert the current trace to a conductance trace using G = I/
(Vm-Ve). However as Vm approaches Ve the trace goes crazy (obviously
again, at Vm=Ve I couldn't calculate G, but even as Vm-Ve gets very
small, presumabley the noise of the trace is amplified, so you have a
rectangular hyperbola overlaid on a bolztman style curve). Is there
anything I can do about this? (and filtering doesn't work).
More information about the Neur-sci mailing list | {"url":"http://www.bio.net/bionet/mm/neur-sci/2009-August/062483.html","timestamp":"2014-04-21T05:04:19Z","content_type":null,"content_length":"3535","record_id":"<urn:uuid:040baf2a-da8d-4f80-bcfb-c4dea1a3203b>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00110-ip-10-147-4-33.ec2.internal.warc.gz"} |
Publisher: Springer-Verlag (Total: 2187 journals)
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
First | 3 4 5 6 7 8 9 10 | Last
Eating and Weight Disorders - Studies on Anorexia, Bulimia and Obesity
Ecological Research
Economic Botany
Economic Bulletin
Economic Change and Restructuring
Economic Theory
Economic Theory Bulletin
Economics of Governance
Education and Information Technologies
Educational Assessment, Evaluation and Accountability
Educational Psychology Review
Educational Research for Policy and Practice
Educational Studies in Mathematics
Educational Technology Research and Development
Electrical Engineering
Electronic Commerce Research
Electronic Markets
Electronic Materials Letters
Elemente der Mathematik
Emergency Radiology
Empirical Economics
Empirical Software Engineering
Employee Responsibilities and Rights Journal
Endocrine Pathology
Energy Efficiency
Energy Systems
Engineering With Computers
Entomological Review
Environment Systems & Decisions
Environment, Development and Sustainability
Environmental and Ecological Statistics
Environmental and Resource Economics
Environmental Biology of Fishes
Environmental Chemistry Letters
Environmental Earth Sciences
Environmental Economics and Policy Studies
Environmental Evidence
Environmental Fluid Mechanics
Environmental Geochemistry and Health
Environmental Geology
Environmental Health and Preventive Medicine
Environmental Management
Environmental Modeling & Assessment
Environmental Monitoring and Assessment
Environmental Science and Pollution Research
Epidemiologic Perspectives & Innovations
Epileptic Disorders
EPJ A - Hadrons and Nuclei
EPJ B - Condensed Matter and Complex Systems
EPJ direct
EPJ E - Soft Matter and Biological Physics
Estuaries and Coasts
Ethical Theory and Moral Practice
Ethics and Information Technology
Ethik in der Medizin
Eurasian Soil Science
EURO Journal of Transportation and Logistics
EURO Journal on Computational Optimization
Europaisches Journal fur Minderheitenfragen
European Actuarial Journal
European Archives of Oto-Rhino-Laryngology
European Archives of Paediatric Dentistry
European Archives of Psychiatry and Clinical Neuroscience
European Biophysics Journal
European Child & Adolescent Psychiatry
European Clinics in Obstetrics and Gynaecology
European Food Research and Technology
European Journal for Education Law and Policy
European Journal for Philosophy of Science
European Journal of Ageing
European Journal of Applied Physiology
European Journal of Clinical Microbiology & Infectious Diseases
European Journal of Clinical Pharmacology
European Journal of Drug Metabolism and Pharmacokinetics
European Journal of Epidemiology
European Journal of Forest Research
European Journal of Health Economics
European Journal of Law and Economics
European Journal of Nuclear Medicine and Molecular Imaging
European Journal of Nutrition
European Journal of Orthopaedic Surgery & Traumatology
European Journal of Pediatrics
European Journal of Plant Pathology
European Journal of Plastic Surgery
European Journal of Population/Revue européenne de Démographie
European Journal of Psychology of Education
European Journal of Trauma and Emergency Surgery
European Journal of Wildlife Research
European Journal of Wood and Wood Products
First | 3 4 5 6 7 8 9 10 | Last
Environmental Fluid Mechanics [4 followers]
Hybrid journal
It can contain Open Access articles
ISSN (Print) 1573-1510 - ISSN (Online) 1567-7419
Published by Springer-Verlag [2187 journals] [SJR: 0.732] [H-I: 23]
• Numerical modelling of horizontal sediment-laden jets
□ Abstract: Abstract Sediment-laden turbulent flows are commonly encountered in natural and engineered environments. It is well known that turbulence generates fluctuations to the particle
motion, resulting in modulation of the particle settling velocity. A novel stochastic particle tracking model is developed to predict the particle settling out and deposition from a
sediment-laden jet. Particle velocity fluctuations in the jet flow are modelled from a Lagrangian velocity autocorrelation function that incorporates the physical mechanism leading to a
reduction of settling velocity. The model is first applied to study the settling velocity modulation in a homogeneous turbulence field. Consistent with basic experiments using grid-generated
turbulence and computational fluid dynamics (CFD) calculations, the model predicts that the apparent settling velocity can be reduced by as much as 30 % of the stillwater settling velocity.
Using analytical solution for the jet mean flow and semi-empirical RMS turbulent velocity fluctuation and dissipation rate profiles derived from CFD predictions, model predictions of the
sediment deposition and cross-sectional concentration profiles of horizontal sediment-laden jets are in excellent agreement with data. Unlike CFD calculations of sediment fall out and
deposition from a jet flow, the present method does not require any a priori adjustment of particle settling velocity.
PubDate: 2014-02-01
• Moisture effects on eolian particle entrainment
□ Abstract: Abstract In wind tunnel experiments, we study the effects of soil moisture on the threshold condition to entrain fine grain sand/silt into eolian flow and the near-bed concentration
of airborne particles. To study the effect of particle shape on moisture bonding, we use two types of particles nearly equal in size: spherical glass beads $(d_{50} = 134\,\upmu \mathrm{m})$
and sieved quartz sand $(d_{50} = 139 \,\upmu \mathrm{m})$ . Both are poorly graded soils. We conducted these experiments at low moisture contents $({<}1\,\%)$ . We found that the spherical
particles were more sensitive to changes in moisture than the sand, attributable to the large differences in specific surface area of the two particles. The larger specific surface area for
sand is due to the surface roughness of the angular sand particle. Consequently, sand “stores” more moisture via surface adsorption, requiring higher soil moisture content to form liquid
bridges between sand particles. Based on these findings, we extend the concept of a threshold moisture content, $w^{\prime }$ —originally proposed for clayey soils—to soils that lack any
measureable clay content. This allows application of existing models developed for clayey soils that quantify the moisture effect on the threshold friction velocity to sand and silty soils
(i.e., clay content $=$ 0). Additionally, we develop a model that quantifies the moisture effects on near-surface airborne particulate concentration, using experimental observations to
determine the functional dependence on fluid and particle properties, including soil specific area. These models can be applied to numerical simulation of particulate plume formation and
PubDate: 2014-02-01
• The wave-induced solute flux from submerged sediment
□ Abstract: Abstract The issue of the transport of dissolved nutrients and contaminants between the sediment in the bottom of a lake or reservoir and the body of water above it is an important
one for many reasons. In particular the biological and chemical condition of the body of water is intricately linked to these mass transport processes. As the review by Boudreau (Rev Geophys
38(3):389–416, 2000) clearly demonstrates those transport processes are very complex involving mechanisms as diverse as the wave-induced flux between the sediment and the overlying water and
the effect of burrowing animals on the transport within the sediment as well as basic diffusion mechanisms. The present paper focuses on one facet of these transport processes; we re-examine
the balance of diffusion and wave-induced advection and demonstrate that the wave-induced flux of a solute from submerged sediment is not necessarily purely diffusive as suggested by Harrison
et al. (J Geophys Res 88:7617–7622, 1983) but can be dominated by a mean or time-averaged flux induced by the advective fluid motion into and out of the sediment caused by the fluctuating
pressure waves associated with wave motion. Indeed along the subtidal shoreline where the fluctuating bottom pressures are greatest, wave-induced advection will dominate the mean,
time-averaged transport of solute into or out of the sediment as suggested in the work of Riedl et al. (Mar Biol 13:210–221, 1972). However, the present calculations also indicate that this
advective flux decreases rapidly with increasing depth so that further away from the shoreline the advective flux becomes negligible relative to the diffusive flux and therefore the latter
dominates in deeper water.
PubDate: 2014-02-01
• Plume rise and spread in buoyant releases from elevated sources in the
lower atmosphere
□ Abstract: Abstract This study focuses on the influence of emission conditions—velocity and temperature—on the dynamics of a buoyant gas release in the atmosphere. The investigations are
performed by means of wind tunnel experiments and numerical simulations. The aim is to evaluate the reliability of a Lagrangian code to simulate the dispersion of a plume produced by
pollutant emissions influenced by thermal and inertial phenomena. This numerical code implements the coupling between a Lagrangian stochastic model and an integral plume rise model being able
to estimate the centroid trajectory. We verified the accuracy of the plume rise model and we investigated the ability of two Lagrangian models to evaluate the plume spread by means of
comparisons between experiments and numerical solutions. A quantitative study of the performances of the models through some suitable statistical indices is presented and critically
discussed. This analysis shows that an additional spread has to be introduced in the Lagrangian trajectory equation in order to account the dynamical and thermal effects induced by the source
PubDate: 2014-02-01
• Measurements and modeling of open-channel flows with finite semi-rigid
vegetation patches
□ Abstract: Abstract The hydrodynamics of flows through a finite length semi-rigid vegetation patch (VP) were investigated experimentally and numerically. Detailed measurements have been
carried out to determine the spatial variation of velocity and turbulence profiles within the VP. The measurement results show that an intrusion region exists in which the peak Reynolds
stress remains near the bed. The velocity profile is invariant within the downstream part of the VP while the Reynolds stress profile requires a longer distance to attain the spatially
invariant state. Higher vegetation density leads to a shorter adjustment length of the transition region, and a higher turbulence level within the VP. The vegetation density used in the
present study permits the passing through of water and causes the peak Reynolds stress and turbulence kinetic energy each the maximum at the downstream end of the patch. A 3D
Reynolds-averaged Navier–Stokes model incorporating the Spalart–Allmaras turbulence closure was employed subsequently to replicate the flow development within the VP. The model reproduced
transitional flow characteristics well and the results are in good agreement with the experimental data. Additional numerical experiments show that the adjustment length can be scaled by the
water depth, mean velocity and maximum shear stress. Empirical equations of the adjustment lengths for mean velocity and Reynolds stress were derived with coefficients quantified from the
numerical simulation results.
PubDate: 2014-02-01
• Initial mixing of inclined dense jet in perpendicular crossflow
□ Abstract: Abstract A comprehensive experimental investigation for an inclined ( $60^{\circ }$ to vertical) dense jet in perpendicular crossflow—with a three-dimensional trajectory—is
reported. The detailed tracer concentration field in the vertical cross-section of the bent-over jet is measured by the laser-induced fluorescence technique for a wide range of jet
densimetric Froude number $Fr$ and ambient to jet velocity ratios $U_r$ . The jet trajectory and dilution determined from a large number of cross-sectional scalar fields are interpreted by
the Lagrangian model over the entire range of jet-dominated to crossflow-dominated regimes. The mixing during the ascent phase of the dense jet resembles that of an advected jet or line puff
and changes to a negatively buoyant thermal on descent. It is found that the mixing behavior is governed by a crossflow Froude number $\mathbf{F} = U_r Fr$ . For $\mathbf{F} < 0.8$ , the
mixing is jet-dominated and governed by shear entrainment; significant detrainment occurs and the maximum height of rise $Z_{max}$ is under-predicted as in the case of a dense jet in stagnant
fluid. While the jet trajectory in the horizontal momentum plane is well-predicted, the measurements indicate a greater rise and slower descent. For $\mathbf{F} \ge 0.8$ the dense jet becomes
significantly bent-over during its ascent phase; the jet mixing is dominated by vortex entrainment. For $\mathbf{F} \ge 2$ , the detrainment ceases to have any effect on the jet behavior. The
jet trajectory in both the horizontal momentum and buoyancy planes are well predicted by the model. Despite the under-prediction of terminal rise, the jet dilution at a large number of
cross-sections covering the ascent and descent of the dense jet are well-predicted. Both the terminal rise and the initial dilution for the inclined jet in perpendicular crossflow are smaller
than those of a corresponding vertical jet. Both the maximum terminal rise $Z_{max}$ and horizontal lateral penetration $Y_{max}$ follow a $\mathbf{F}^{-1/2}$ dependence in the
crossflow-dominated regime. The initial dilution at terminal rise follows a $S \sim \mathbf{F}^{1/3}$ dependence.
PubDate: 2014-02-01
• Modeling the impact of natural and anthropogenic nutrient sources on
phytoplankton dynamics in a shallow coastal domain, Western Australia
□ Abstract: Abstract The influence of different nutrient sources on the seasonal variation of nutrients and phytoplankton was assessed in the northern area of the Perth coastal margin,
south–western Australia. This nearshore area is shallow, semi-enclosed by submerged reefs, oligotrophic, nitrogen-limited and receives sewage effluent via submerged outfalls. Analysis of 14
year of field observations showed seasonal variability in the concentration of dissolved inorganic nitrogen and phytoplankton biomass, measured as chlorophyll-a. For 2007–2008, we quantified
dissolved inorganic nitrogen inputs from the main nutrient sources: superficial runoff, groundwater, wastewater treatment plant effluent, atmospheric deposition and exchange with surrounding
coastal waters. We validated a three-dimensional hydrodynamic-ecological model and then used it to assess nutrient-phytoplankton dynamics. The model reproduced the temporal and spatial
variations of nitrate and chlorophyll-a satisfactorily. Such variations were highly influenced by exchange through the open boundaries driven by the wind field. An alongshore (south–north)
flow dominated the flux through the domain, with dissolved inorganic nitrogen annual mean net-exportation. Further, when compared with the input of runoff, the contributions from
atmospheric-deposition, groundwater and wastewater effluent to the domain’s inorganic nitrogen annual balance were one, two and three orders of magnitude higher, respectively. Inputs through
exchange with offshore waters were considerably larger than previous estimates. When the offshore boundary was forced with remote-sensed derived data, the simulated chlorophyll-a results were
closer to the field measurements. Our comprehensive analysis demonstrates the strong influence that the atmosphere–water surface interactions and the offshore dynamics have on the nearshore
ecosystem. The results suggest that any additional nutrient removal at the local wastewater treatment plant is not likely to extensively affect the seasonal variations of nutrients and
chlorophyll-a. The approach used proved useful for improving the understanding of the coastal ecosystem.
PubDate: 2014-02-01
• Aeolian erosion of storage piles yards: contribution of the surrounding
□ Abstract: Abstract Dust emissions from stockpiles surfaces are often estimated applying mathematical models such as the widely used model proposed by the USEPA. It employs specific emission
factors, which are based on the fluid flow patterns over the near surface. But, some of the emitted dust particles settle downstream the pile and can usually be re-emitted which creates a
secondary source. The emission from the ground surface around a pile is actually not accounted for by the USEPA model but the method, based on the wind exposure and a reconstruction from
different sources defined by the same wind exposure, is relevant. This work aims to quantify the contribution of dust re-emission from the areas surrounding the piles in the total emission of
an open storage yard. Three angles of incidence of the incoming wind flow are investigated ( $30^{\circ }, 60^{\circ }$ and $90^{\circ }$ ). Results of friction velocity from numerical
modelling of fluid dynamics were used in the USEPA model to determine dust emission. It was found that as the wind velocity increases, the contribution of particles re-emission from the
ground area around the pile in the total emission also increases. The dust emission from the pile surface is higher for piles oriented $30^{\circ }$ to the wind direction. On the other hand,
considering the ground area around the pile, the $60^{\circ }$ configuration is responsible for higher emission rates (up to 67 %). The global emissions assumed a minimum value for the piles
oriented perpendicular to the wind direction for all wind velocity investigated.
PubDate: 2014-02-01
• Efficient numerical computation and experimental study of temporally long
equilibrium scour development around abutment
□ Abstract: Abstract For the abutment bed scour to reach its equilibrium state, a long flow time is needed. Hence, the employment of usual strategy of simulating such scouring event using the
3D numerical model is very time consuming and less practical. In order to develop an applicable model to consider temporally long abutment scouring process, this study modifies the common
approach of 2D shallow water equations (SWEs) model to account for the sediment transport and turbulence, and provides a realistic approach to simulate the long scouring process to reach the
full scour equilibrium. Due to the high demand of the 2D SWEs numerical scheme performance to simulate the abutment bed scouring, a recently proposed surface gradient upwind method (SGUM) was
also used to improve the simulation of the numerical source terms. The abutment scour experiments of this study were conducted using the facility of Hydraulics Laboratory at Nanyang
Technological University, Singapore to compare with the presented 2D SGUM–SWEs model. Fifteen experiments were conducted with their scouring flow durations vary from 46 to 546 h. The
comparison shows that the 2D SGUM–SWEs model gives good representation to the experimental results with the practical advantage.
PubDate: 2014-02-01
• On the effective hydraulic conductivity and macrodispersivity for
density-dependent groundwater flow
□ Abstract: Abstract In this paper, semi-analytical expressions of the effective hydraulic conductivity ( $K^{E})$ and macrodispersivity ( $\alpha ^{E})$ for 3D steady-state density-dependent
groundwater flow are derived using a stationary spectral method. Based on the derived expressions, we present the dependence of $K^{E}$ and $\alpha ^{E}$ on the density of fluid under
different dispersivity and spatial correlation scale of hydraulic conductivity. The results show that the horizontal $K^{E}$ and $\alpha ^{E}$ are not affected by density-induced flow.
However, due to gravitational instability of the fluid induced by density contrasts, both vertical $K^{E}$ and $\alpha ^{E}$ are found to be reduced slightly when the density factor ( $\gamma
$ ) is less than 0.01, whereas significant decreases occur when $\gamma $ exceeds 0.01. Of note, the variation of $K^{E}$ and $\alpha ^{E}$ is more significant when local dispersivity is
small and the correlation scale of hydraulic conductivity is large.
PubDate: 2014-02-01
• Waves of intermediate length through an array of vertical cylinders
□ Abstract: Abstract We report a semi-analytical theory of wave propagation through a vegetated water. Our aim is to construct a mathematical model for waves propagating through a lattice-like
array of vertical cylinders, where the macro-scale variation of waves is derived from the dynamics in the micro-scale cells. Assuming infinitesimal waves, periodic lattice configuration, and
strong contrast between the lattice spacing and the typical wavelength, the perturbation theory of homogenization (multiple scales) is used to derive the effective equations governing the
macro-scale wave dynamics. The constitutive coefficients are computed from the solution of micro-scale boundary-value problem for a finite number of unit cells. Eddy viscosity in a unit cell
is determined by balancing the time-averaged rate of dissipation and the rate of work done by wave force on the forest at a finite number of macro stations. While the spirit is similar to
RANS scheme, less computational effort is needed. Using one fitting parameter, the theory is used to simulate three existing experiments with encouraging results. Limitations of the present
theory are also pointed out.
PubDate: 2014-02-01
• Numerical analysis on the Brisbane River plume in Moreton Bay due to
Queensland floods 2010–2011
□ Abstract: Abstract During the Queensland floods in the summer of 2010–2011, a flood-driven Brisbane River plume extended into Moreton Bay, Queensland, Australia, and then seaward, travelling
in a northward direction. It covered approximately 500 km $^{2}$ . This paper presents a three- dimensional hydrodynamic numerical model investigation into the behaviour of the Brisbane River
plume. The model was verified by using satellite observations and field measurement data. The present study concludes that the high river discharge was the primary factor determining the
plume size and its seaward extensions. A notable finding was that the plume was a bottom-trapped type rather than a buoyant type. Further, the southerly winds were found to have moderately
confined the alongshore extension of the plume, and had caused the plume to mix thoroughly with the ocean water.
PubDate: 2014-02-01
• Large-eddy simulation of turbulent flow and dispersion over a complex
urban street canyon
□ Abstract: Abstract Turbulent flow and dispersion characteristics over a complex urban street canyon are investigated by large-eddy simulation using a modified version of the Fire Dynamics
Simulator. Two kinds of subgrid scale (SGS) models, the constant coefficient Smagorinsky model and the Vreman model, are assessed. Turbulent statistics, particularly turbulent stresses and
wake patterns, are compared between the two SGS models for three different wind directions. We found that while the role of the SGS model is small on average, the local or instantaneous
contribution to total stress near the surface or edge of the buildings is not negligible. By yielding a smaller eddy viscosity near solid surfaces, the Vreman model appears to be more
appropriate for the simulation of a flow in a complex urban street canyon. Depending on wind direction, wind fields, turbulence statistics, and dispersion patterns show very different
characteristics. Particularly, tall buildings near the street canyon predominantly generate turbulence, leading to homogenization of the mean flow inside the street canyon. Furthermore, the
release position of pollutants sensitively determines subsequent dispersion characteristics.
PubDate: 2014-01-24
• Preface
• Three-dimensional analysis of coherent turbulent flow structure around a
single circular bridge pier
□ Abstract: Abstract The coherent turbulent flow around a single circular bridge pier and its effects on the bed scouring pattern is investigated in this study. The coherent turbulent flow and
associated shear stresses play a major role in sediment entrainment from the bed particularly around a bridge pier where complex vortex structures exist. The conventional two-dimensional
quadrant analysis of the bursting process is unable to define sediment entrainment, particularly where fully three-dimensional flow structures exist. In this paper, three-dimensional octant
analysis was used to improve understanding of the role of bursting events in the process of particle entrainment. In this study, the three-dimensional velocity of flow was measured at 102
points near the bed of an open channel using an Acoustic Doppler Velocity meter (Micro-ADV). The pattern of bed scouring was measured during the experiment. The velocity data were analysed
using the Markov process to investigate the sequential occurrence of bursting events and to determine the transition probability of the bursting events. The results showed that external sweep
and internal ejection events were an effective mechanism for sediment entrainment around a single circular bridge pier. The results are useful in understanding scour patterns around bridge
PubDate: 2014-01-17
• Experimental and large eddy simulation study of the flow developed by a
sequence of lateral obstacles
□ Abstract: Abstract In this paper we provide a description of the three-dimensional flow induced by a sequence of lateral obstacles in a straight shallow open-channel flow with flat
bathymetry. The obstacles are modelled as rectangular blocks and are located at one channel wall, perpendicular to the main stream direction. Two aspect ratios of the resulting dead zones are
analysed. The flow structure is experimentally characterised by particle image velocimetry measurements in a laboratory flume and simulated using three-dimensional Large Eddy Simulations.
Good agreement between experimental measurements and numerical results is obtained. The results show that the effect of the obstacles in the main channel is observed up to one obstacle length
in the spanwise direction. The spacing between obstacles does not seem to have a large influence in the outer flow. The mean flow within the dead zone is characterised by a large
recirculation region and several additional vortex systems. They are discussed in the paper, as well as the mean and root-mean-square wall shear-stresses.
PubDate: 2014-01-10
• Horizontal transport, mixing and retention in a large, shallow estuary:
Río de la Plata
□ Abstract: Abstract We use field data and a high-resolution three-dimensional (3D) hydrodynamic numerical model to investigate the horizontal transport and dispersion characteristics in the
upper reaches of the shallow Río de la Plata estuary, located between the Argentinean and Uruguayan coasts, with the objective of relating the mixing characteristics to the likelihood of
algal bloom formation. The 3D hydrodynamic model was validated with an extensive field experiment including both, synoptic profiling and in situ data, and then used to quantify the geographic
variability of the local residence time and rate of dispersion. We show that during a high inflow regime, the aquatic environment near the Uruguayan coast, stretching almost to the middle of
the estuary, had short residence time and horizontal dispersion coefficient of around 77 $\mathrm {m}^{2}\,\mathrm {s}^{-1}$ , compared to the conditions along the Argentinean coastal regime
where the residence time was much longer and the dispersion coefficient (40 $\mathrm {m}^{2}\,\mathrm {s}^{-1}$ ) much smaller, making the Argentinian coastal margin more susceptible for
algae blooms.
PubDate: 2014-01-05
• Near-surface flow in complex terrain with coastal and urban influence
□ Abstract: Abstract A simple conceptual model is presented to describe the near-surface flow of a long, partially urbanized valley of slope $\beta $ located normal to a coastline, considering
forcing due to differential surface temperatures between the sea, undeveloped (rural) land and urban area. Accordingly, under weak synoptic conditions and when the coastal and urban
(thermally induced pressure-gradient) forcing are in phase with that of the valley thermal circulation, the mean flow velocity $U$ is parameterized by the cumulative effects of multiple
forcing: $U = \varGamma w_*\beta ^{1/3} +C(g\alpha \varDelta TL)^{1/2}$ . This accounts for the coastal/urban forcing due to surface-air buoyancy difference $g\alpha \Delta T$ over a distance
$L$ . Here $\varGamma $ and $C$ are constants and $w_*$ the convective velocity. Comparisons with data of the Meteo-diffusion field experiment conducted in a coastal semi-urbanized valley of
Italy (Biferno Valley) reveal that the inferences of the model are consistent with observed valley flow velocities as well as sharp morning and prolonged evening transitions. While the
experimental dataset is limited, the agreement with observations suggests that the model captures essential dynamics of valley circulation subjected to multiple forcing. Further observations
are necessary to investigate the general efficacy of the model.
PubDate: 2014-01-05
• Analytical solutions of nonlinear and variable-parameter transport
equations for verification of numerical solvers
□ Abstract: Abstract All numerical codes developed to solve the advection–diffusion-reaction (ADR) equation need to be verified before they are moved to the operational phase. In this paper, we
initially provide four new one-dimensional analytical solutions designed to help code verification; these solutions are able to handle the challenges of the scalar transport equation
including nonlinearity and spatiotemporal variability of the velocity and dispersion coefficient, and of the source term. Then, we present a solution of Burgers’ equation in a novel setup.
Proposed solutions satisfy the continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. By the end of the paper, we solve hypothetical
test problems for each of the solutions numerically, and we use the derived analytical solutions for code verification. Finally, we provide assessments of results accuracy based on well-known
model skill metrics.
PubDate: 2013-12-31
• Quantifying the effect of wind on internal wave resonance in Lake
Villarrica, Chile
□ Abstract: Abstract Lake Villarrica, located in south central Chile, has a maximum depth of 167 m and a maximum fetch of about 20 km. The lake is monomictic, with a seasonal thermocline
located at a depth of approximately 20 m. Field data show the presence of basin-scale internal waves that are forced by daily winds and affected by Coriolis acceleration. A modal linear and
non-linear analysis of internal waves has been used, assuming a two-layer system. The numerical simulations show good agreement with the internal wave field observations. The obtained modes
were used to study the energy dissipation within the system, which is necessary to control the amplitude growth. Field data and numerical simulations identify (1) the occurrence of a
horizontal mode 1 Kelvin wave, with a period of about a day that coincides with the frequency of daily winds, suggesting that this mode of the Kelvin waves is in a resonant state (subject to
damping and controlled by frictional effects in the field) and (2) the presence of higher-frequency internal waves, which are excited by non-linear interactions between basin-scale internal
waves. The non-linear simulation indicates that only 10 % of the dissipation rate of the Kelvin wave is because of bottom friction, while the rest 90 % represents the energy that is radiated
from the Kelvin wave to other modes. Also, this study shows that modes with periods between 5 and 8 h are excited by non-linear interactions between the fundamental Kelvin wave and horizontal
Poincaré-type waves. A laboratory study of the resonant interaction between a periodic forcing and the internal wave field response has also been performed, confirming the resonance for the
horizontal mode 1 Kelvin wave.
PubDate: 2013-12-25 | {"url":"http://www.journaltocs.ac.uk/index.php?action=browse&subAction=pub&publisherID=822&journalID=3975&pageb=7&userQueryID=&sort=&local_page=&sorType=&sorCol=","timestamp":"2014-04-18T14:02:38Z","content_type":null,"content_length":"146311","record_id":"<urn:uuid:d5b5193b-8644-4961-ae80-6cb359ce9a25>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: What program/film/documentary are you watching now?
Matrix Games Forums Forums Register Login Photo Gallery Member List Search
Calendars FAQ
My Profile Inbox Address Book My Subscription My Forums Log Out
View related threads: (in this forum | in all forums) Logged in as: Guest
All Forums >> [General] >> General Discussion >> RE: What program/film/documentary are you watching now? Page: << < prev 20 21 [22] 23 24 next > >>
2/6/2013 7:06:10 PM
warspite1 quote:
ORIGINAL: parusski
Posts: 16006
Joined: 2/2/2008
From: England
Status: offline quote:
ORIGINAL: warspite1
This is amazing....
They found his body straight away - How spooky is that?? They didn't even know where the Church was and yet they came upon it as soon as they started digging - you
couldn't make it up....
warspite1, read the interesting Wall Street Journal article about Richard III:
Thanks I will - the author wrote a piece in my newspaper yesterday too.
England expects that every man will do his duty - Horatio Nelson 1805.
Post #: 631
quote:ORIGINAL: parusski quote:ORIGINAL: warspite1 This is amazing.... They found his body straight away - How spooky is that?? They didn't even know where the Church was and yet they came upon it as
soon as they started digging - you couldn't make it up.... warspite1, read the interesting Wall Street Journal article about Richard III: http://online.wsj.com/article/
quote:ORIGINAL: warspite1 This is amazing.... They found his body straight away - How spooky is that?? They didn't even know where the Church was and yet they came upon it as soon as they started
digging - you couldn't make it up....
2/10/2013 11:39:52 PM
SLAAKMAN Secret Agent X-9, 1945- Chapters 1,2&3
Posts: 2808 _____________________________
Joined: 7/24/2002
Status: offline Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 632
2/13/2013 7:31:26 PM
warspite1 Coronation Street
Posts: 16006
Joined: 2/2/2008 England expects that every man will do his duty - Horatio Nelson 1805.
From: England
Status: offline
Post #: 633
2/13/2013 9:14:30 PM
SLAAKMAN 911 missing links
Posts: 2808
Joined: 7/24/2002
Status: offline
Attachment (1)
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 634
2/13/2013 9:49:36 PM
Darryl60 1)The Blue Max
2)Ken Burns-Civil War
Posts: 3 3)Mayday(I find the reasons behind air crashes to be both interesting and sometimes infuriating).
Joined: 7/23/2010
Status: offline
Post #: 635
2/15/2013 6:44:22 PM
SLAAKMAN World War II - BLITZKRIEG - The Lightning War [France Falls 1940]
Posts: 2808
Joined: 7/24/2002
Status: offline _____________________________
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
(in reply to Darryl60)
Post #: 636
2/15/2013 7:24:35 PM
SLAAKMAN [2012] NEW EVIDENCE THAT 9/11 WAS AN INSIDE JOB
Posts: 2808 George Humphrey - 911 The great Illusion - Full version
Joined: 7/24/2002 http://www.youtube.com/watch?v=EVjeUjgY-ps
Status: offline
Attachment (1)
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 637
2/15/2013 9:49:44 PM
parusski quote:
ORIGINAL: SLAAKMAN
Posts: 4591
Joined: 5/8/2000 [2012] NEW EVIDENCE THAT 9/11 WAS AN INSIDE JOB
From: Wyoming, Even http://www.youtube.com/watch?v=u3UpwskTIRc
Liberals Welcome
Status: offline George Humphrey - 911 The great Illusion - Full version
There is new evidence that Barrack Hussein Obama had secret meeting with AlGore and Jimmy Carter in 2000 to plan the destruction of George Bush. They decided to create fake
air planes(I hear they were helium planes) that would appear to fly into the Trade Center buildings while two nuclear devices were detonated to cause the buildings to fall
faster than a conspiracy theorist can make s**t up. It worked and BO became president. Also, they recruited the entire population to lie about seeing those planes hit
< Message edited by parusski -- 2/15/2013 9:52:47 PM >
"I hate newspapermen. They come into camp and pick up their camp rumors and print them as facts. I regard them as spies, which, in truth, they are. If I killed them all
there would be news from Hell before breakfast."- W.T. Sherman
Post #: 638
quote:ORIGINAL: SLAAKMAN [2012] NEW EVIDENCE THAT 9/11 WAS AN INSIDE JOB http://www.youtube.com/watch?v=u3UpwskTIRc George Humphrey - 911 The great Illusion - Full version http://www.youtube.com/
2/15/2013 10:53:33 PM
warspite1 When watching those programmes you can see how people can be taken in. However, it would be better for the conspiracy theorists if they actually got their story straight.
So in the clips I watched there were eye witness reports that the aircraft that struck the towers were military and not civilian planes, oh and ALSO that there were NO
Posts: 16006
Joined: 2/2/2008
From: England _____________________________
Status: offline
England expects that every man will do his duty - Horatio Nelson 1805.
Post #: 639
2/15/2013 10:59:12 PM
SLAAKMAN quote:
When watching those programmes you can see how people can be taken in. However, it would be better for the conspiracy theorists if they actually got their story
Posts: 2808 straight. So in the clips I watched there were eye witness reports that the aircraft that struck the towers were military and not civilian planes, oh and ALSO that
Joined: 7/24/2002 there were NO planes....okay....
Status: offline
Suuuure just like there were WMD's in Iraq to justify invasion & jet fuel burned down the WTC....LMAOROFL!
Attachment (1)
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 640
quote:When watching those programmes you can see how people can be taken in. However, it would be better for the conspiracy theorists if they actually got their story straight. So in the clips I
watched there were eye witness reports that the aircraft that struck the towers were military and not civilian planes, oh and ALSO that there were NO planes....okay....
2/15/2013 11:07:27 PM
Posts: 16006 ORIGINAL: SLAAKMAN
Joined: 2/2/2008
From: England
Status: offline quote:
When watching those programmes you can see how people can be taken in. However, it would be better for the conspiracy theorists if they actually got their story
straight. So in the clips I watched there were eye witness reports that the aircraft that struck the towers were military and not civilian planes, oh and ALSO that
there were NO planes....okay....
Suuuure just like there were WMD's in Iraq to justify invasion & jet fuel burned down the WTC....LMAOROFL!
Oh, I'm with you on the WMD crap, but let's not go down that Allied soldier graveyard
But seriously, are the conspiracy theorists trying to argue no planes or yes there were planes but they weren't civilian? If they don't stick to a theory, but instead just
try a scattergun approach, then they lose the credibility they are trying to achieve. There was also a theory that the second plane was in fact a ball?? Is that why there
was an enormous trebuchet set up in Newfoundland at the start of September? I wonder if it was a civilian ball or a military ball?
England expects that every man will do his duty - Horatio Nelson 1805.
Post #: 641
quote:ORIGINAL: SLAAKMAN quote:When watching those programmes you can see how people can be taken in. However, it would be better for the conspiracy theorists if they actually got their story
straight. So in the clips I watched there were eye witness reports that the aircraft that struck the towers were military and not civilian planes, oh and ALSO that there were NO planes....okay....
Suuuure just like there were WMD's in Iraq to justify invasion & jet fuel burned down the WTC....LMAOROFL!
2/16/2013 2:21:42 AM
SLAAKMAN quote:
But seriously, are the conspiracy theorists trying to argue no planes or yes there were planes but they weren't civilian? If they don't stick to a theory, but instead
Posts: 2808 just try a scattergun approach, then they lose the credibility they are trying to achieve.
Joined: 7/24/2002
Status: offline
There are no "conspiracy theories". Thats only a brainwashing term. There are only verifiable historical facts & logical deductions.
There was also a theory that the second plane was in fact a ball?? Is that why there was an enormous trebuchet set up in Newfoundland at the start of September? I
wonder if it was a civilian ball or a military ball?
My grandfather & the side of my family who grew up before & during World War One were told that SciFi writers believed people would fly, watch & talk to each other with
wireless devices, cure polio & smallpox and travel to the moon. Few believed they would ever see it in their lifetimes. Most laughed at the notions. They were also told
that horrific wars were coming & millions would be slaughtered with weapons only few had dreamed of, whole cities & peoples would be destroyed & even nightmarish weapons
would be used that had only been fantasized before. When the first reports of atrocities in the Bolshevik Revolution started to leak out my grandparents were told that its
only a silly "conspiracy theory"....The rest is history.
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 642
quote:But seriously, are the conspiracy theorists trying to argue no planes or yes there were planes but they weren't civilian? If they don't stick to a theory, but instead just try a scattergun
approach, then they lose the credibility they are trying to achieve.
quote: There was also a theory that the second plane was in fact a ball?? Is that why there was an enormous trebuchet set up in Newfoundland at the start of September? I wonder if it was a civilian
ball or a military ball?
2/16/2013 2:41:52 PM
Posts: 4591 ORIGINAL: warspite1
Joined: 5/8/2000
From: Wyoming, Even
Liberals Welcome
Status: offline quote:
ORIGINAL: SLAAKMAN
When watching those programmes you can see how people can be taken in. However, it would be better for the conspiracy theorists if they actually got their story
straight. So in the clips I watched there were eye witness reports that the aircraft that struck the towers were military and not civilian planes, oh and ALSO
that there were NO planes....okay....
Suuuure just like there were WMD's in Iraq to justify invasion & jet fuel burned down the WTC....LMAOROFL!
Oh, I'm with you on the WMD crap, but let's not go down that Allied soldier graveyard
But seriously, are the conspiracy theorists trying to argue no planes or yes there were planes but they weren't civilian? If they don't stick to a theory, but instead
just try a scattergun approach, then they lose the credibility they are trying to achieve. There was also a theory that the second plane was in fact a ball?? Is that
why there was an enormous trebuchet set up in Newfoundland at the start of September? I wonder if it was a civilian ball or a military ball?
W1, you must realize that SLAAK and other planetary aliens, KNOW the truth. Just ask them about WW2. WW2 never happened, it was all a lot of footage from Hollywood lots and
pretend combat on the moon.
"I hate newspapermen. They come into camp and pick up their camp rumors and print them as facts. I regard them as spies, which, in truth, they are. If I killed them all
there would be news from Hell before breakfast."- W.T. Sherman
Post #: 643
quote:ORIGINAL: warspite1 quote:ORIGINAL: SLAAKMAN quote:When watching those programmes you can see how people can be taken in. However, it would be better for the conspiracy theorists if they
actually got their story straight. So in the clips I watched there were eye witness reports that the aircraft that struck the towers were military and not civilian planes, oh and ALSO that there were
NO planes....okay.... Suuuure just like there were WMD's in Iraq to justify invasion & jet fuel burned down the WTC....LMAOROFL! warspite1 Oh, I'm with you on the WMD crap, but let's not go down that
Allied soldier graveyard But seriously, are the conspiracy theorists trying to argue no planes or yes there were planes but they weren't civilian? If they don't stick to a theory, but instead just
try a scattergun approach, then they lose the credibility they are trying to achieve. There was also a theory that the second plane was in fact a ball?? Is that why there was an enormous trebuchet
set up in Newfoundland at the start of September? I wonder if it was a civilian ball or a military ball?
2/17/2013 12:05:30 AM
SLAAKMAN quote:
Posts: 2808 W1, you must realize that SLAAK and other planetary aliens, KNOW the truth. Just ask them about WW2. WW2 never happened, it was all a lot of footage from Hollywood lots
Joined: 7/24/2002 and pretend combat on the moon.
Status: offline
Silly parruski-Newblette, observe;
UFO - OVNI - UFOs In Washington D.C - 60 years ago
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 644
quote: W1, you must realize that SLAAK and other planetary aliens, KNOW the truth. Just ask them about WW2. WW2 never happened, it was all a lot of footage from Hollywood lots and pretend combat on
the moon.
2/17/2013 12:09:00 AM
Posts: 2808
Joined: 7/24/2002 Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
Status: offline which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 645
2/17/2013 12:10:13 AM
Posts: 4591 ORIGINAL: SLAAKMAN
Joined: 5/8/2000
From: Wyoming, Even
Liberals Welcome
Status: offline
What? Is that a story from SLAAKMAN Daily Insane News?
"I hate newspapermen. They come into camp and pick up their camp rumors and print them as facts. I regard them as spies, which, in truth, they are. If I killed them all
there would be news from Hell before breakfast."- W.T. Sherman
Post #: 646
2/17/2013 12:53:19 AM
Chickenboy quote:
ORIGINAL: SLAAKMAN
Posts: 17371
Joined: 6/29/2002
From: Twin Cities, MN
Status: offline
I love it.
Hard hitting journalism that has half of its scientific opinions 'off the record' or 'wished to have his name withheld' with his opinion. The cited Berkeley astronomy
department professor (like *he* would know?) stated he had no idea what in the heck the interviewer was talking about.
Under the subheading 'space travel possible' a professor is cited as being entirely unlikely that these saucers came from another planet. No, silly newspaper person-the
quotes do not support the title or subheading.
All in all, my journalism professor would have flayed me, skinned me, had me drawn and quartered and then made me wash parusski's socks had I written something like this.
C'mon SLAAK-why don't you find a credible source?
ETA: I missed the "We *know* there is vegetation on Mars so..."
< Message edited by Chickenboy -- 2/17/2013 12:55:18 AM >
Post #: 647
2/17/2013 2:17:08 AM
Posts: 4591 ORIGINAL: Chickenboy
Joined: 5/8/2000
From: Wyoming, Even
Liberals Welcome quote:
Status: offline
ORIGINAL: SLAAKMAN
I love it.
Hard hitting journalism that has half of its scientific opinions 'off the record' or 'wished to have his name withheld' with his opinion. The cited Berkeley astronomy
department professor (like *he* would know?) stated he had no idea what in the heck the interviewer was talking about.
Under the subheading 'space travel possible' a professor is cited as being entirely unlikely that these saucers came from another planet. No, silly newspaper person-the
quotes do not support the title or subheading.
All in all, my journalism professor would have flayed me, skinned me, had me drawn and quartered and then made me wash parusski's socks had I written something like
this. C'mon SLAAK-why don't you find a credible source?
ETA: I missed the "We *know* there is vegetation on Mars so..."
Hey, cut SLAAK some SLACK...I need to say that at least once a month. There are unicorns on Mars, I know this cause SLAAKER told me.
"I hate newspapermen. They come into camp and pick up their camp rumors and print them as facts. I regard them as spies, which, in truth, they are. If I killed them all
there would be news from Hell before breakfast."- W.T. Sherman
Post #: 648
quote:ORIGINAL: Chickenboy quote:ORIGINAL: SLAAKMAN I love it. Hard hitting journalism that has half of its scientific opinions 'off the record' or 'wished to have his name withheld' with his
opinion. The cited Berkeley astronomy department professor (like *he* would know?) stated he had no idea what in the heck the interviewer was talking about. Under the subheading 'space travel
possible' a professor is cited as being entirely unlikely that these saucers came from another planet. No, silly newspaper person-the quotes do not support the title or subheading. All in all, my
journalism professor would have flayed me, skinned me, had me drawn and quartered and then made me wash parusski's socks had I written something like this. C'mon SLAAK-why don't you find a credible
source? ETA: I missed the "We *know* there is vegetation on Mars so..."
2/17/2013 3:07:39 AM
SLAAKMAN Basting-Chickenboy,
Posts: 2808
Joined: 7/24/2002 I love it.
Status: offline
Hard hitting journalism that has half of its scientific opinions 'off the record' or 'wished to have his name withheld' with his opinion. The cited Berkeley astronomy
department professor (like *he* would know?) stated he had no idea what in the heck the interviewer was talking about.
Under the subheading 'space travel possible' a professor is cited as being entirely unlikely that these saucers came from another planet. No, silly newspaper person-the
quotes do not support the title or subheading.
All in all, my journalism professor would have flayed me, skinned me, had me drawn and quartered and then made me wash parusski's socks had I written something like
this. C'mon SLAAK-why don't you find a credible source?
ETA: I missed the "We *know* there is vegetation on Mars so..." Oh yes. Vegetation on Mars. Fruit trees surround gumdrop lane on Mars. Butterflies and unicorns and
pegasi. Lovely.
(The second article is merely a government disinformation juxtaposition of the irrefutable sighting of alien spacecraft over Washington so as to prevent old grannies, heart
patients & weak bladdered-bloated prostate sufferers such as yourself & parruski from incurring lethal panic attacks as a result. Your fragile testicles & cystic bowels
might explode otherwise).
Attachment (1)
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 649
quote:I love it. Hard hitting journalism that has half of its scientific opinions 'off the record' or 'wished to have his name withheld' with his opinion. The cited Berkeley astronomy department
professor (like *he* would know?) stated he had no idea what in the heck the interviewer was talking about. Under the subheading 'space travel possible' a professor is cited as being entirely
unlikely that these saucers came from another planet. No, silly newspaper person-the quotes do not support the title or subheading. All in all, my journalism professor would have flayed me, skinned
me, had me drawn and quartered and then made me wash parusski's socks had I written something like this. C'mon SLAAK-why don't you find a credible source? ETA: I missed the "We *know* there is
vegetation on Mars so..." Oh yes. Vegetation on Mars. Fruit trees surround gumdrop lane on Mars. Butterflies and unicorns and pegasi. Lovely.
2/17/2013 4:02:38 AM
parusski quote:
ORIGINAL: SLAAKMAN
Posts: 4591
Joined: 5/8/2000 Basting-Chickenboy,
From: Wyoming, Even
Liberals Welcome quote:
Status: offline
I love it.
Hard hitting journalism that has half of its scientific opinions 'off the record' or 'wished to have his name withheld' with his opinion. The cited Berkeley
astronomy department professor (like *he* would know?) stated he had no idea what in the heck the interviewer was talking about.
Under the subheading 'space travel possible' a professor is cited as being entirely unlikely that these saucers came from another planet. No, silly newspaper
person-the quotes do not support the title or subheading.
All in all, my journalism professor would have flayed me, skinned me, had me drawn and quartered and then made me wash parusski's socks had I written something like
this. C'mon SLAAK-why don't you find a credible source?
ETA: I missed the "We *know* there is vegetation on Mars so..." Oh yes. Vegetation on Mars. Fruit trees surround gumdrop lane on Mars. Butterflies and unicorns and
pegasi. Lovely.
(The second article is merely a government disinformation juxtaposition of the irrefutable sighting of alien spacecraft over Washington so as to prevent old grannies,
heart patients & weak bladdered-bloated prostate sufferers such as yourself & parruski from incurring lethal panic attacks as a result. Your fragile testicles & cystic
bowels might explode otherwise).
I have to say that I admire your incredible ability to have a FREAKING answer for everything. So, was the 1967 Patterson-Gimlin Bigfoot film a government conspiracy or was
it a government disinformation campaign to take attention away from your birth?
And how did you know I have cystic bowels?
Attachment (1)
< Message edited by parusski -- 2/17/2013 4:07:32 AM >
"I hate newspapermen. They come into camp and pick up their camp rumors and print them as facts. I regard them as spies, which, in truth, they are. If I killed them all
there would be news from Hell before breakfast."- W.T. Sherman
Post #: 650
quote:ORIGINAL: SLAAKMAN Basting-Chickenboy, quote:I love it. Hard hitting journalism that has half of its scientific opinions 'off the record' or 'wished to have his name withheld' with his opinion.
The cited Berkeley astronomy department professor (like *he* would know?) stated he had no idea what in the heck the interviewer was talking about. Under the subheading 'space travel possible' a
professor is cited as being entirely unlikely that these saucers came from another planet. No, silly newspaper person-the quotes do not support the title or subheading. All in all, my journalism
professor would have flayed me, skinned me, had me drawn and quartered and then made me wash parusski's socks had I written something like this. C'mon SLAAK-why don't you find a credible source? ETA:
I missed the "We *know* there is vegetation on Mars so..." Oh yes. Vegetation on Mars. Fruit trees surround gumdrop lane on Mars. Butterflies and unicorns and pegasi. Lovely. (The second article is
merely a government disinformation juxtaposition of the irrefutable sighting of alien spacecraft over Washington so as to prevent old grannies, heart patients & weak bladdered-bloated prostate
sufferers such as yourself & parruski from incurring lethal panic attacks as a result. Your fragile testicles & cystic bowels might explode otherwise).
2/17/2013 5:05:24 AM
Posts: 17371 ORIGINAL: parusski
Joined: 6/29/2002
From: Twin Cities, MN
Status: offline quote:
ORIGINAL: SLAAKMAN
I love it.
Hard hitting journalism that has half of its scientific opinions 'off the record' or 'wished to have his name withheld' with his opinion. The cited Berkeley
astronomy department professor (like *he* would know?) stated he had no idea what in the heck the interviewer was talking about.
Under the subheading 'space travel possible' a professor is cited as being entirely unlikely that these saucers came from another planet. No, silly newspaper
person-the quotes do not support the title or subheading.
All in all, my journalism professor would have flayed me, skinned me, had me drawn and quartered and then made me wash parusski's socks had I written something
like this. C'mon SLAAK-why don't you find a credible source?
ETA: I missed the "We *know* there is vegetation on Mars so..." Oh yes. Vegetation on Mars. Fruit trees surround gumdrop lane on Mars. Butterflies and unicorns
and pegasi. Lovely.
(The second article is merely a government disinformation juxtaposition of the irrefutable sighting of alien spacecraft over Washington so as to prevent old
grannies, heart patients & weak bladdered-bloated prostate sufferers such as yourself & parruski from incurring lethal panic attacks as a result. Your fragile
testicles & cystic bowels might explode otherwise).
I have to say that I admire your incredible ability to have a FREAKING answer for everything. So, was the 1967 Patterson-Gimlin Bigfoot film a government conspiracy or
was it a government disinformation campaign to take attention away from your birth?
And how did you know I have cystic bowels?
I'm waiting patiently for the 'irrefutable' part of this bit. I've yet to see it.
Post #: 651
quote:ORIGINAL: parusski quote:ORIGINAL: SLAAKMAN Basting-Chickenboy, quote:I love it. Hard hitting journalism that has half of its scientific opinions 'off the record' or 'wished to have his name
withheld' with his opinion. The cited Berkeley astronomy department professor (like *he* would know?) stated he had no idea what in the heck the interviewer was talking about. Under the subheading
'space travel possible' a professor is cited as being entirely unlikely that these saucers came from another planet. No, silly newspaper person-the quotes do not support the title or subheading. All
in all, my journalism professor would have flayed me, skinned me, had me drawn and quartered and then made me wash parusski's socks had I written something like this. C'mon SLAAK-why don't you find a
credible source? ETA: I missed the "We *know* there is vegetation on Mars so..." Oh yes. Vegetation on Mars. Fruit trees surround gumdrop lane on Mars. Butterflies and unicorns and pegasi. Lovely.
(The second article is merely a government disinformation juxtaposition of the irrefutable sighting of alien spacecraft over Washington so as to prevent old grannies, heart patients & weak
bladdered-bloated prostate sufferers such as yourself & parruski from incurring lethal panic attacks as a result. Your fragile testicles & cystic bowels might explode otherwise). I have to say that I
admire your incredible ability to have a FREAKING answer for everything. So, was the 1967 Patterson-Gimlin Bigfoot film a government conspiracy or was it a government disinformation campaign to take
attention away from your birth? And how did you know I have cystic bowels?
2/17/2013 5:52:06 AM
SLAAKMAN quote:
Posts: 2808 And how did you know I have cystic bowels?
Joined: 7/24/2002
Status: offline
(Youre woman told me while bathing in my glory).
I have to say that I admire your incredible ability to have a FREAKING answer for everything. So, was the 1967 Patterson-Gimlin Bigfoot film a government conspiracy or
was it a government disinformation campaign to take attention away from your birth?
Neither. That sighting was a Wright-Patterson visIon of an alien autopsy;
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 652
quote: And how did you know I have cystic bowels?
quote:I have to say that I admire your incredible ability to have a FREAKING answer for everything. So, was the 1967 Patterson-Gimlin Bigfoot film a government conspiracy or was it a government
disinformation campaign to take attention away from your birth?
2/17/2013 5:58:07 AM
SLAAKMAN quote:
I'm waiting patiently for the 'irrefutable' part of this bit. I've yet to see it.
Posts: 2808
Joined: 7/24/2002
Status: offline Tell me what type aircraft did we have in 1952 that could fly at 7200MPH from horizontal to vertical vector?
Unbelievable Speed:
The irrefutable radar returns were seen at Washington National Airport and Andrews Air Force Base. Government officials were at a loss to account what was happening
over their own air space. The blips traveled around 100 mph for the most part, but what was unbelievable was their ability to reach the astonishing speed of 7,200 mph
when accelerating. The capabilities of the UFOs were far beyond our technological proficiency at the time.
Vanished From Sight:
The U.S. Air Force Air Defense Command was first notified of what was occurring by Andrews Air Force Base. Immediately, several F-94 night fliers were ordered to hunt
down and verify the subject of the radar sightings. However, repairs being done on a runway delayed their response. There would be actual dogfights between the U. S.
planes and the UFOs, with our planes being out-maneuvered.
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 653
quote:I'm waiting patiently for the 'irrefutable' part of this bit. I've yet to see it.
quote: Unbelievable Speed: The irrefutable radar returns were seen at Washington National Airport and Andrews Air Force Base. Government officials were at a loss to account what was happening over
their own air space. The blips traveled around 100 mph for the most part, but what was unbelievable was their ability to reach the astonishing speed of 7,200 mph when accelerating. The capabilities
of the UFOs were far beyond our technological proficiency at the time. Vanished From Sight: The U.S. Air Force Air Defense Command was first notified of what was occurring by Andrews Air Force Base.
Immediately, several F-94 night fliers were ordered to hunt down and verify the subject of the radar sightings. However, repairs being done on a runway delayed their response. There would be actual
dogfights between the U. S. planes and the UFOs, with our planes being out-maneuvered.
2/17/2013 7:47:03 AM
warspite1 The BBC's "The Normans" on DVD. Qualiteeee
Posts: 16006
Joined: 2/2/2008 England expects that every man will do his duty - Horatio Nelson 1805.
From: England
Status: offline
Post #: 654
2/17/2013 11:09:34 AM
warspite1 I watched the first of the new series of Air Crash Investigation yesterday. Tragic...
A Boeing 737, flying from Jakarta to Singapore suddenly fell out of the sky and crashed into a Sumatran river, killing all 104 passengers and crew onboard.
Posts: 16006
Joined: 2/2/2008 The investigators came to the conclusion that one of the pilots, heavily in debt and recently demoted, purposely flew the plane into the river.... what a .........
From: England
Status: offline _____________________________
England expects that every man will do his duty - Horatio Nelson 1805.
Post #: 655
2/17/2013 2:32:08 PM
Posts: 4591 ORIGINAL: SLAAKMAN
Joined: 5/8/2000
From: Wyoming, Even
Liberals Welcome quote:
Status: offline
And how did you know I have cystic bowels?
(Youre woman told me while bathing in my glory).
I have to say that I admire your incredible ability to have a FREAKING answer for everything. So, was the 1967 Patterson-Gimlin Bigfoot film a government conspiracy
or was it a government disinformation campaign to take attention away from your birth?
Neither. That sighting was a Wright-Patterson visIon of an alien autopsy;
Wow, SLAAK, I never thought you could get any more delusional.
"I hate newspapermen. They come into camp and pick up their camp rumors and print them as facts. I regard them as spies, which, in truth, they are. If I killed them all
there would be news from Hell before breakfast."- W.T. Sherman
Post #: 656
quote:ORIGINAL: SLAAKMAN quote: And how did you know I have cystic bowels? (Youre woman told me while bathing in my glory). quote:I have to say that I admire your incredible ability to have a
FREAKING answer for everything. So, was the 1967 Patterson-Gimlin Bigfoot film a government conspiracy or was it a government disinformation campaign to take attention away from your birth? Neither.
That sighting was a Wright-Patterson visIon of an alien autopsy;
2/17/2013 3:25:48 PM
SLAAKMAN quote:
Wow, SLAAK, I never thought you could get any more delusional.
Posts: 2808
Joined: 7/24/2002
Status: offline Silly parruski Newblette why are you straining your neurons without addressing the issue properly? Begin with Drakes Formula to understand that the odds favor the Glory of
The Drake equation states that:
N = the number of civilizations in our galaxy with which communication might be possible (i.e. which are on our current past light cone);
R* = the average rate of star formation per year in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fℓ = the fraction of the above that actually go on to develop life at some point
fi = the fraction of the above that actually go on to develop intelligent life
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time for which such civilizations release detectable signals into space[5]
R factor
One can question why the number of civilizations should be proportional to the star formation rate, though this makes technical sense. (The product of all the factors
except L tells how many new communicating civilizations are born each year. Then you multiply by the lifetime to get the expected number. For example, if an average of
0.01 new civilizations are born each year, and they each last 500 years on the average, then on the average 5 will exist at any time.) The original Drake Equation can
be extended to a more realistic model, where the equation uses not the number of stars that are forming now, but those that were forming several billion years ago. The
alternate formulation, in terms of the number of stars in the galaxy, is easier to explain and understand, but implicitly assumes the star formation rate is constant
over the life of the galaxy.
The number of stars in the galaxy now, N*, is related to the star formation rate R* by
Now examine this astute testimony of Bob Lazars account of working on an alien spaceship;
The Bob Lazar Interview
And examine the proximity of our neighbors from Zeta Reticuli, only 39 LY from Earth;
Attachment (1)
Germany's unforgivable crime before the Second World War was her attempt to extricate her economy from the world's trading system and to create her own exchange mechanism
which would deny world finance its opportunity to profit.
— Winston Churchill
Post #: 657
quote:Wow, SLAAK, I never thought you could get any more delusional.
quote:The Drake equation states that: where: N = the number of civilizations in our galaxy with which communication might be possible (i.e. which are on our current past light cone); and R* = the
average rate of star formation per year in our galaxy fp = the fraction of those stars that have planets ne = the average number of planets that can potentially support life per star that has planets
fℓ = the fraction of the above that actually go on to develop life at some point fi = the fraction of the above that actually go on to develop intelligent life fc = the fraction of civilizations that
develop a technology that releases detectable signs of their existence into space L = the length of time for which such civilizations release detectable signals into space[5] R factor One can
question why the number of civilizations should be proportional to the star formation rate, though this makes technical sense. (The product of all the factors except L tells how many new
communicating civilizations are born each year. Then you multiply by the lifetime to get the expected number. For example, if an average of 0.01 new civilizations are born each year, and they each
last 500 years on the average, then on the average 5 will exist at any time.) The original Drake Equation can be extended to a more realistic model, where the equation uses not the number of stars
that are forming now, but those that were forming several billion years ago. The alternate formulation, in terms of the number of stars in the galaxy, is easier to explain and understand, but
implicitly assumes the star formation rate is constant over the life of the galaxy. The number of stars in the galaxy now, N*, is related to the star formation rate R* by
2/17/2013 4:45:48 PM
Curtis Lemay quote:
ORIGINAL: SLAAKMAN
Posts: 6720
Joined: 9/17/2004 The Drake equation states that:
From: Houston, TX
Status: offline
N = the number of civilizations in our galaxy with which communication might be possible (i.e. which are on our current past light cone);
R* = the average rate of star formation per year in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
f§¤ = the fraction of the above that actually go on to develop life at some point
fi = the fraction of the above that actually go on to develop intelligent life
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time for which such civilizations release detectable signals into space[5]
R factor
One can question why the number of civilizations should be proportional to the star formation rate, though this makes technical sense. (The product of all the factors
except L tells how many new communicating civilizations are born each year. Then you multiply by the lifetime to get the expected number. For example, if an average of
0.01 new civilizations are born each year, and they each last 500 years on the average, then on the average 5 will exist at any time.) The original Drake Equation can
be extended to a more realistic model, where the equation uses not the number of stars that are forming now, but those that were forming several billion years ago. The
alternate formulation, in terms of the number of stars in the galaxy, is easier to explain and understand, but implicitly assumes the star formation rate is constant
over the life of the galaxy.
The number of stars in the galaxy now, N*, is related to the star formation rate R* by
The problem with the Drake equation is that all the terms multiply. That means that all the errors sum. That means that if even ONE of the terms is a guess, the whole thing
is a guess. And, sadly, most of the terms are guesses - and will remain guesses for millions of years. Plausible solutions to the equation range practically from zero to
infinity. That's a guess by any definition. So, it's sort of psudeo-science.
I'm more convinced by the Fermi Paradox:
Basically, it says that, while the galaxy is 100,000 light years across, it is 10,000,000,000 years old. So, at any reasonable pace of exploration, it shouldn't take more
than a few million years to explore the entire galaxy - no more than only 0.1% of its age. That means that if space-faring worlds are common, we shouldn't exist. Because
they wouldn't have just arrived to check out our radio broadcasts, they wouldn't even have arrived late enough to turn us all into fish sticks. They would have gotten here
when we were still pond scum. At that point, it would have been absurd for them to decide to wait billions of years for the pond scum to turn into Carl Sagan. Rather, they
would have said "Look at all this Living Space" - if we can just get rid of this pond scum!".
This is confirmed by SETI's decades of searching without finding diddly-squat. Millions of years from now, after OUR decendents have terra-formed and colonized the entire
galaxy, they'll get a hit everywhere they point their radio telescopes.
< Message edited by Curtis Lemay -- 2/17/2013 4:47:11 PM >
Post #: 658
quote:ORIGINAL: SLAAKMAN The Drake equation states that: where: N = the number of civilizations in our galaxy with which communication might be possible (i.e. which are on our current past light
cone); and R* = the average rate of star formation per year in our galaxy fp = the fraction of those stars that have planets ne = the average number of planets that can potentially support life per
star that has planets f§¤ = the fraction of the above that actually go on to develop life at some point fi = the fraction of the above that actually go on to develop intelligent life fc = the
fraction of civilizations that develop a technology that releases detectable signs of their existence into space L = the length of time for which such civilizations release detectable signals into
space[5] R factor One can question why the number of civilizations should be proportional to the star formation rate, though this makes technical sense. (The product of all the factors except L tells
how many new communicating civilizations are born each year. Then you multiply by the lifetime to get the expected number. For example, if an average of 0.01 new civilizations are born each year, and
they each last 500 years on the average, then on the average 5 will exist at any time.) The original Drake Equation can be extended to a more realistic model, where the equation uses not the number
of stars that are forming now, but those that were forming several billion years ago. The alternate formulation, in terms of the number of stars in the galaxy, is easier to explain and understand,
but implicitly assumes the star formation rate is constant over the life of the galaxy. The number of stars in the galaxy now, N*, is related to the star formation rate R* by
2/17/2013 5:17:25 PM
Posts: 17371 ORIGINAL: SLAAKMAN
Joined: 6/29/2002
From: Twin Cities, MN
Status: offline quote:
I'm waiting patiently for the 'irrefutable' part of this bit. I've yet to see it.
Tell me what type aircraft did we have in 1952 that could fly at 7200MPH from horizontal to vertical vector?
Unbelievable Speed:
The irrefutable radar returns were seen at Washington National Airport and Andrews Air Force Base. Government officials were at a loss to account what was happening
over their own air space. The blips traveled around 100 mph for the most part, but what was unbelievable was their ability to reach the astonishing speed of 7,200
mph when accelerating. The capabilities of the UFOs were far beyond our technological proficiency at the time.
Vanished From Sight:
The U.S. Air Force Air Defense Command was first notified of what was occurring by Andrews Air Force Base. Immediately, several F-94 night fliers were ordered to
hunt down and verify the subject of the radar sightings. However, repairs being done on a runway delayed their response. There would be actual dogfights between the
U. S. planes and the UFOs, with our planes being out-maneuvered.
Yes. Irrefutable 'radar returns'. 'Cuz nothing EVER has caused funky radar returns. Ever. OK-maybe that once. But otherwise, radar signatures (and measuring speed from
same) is PERFECT.
Post #: 659
quote:ORIGINAL: SLAAKMAN quote:I'm waiting patiently for the 'irrefutable' part of this bit. I've yet to see it. Tell me what type aircraft did we have in 1952 that could fly at 7200MPH from
horizontal to vertical vector? quote: Unbelievable Speed: The irrefutable radar returns were seen at Washington National Airport and Andrews Air Force Base. Government officials were at a loss to
account what was happening over their own air space. The blips traveled around 100 mph for the most part, but what was unbelievable was their ability to reach the astonishing speed of 7,200 mph when
accelerating. The capabilities of the UFOs were far beyond our technological proficiency at the time. Vanished From Sight: The U.S. Air Force Air Defense Command was first notified of what was
occurring by Andrews Air Force Base. Immediately, several F-94 night fliers were ordered to hunt down and verify the subject of the radar sightings. However, repairs being done on a runway delayed
their response. There would be actual dogfights between the U. S. planes and the UFOs, with our planes being out-maneuvered. http://ufos.about.com/od/visualproofphotosvideo/p/washingtondc.htm
2/17/2013 5:19:07 PM
Chickenboy The irrefutable Drake formula (it's a ratio, really) is 11:1 drakes:hens. That's about right.
Posts: 17371
Joined: 6/29/2002
From: Twin Cities, MN
Status: offline
Post #: 660
Page: << < prev 20 21 [22] 23 24 next > >> | {"url":"http://www.matrixgames.com/forums/tm.asp?m=2959208&mpage=22&key=","timestamp":"2014-04-16T19:43:00Z","content_type":null,"content_length":"204000","record_id":"<urn:uuid:8e58af39-cfc4-429b-8ba2-98ad1548202b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Solve the fallowing equations 4e-8=32 please show steps??
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4ec68322e4b0306379b2e1b3","timestamp":"2014-04-21T02:03:50Z","content_type":null,"content_length":"37301","record_id":"<urn:uuid:31d66cfe-f86c-4abe-9d01-c245b2e45228>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Selected Papers on Discrete Mathematics
Distributed for Center for the Study of Language and Information
828 pages | 50 line drawings | 6 x 9 | © 2003
Sixth in a series of collected works, Selected Papers on Discrete Mathematics is devoted to Knuth's purely mathematical work. Over forty of Knuth's classic papers spanning the entire range of
discrete mathematics are collected in this volume, all brought up to date with extensive revisions and the addition of new material.
The papers emphasize general techniques of problem solving and explore the creation of mathematical patterns. Knuth's prize-winning expositions of mathematical notation, his accounts of episodes in
the history of mathematics, and his fundamental papers on tableaux and random graphs are all found here, along with fifty new illustrations. Scholars and students of mathematics will find this an
indispensable collection.
1. Combinatorial Analysis and Computers
2. Two Notes on Notation
3. Bracket Notation for the ‘Coefficient of’ Operator
4. Johann Faulhaber and Sums of Powers
5. Notes on Thomas Harriot
6. A Permanent Inequality
7. Overlapping Pfaffians
8. The Sandwich Theorem
9. Combinatorial Matrices
10. Aztec Diamonds, Checkerboard Graphs, and Spanning Trees
11. Partitioned Tensor Products and Their Spectra
12. Oriented Subtrees of an Arc Digraph
13. Another Enumeration of Trees
14. Abel Identities and Inverse Relations
15. Convolution Polynomials
16. Polynomials Involving the Floor Function
17. Construction of a Random Sequence
18. An Imaginary Number System
10. Tables of Finite Fields
20. Finite Semifields and Projective Planes
21. A Class of Projective Planes
22. Notes on Central Groupoids
23. Huffman’s Algorithm via Algebra
24. Wheels Within Wheels
25. Complements and Transitive Closures
26. Random Matroids
27. The Asymptotic Number of Geometries
28. Permutations with Nonnegative Partial Sums
29. Efficient Balanced Codes
30. The Knowlton – Graham Partition Problem
31. Permutations, Matrices, and Generalized Young Tableaux
32. Enumeration of Plane Partitions
33. A Note on Solid Partitions
34. Identities from Partition Involutions
35. Subspaces, Subsets, and Partitions
36. The Power of a Prime That Divides a Generalized Coefficient
37. An Almost Linear Recurrence
38. Recurrence Relations Based on Minimization
39. A Recurrence Related to Trees
40. The First Cycles in an Evolving Graph
41. The Birth of the Giant Component
For more information, or to order this book, please visit http://www.press.uchicago.edu
Google preview here | {"url":"http://press.uchicago.edu/ucp/books/book/distributed/S/bo3613158.html","timestamp":"2014-04-24T18:02:29Z","content_type":null,"content_length":"25536","record_id":"<urn:uuid:634d7007-3c5b-45fe-8267-db2dbe3022d3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Carrollton, TX Algebra 1 Tutor
Find a Carrollton, TX Algebra 1 Tutor
...I helped businesses prepare business plans, apply for business loans, and set up both automated and manual accounting systems. I have over 10 years experience as an accountant in all sectors of
accounting including public accounting. I have a Masters of Science in Accounting from the University of Hartford, West Hartford, CT.
19 Subjects: including algebra 1, calculus, GRE, geometry
...I utilize this knowledge to identify underlying processing difficulties that may be interfering with learning, and to design a specialized strategy for achieving the desired results. In
addition, I have a thorough understanding of effective study skills, organization, and test taking skills. So...
39 Subjects: including algebra 1, reading, English, chemistry
...Then, after the child can identify and pronounce letters, I move on to a comprehensive phonics program that utilizes flash cards, letter blends, colorful pictures, and short books for every
level of beginning reader. Do you remember or have you heard of the old "Dick and Jane" readers that we grew up with decades ago? Well, those books are not phonics based!
41 Subjects: including algebra 1, reading, English, GED
...I am a Texas state certified teacher (math 4-12), I teach complete Geometry course to students whether as acceleration (credit by exam at the end from their district) or as extension of their
school curriculum. The course includes units: Points,Lines,planes and Angles, Deductive Reasoning, Trans...
20 Subjects: including algebra 1, calculus, physics, geometry
...I hold a Masters Degree in Education with emphasize on instruction in math and science for grades 4th through 8th. I have taken courses in pre-algebra, algebra I and II, Matrix Algebra,
Trigonometry, pre-calculus, Calculus I and II, Geometry and Analytical Geometry, Differential Equations. I was a tutor in college for students that needed help in math.
11 Subjects: including algebra 1, geometry, algebra 2, precalculus | {"url":"http://www.purplemath.com/carrollton_tx_algebra_1_tutors.php","timestamp":"2014-04-20T19:49:24Z","content_type":null,"content_length":"24302","record_id":"<urn:uuid:73a74501-7b72-487c-bde4-b990ac3e0b78>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Milton, WA Prealgebra Tutor
Find a Milton, WA Prealgebra Tutor
...My unique mix of industry (research scientist) and academia (PhD in electrical engineering) not only enables me to teach students how to solve problems but most importantly why they need to
learn the material. With the associated costs with tutoring, I concentrate on helping students obtain self...
45 Subjects: including prealgebra, chemistry, physics, calculus
...I have over four years of experience as a tutor, working with students from the elementary through the college level. When I work with students, I am aiming for more than just good test scores
- I will build confidence so that my students know that they know the material. Math is my passion, not just what I majored in.
8 Subjects: including prealgebra, calculus, geometry, algebra 1
...I use various theatre games and warm ups, and I strongly emphasize scene by scene analysis. I consider voice lessons to be an extension of acting lessons, and I primarily focus on musical
theatre. My approach to teaching voice revolves around getting the student to convey the emotional content and the dramatic meaning of the song.
22 Subjects: including prealgebra, reading, English, chemistry
I have an Associate's degree from Tacoma Community College, and I have been tutoring at TCC for the past twelve years. Most of my tutoring experience has been with adults, but I also help high
school students who are in the Running Start program. I can tutor any level of math from arithmetic to intermediate algebra, and even a little bit of precalculus.
8 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I am a 1st generation Latin American female, my parents were born in Mexico and Costa Rica. As a result, I am bilingual, having had to learn Spanish as a second language since most of my
extended family does not speak English. Thus, I have been speaking Spanish since an early age, have lived in Spanish-speaking countries and have some experience acting as a translator.
25 Subjects: including prealgebra, Spanish, chemistry, writing
Related Milton, WA Tutors
Milton, WA Accounting Tutors
Milton, WA ACT Tutors
Milton, WA Algebra Tutors
Milton, WA Algebra 2 Tutors
Milton, WA Calculus Tutors
Milton, WA Geometry Tutors
Milton, WA Math Tutors
Milton, WA Prealgebra Tutors
Milton, WA Precalculus Tutors
Milton, WA SAT Tutors
Milton, WA SAT Math Tutors
Milton, WA Science Tutors
Milton, WA Statistics Tutors
Milton, WA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Algona, WA prealgebra Tutors
Auburn, WA prealgebra Tutors
Covington, WA prealgebra Tutors
Dupont, WA prealgebra Tutors
Edgewood, WA prealgebra Tutors
Federal Way prealgebra Tutors
Fife, WA prealgebra Tutors
Fircrest, WA prealgebra Tutors
Graham, WA prealgebra Tutors
Jovita, WA prealgebra Tutors
Maple Valley prealgebra Tutors
Pacific, WA prealgebra Tutors
Puy, WA prealgebra Tutors
Puyallup prealgebra Tutors
Sumner, WA prealgebra Tutors | {"url":"http://www.purplemath.com/Milton_WA_prealgebra_tutors.php","timestamp":"2014-04-20T00:00:15Z","content_type":null,"content_length":"24185","record_id":"<urn:uuid:9353e284-6072-4666-bfd9-0003f55d7454>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Graph of Inequalities
The graph of a linear equation is a straight line. The graph of a linear inequality is the half of the plane to one side of the line. The solution of a system of inequalities is the set intersection
of the regions for each inequality. It may be the empty set, a finite polygon, the inside of an acute angle, a strip, and so on, depending on the nature of the inequalities. | {"url":"http://demonstrations.wolfram.com/GraphOfInequalities/","timestamp":"2014-04-20T11:33:26Z","content_type":null,"content_length":"42882","record_id":"<urn:uuid:72068586-0334-469b-b297-be0888b3af4c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Copying formulas with changing cells
Thanks Eduardo!!!
"Eduardo" wrote:
> Hi,
> highlight the column where you have the formulas, hit CTRL H, find what R,
> replace with enter C
> "KenJ" wrote:
> > Hi,
> >
> > I'm using Microsoft Office 2003. I have formulas in a particular column, but
> > now I need to change from one column to another with out changing the
> > formula. e.g =R20/12 now needs to be =C20/12. how do I do this without having
> > to do each one separately? | {"url":"http://www.pcreview.co.uk/forums/copying-formulas-changing-cells-t3898505.html","timestamp":"2014-04-17T18:42:23Z","content_type":null,"content_length":"41368","record_id":"<urn:uuid:bb78cdd2-bfd3-4f35-bec2-c8ac43f57cf2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00448-ip-10-147-4-33.ec2.internal.warc.gz"} |
Register or Login To Download This Patent As A PDF
United States Patent Application 20060251061
Kind Code A1
Kim; Jaeseok ;   et al. November 9, 2006
Apparatus for detecting symbol in SDM system and method thereof
An apparatus for detecting a symbol in a space division multiplexing system and a method thereof are disclosed. The apparatus includes: a QR decomposing unit for performing a QR decomposition on a
channel matrix H to decompose the channel matrix H to a Q matrix and a R matrix; a first symbol detecting unit for detecting a first symbol from a receive signal vector by using the result of the QR
decomposition; a candidate symbol deciding unit for deciding the detected first symbol and Nc-1 symbols adjacent to the detected first symbol on a constellation as candidate symbols, wherein the Nc
is an positive integer number; a candidate vector detecting unit for detecting Nc candidate transmit symbol vectors from the detected Nc candidate symbols; and a transmit symbol vector deciding unit
for deciding an optimized transmit symbol vector among the detected Nc candidate transmit symbol vectors.
Inventors: Kim; Jaeseok; (Seoul, KR) ; Jung; Yunho; (Seoul, KR)
Correspondence Address: SUGHRUE MION, PLLC
2100 PENNSYLVANIA AVENUE, N.W.
SUITE 800
Assignee: YONSEI UNIVERSITY
Serial No.: 136402
Series Code: 11
Filed: May 25, 2005
Current U.S. Class: 370/366
Class at Publication: 370/366
International Class: H04Q 11/00 20060101 H04Q011/00
Foreign Application Data
Date Code Application Number
Apr 13, 2005 KR 10-2005-0030836
1. An apparatus for detecting a symbol in a space division multiplexing (SDM) system, the apparatus comprising: a QR decomposing unit for performing a QR decomposition on a channel matrix H to
decompose the channel matrix H to a Q matrix and a R matrix a first symbol detecting unit for detecting a first symbol from a receive signal vector by using the result of the QR decomposition; a
candidate symbol deciding unit for deciding the detected first symbol and Nc-1 symbols adjacent to the detected first symbol on a constellation as candidate symbols, wherein the Nc is an positive
integer number; a candidate symbol vector detecting unit for detecting Nc candidate transmit symbol vectors from the detected Nc candidate symbols; and a final symbol vector deciding unit for
deciding an optimized transmit symbol vector among the detected Nc candidate transmit symbol vectors.
2. The apparatus according to claim 1, wherein the QR decomposing unit decomposes the channel matrix to the Q matrix and the R matrix based on a sorted QR decomposition detection (SQRD) based
3. The apparatus according to claim 1, wherein the candidate symbol vector detecting unit detects the candidate transmit symbol vectors in parallel on each of the Nc candidate symbols.
4. The apparatus according to claim 3, wherein the candidate symbol vector detecting unit performs a QR decomposition based symbol detection on each of the Nc candidate symbols determined in the
candidate symbol deciding unit.
5. The apparatus according to claim 3, wherein the candidate symbol vector detecting unit performs a sorted QR decomposition based symbol detection on each of the Nc candidate symbols determined in
the candidate symbol deciding unit.
6. The apparatus according to claim 3, wherein the candidate symbol vector detecting unit performs a QR decomposition based symbol detection with minimum mean square error (MMSE) criterion on each of
the Nc candidate symbols determined in the candidate symbol deciding unit.
7. The apparatus according to claim 1, wherein the final symbol vector deciding unit performs a maximum likelihood (ML) test on each of the Nc candidate transmit symbol vectors for deciding the
optimized transmit symbol vector.
8. A method of detecting a symbol in a space division multiplexing (SDM) system, the method comprising: detecting a first symbol from a receive signal vector by performing a QR decomposition on the
channel matrix H; deciding the detected first symbol and Nc-1 candidate symbols adjacent to the detected first symbol on a constellation as candidate symbols, wherein the Nc is an positive integer
number; detecting Nc candidate transmit symbol vectors from the decided Nc candidate symbols as candidate symbol vectors; and deciding an optimized transmit symbol vector among the Nc candidate
transmit symbol vector.
9. The method according to claim 8, wherein in the detecting the first symbol, the first symbol is detected by performing a sorted QR decomposition (SQRD) on the channel matrix.
10. The method according to claim 8, wherein in the detecting the candidate symbol vectors, the Nc candidate symbol vectors are detected from the Nc candidate symbols in parallel.
11. The method according to claim 10, wherein in the deciding the candidate symbol vectors, a symbol detection is orderly performed on each of the Nc candidate symbols for detecting the Nc candidate
transmit symbol vectors.
12. The method according to claim 10, wherein in the deciding the candidate symbol vectors, a QR decomposition based symbol detection is performed on each of the Nc candidate symbols for detecting
the Nc candidate transmit vectors.
13. The method according to claim 10, wherein in the deciding the candidate symbol vectors, remained symbol vectors are detected from a receive signal vector according to a sorted QR decomposition
based symbol detection by using the detected first transmit symbol vector.
14. The method according to claim 10, wherein in the detecting the candidate symbol vectors, a QR decomposition based symbol detection with minimum mean square error (MMSE) criterion is performed on
each of the Nc candidate symbols.
15. The method according to claim 8, wherein in the deciding the transmit symbol vector, a minimum likelihood (ML) test is performed on the Nc candidate transmit symbol vectors for deciding the
optimized transmit symbol vector.
[0001] The present invention relates to an apparatus for detecting a symbol and a method thereof and, more particularly, to an apparatus for rapidly and precisely detecting a symbol in a receiver of
a space division multiplexing (SDM) system.
[0002] A wireless communication service is transited from a low speed voice communication service to a high speed multimedia communication service. Accordingly, there is a growing interest in a high
speed data communication service and there are many studies in progress for increasing data transmission rate. A space division multiplexing (SDM) scheme was introduced by G. J. Foschini in 1996. The
SDM scheme dramatically increases the data transmission rate by using a multiple transmit/receive antennas. The SDM scheme may be called as a bell-labs layered space-time (BLAST). The SDM scheme
splits a single user's data stream into multiple sub-streams and simultaneously transmits the multiple sub-streams in parallel through the multiple transmit antennas. Therefore, the data transmission
rate of the SDM scheme increases in proportion to the number of the transmit antennas used.
[0003] Since the SDM scheme pursues a spatial multiplexing gain while a space-time code (STC) scheme pursues a spatial diversity gain, each of transmit antennas independently transmits symbols in the
SDM scheme. Therefore, to detect transmitted symbols becomes a major scheme for a receiver in a SDM system.
[0004] FIG. 1 is a block diagram illustrating a SDM system using N.sub.t transmit antennas and Nr receive antennas.
[0005] As shown in FIG. 1, the SDM system includes a serial-parallel converting unit 11, N.sub.t transmit antennas, N.sub.r receive antennas, a symbol detecting unit 12 and a parallel-serial
converting unit 13.
[0006] The serial-parallel converting unit 11 splits a data stream into N.sub.t uncorrelated sub-streams and transmits the N.sub.t uncorrelated sub-streams through the N.sub.t transmit antennas. The
transmitted N.sub.t sub-streams are picked up by the N.sub.r receive antennas after being perturbed by a channel matrix H (assuming quasi-static).
[0007] The symbol detecting unit 12 detects symbols from the sub-streams received through the N.sub.r receive antennas.
[0008] The parallel-serial converting unit 13 converts the detected symbols, which are parallel data, to serial data.
[0009] If it assumes that a transmitting signal experiences a Rayleigh flat fading while traveling a narrowband wireless channel, a relation between a N.sub.t-dimensional transmit signal vector and a
N.sub.r-dimensional receive signal vector can be expressed as following Equation (1) x=Hs+v (1)
[0010] where x denotes the N.sub.r-dimensional receive signal vector, s stands for the N.sub.t-dimensional transmit signal vector, H represents the [N.sub.r.times.N.sub.t]-dimensional complex channel
matrix and v is an additive white Gaussian noise. H is assumed to be a constant during each symbol time and to be known to a receiver through channel training. Since the transmitting signal is
assumed to experience the Rayleigh flat fading, each of elements of H is also assumed to be independently and identically distributed (i.i.d), to have a mean value of 0 and to have a variance of 1.
As described above, v is an additive white Gaussian noise and has a mean value of 0. Accordingly, a covariance matrix of v can be represented as following Equation (2) E[vv.sup..cndot.]
=.delta..sub.v.sup.2I.sub.N.sub.r, (2) where the superscript * denotes the conjugate-transpose of a vector signal and I.sub.N.sub.r represents a N.sub.r-dimensional identity matrix. The
N.sub.t-dimensional transmit signal vector s is assumed to have a mean value of 0 and a variance of .delta..sub.s.sup.2, and the total power of s is assumed to be P. Thus, the covariance matrix of s
is given by Equation (3) and a signal-to-noise ratio (SNR) is defined as Equation (4). E .function. [ s s * ] = .delta. s 2 I N t = P N t I N t ( 3 ) .rho. = E s N 0 = N t .times. .delta. s 2 .times.
.delta. t 2 .delta. v 2 = P .delta. v 2 ( 4 ) In Equation (4), E.sub.s and N.sub.0 denote the signal energy and noise power spectral density, respectively. Meanwhile, there are several optimized
detection algorithms with reduced complexity introduced recently for effectively performing a SDM scheme. Among the introduced algorithms, a sorted QR decomposition (SQRD) algorithm is spotlighted as
a good solution for real-time implementation. The SQRD algorithm detects symbols by a QR decomposition of a channel matrix H without calculating a series of pseudo inverse of the channel matrix.
Hereinafter, a conventional method of detecting a transmit signal vector s from a receive signal vector x using QR decomposition will be explained in detail. At first, the QR decomposition is
performed on a channel matrix H for decomposing the channel matrix H to QR (H=QR). Accordingly, a unitary matrix Q and an upper-triangular matrix R having zero lower triangular are decomposed from
the channel matrix H. If Q.sup.H is multiplied to both sides by using a characteristic that the matrix Q is the unitary matrix, i.e., Q.sup.HQ=I, where the superscript H denotes a conjugate transpose
of a matrix and I represents an identity matrix, following Equation (5) can be obtained. y = Q H .times. x = Rs + Q H .times. v = ( r 1.1 r 1.2 r 1 .times. .times. .times. N t 0 r 2.2 r 2 .times.
.times. .times. .times. N t 0 0 r N t - 1 .times. .times. .times. .times. N t 0 0 r N t .times. .times. .times. .times. N t ) .times. s + Q H .times. v ( 5 ) In Equation (5), y denotes a
N.sub.t-dimensional column vector. A last term of the Equation (5) can be simplified to following Equation (6) if Q.sup.Hv=v' is applied to the Equation (5). y.sub.N.sub.t=
r.sub.N.sub.t.cndot..sub.N.sub.tS.sub.N.sub.t/V'.sub.N.sub.- t (6) Since the Equation (6) is same to a result equation of a general communication system using single antenna, it is possible to detect
a (Nt)th symbol by using the following Equation (7). s ^ = Q .function. ( s ~ ) = Q .function. ( y N t r N t = N 0 ) = Q .function. ( s N t + v ' N t r N t = N 0 ) ( 7 ) In the Equation (7), Q( )
denotes a symbol decision calculation appropriate to a constellation of a transmitted symbol. If an influence of (Nt)th symbol detected through the Equation (7) is eliminated from (Nt-1)th term of
the Equation (5), Equation (8) is obtained. y'.sub.N.sub.t-1=y.sub.N.sub.t.sub.-1-r.sub.N.sub.t-1,.sub.N.su- b.ts.sub.N.sub.t=r.sub.N.sub.t-1,.sub.N.sub.t-1S.sub.N.sub.t-1+v'.sub.N.su- b.t-1 (8) The
Equation (8) is also identical to a result equation of a general communication system having single antenna. Therefore, a (Nt-1)th symbol can be detected by identical method of the Equation (7) and
the transmit symbol vector s can be detected by orderly applying the above describe method to remained terms of y. However, a performance of the SQRD based algorithm may be degraded compared to a
conventional ordered successive detection (OSD) algorithm because a detection order of the SQRD based algorithm is not always optimized while the OSD algorithm always provides optimized detection
order. Especially, if an error is occurred from the first detected signal, the error is propagated while detecting following symbols. Therefore, the total system performance can be seriously
[0011] It is, therefore, an object of the present invention to provide an apparatus for detecting a symbol in a SDM system to improve a total system performance by lowering an error probability of a
first detected symbol in a receiver of a space division multiplexing (SDM) system, and a method thereof.
[0012] In accordance with an aspect of the present invention, there is provided an apparatus for detecting a symbol in a space division multiplexing (SDM) system including: a QR decomposing unit for
performing a QR decomposition on a channel matrix H to decompose the channel matrix H to a Q matrix and a R matrix; a first symbol detecting unit for detecting a first symbol from a receive signal
vector by using the result of the QR decomposition; a candidate symbol detecting unit for deciding the detected first symbol and Nc-1 symbols adjacent to the detected first symbol on a constellation
as candidate symbols, wherein the Nc is an positive integer number; a candidate vector detecting unit for detecting Nc candidate transmit symbol vectors from the detected Nc candidate symbols; and a
transmit symbol vector deciding unit for deciding an optimized transmit symbol vector among the detected Nc candidate transmit symbol vectors.
[0013] In accordance with another aspect of the present invention, there is provided a method of detecting a symbol in a space division multiplexing (SDM) system including: detecting a first symbol
from a receive signal vector by performing a QR decomposition on the channel matrix H; deciding the detected first symbol and Nc-1 candidate symbols adjacent to the detected first symbol on a
constellation as candidate symbols, wherein the Nc is an positive integer number; detecting Nc candidate transmit symbol vectors from the decided Nc candidate symbols as candidate symbol vectors; and
deciding an optimized transmit symbol vector among the Nc candidate transmit symbol vector.
[0014] The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings in
[0015] FIG. 1 is a block diagram illustrating a SDM system using N.sub.t transmit antennas and N.sub.r receive antennas;
[0016] FIG. 2 is a block diagram depicting an apparatus for detecting a symbol in a space division multiplexing (SDM) system in accordance with a preferred embodiment of the present invention;
[0017] FIG. 3 is a flowchart showing a method of detecting a symbol in accordance with a preferred embodiment of the present invention; and
[0018] FIG. 4 is a graph depicting a performance of a apparatus for detecting a symbol in a SDM system according to a preferred embodiment of the present invention.
[0019] Hereinafter, an apparatus for rapidly and precisely detecting a symbol in a receiver of a space division multiplexing (SDM) system in accordance with a preferred embodiment of the present
invention will be described in more detail with reference to the accompanying drawings.
[0020] FIG. 2 is a block diagram illustrating an apparatus for detecting a symbol in a space division multiplexing (SDM) system in accordance with a preferred embodiment of the present invention.
[0021] As shown in FIG. 2, the apparatus for detecting s symbol in the SDM system includes a QR decomposing unit 21, a first symbol detecting unit 22, a candidate symbol deciding unit 23, a candidate
symbol vector detecting unit 24, and a final symbol vector deciding unit 25.
[0022] The QR decomposing unit 21 performs a QR decomposition on a channel matrix H for decomposing the channel matrix H to a matrix Q and a matrix R.
[0023] The first symbol detecting unit 22 detects a first symbol from a receive signal vector by using a result of the QR decomposition from the QR decomposing unit 21.
[0024] The candidate symbol deciding unit 23 decides the first symbol detected from the first symbol detecting unit 22 and Nc-1 symbols adjacent to the detected first symbol on a constellation as
candidate symbols. The Nc is a positive integer number.
[0025] The candidate symbol vector detecting unit 24 detects Nc candidate transmit signal vectors by orderly performing symbol detection through the Equation (8) on each of the Nc candidate symbols
which are decided by the candidate symbol deciding unit 23. The symbol detection may be performed in parallel for each candidate transmit symbol vector.
[0026] The final symbol vector deciding unit 25 decides an optimized transmit symbol vector s by performing a maximum likelihood (ML) test on each of Nc candidate transmit symbol vectors detected
from the candidate symbol vector detecting unit 24 based on following Equation (9). The ML test selects an input having a minimum squared Euclidean distance by substituting the Nc candidate transmit
symbol vectors. s=arg min.sub.i=1, . . . ,N.sub.t|x-Hc.sub.t|.sup.2 (9)
[0027] As described above, the apparatus for detecting a symbol according to the present embodiment not only detects the first symbol but also detects Nc-1 symbols adjacent to the detected first
symbol on a constellation in order to detect the Nc candidate symbols. The Nc candidate transmit symbol vectors are detected by applying the Equation (8) to the Nc candidate symbols and the ML test
is performed over the Nc candidate transmit symbol vectors for finally deciding single transmit symbol vector for enhancing accuracy of the first detect signal.
[0028] In other word, the conventional QR decomposition based symbol detection method detects single symbol by using the Equations (6) and (7), and orderly applies the Equation (8) to detect other
symbols for finally detecting single transmit symbol vector. However, a method of detecting a symbol according to the present embodiment detects Nc candidate transmit symbol vectors by using the
Equations (6) and (7), and performs the ML test on each of Nc candidate transmit symbol vectors for deciding the final transmit symbol vector as an optimized vector.
[0029] Meanwhile, the present embodiment may be implemented to various QR decomposition based symbol detection algorithms such as a QR decomposition based symbol detection algorithm (QRD), a sorted
QR decomposition based symbol detection algorithm (SQRD) and a QR decomposition symbol detection algorithm with minimum mean square error (MMSE) criterion.
[0030] FIG. 3 is a flowchart showing a method of detecting a symbol in accordance with a preferred embodiment of the present invention.
[0031] Referring to FIG. 3, a QR decomposition is performed on a channel matrix H in operation S301. By using the result of the QR decomposition, a first symbol is detected from a receive signal
vector in operation S302. The first detected symbol and Nc-1 symbols adjacent to the first detected symbol on a constellation are determined as candidate symbols in operation S303. Therefore, the
total number of the candidate symbols decided in operation S303 is Nc by including the first detected symbol.
[0032] A symbol detection is orderly performed for the Nc candidate symbols in operation S304. In operation S304, total Nc candidate transmit symbol vectors are obtained. The symbol detection for the
Nc candidate symbols may be performed in parallel for reducing processing time.
[0033] A ML test is performed over the Nc candidate transmit symbol vectors by using the Equation (9) for finally deciding a single transmit symbol vector s in operation S305.
[0034] Hereinafter, a performance of the apparatus for detecting a symbol in the SDM system according to the present embodiment will be compared to the same of the conventional apparatus for
detecting a symbol.
[0035] FIG. 4 is a graph depicting a performance of a apparatus for detecting a symbol in a SDM system according to a preferred embodiment of the present invention. The performance shown in FIG. 4 is
obtained from a simulation under conditions of 4 transmit/receive antennas, 16 QAM modulation scheme and 4 candidate symbols (Nc=4).
[0036] In FIG. 4, a curve LD represents a performance of the simulation when a linear detection (LD) is applied, a curve OSD denotes a performance of the simulation when an ordered successive
detection (OSD) is applied, a curve QRD is a performance of the simulation when a QR decomposition based detection is applied and a curve SQRD shows a performance of the simulation when a sorted QR
decomposition based detection (SQRD) is applied. Furthermore, a curve QRDML and a curve SQRDML represent performances of the simulations when the method of detecting a symbol according to the present
embodiment is applied. Especially, the curve QRDML shows a performance of the simulation when a QRD scheme with the ML test, which is one embodiment of the present invention, is applied and the curve
SQRDML denotes a performance of the simulation when a SQRD scheme with the ML test, which is another embodiment of the present invention, is applied.
[0037] As shown in the graph of FIG. 4, the method of detecting a symbol according to the present embodiment has better performance than the LD scheme. That is, the method of detecting a symbol based
on the QRD scheme with the ML test (QRDML) obtains about 9 dB of a signal-to-noise ratio (SNR) gain and the method of detecting a symbol based on the SQRD scheme with the ML test (SQRDML) obtains
about 12 dB of the SNR gain compared to the conventional method using the LD scheme. Compared to the conventional symbol detection method using the QRD scheme, the symbol detection method using the
QRDML obtains about 7 dB of the SNR gain and the symbol detection method using the SQRDML obtains about 10 dB of the SNR gain. Furthermore, compared to the conventional symbol detection method using
the SQRD scheme, the symbol detection method using the QRDML obtains about 4.5 dB of the SNR gain and the symbol detection method using the SQRDML obtains about 7.5 dB of the SNR gain. Moreover, the
symbol detection method based on the QRDML obtains about 3 dB of the SNR gain and the symbol detection method based on the SQRDML obtains about 6 dB of the SNR gain compared to the conventional
symbol detection method using the OSD scheme.
[0038] As described above, the present embodiment decides a transmit symbol vector by detecting Nc candidate symbol vectors form Nc candidate symbols and performing the ML test on each of the
detected Nc candidate symbol vectors in a receiver of a space division multiplexing (SDM) system. Therefore, accuracy of symbol detection is improved by lowering an error probability of the first
detected symbol and a total performance of a system is also enhanced.
[0039] While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skilled in the art that
various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined in the following claims.
* * * * * | {"url":"http://patents.com/us-20060251061.html","timestamp":"2014-04-18T11:31:26Z","content_type":null,"content_length":"39297","record_id":"<urn:uuid:8fa3a90c-95cf-4af1-8133-a107e2a2dda5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00069-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chapter 7. Evaluating Risks
Previous | Table of Contents | Next
This chapter continues the discussion of Phase II of the specification development process. This chapter is intended to provide "how to use" best practices in evaluating the risks associated with the
initial acceptance procedures that have been developed up to this point. The steps that are involved in this part of the process are identified in the flowchart in figure 25. The numbers in boxes
before the titles of the following sections refer to the corresponding box in the flowchart.
Establishing the limits to be used for acceptance is an important step. Making the limits too restrictive deprives the contractor of a reasonable opportunity to meet the specification. Making them
not sufficiently restrictive makes them ineffective in controlling quality. Selection of the limits relates to the determination of risks. The concept of risks for acceptance is similar to that
discussed in chapter 5 for verification testing to evaluate whether test results came from the same population. The two types of risk discussed in chapter 5 are the seller's (or contractor's) risk,
a, and the buyer's (or agency's) risk, b. The a risk is also called a Type I risk, and the b risk is also called a Type II risk. A well-written QA acceptance plan takes these risks into consideration
in a manner that is fair to both the contractor and the agency. Too large a risk for either party undermines credibility.
39.1. Risks: Definitions and Concepts
39.1.1. Risks. Before proceeding further, some terms need to be formally defined. The TRB glossary^ (2) includes the following definitions:
• Seller's risk (a)-also called risk of a type I error. The probability that an acceptance plan will erroneously reject acceptable quality level (AQL) material or construction with respect to a
single acceptance quality characteristic. It is the risk the contractor or producer takes in having AQL material or construction rejected.
• Buyer's risk (b)-also called risk of a type II error. The probability that an acceptance plan will erroneously fully accept (100 percent or greater) rejectable quality level (RQL) material or
construction with respect to a single acceptance quality characteristic. It is the risk the highway agency takes in having RQL material or construction fully accepted. [The probability of having
RQL material or construction accepted (at any pay) may be considerably greater than the buyer's risk.]
The a and b risk levels that might be appropriate vary depending upon the material or construction process that is involved. The appropriate risk level is a subjective decision that can vary from
agency-to-agency. In reality, it is likely that few agencies have developed and evaluated the risk levels associated with their acceptance plans. While risk levels are an agency decision, AASHTO R-9,
"Acceptance Sampling Plans for Highway Construction," suggests the risk levels indicated in table 21.^ (22) It should be noted that large sample sizes, on the order of 10 to 20 or more, may be
required to achieve some of the risk levels stipulated in this table.
^1Critical: when the requirement is essential to preservation of life.
Major: when the requirement is necessary for the prevention of substantial financial loss.
Minor: when the requirement does not materially affect performance.
Contractual: when the requirement is established only to provide uniform standards for bidding.
As noted in the section on verification testing in chapter 5, the concept of a and b risks derives from statistical hypothesis testing where there is either a right or wrong decision. As such, when a
and b risks are applied to materials or construction they are only truly appropriate for the case of a pass/fail or accept/reject decision and, in fact, may lead to considerable confusion if an
attempt is made to apply them to the payment adjustment case. When materials not only can be accepted or rejected, but can also be accepted at an adjusted payment, then additional interpretations or
clarifications must be applied to the definitions for these risks in an effort to manipulate them to apply to the payment adjustment situation.
For example, in the definition for buyer's risk above, it states that b is the probability that RQL material will be accepted at 100 percent payment or greater. The definition must then go on to
point out that there is a much greater probability that the RQL material will receive some reduced payment. While it is not stated as directly, the same reasoning is true for the seller's risk. The
definition indicates that a is the probability that AQL material will be rejected. Although not stated in the definition, it is also true that there is a much greater probability that the AQL
material will be accepted at a reduced payment.
39.1.2. OC Curves. The buyer's and seller's risks are very narrowly defined to occur at only two specific quality levels. The buyer's risk is the probability of accepting material that is exactly at
the RQL level of quality, while the seller's risk is the probability of rejecting material that is exactly at the AQL level of quality. These definitions do not therefore provide a very good
indication of the risks over a wide range of possible quality levels. To evaluate how the acceptance plan will actually perform in practice, it is necessary to construct an OC curve. The TRB glossary
^ (2) includes the following definition:
● OC curve — A graphic representation of an acceptance plan that shows the relationship between the actual quality of a lot and either (1) the probability of its acceptance (for accept/reject
acceptance plans) or (2) the probability of its acceptance at various payment levels (for acceptance plans that include pay adjustment provisions).
An example of an OC curve for a pass/fail or accept/reject acceptance plan, case (a) in the above definition, is shown in figure 26. Probability of acceptance is shown on the vertical axis for the
range of quality levels indicated on the horizontal axis. An example of an OC curve for an acceptance plan with payment adjustment provisions, case (b) in the above definition, is shown in figure 27.
The axes are the same as for figure 26, but there are multiple curves, one for each of several selected payment levels, plotted.
Each curve plotted in figure 27 represents the probability of receiving a payment factor equal to or greater than the one indicated for the line. For example, for the OC curves in figure 27, material
that is of exactly AQL quality has approximately a 45 percent chance of receiving a payment factor of 1.04 (104 percent) or greater. This same AQL material has approximately a 55 percent chance of
receiving full payment (100 percent) or greater, which also means that it has approximately a 45 percent chance of receiving less than 100 percent payment. This AQL material has essentially a 100
percent chance of receiving a payment factor of 0.80 (80 percent) or greater.
On the other hand, for the OC curves in figure 27, material that is of exactly RQL quality has approximately a 30 percent chance of receiving a payment factor of 0.80 (80 percent) or greater, and
nearly an 80 percent chance of receiving a payment factor of 0.70 (70 percent) or greater. Similar payment probabilities can be determined for any level of actual quality, and additional curves could
be developed for any specific value of payment factor.
39.1.3. Expected Payment Curves. Figure 27 clearly shows that consideration of onlya and brisks is clearly not sufficient when payment adjustments are used. From figure 27 it can also be seen that
using multiple OC curves is not an easy way to evaluate an acceptance plan. It would be convenient to have a single curve that can represent the operation of the plan as opposed to many different
curves for each plan. Another way to present the payment performance for an acceptance plan is with what is call an expected payment (EP) curve. The TRB glossary^ (2) includes the following
● EP curve — A graphic representation of an acceptance plan that shows the relation between the actual quality of a lot and its EP (i.e., mathematical pay expectation, or the average pay the
contractor can expect to receive over the long run for submitted lots of a given quality). [Both OC and EP curves should be used to evaluate how well an acceptance plan is theoretically expected
to work.]
An example of an EP curve is shown in figure 28. Quality levels are indicated on the horizontal axis in the usual manner, but instead of probability of acceptance, the vertical axis gives the
expected (long-term average) payment factor as a percent of the contract price.
Although the risks have a different interpretation when associated with EP curves than with OC curves, the same type of information is provided. For the example in figure 28, AQL work receives an
expected payment of 100 percent, as desired, while truly superior work that is better than the AQL receives an expected payment of 102 percent. At the other extreme, RQL work corresponds to an
expected payment of 70 percent. For still lower levels of quality, the curve levels off at a minimum expected payment of 50 percent.
A simplified example of how risks are related to specification limits can be illustrated by considering primitive acceptance plans that were based on measuring and accepting a property based on only
one test. Suppose that an accept/reject acceptance plan for asphalt content has been developed based on the definitions for AQL material and RQL material that follow.
Define AQL Material. It is assumed that asphalt content follows a normal distribution. It has been determined that for asphalt content, acceptable material has a standard deviation of about 0.20
percent when the mean is close to the target JMF value. If the JMF has established the target as 6.0 percent asphalt content, the AQL is therefore a lot (population) with a mean of 6.0 percent and a
standard deviation of 0.20 percent. Figure 29 shows an AQL population.
Define RQL Material. Additionally, unacceptable material might be defined as that for which the mean differs from the target value by 0.4 percent or more, as long as the standard deviation does not
exceed 0.20 percent. (Other definitions would be equally valid.) The RQL is therefore a lot (population) with a standard deviation of 0.20 percent and a mean of 5.6 percent or lower, or 6.4 percent
or higher. Examples of RQL populations are shown in figure 30.
Determine a Risk. Suppose the agency has established the specification limits, i.e., the limits within which individual asphalt content results must fall, as the JMF ± 0.40. For a JMF target value of
6.0 percent, this establishes the specification limits as 5.60 percent and 6.40 percent. An AQL population is shown along with the specification limits in figure 31. The a risk is the probability
that a single test result from this AQL lot would be outside of the allowable specification range of 5.60 percent to 6.40 percent. This is the a risk to the contractor because if a test falls outside
these limits the agency will erroneously reject the material. The Z-statistics can be calculated and used in conjunction with the standard normal distribution table (table 7) to determine this
probability to be 0.0456 or 4.56 percent.
Determine b Risk. Figure 32 shows an RQL population with its mean at 5.60 percent and standard deviation of 0.20 percent. The RQL population could also have its mean at 6.40 percent. The RQL
population can either be too low or too high, but not both at the same time. The b risk is the probability that a single test result from this RQL lot would be within of the allowable specification
range of 5.60 percent to 6.40 percent. This is the risk to the agency because if a test result falls in this range, the agency will erroneously accept the material. From figure 32 this probability
can be seen to be 0.50 or 50 percent.
Develop the OC Curve. Similarly, the probabilities of acceptance for lots with means of any value, e.g., 5.20 percent, 5.40 percent, 5.60 percent, etc., can be calculated and plotted to form the OC
curve shown in figure 33. The AQL and RQL are also noted on the figure. It should be noted that it is purely coincidental that the OC curve in figure 33 has the appearance of a normal curve.
39.2. OC Curves for PWL or PD Acceptance Plans
As with any acceptance plan that bases the acceptance decision on a sample, there are risks associated with PWL or PD acceptance plans. The above example demonstrated the calculation of risks for a
simple acceptance plan based on an assumed known standard deviation, and with the acceptance decision based on only a single test result. The risks associated with PWL or PD acceptance plans cannot
be calculated so easily.
For PWL or PD acceptance plans the risks are almost always determined by means of computer simulation. It is, however, possible to illustrate the risks associated with using a sample to estimate PWL
by means of a simplified attributes example.
Simplified Example
Assume that we have a bag that has 100 marbles. Further assume that the bag has 70 white marbles and 30 blue marbles. Also assume that we wish to take a sample of 10 marbles to estimate the
percentage of the marbles in the bag that are blue.
It is easy to estimate the percentage of blue marbles from a sample of 10 marbles. However, each sample of 10 marbles will not yield the same percentage of blue marbles. The first sample of 10
marbles might contain 3 blue marbles, thereby yielding an estimate of 30 percent blue marbles. However, it could also have only one blue marble, or five blue marbles. In each of these cases the
estimate from the sample will be fairly far from the true value of 30 percent.
The histogram in figure 34 shows the results of 100 samples, each with 10 marbles. While the individual sample results could be quite far from the actual percentage in the population, the average
of the 100 samples is quite close to the true population value. Also, most of the sample values are close to the actual population percentage, with fewer values as the estimate becomes farther
from the actual population percentage. Although simplified, this example clearly shows how the PWL values estimated from samples can vary. The long-run average of the sample averages will tend to
equal the true population PWL value, but there is a risk that any individual estimate may either over-estimate or underestimate the true population PWL value.
39.2.1. Computer Simulation. As noted above, calculating the risks for actual PWL acceptance plans is much more involved than the simplified example from figure 34. In fact, computer simulation is
almost always used to developa and brisks, as well as OC and EP curves. OCPLOT, a user-friendly program that develops OC and EP curves by computer simulation, was developed as part of FHWA
Demonstration Project No. 89. This program is explained in detail in the report for that project,^ (18) and is also presented in appendix M along with some examples.
OCPLOT can be used to develop OC curves for accept/reject acceptance plans. It can also be used with a stipulated payment equation to determine the probability of receiving a lot payment factor
greater than or equal to any specific value. In this way, it can be used to plot multiple OC curves similar to those in figure 27. The program can also develop EP curves for a given payment equation.
The program is also capable of simulating acceptance plans containing retest provisions.
39.3. Evaluating the Risks
39.3.1. Accept/Reject Acceptance Plans. How potential risks are evaluated depends upon the type of acceptance plan that is used. The evaluation of risks is rather straightforward for accept/reject
(pass/fail) acceptance plans. As noted above,a and brisks and OC curves were developed specifically for this type of situation. Therefore, they can be directly used to assess the risks to both
To reiterate, the a risk is the probability that AQL material will be rejected; while the b risk is the probability that RQL material will be accepted. However, since contractors will not operate at
only these two quality levels, to fully consider risks the OC curve, which illustrates probability of acceptance for any quality level, must be developed for the acceptance plan under consideration.
An example will help to illustrate how this can be done.
Example: Accept/Reject Acceptance Plans-OC Curves
The previously discussed OCPLOT program can be used to determine thea and brisks and to plot the OC curve for a sample acceptance plan. Suppose that an agency decides to use asphalt content as an
accept/reject property for an HMAC pavement (note, this is not recommended, but is used here solely for the purpose of illustrating the use of an OC curve for an accept/reject situation). Further
suppose that the agency has established for asphalt content a lower specification limit of 5.60 percent and an upper specification limit of 6.40 percent. The agency has decided to use the PWL,
based on a sample of size 4, as the quality measure. The agency has selected 90 PWL for the AQL and 50 PWL for the RQL. The lot will be accepted if the estimated PWL is greater than or equal to
70. Table 22 and figure 35 show the results of the OCPLOT analysis of this proposed acceptance plan.
From table 22 it can be seen that the seller's risk is a = 1.000 - 0.905 = 0.095 (or 9.5 percent) and the buyer's risk is b = 0.144 (or 14.4 percent). Further, both table 22 and figure 35 show
the probability of acceptance over the total range of possible lot quality levels, as defined by the actual PWL for the lot. The agency would need to decide whether or not it considers these
levels of risk to be appropriate.
39.3.2. Payment Adjustment Acceptance Plans. The evaluation of risks becomes much more complicated when the acceptance plan includes payment adjustment provisions. The concepts of a and b risks,
which were developed from hypothesis testing where there is a yes or no decision, i.e., reject or fail to reject the null hypothesis, are not sufficient when the decision involves not only accept or
reject, but also accept at an adjusted payment level.
The TRB glossary^ (2) definitions for seller's and buyer's risks that are presented above do not attempt to incorporate the concept of payment adjustments. The seller's risk is defined as the
probability that an acceptance plan will erroneously reject AQL material or construction. This disregards the fact that the material or construction can be accepted at full payment, increased
payment, or decreased payment. In other words, whether or not a lot received 105 percent, 100 percent, 75 percent, or 50 percent payment would have no impact with regard to the seller's risk based on
this definition. Obviously, however, these different payment levels would have quite an impact on how the contractor perceived its risks.
Similarly, the buyer's risk is defined as the probability that an acceptance plan will erroneously fully accept (100 percent or greater) RQL material or construction. Once again, this definition
disregards the impact of partial payments when determining the buyer's risk. However, when considering its risks the agency will certainly be interested in the probability of accepting RQL material
at reduced payment levels as well as at 100 percent payment or greater.
The use ofa and brisks to evaluate payment adjustment acceptance plans is simply not sufficient. Some additional method or methods is/are necessary to properly evaluate the risks when payment
adjustments are added to the acceptance decision options. The expected payment, or EP, (see figure 28) is another method for considering the payment adjustment aspects of the acceptance plan.
However, EP alone is also not sufficient to fully evaluate the risks that are involved. Multiple OC curves for various payment levels (see figure 27) should also be developed when evaluating
acceptance plans with payment adjustment provisions. An example will help to illustrate the evaluation of risks for payment adjustment acceptance plans.
Example: Payment Adjustment Acceptance Plans-EP Curves
Consider the previous asphalt content example where the sample size was 4, the allowable specification range was 5.60 percent to 6.40 percent, and the AQL and RQL were defined as 90 PWL and 50
PWL, respectively. However, instead of a simple accept/reject acceptance plan, the agency chooses to use equation 28 to establish the payment factor for a lot:
PF = 55 + (0.50 x PWL) (28)
where: PF = payment factor for the lot, as a percent of contract price.
PWL = estimated PWL value for the lot.
From the above equation, it can be seen that the maximum payment factor is 105 percent at 100 PWL, while the payment factor at the AQL will be 100 percent and the payment factor at the RQL will
be 80 percent. It is generally accepted that the average payment for AQL material should be 100 percent. In this example, the payment factor at the AQL is 100 percent, exactly as intended.
However, if the payment equation is not developed properly, the average payment factor may turn out to be above or below 100 percent at the AQL. If this is the case, the agency should determine
if an expected payment other than 100 percent for AQL material is acceptable.
With the above information, the OCPLOT program can be used to develop the EP curve shown in figure 36. It can be seen in this figure that, as desired, the EP for AQL material is 100 percent. This
means that a contractor that consistently produces material that just meets the minimum requirements, i.e., AQL material, will receive an average payment factor of 100 percent in the long-run.
Similarly, the EP for RQL material is 80 percent as desired from the payment equation.
The EP curve has the advantage of combining all of the possible payment levels into a single expected, or long-term average, payment for each given level of quality. While it is a major improvement
over only considering a and b risks, the use of the EP alone still has some serious deficiencies. The primary deficiency in the use of EP alone is that, while it considers the average long-term
payment factor, it fails to consider for a given quality level the variability of the individual lot payment factors that comprise this long-term average. This variability is directly related to the
sample size. That is, the variability about the average payment factor decreases as the size of the individual samples increases. To fully evaluate the risks it is necessary to also consider this
variability about the expected payment values.
The OCPLOT program output can be used to demonstrate this variability of the individual lot payment factors. Figure 37 shows for an AQL population a histogram that displays the individual lot PWL
estimates along with their corresponding payment values for 1,000 simulated lots using a sample of size of 4 for each individual lot. Figure 38 shows similar information for an RQL population. The
high degree of variability of the individual lot payment factors is obvious from this histogram. However, over a large number of lots, the high and low estimates for lot PWL will tend to balance out
to give the correct average payment factor.
If, however, there are only a small number of lots on a project, then it will be possible that a significantly low estimated PWL value could negatively impact the payment that the contractor should
have received. Similarly, larger PWL estimates could be obtained that would provide a larger payment than is deserved. A contractor would be wise to target a quality level above the AQL, particularly
on smaller projects, to ensure that this variability of individual lot PWL estimates does not create a problem. However, as discussed elsewhere in this manual, it is often the practice of contractors
to bid projects with the anticipation of receiving the maximum incentive payments. If this is the case, it is unlikely that contractors will target their processes at the AQL. It is more likely that
they will target their processes for greater than AQL quality to try to maximize their incentive payment. In this event, the variability of the individual lot payment factors will not likely pose a
serious problem to either the contractor or the agency.
The variability associated with the estimate of the lot PWL can be reduced by increasing the size of the sample obtained from each lot. Figure 39 shows a histogram that displays for an AQL population
the individual lot PWL estimates along with their corresponding payment values for 1,000 simulated lots using a sample of size of 20 for each individual lot. Figure 40 shows similar information for
an RQL population. When these figures are compared with figures 37 and 38, for samples of size 4, the smaller spread of the individual PWL and payment factor estimates is apparent.
Even if it reduces the variability of the PWL estimates, and hence the risk to both the contractor and the agency, it may not be practical or economical to use large sample sizes unless
correspondingly large lot sizes are also used. The use of a very large lot, possibly even the total project, will allow larger sample sizes, but also introduces problems of its own. As noted in
chapter 6, a major assumption that is required is that all of the material and/or construction processes remain consistent throughout the total lot.
Over the course of a long project, changes in weather, materials, rolling patterns, mix designs, etc., are likely to lead to variations throughout the project. Combining all of these together may
result in a normal distribution, albeit one with a larger variability than the individual production lots, but this may not be the best method to evaluate a project. If there are "bad" segments on a
project, it might be better to see them penalized on a lot-by-lot basis than to have them lumped together with the "good" material from all of the other lots.
While figures 37 through 40 clearly illustrate the relative variabilities of the individual PWL and payment factor estimates associated with different sample sizes, they do not provide any
quantitative measure for the variabilities. One way to quantify these variabilities would be to calculate the standard deviation of the individual PWL or payment estimates. This is not discussed in
this manual, but is presented and discussed in the technical report for this project.^(17)
Example: Payment Adjustment Acceptance Plans-Multiple OC Curves
Another step that is necessary to evaluate fully the risks for a payment adjustment acceptance plan is to plot OC curves, such as those shown in figure 27, associated with receiving various
payment factors. As shown in appendix M, the OCPLOT program can be used to develop these curves, although each curve must be developed individually and then manually combined onto a single set of
Suppose that the OCPLOT program is used to develop multiple OC curves for the asphalt content acceptance plan from the previous example. Figure 41 shows OC curves for the probability of receiving
greater than or equal to various levels of payment factor for a sample of size of 4 using the payment relationship shown in equation 28. These OC curves would be considered along with the EP
curve from the previous example to evaluate the risks associated with the acceptance plan.
While the EP curve in figure 36 shows that the average long-term payment is 100 percent for AQL material, the OC curves in figure 41 show that the probability is less than 60 percent that any
individual lot of AQL material will receive 100 percent payment or greater. This means that there is nearly a 40 percent chance that a contractor would receive less than full payment for a lot
that was of AQL quality. This risk, which would be considered to be a (if a is defined as the probability that AQL material will receive less than full payment), seems high. However, it is
somewhat offset by the fact that the OC curves also indicate that there is over a 40 percent chance of receiving a payment of 104 percent or greater.
The OC curves and EP curves describe the operation of the acceptance plan such that the risks can be evaluated throughout the entire quality regime. If the risks are considered acceptable, no
modifications to the initial acceptance plan are necessary. However, if the risks are considered unacceptable in terms of being too high for both or either party, a reassessment of the acceptance
plan is necessary.
As shown in the previous section, there is no easy answer to the question "Are the risks acceptable?" since this is to a great extent a subject of opinion, and opinions may vary from
agency-to-agency. Table 21 can provide some guidance regardinga andb risk levels, but these risks are not very useful when price adjustment acceptance plans are used. Even in accept/reject acceptance
plans, thea andb risks apply at only two specific levels of quality. An OC curve is still necessary to evaluate the risks over the full range of possible quality levels.
When a price adjustment acceptance plan is used it is essential that the agency develop both an EP curve and OC curves for the probability of receiving various payment factors over the total range of
quality levels. The agency may also wish to look at histograms of individual payment factors to obtain a picture of how much variability is associated with the payment factor determination. This is
shown in figures 37 through 40.
The decision regarding what does or does not constitute an acceptable level of risk will to a great extent be a subjective one. There is, however, one factor that is not subjective. There is
generally universal agreement that the expected payment should be 100 percent for quality that is at exactly the AQL. Although it should not be confused with the statistical risk,a, the agency may
wish to consider the "average payment" risk to the contractor, if the EP is less than 100 percent at the AQL, or to the agency, if the EP is greater than 100 percent at the AQL. The EP at the RQL
quality level is another point that is often specifically considered.
It must be remembered that the EP alone is not a complete measure, particularly of the likelihood that any individual lot will receive a correct payment factor. The variability of the individual
payment factors about the EP curve must also be considered. Ultimately, the decision regarding what constitutes acceptable or unacceptable risks rests with the individual agency. While the
determination of acceptable risks rests solely with the agency, by way of the joint industry/agency task force discussed in earlier chapters, there should be contractor input into this decision.
If the risks are considered unacceptable they are likely to be too high rather than too low. To reduce the risks it may be possible to change the specification limits, the acceptance limits, and/or
increase the sample size. The most straightforward approach would be to increase the sample size per lot.
An increase in the sample size may be accomplished by either increasing the lot size or increasing the sampling frequency. For example, if the lot size were 1800 Mg of HMAC, and the sampling
frequency was one test per 450 Mg, then the number of tests per lot could be increased from 4 to 8 by increasing the lot size to 3600 Mg. On the other hand, the number of tests per lot could be
increased from 4 to 8 by keeping the lot size as 1800 Mg but increasing the sample frequency to one test per 225 Mg.
Another way to change the risk levels would be to change the specification limits or the acceptance limits. This may be related to the definition of AQL and/or RQL material. For example, for the
example presented above for asphalt content, the AQL was defined as a population with a mean of 6.00 percent and a standard deviation of 0.20 percent. Using this definition, the specification (and,
in this case, acceptance) limits for an accept/reject decision based on a single test result were set at plus or minus two standard deviations from the target value of 6.00 percent, i.e., 6.00
percent ± (2 ´ 0.20 percent) or 5.60 percent to 6.40 percent. This provided an a risk to the seller of 0.0456, or 4.56 percent. This risk can be reduced to nearly zero by setting the specification
(and acceptance) limits at ± 3s rather than ± 2s. However, this will also increase the b risk, unless the definition of RQL is changed.
For accept/reject acceptance plans based on PWL, the acceptance limit could be reduced, say from 90 PWL to 85 PWL, to lower the a risk. It could also be raised, say from 90 PWL to 95 PWL, to increase
the a risk. It must be noted that whenever a is changed, b will also change unless the sample size is changed as well. For acceptance plans with price adjustments, the payment equation could be
changed to increase or decrease the expected payment values. While these changes would impact the EP values and the "payment" risks at the AQL and RQL, they would not necessarily change the
"statistical" risks,a and b.
New OC curves and EP curves must be developed for any changes that are proposed to the initial acceptance plan provisions. This is the only way to determine what impact the changes will have on the
risks to both the contractor and the agency. The agency should not proceed with developing the finalized draft specification until an acceptance plan has been developed for which the agency believes
the risks are appropriate.
Once all of the preceding steps have been completed, the agency can move forward to finalize the wording for the initial draft specification. At this point the agency is ready to move forward to the
implementation phase of the specification development process.
It is obvious from the above discussions that a great deal of thought should be put into the development of an acceptance plan. There are many "pieces" to the puzzle that must fit together for the
acceptance plan to be well-written and to work as intended. However, there are many resources that can be used to help accomplish this goal. QA acceptance plans have been under development and
evolution for over three decades. This history can be an invaluable resource for any agency that is in the process of developing QA acceptance plans. | {"url":"http://www.fhwa.dot.gov/publications/research/infrastructure/pavements/pccp/02095/07.cfm","timestamp":"2014-04-19T19:57:36Z","content_type":null,"content_length":"60709","record_id":"<urn:uuid:a6e09896-8e4b-4f9b-8d1b-a06781d901ba>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
La Salle, CO Algebra 2 Tutor
Find a La Salle, CO Algebra 2 Tutor
...Education: B.S. Mathematics, University of Louisiana, Lafayette, LA B.S. Physics, University of Louisiana, Lafayette, LA M.S.
16 Subjects: including algebra 2, chemistry, calculus, physics
...I took two semesters of college level chemistry while in high school. During freshman year of college I took two semesters of organic chemistry follow by a organic chemistry lab. In addition I
have taken other chemistry and related courses such as: Analytical Chemistry, Thermodynamics, Intro to Quantum and more.
18 Subjects: including algebra 2, chemistry, calculus, geometry
...In addition to tutoring high school and college physics students, I have also aided many students in preparing for their AP Physics B and C Exams. A very important part of a successful career
is being able to communicate ideas and/or research results successfully to a broad audience. One of the most effective ways to do this is through the use of a PowerPoint presentation.
18 Subjects: including algebra 2, calculus, physics, GRE
...I have worked in various jobs after college using MS Windows as well as in software connecting MS Windows into the linux world (cygwin). MS Windows has been a staple of all the major
corporations I have worked with over the years and fluency in using MS Windows for DOS Batch programming and prov...
47 Subjects: including algebra 2, chemistry, calculus, physics
...I know a lot about learning styles and strategies for teaching students based on their specific needs. I believe that my success as a bachelor of arts student, tutor, and future teacher,
qualifies me to tutor in the area of study skills. I am a Special Education major.
20 Subjects: including algebra 2, English, writing, algebra 1
Related La Salle, CO Tutors
La Salle, CO Accounting Tutors
La Salle, CO ACT Tutors
La Salle, CO Algebra Tutors
La Salle, CO Algebra 2 Tutors
La Salle, CO Calculus Tutors
La Salle, CO Geometry Tutors
La Salle, CO Math Tutors
La Salle, CO Prealgebra Tutors
La Salle, CO Precalculus Tutors
La Salle, CO SAT Tutors
La Salle, CO SAT Math Tutors
La Salle, CO Science Tutors
La Salle, CO Statistics Tutors
La Salle, CO Trigonometry Tutors | {"url":"http://www.purplemath.com/la_salle_co_algebra_2_tutors.php","timestamp":"2014-04-16T19:13:46Z","content_type":null,"content_length":"23940","record_id":"<urn:uuid:9e4c900f-ffc7-4dcf-8eb5-c25187264530>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00204-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding the number of connected components
02-17-2013 #1
Registered User
Join Date
Feb 2013
Finding the number of connected components
I can't figure out how to find the number of connected components from an undirected disconnected graph.
the input I'm getting is like this:
the top number is the number of vertices and the rest of the numbers are the edges, eg. 1--2 is an edge.
Is there a way that you can implement DFS algorithm to find the number of connected components? like increment a variable every time DFS is called or something?
Thanks in advance
So what's the intended output of all this?
Something like
There are 3 separate components
A first step would be to write the code to just read the input data into some kind of data structure.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.
02-18-2013 #2 | {"url":"http://cboard.cprogramming.com/c-programming/154399-finding-number-connected-components.html","timestamp":"2014-04-17T01:58:16Z","content_type":null,"content_length":"43186","record_id":"<urn:uuid:2d568516-a280-4aa0-8ac9-ad90045ce9eb>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Approximate Graph Matcher
Graphdiff: Approximate Graph Matcher and Clusterer
Dennis Shasha
Courant Institute of Mathematical Sciences
Department of Computer Science
New York University
Jason Wang
Department of Computer Science
New Jersey Institute of Technology
Approximate graph isomorphism and subgraph isomorphism are known to be NP-complete. After reviewing the literature, we have combined heuristics for these problems (geometric hashing and valence
heuristics mostly) that perform approximate graph matching and clustering extremely well on random data and on real data from the National Cancer Institute. This web page describes the installation
and use of this software which you may use for research purposes. A presentation done at the University of Pennsylvania describing this work is contained here . Here is a web demo of the capabilities
. (If graphs are more general than you need, then we also have an efficient, heuristic-free package for ordered tree comparison, our cousin-distance based unordered comparison software and a data
structure for tree searching among unordered trees .) You can find our project home page here .
The approximate graph matcher runs in a high performance interpreted environment called K.
• To begin with, therefore, please download trial K from here for a sun version. If you're on a PC, then download k.exe and k20.dll . (If this doesn't work, then go to k20dll and then rename the
file to k20.dll) If you're on linux, download from here . K and our program run equally well on linux and windows. All of K is under 300 kilobytes.
• Send email to shasha@cs.nyu.edu. If you care to describe your application, we'd be glad to hear about it. In any case, we will send you instructions for downloading the program (click for the
data files):
• You can then run the program by simply typing
k graph
• Each row of the output will tell you: the query graph, the score of the query graph to the best match (i.e. size in edges of best alignment divided by the size in edges of the query graph, as
explained below) the score of the best match to that query graph (i.e. size of best alignment divided by size of best match), the label of the database graph matching the query graph with that
score, e.g.
means that query graph 12 has a matching score of 0.8 with respect to query graph 7 (intuitively, size of match divided by size of graph 12, see below), and graph 7 has a matching score of 0.6
with respect to graph 12. If an alignment is offered, it follows.
Semantics of the Result and of the Input
The score is defined as follows. Given a query graph q and a database graph d , the score is based on the best alignment found between q and d . Call that alignment a . An alignment is a 1:1
(partial) mapping from nodes in q to nodes in d . The score depends on how good an alignment a is.
In the case that all edges represent the same distance (defined below), an edge {i,j} in q matches an edge {i',j'} in d with respect to alignment a , if, first, alignment a maps i to i' and j to j' ;
second, i and i' are of the same type and j and j' are of the same type. In the uniform distance case, the score is the number of matches between q and d with respect to alignment a divided by the
total number of edges in q . Thus, each match is worth 1 and, in the first score, we are looking for the number of matching edges divided by the total number of edges in q . The second score is the
same but is divided by the total number of edges in d .
When edges may have different distances, then an edge {i,j,s} in q matches an edge {i',j',s'} in d with respect to alignment a , if, first, alignment a maps i to i' and j to j' ; second, i and i' are
of the same type and j and j' are of the same type. This is the same as for the uniform distance case. The contribution of this match is min(s/s', s'/s) . Thus, the contribution of a match cannot be
greater than 1 and is often less, unlike in the uniform distance case. In the different distance case, the first score is the sum of the contributions of the matches between q and d with respect to
alignment a divided by the total number of edges in q . The second score is the same but is divided by the total number of edges in d .
The notion of distance is an abstraction from molecular comparisons: small distances (e.g. 1) connote a stronger bond than big distances (e.g. 3). Thus, to the program, distance is the inverse of
strength. So, if your application has some notion of strength (that may have nothing to do with physical or chemical strength), then invert it when assigning distances. Distances should be positive
The notion of type is a way to constrain matches. Each node in one graph must map to a node of the same type in the target graph. For example, when defining an alignment between molecules, types
might constrain mappings so each atom maps to another atom of the same type: oxygen to oxygen, carbon to carbon, etc. Types are defined using positive integers.
As you can see by looking at tempindb or tempinprobes, each graph is characterized by a label with which it will be identified for matching purposes. This can be a number or a string.
Next, there is a list of types which can be placed on one or more lines (any number of spaces or newlines may separate types). The first type is the type of node 0, the next one of node 1, etc.
Finally, there is a list of edges consisting of node id pairs and the distance. We assume that all edges are symmetric. There should be one edge per line.
The program has just a few options which you set by adjusting command line parameters or modifying graph.k. For the command line parameters, please type k graph +help You will see the following:
(+f filenameforalltoall (tempin) [+nc numclusters (1)] |
+p probefile (tempinprobes) +d dbfile (tempindb) |
+m manygraphs (-2) [+n nodecount (50)]
[+en extranodes (0)] [+ee extraedges (0)] [+t numtypes (2)]
[+a givemealignment (1)] [+h numberofhillclimbs (50)]
Example 1: All to all comparison of file tempin with 100 hillclimbs.
k graph +f tempin +h 100
Example 2: five 70 node random graphs and lab2 with 8 extra nodes and 20 extra edges.
k graph +m 5 +n 70 +en 8 +ee 20
You can also modify variables directly in the program if you want. Here is the meaning of these options and variables.
• You can take input from files or have it generated randomly.
□ If you want to take a single file and compare every graph with every other one in that file: specify a file after the +f option. (That will force the manygraphs variable to the value -2 or
you can explicitly set it to the value -2 using the +m option.) The default is that the input file is tempin and manygraphs is set to -2. In the case that you want the program to attempt a
clustering, specify the number of clusters with the parameter +nc.
□ If you want to take a query file consisting of graphs and compare those graphs with the graphs in a database file, then use the +p and +d options. (That will force the manygraphs variable to
the value -3 or you can explicitly set it to the value -3 using the +m option.)
□ If you want the system to generate random graphs, then set manygraphs to the number of random graphs you want in addition to a base random graph called lab1 and a permutation called lab2 .
The program then uses lab1 to probe among the random graphs generated including lab2. You can generate these nodes with a certain number of types by setting the +t parameter (the higher the
number of types, the easier the job for the graph matcher). You can also give lab2 extra nodes and extra edges using the +en and +ee parameters.
• Produce the best alignment when a query graph matches a database graph. If you want this, set the variable givemealignment to "1" or specifying parameter +a 1.
• Setting variable hillclimbing to 0 will give faster execution but will give poorer quality alignments. The command line option for the number of hillclimbs is +h.
• No other variables should be changed.
A note about the current clustering algorithm:
Define the similarity radius between a ``centroid'' and a set of other nodes to be the minimum similarity between that centroid and the other nodes.
Define the overall similarity radius , given a collection of centroid-node set pairs, to be the minimum of the similarity radii of each member of the collection.
In k maxmin cluster formation, the program finds k centroids and k partitions such that (i) the partitions cover the graphs; and (ii) the overall similarity radius is as high as (heuristically)
possible given k.
This material is based upon work partly supported by the United States National Science Foundation under grant IIS-9988636 Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of the National Science Foundation. | {"url":"http://cs.nyu.edu/shasha/papers/agm.html","timestamp":"2014-04-19T01:49:57Z","content_type":null,"content_length":"11129","record_id":"<urn:uuid:640090aa-62b1-41ea-8cb0-3bddbda43023>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spectral Decomposition and Eisenstein Series A Paraphrase of the Scriptures 1st edition by Moeglin | 9780521070355 | Chegg.com
Spectral Decomposition and Eisenstein Series 1st edition
A Paraphrase of the Scriptures
Details about this item
Spectral Decomposition and Eisenstein Series: The decomposition of the space L2 (G(Q)\G(A)), where G is a reductive group defined over Q and A is the ring of adeles of Q, is a deep problem at the
intersection of number and group theory. Langlands reduced this decomposition to that of the (smaller) spaces of cuspidal automorphic forms for certain subgroups of G. This book describes this proof
in detail. The starting point is the theory of automorphic forms, which can also serve as a first step toward understanding the Arthur-Selberg trace formula. To make the book reasonably
self-contained, the authors also provide essential background in subjects such as: automorphic forms; Eisenstein series; Eisenstein pseudo-series, and their properties. It is thus also an
introduction, suitable for graduate students, to the theory of automorphic forms, the first written using contemporary terminology.
Back to top
Rent Spectral Decomposition and Eisenstein Series 1st edition today, or search our site for C. textbooks. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Cambridge University | {"url":"http://www.chegg.com/textbooks/spectral-decomposition-and-eisenstein-series-1st-edition-9780521070355-052107035x?ii=12&trackid=d8bf781c&omre_ir=1&omre_sp=","timestamp":"2014-04-21T00:12:22Z","content_type":null,"content_length":"21155","record_id":"<urn:uuid:a78a2e58-a06c-43cf-b066-732969f3432f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tarjan's strongly connected components algorithm - Top Videos
Tarjan's Algorithm (named for its discoverer, Robert Tarjan^[1]) is a graph theory algorithm for finding the strongly connected components of a graph. Although it precedes it chronologically, it can
be seen as an improved version of Kosaraju's algorithm, and is comparable in efficiency to the path-based strong component algorithm.
The algorithm takes a directed graph as input, and produces a partition of the graph's vertices into the graph's strongly connected components. Each vertex of the graph appears in exactly one of the
strongly connected components. Any vertex that is not on a directed cycle forms a strongly connected component all by itself: for example, a vertex whose in-degree or out-degree is 0, or any vertex
of an acyclic graph.
The basic idea of the algorithm is this: a depth-first search begins from an arbitrary start node (and subsequent depth-first searches are conducted on any nodes that have not yet been found). As
usual with depth-first search, the search visits every node of the graph exactly once, declining to revisit any node that has already been explored. Thus, the collection of search trees is a spanning
forest of the graph. The strongly connected components will be recovered as certain subtrees of this forest. The roots of these subtrees are called the "roots" of the strongly connected components.
Any node of a strongly connected component might serve as the root, if it happens to be the first node of the component that is discovered by the search.
Stack invariant[edit]
The nodes are placed on a stack in the order in which they are visited. When the depth-first search recursively explores a node v and its descendants, those nodes are not all necessarily popped from
the stack before this recursive call returns. The crucial invariant property is that a node remains on the stack after exploration if and only if it has a path to some node earlier on the stack.
At the end of the call that explores v and its descendants, we know whether v itself has a path to any node earlier on the stack. If so, the call returns, leaving v on the stack to preserve the
invariant. If not, then v must be the root of its strongly connected component, which consists of v together with any later nodes on the stack (such nodes all have paths back to v but not to any
earlier node, because if they had paths to earlier nodes then v would also have paths to earlier nodes which is false ). This entire component is then popped from the stack and returned, again
preserving the invariant.
Each node v is assigned a unique integer v.index, which numbers the nodes consecutively in the order in which they are discovered. It also maintains a value v.lowlink that represents (roughly
speaking) the smallest index of any node known to be reachable from v, including v itself. Therefore v must be left on the stack if v.lowlink < v.index, whereas v must be removed as the root of a
strongly connected component if v.lowlink == v.index. The value v.lowlink is computed during the depth-first search from v, as this finds the nodes that are reachable from v.
The algorithm in pseudocode[edit]
algorithm tarjan is
input: graph G = (V, E)
output: set of strongly connected components (sets of vertices)
index := 0
S := empty
for each v in V do
if (v.index is undefined) then
end if
end for
function strongconnect(v)
// Set the depth index for v to the smallest unused index
v.index := index
v.lowlink := index
index := index + 1
// Consider successors of v
for each (v, w) in E do
if (w.index is undefined) then
// Successor w has not yet been visited; recurse on it
v.lowlink := min(v.lowlink, w.lowlink)
else if (w is in S) then
// Successor w is in stack S and hence in the current SCC
v.lowlink := min(v.lowlink, w.index)
end if
end for
// If v is a root node, pop the stack and generate an SCC
if (v.lowlink = v.index) then
start a new strongly connected component
w := S.pop()
add w to current strongly connected component
until (w = v)
output the current strongly connected component
end if
end function
The index variable is the depth-first search node number counter. S is the node stack, which starts out empty and stores the history of nodes explored but not yet committed to a strongly connected
component. Note that this is not the normal depth-first search stack, as nodes are not popped as the search returns up the tree; they are only popped when an entire strongly connected component has
been found.
The outermost loop searches each node that has not yet been visited, ensuring that nodes which are not reachable from the first node are still eventually traversed. The function strongconnect
performs a single depth-first search of the graph, finding all successors from the node v, and reporting all strongly connected components of that subgraph.
When each node finishes recursing, if its lowlink is still set to its index, then it is the root node of a strongly connected component, formed by all of the nodes above it on the stack. The
algorithm pops the stack up to and including the current node, and presents all of these nodes as a strongly connected component.
1. Complexity: The Tarjan procedure is called once for each node; the forall statement considers each edge at most twice. The algorithm's running time is therefore linear in the number of edges and
nodes in G, i.e. $O(|V|+|E|)$.
2. The test for whether w is on the stack should be done in constant time, for example, by testing a flag stored on each node that indicates whether it is on the stack.
3. While there is nothing special about the order of the nodes within each strongly connected component, one useful property of the algorithm is that no strongly connected component will be
identified before any of its successors. Therefore, the order in which the strongly connected components are identified constitutes a reverse topological sort of the DAG formed by the strongly
connected components.^[2]
1. ^ Tarjan, R. E. (1972), "Depth-first search and linear graph algorithms", SIAM Journal on Computing 1 (2): 146–160, doi:10.1137/0201010
2. ^ Harrison, Paul. "Robust topological sorting and Tarjan's algorithm in Python". Retrieved 9 February 2011.
External links[edit] | {"url":"http://www.mashpedia.com/Tarjan's_strongly_connected_components_algorithm","timestamp":"2014-04-19T04:25:09Z","content_type":null,"content_length":"40462","record_id":"<urn:uuid:e1c29ca2-34cd-41bb-81f7-930843f31aa5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cubicity, boxicity, and vertex cover
Chandran, LS and Das, Anita and Shah, CD (2009) Cubicity, boxicity, and vertex cover. In: Discrete Mathematics, 309 (8). pp. 2488-2496.
pdf1.pdf - Published Version
Restricted to Registered users only
Download (628Kb) | Request a copy
A k-dimensional box is the cartesian product R-1 x R-2 x ... x R-k where each R-i is a closed interval on the real line. The boxicity of a graph G,denoted as box(G), is the minimum integer k such
that G is the intersection graph of a collection of k-dimensional boxes. A unit cube in k-dimensional space or a k-cube is defined as the cartesian product R-1 x R-2 x ... x R-k where each Ri is a
closed interval on the real line of the form [a(i), a(i) + 1]. The cubicity of G, denoted as cub(G), is the minimum k such that G is the intersection graph of a collection of k-cubes. In this paper
we show that cub(G) <= t + inverted right perpendicularlog(n - t)inverted left perpendicular - 1 and box(G) <= left perpendiculart/2right perpendicular + 1, where t is the cardinality of a minimum
vertex cover of G and n is the number of vertices of G. We also show the tightness of these upper bounds. F.S. Roberts in his pioneering paper on boxicity and cubicity had shown that for a graph G,
box(G) <= left perpendicularn/2right perpendicular and cub(G) <= inverted right perpendicular2n/3inverted left perpendicular, where n is the number of vertices of G, and these bounds are tight. We
show that if G is a bipartite graph then box(G) <= inverted right perpendicularn/4inverted left perpendicular and this bound is tight. We also show that if G is a bipartite graph then cub(G) <= n/2 +
inverted right perpendicularlog n inverted left perpendicular - 1. We point out that there exist graphs of very high boxicity but with very low chromatic number. For example there exist bipartite
(i.e., 2 colorable) graphs with boxicity equal to n/4. Interestingly, if boxicity is very close to n/2, then chromatic number also has to be very high. In particular, we show that if box(G) = n/2 -
s, s >= 0, then chi (G) >= n/2s+2, where chi (G) is the chromatic number of G.
Actions (login required) | {"url":"http://eprints.iisc.ernet.in/20375/","timestamp":"2014-04-19T07:53:32Z","content_type":null,"content_length":"26839","record_id":"<urn:uuid:409007cf-2a7b-49c0-ba83-b97a3b06b07e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Final Exam Study Guide Answers
• Write Scheme function(s) that will translate a Scheme expression into an English phrase. Assume that the expression may contain functions + - * / sin cos; if the expression has two operands,
conjoin them with ``and''. Example:
(exp->english '(+ (sin x) (cos y)))
==> (the sum of the sine of x and the cosine of y)
(define (exp->english x)
(if (pair? x)
(append (op->english (op x))
(exp->english (lhs x))
(if (null? (cddr x)) ; test for unary
'() ; if unary, that's all
(cons 'and ; else do rhs also
(exp->english (rhs x)) ) ) )
(list x) ) ) ; base case: make a list for append
(define (op->english op)
(list 'the
(cadr (assoc op '((+ sum) (- difference) (* product)
(/ quotient) (sin sine) (cos cosine))))
(define op car)
(define lhs cadr)
(define rhs caddr)
• Write a Scheme program to calculate GPA from a list of sublists, (course grade). Grades are letters, with A worth 4 grade points, B 3 grade points, etc. Assume that the list is non-empty and that
each course counts the same. Example:
(gpa '((cs307 a) (psy301 c)))
==> 3.0
(define (gpa l)
(let ((sum 0.0))
(dolist (pair l)
(set! sum
(+ sum (cadr (assoc (cadr pair)
'((a 4.) (b 3.) (c 2.)
(d 1.) (f 0.)))))) )
(/ sum (length l)) ))
• Write a function (pay hours rates) where the data formats are: hours = ((name hours-worked) ...) rates = ((name hourly-wage) ...). For each entry in hours, look up the corresponding hourly wage
in the list rates (order is arbitrary), multiply hours worked and hourly wage to get pay, and print name and pay. Example:
(pay '((smith 3)) '((jones 4.50) (smith 6.00)))
==> prints: smith 18.0
(define (pay hours rates)
(dolist (person hours)
(display (car person))
(display " ")
(display (* (cadr person)
(cadr (assoc (car person) rates))))
(newline) ) )
• Write function(s) (total items prices) that will compute the total cost of a list of things that are purchased. items is a list of sub-lists, items = ((dept item number) ...) giving the
department, name of item, and number of that item that were purchased. prices is a list of item prices by department, prices = ((dept (item price) ...) ...). Example:
(total '((clothing jeans 3)
(hardware wrench 2))
'((hardware (saw 9.95) (wrench 4.00))
(clothing (socks 2.50) (jeans 19.95))))
==> 67.85 [remember to multiply by number of items]
(define (total items prices)
(define dept car)
(define item cadr)
(define number caddr)
(let ((sum 0))
(dolist (it items)
(set! sum (+ sum
(* (number it)
(cadr (assoc (item it)
(cdr (assoc (dept it)
prices))))))) )
• Write a function (ngreater tree m) that will return the number of numbers greater than m that appear in the given tree. Example:
(ngreater '(+ (* 88 x) (+ 3 7) (/ 17 z))
==> 3
(define (ngreater tree m)
(if (pair? tree)
(+ (ngreater (car tree) m)
(ngreater (cdr tree) m))
(if (and (number? tree) (> tree m)) 1 0)))
• Write a function (deriv e var) that finds the derivative of expression e with respect to a variable var. Assume that the expression e may contain constants (numbers), variables (symbols), and the
binary operators + and *. Use these rules, where d/dx is derivative with respect to x:
d/dx(x) = 1 (derivative of x with respect to x is one)
d/dx(v) = 0 (derivative of any variable v other than x
with respect to x is zero)
d/dx(c) = 0 (derivative of a constant is zero)
d/dx(u + v) = d/dx(u) + d/dx(v)
d/dx(u * v) = u * d/dx(v) + v * d/dx(u)
Note that in the last two cases you must make new list structure (a new formula) as the result. Examples:
(deriv 'x 'x) ==> 1
(deriv 'y 'x) ==> 0
(deriv 5 'x) ==> 0
(deriv '(+ x 3) 'x) ==> (+ 1 0)
(deriv '(+ (* x 3) 7) 'x)
==> (+ (+ (* x 0) (* 3 1)) 0)
(define (deriv e var)
(define lhs cadr)
(define rhs caddr)
(if (pair? e)
(case (car e)
((+) (list '+ (deriv (lhs e) var)
(deriv (rhs e) var)))
((*) (list '+ (list '* (lhs e) (deriv (rhs e) var))
(list '* (rhs e) (deriv (lhs e) var)))) )
(if (eqv? e var) 1 0)))
• Write a recursive function (english code) that will translate Scheme code into English. Assume that the Scheme code can contain expressions using binary operators + - * / = and Scheme functions
set! and if. Examples:
(english 'x) ==> (x)
(english '(+ x 7)) ==> (the sum of x and 7)
(english '(if (= (+ i j) 3) (set! k 7)))
==> (if the sum of i and j equals 3 then set k to 7)
(define (english code)
(define lhs cadr)
(define rhs caddr)
(if (pair? code)
(case (car code)
((+ - * /)
(append (list 'the (opword (car code)) 'of)
(english (lhs code))
(english (rhs code))) )
(append (english (lhs code))
(english (rhs code))))
(append '(if) (english (lhs code))
'(then) (english (rhs code))
(if (null? (cdddr code)) '()
(append '(else) (english (cadddr code))))))
(append (list 'set (lhs code) 'to)
(english (rhs code)))))
(list code)))
(define (opword op)
(cadr (assoc op '((+ sum) (* product) (- difference)
(/ quotient)))))
• Write a function deep-reverse that will reverse not only the top level of a list, but also any sub-lists it may contain, recursively. Example:
(deep-reverse '(((a b) c) d)) ==> (d (c (b a)))
(define (deep-reverse l) (deep-rev l '()))
(define (deep-rev l result)
(if (pair? l)
(deep-rev (cdr l)
(cons (deep-reverse (car l))
(if (null? l) result l) ) )
(define (deep-reverse l)
(if (pair? l)
(let ((answer '()))
(dolist (x l answer)
(set! answer (cons (deep-reverse x) answer)) ))
• The genetic code is carried by DNA molecules. DNA is composed of matched base pairs, where bases are represented by the letters C, G, A, or T. C is always paired with G, and A is always paired
with T. We will represent a DNA fragment as a list of the base letters of one side of the DNA helix.
The complement of a DNA sequence is a sequence of the complementary letters (A-T, C-G) of the given sequence. Write a function (complement dna) to compute the complement of a given string. Use
auxiliary functions if you wish. Example:
(complement '(a a a g t g c)) ==> (t t t c a c g)
(define (complement lst) ; the easiest solution
(sublis '((a . t) (t . a) (c . g) (g . c)) lst))
(define (complement lst) (reverse (complementb lst '())))
(define (complementb lst result)
(if (pair? lst)
(complementb (cdr lst)
(cons (opposite (car lst)) result))
(define (opposite base)
(cdr (assoc base '((a . t) (t . a) (c . g) (g . c)) )))
• A DNA sequence codes for a sequence of amino acids that make up a protein. Each 3 letters of DNA code for one amino acid. Write a function (protein dna codes) to compute the sequence of amino
acids specified by a DNA string. Assume (define codes '(((A A A) Phe) ((A T A) Tyr) ((G T G) His)...) gives the 3 DNA bases, followed by an abbreviation of the amino acid for which they code, for
all 3-base sequences. Example:
(protein '(a a a g t g a t a) codes) = (Phe His Tyr)
(define (protein dna codes)
(if (pair? dna)
(cons (cadr (assoc (list (car dna) (cadr dna) (caddr dna))
(protein (cdddr dna) codes))
(define (protein dna codes) (proteinb dna codes '()))
(define (proteinb dna codes result)
(if (and (pair? dna) (pair? (cdr dna)) (pair? (cddr dna)))
(proteinb (cdddr dna) codes
(cons (cadr (assoc (list (car dna) (cadr dna) (caddr dna))
(reverse result)))
• A restriction enzyme is a molecule that will bind to a particular sequence of DNA bases, then cut the DNA at that point. Write a function (restrict dna enzyme) that returns a list of numbered
locations at which a given dna string would be cut by a given enzyme. If the enzyme matches the front part of the dna, that would be location 0, etc. Both the dna and the enzyme may be of
arbitrary length. Example:
(restrict '(g t g a a a g t g a t a) '(t g a)) = (1 7)
----- -----
(define (restrict dna enzyme) (restrictb dna enzyme 0 '()))
(define (restrictb dna enzyme position result)
(if (pair? dna)
(restrictb (cdr dna) enzyme (1+ position)
(if (matchenz dna enzyme)
(cons position result)
(reverse result)))
(define (matchenz dna enzyme) ; does enzyme match front of dna?
(if (pair? enzyme)
(if (pair? dna)
(if (eq? (car dna) (car enzyme))
(matchenz (cdr dna) (cdr enzyme))
(define (matchenz dna enzyme) ; 2nd version
(or (null? enzyme)
(and (pair? dna)
(eq? (car dna) (car enzyme))
(matchenz (cdr dna) (cdr enzyme)))))
• One way to determine a long DNA sequence is to break it into shorter pieces, find the sequences of the pieces, and then re-assemble the pieces based on areas where they overlap with the same
codes. Write function(s) (dnamatch one two) to compute the first location at which the back part of list one matches the front part of list two (return #f if no match).
Example: (dnamatch '(a a a g t g c) '(t g c g t g)) = 4
0 1 2 3 4 5 6 -----
(The sequence t g c matches beginning at position 4.)
(define (dnamatch one two) (dnamatchb one two 0))
(define (dnamatchb one two n)
(if (pair? one)
(if (matchfront one two)
(dnamatchb (cdr one) two (1+ n)))
; see if list x matches front of list two.
; (matchfront '(a b c) '(a b c d e)) = #t
(define (matchfront x two)
(if (pair? x)
(and (pair? two)
(eq? (car x) (car two))
(matchfront (cdr x) (cdr two)))
• Write a function (combine one two) that will produce a combined DNA list given two lists that match as above. You may use dnamatch in writing this function.
Example: (combine '(a a a g t g c)
'(t g c g t g))
= (a a a g t g c g t g)
(define (combine one two)
(append one
(list-tail two (- (length one) (dnamatch one two)))) )
• Write a function (evalexp expr vals) to interpret (evaluate) an arithmetic expression. expr is an expression that may contain operators + - * /; - may be binary or unary. vals is an association
list containing the values of variables. Note: for this problem, you may not use the functions sublis or eval; write an interpreter instead. Example:
(evalexp '(+ a (* 3 b)) '((a . 7) (b . 8))) = 31
(define (evalexp expr vals)
(define op car) (define lhs cadr) (define rhs caddr)
(if (pair? expr)
(case (op expr)
((+) (+ (evalexp (lhs expr) vals) (evalexp (rhs expr) vals)))
((*) (* (evalexp (lhs expr) vals) (evalexp (rhs expr) vals)))
((/) (/ (evalexp (lhs expr) vals) (evalexp (rhs expr) vals)))
((-) (if (pair? (cddr expr))
(- (evalexp (lhs expr) vals) (evalexp (rhs expr) vals)))
(- (evalexp (lhs expr) vals))))
(if (symbol? expr)
(cdr (assoc expr vals))
• A holiday mobile is either an ornament or a structure (left ornament right) where left and right are mobiles and ornament is an ornament. The weights of ornaments are given by an association
list, weights.
□ Write a function (weight mobile weights) to compute the weight of a mobile. Example:
(weight '((star ball star) star ball)
'((ball . 4) (star . 2))) = 14
(define (weight mobile weights)
(define left car) (define center cadr) (define right caddr)
(if (pair? mobile)
(+ (weight (left mobile) weights)
(cdr (assoc (center mobile) weights))
(weight (right mobile) weights))
(cdr (assoc mobile weights))))
□ Write a function (balanced mobile weights) to determine whether a mobile is balanced. A mobile is balanced if its left and right parts weigh the same and both of them are balanced. Example:
(balanced '((star ball star) ball bell)
'((ball . 4) (star . 2) (bell . 8))) = #t
(define (balanced mobile weights)
(define left car) (define right caddr)
(if (pair? mobile)
(and (balanced (left mobile) weights)
(= (weight (left mobile) weights)
(weight (right mobile) weights))
(balanced (right mobile) weights))
• Write function(s) (wlt scores) to determine the number of wins, losses, and ties given a list of game scores, each of which is (home-score opponent-score).
Example: (wlt '((14 7) (10 10) (21 24) (17 7)))
= (2 1 1) [2 wins, 1 loss, 1 tie]
(define (wlt scores)
(let ((wins 0) (losses 0) (ties 0))
(dolist (game scores)
(cond ((> (car game) (cadr game)) (set! wins (1+ wins)))
((< (car game) (cadr game)) (set! losses (1+ losses)))
(else (set! ties (1+ ties)))))
(list wins losses ties)))
• Write a function (evens tree) that returns the product of all the even numbers in tree (which may contain some non-numbers).
Example: (evens '(a (2 3 4) (b (5 6)))) = 48
(define (evens tree)
(if (pair? tree)
(* (evens (car tree))
(evens (cdr tree)))
(if (and (number? tree) (even? tree))
• A robot mouse is to find a path through a maze. Each junction in the maze is either #t (the exit from the maze, which is the goal), #f (a wall blocking the mouse), or a list (left center right)
of junctions. Write function(s) (mouse maze) to find a path through the maze. A path is a list of directions for the mouse (left, center, or right).
Example: (mouse '((#f #f #t) #f (#f #f #f))) = (left right)
(define (mouse maze)
(if (pair? maze)
(if (mouse (car maze))
(cons 'left (mouse (car maze)))
(if (mouse (cadr maze))
(cons 'center (mouse (cadr maze)))
(if (mouse (caddr maze))
(cons 'right (mouse (caddr maze)))
(if maze '() #f)))
• A stack machine is a kind of computer CPU that uses a stack for its internal storage. Write function(s) to simulate a stack machine: (sm instructions memory). memory is an association list
((variable value) ...). Use a list called stack, initialized to empty list, for the internal stack. instructions is a list of instructions:
(pushn n) put the number n onto the front of the stack
(pushv v) put the value of the variable v onto stack
(add) remove the top two elements from the stack,
add them, and put the result back on the stack
(mul) multiply (as above)
Return the top value on the stack.
Example: (sm '((pushv x) (pushn 7) (mul)) '((x 3))) = 21
(define (sm instructions memory)
(let ((stack '()))
(dolist (inst instructions)
(case (car inst)
((pushv) (set! stack (cons (cadr (assoc (cadr inst) memory))
((pushn) (set! stack (cons (cadr inst) stack)))
((add) (set! stack (cons (+ (car stack) (cadr stack))
(cddr stack))))
((mul) (set! stack (cons (* (car stack) (cadr stack))
(cddr stack)))) ) )
(car stack) ))
• Write function(s) (code e) to generate code for the stack machine from an arithmetic expression e using operators + and *. For variables or numbers, generate the appropriate push instruction;
otherwise, generate code for operands first, then generate the appropriate operation.
Example: (code 'x) = ((pushv x))
(code '(+ x 3)) = ((pushv x) (pushn 3) (add))
(code '(* a (+ b 3)))
= ((pushv a) (pushv b) (pushn 3) (add) (mul))
(define (code e) (reverse (codeb e '())))
(define (codeb e lst)
(if (pair? e)
(cons (cdr (assoc (car e) '((+ add) (* mul))))
(codeb (caddr e) (codeb (cadr e) lst)))
(cons (if (symbol? e)
(list 'pushv e)
(list 'pushn e))
• Write function(s) (backchain goal rules facts) to try to prove a goal given a set of rules and facts. facts is an association list ((var value) ...) that tells which variables are known to be
true or false. rules is a list of rules of the form (conclusion premise1 ... premisen); each rule states that the conclusion is true if all of the premises are true. backchain should work as
follows: if the goal is known to be #t or #f based on the list facts, return that value. Otherwise, try rules to see if some rule has the goal as conclusion and has premises that are true (using
Example: (backchain 'a '() '((a #t))) = #t
(backchain 'a '((a b)) '((b #t))) = #t
(backchain 'a '((a b c) (c d)) '((b #t) (d #t))) = #t
[the rules are "a is true if b and c are true",
"c is true if d is true"]
(define (backchain goal rules facts)
(if (assoc goal facts)
(cadr (assoc goal facts))
(some (lambda (rule)
(and (eq? (car rule) goal)
(every (lambda (subgoal)
(backchain subgoal rules facts))
(cdr rule))))
rules)) )
• Write a function (ship order inventory) that creates a shipping list from an order and an inventory. Both order and inventory are lists ((item quantity) ...). Return a list of what was ordered,
but without items that do not appear in the inventory list, and limiting the number shipped to the number actually in inventory.
(ship '((widgets 2) (gizmos 7) (thingies 14))
'((thingies 12) (widgets 5) (grommets 3)))
=> ((widgets 2) (thingies 12))
(define (ship order inventory)
(if (pair? order)
(let ((inv (assoc (caar order) inventory)))
(if inv
(cons (list (caar order)
(min (cadar order) (cadr inv)))
(ship (cdr order) inventory))
(ship (cdr order) inventory)))
• Write a function (limit tree m) that makes a copy of the tree, but replaces any number greater than m by m.
(limit '((9 to (5)) (spirit (of 76))) 7)
=> ((7 to (5)) (spirit (of 7)))
(define (limit tree m)
(if (pair? tree)
(cons (limit (car tree) m)
(limit (cdr tree) m))
(if (number? tree)
(min m tree)
• A secret formula is composed of binary operators and operands in Scheme notation. To protect a secret formula, write a function (encrypt formula keylist) that exchanges the operands of an
operator if the element of keylist is a 1 but leaves the order unchanged if it is a 0. The first element of keylist applies to the top level of the formula, etc.
(encrypt '(* (+ (/ a b) c) (- d (+ e f))) '(1 0 1))
=> (* (- d (+ f e)) (+ (/ b a) c))
(define op car)
(define lhs cadr)
(define rhs caddr)
(define (encrypt formula keylist)
(if (pair? formula)
(if (zero? (car keylist))
(list (op formula)
(encrypt (lhs formula) (cdr keylist))
(encrypt (rhs formula) (cdr keylist)))
(list (op formula)
(encrypt (rhs formula) (cdr keylist))
(encrypt (lhs formula) (cdr keylist))))
• Write function(s) (starmatch pattern input) that match a pattern list against an input list. The pattern list may contain a single *, which matches any number of symbols in the input list; the
other symbols in the pattern and input lists must be equal. If the pattern matches, return the sublist of symbols that matches the *; otherwise, return #f.
(starmatch '(i feel * today)
'(i feel wild and crazy today))
=> (wild and crazy)
(define (starmatch pattern input)
(if (pair? pattern)
(if (eq? (car pattern) '*)
(starmatchb (cdr pattern) '() input)
(and (pair? input)
(eq? (car pattern) (car input))
(starmatch (cdr pattern) (cdr input))))
(if (null? input) '() #f)))
(define (starmatchb pattern ans input)
(if (equal? pattern input)
(reverse ans)
(if (pair? input)
(starmatchb pattern (cons (car input) ans)
(cdr input))
• Write a psychiatrist program (doctor input patterns) that responds to a patient's input using patterns. Use starmatch to find a pattern whose input part matches; if no pattern matches, return '
(tell me more). Insert the matching words into the output part of the pattern. To make the output sound correct, you must invert pronouns: replace I with you, my with your, myself with yourself.
(doctor '(i argued with my mother today)
'( ... ((i argued with * today)
(what did * say))...) )
=> (what did your mother say)
(define (doctor input patterns)
(let ((pat (some (lambda (pair)
(and (starmatch (car pair) input)
(if pat
(sublis '((i . you) (my . your) (myself . yourself))
(starmatch (car pat) input))
(cadr pat))
'(tell me more)) ))
(define (splice new pattern)
(if (pair? pattern)
(if (eq? (car pattern) '*)
(append new (cdr pattern))
(cons (car pattern)
(splice new (cdr pattern))))
• Electric power distribution in Cons City is structured as a binary tree. When lightning causes power outages, the power company collects outage reports into a tree structure (name lhs rhs) where
name is a substation and lhs and rhs are subtrees or individual customers. Customers are #f if power is reported to be off, or #t otherwise. The power company wants to find the highest node that
has failed and send a repair crew to fix it. A node has failed if both of its subtrees have failed, or if one or both customers below a bottom-level node are without power. Assuming all outages
are due to a single substation failure and there are enough outage reports to identify it using the above rules, write a function (failure tree) to return the name of the top failing node, or #f
if no failure.
(failure '(a (b #t #t) (d (e #f #f) (f #t #f))))
=> d
(define (failure tree)
(if (pair? tree)
(if (or (eq? (lhs tree) #f)
(eq? (rhs tree) #f)
(and (failure (lhs tree))
(failure (rhs tree))))
(car tree)
(or (failure (lhs tree))
(failure (rhs tree))))
• An explorer wishes to retrieve a prize from a cave. A cave is either a number > 0, representing the value of a prize, or a list (dist cave1 ... caven) where dist is the distance to the next
junction and cave1 ... caven are the prizes or sub-caves accessible from the junction. The explorer cannot go deeper in the cave than the amount of rope available. Write a program (prize cave
rope) that returns the maximum prize that the explorer can obtain.
(prize '(20 (10 5) (20 (10 100) 50) (20 40)) 45)
=> 50
(define (prize cave rope)
(if (pair? cave)
(if (>= rope (car cave))
(let ((best 0))
(dolist (sub (cdr cave) best)
(set! best
(max best
(prize sub (- rope (car cave)))))))
This elegant solution was suggested by a student:
(define (prize cave rope)
(if (pair? cave)
(if (>= rope (car cave))
(apply max
(map (lambda (sub)
(prize sub (- rope (car cave))))
(cdr cave)))
• A database relation is a list of field names and values, ((field value) ...). Write an interpreter (check relation test) to evaluate a test for a relation. A test may use binary operators and,
or, =, or >, or the unary operator not; the operator = should be interpreted as equal? in Scheme. A symbol within a test has the value of that field in the relation. An item (quote value)
within a relation yields the specified value. Note: For this problem, you may not use eval or sublis; write an interpreter instead.
(check '((name john) (age 27) (sex m) (major cs))
'(and (= sex 'm) (> age 21)) )
=> #t
This function is similar to the version of eval in the class notes.
(define (check relation test)
(if (pair? test)
(case (car test)
((quote) (cadr test))
((and) (and (check relation (lhs test))
(check relation (rhs test))))
((or) (or (check relation (lhs test))
(check relation (rhs test))))
((not) (not (check relation (lhs test))))
((=) (equal? (check relation (lhs test))
(check relation (rhs test))))
((>) (> (check relation (lhs test))
(check relation (rhs test))))
(else #f))
(if (symbol? test)
(cadr (assoc test relation))
• Write a function (query database test), where database is a list of relations, that returns a list of all relations that satisfy the test. You may use check as above.
(query '(((name john) (sex m))
((name jill) (sex f)))
'(= sex 'f))
=> (((name jill) (sex f)))
(define (query database test)
(subset (lambda (x) (check x test))
• Write a function (deep-assoc item lists) that will retrieve the first sublist of lists, a list of lists, that contains the specified item. Example:
(deep-assoc 'cat '((rat mouse shrew) (dog cat) (horse pig cow)))
=> (dog cat)
(define (deep-assoc item lists)
(if (pair? lists)
(if (member item (car lists))
(car lists)
(deep-assoc item (cdr lists)))
• Write a function (paraphrase sentence synonyms) that will generate a paraphrase of the given sentence (a list of words). Synonyms is a list of lists of words that have approximately the same
meaning. You may use deep-assoc if you wish. The function (random n) will generate a random integer between 0 and (n - 1), inclusive. Choose a random synonym for each word that has a synonym (the
synonym could be the same word); otherwise, use the original word. Example:
(paraphrase '(my love is like a red rose)
'((scarlet crimson red) (flower rose posy) (love mate)))
=> (my love is like a scarlet posy)
(define (paraword word synonyms)
(let ((lst (deep-assoc word synonyms)))
(if lst
(list-ref lst (random (length lst)))
(define (paraphrase sentence synonyms)
(map (lambda (x) (paraword x synonyms))
• Write a function (sanitize alist tree) that will 'sanitize' the tree by replacing each word that occurs as the first element of a sublist in alist by the second word in the sublist. Example:
(sanitize '((guns toys) (dynamite candy-canes) (grenades lemons))
'(send (6 dynamite) and 12 grenades))
=> (send (6 candy-canes) and 12 lemons)
This is basically sublis with lists rather than dotted pairs.
(define (sanitize alist tree)
(if (pair? tree)
(cons (sanitize alist (car tree))
(sanitize alist (cdr tree)))
(let ((pair (assoc tree alist)))
(if pair
(cadr pair)
tree) ) ))
• Write a function (half-list lst) that returns a list containing every other element of lst in the original order. The length of lst is arbitrary; be sure not to take cdr of (), which would
generate an error. Example:
(half-list '(a b c d e)) => (a c e)
(define (half-list lst)
(if (pair? lst)
(cons (car lst)
(half-list (if (pair? (cdr lst))
(cddr lst)
• Write a function (shuffle lst) to shuffle a list. Use half-list to break the given list into two halves. Combine the two halves into a single list by calling (random 2) to generate a random
number that is either 0 or 1: take the next element from the first half-list if it is 0, else from the other half- list. Each element of the original list should appear exactly once in the
result. Example:
(shuffle '(a b c d e f g h)) => (b a c e d f h g)
(define (shuffle lst)
(if (pair? lst)
(shuffleb (half-list lst) (half-list (cdr lst)))
(define (shuffleb one two)
(if (pair? one)
(if (pair? two)
(if (zero? (random 2))
(cons (car one) (shuffleb (cdr one) two))
(cons (car two) (shuffleb one (cdr two))))
• Write function(s) (trip mileage tree) to plan an optimal vacation trip. mileage is the maximum number of miles that can be driven (one way). tree is a data structure (city miles value
destinations) where city is the name of a city, miles is the number of miles to get to it from the preceding city, value is the value of visiting that city, and destinations is a list of trees
that can be reached from that city. Return a list containing a list of city names and the maximum total value that can be obtained for the given mileage. Example:
(trip 250 '(austin 0 200 ((waco 100 20 ((college-station 40 -100 ())
(dallas 120 200 ()))))))
=> ((austin waco dallas) 420)
(define city car)
(define miles cadr)
(define value caddr)
(define destinations cadddr)
; This is a tree search.
; Base case: not enough mileage to reach the current city;
; return an empty list of cities and 0 points.
; Recursive case: subtract the miles for this city from the mileage,
; then find the best trip from this city.
; Return a list of: cons this city onto that trip,
; add points for this city to that trip's points.
(define (trip mileage tree)
(let ((best (list '() 0)) (new #f))
(if (< mileage (miles tree))
(dolist (d (destinations tree)
(list (cons (city tree) (car best))
(+ (cadr best) (value tree))))
(set! new (trip (- mileage (miles tree)) d))
(if (> (cadr new) (cadr best))
(set! best new)))))) | {"url":"http://www.cs.utexas.edu/~novak/cs307finalans.html","timestamp":"2014-04-19T15:50:37Z","content_type":null,"content_length":"36643","record_id":"<urn:uuid:01e27069-d906-4a5e-be40-9b881085edba>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Johnson Level & Tool Aluminum Grade Rod - 16ft., Model# 40-6320
Dual-face Grade Rod is constructed of durable, heavy-duty aluminum and features five telescoping sections that lock securely together and extend up to 16 feet. Telescoping rod for easy storage and
transportation. Increments in feet/tenths on front and feet/inches on back. Silver anodized finish on front and hi-vis yellow on back. Snap-on rod level with circular vial included for quick and
convenient leveling. Finish Type Silver anodized finish, Style Telescopic, Increments in. Feet/tenths on front, feet/inches on back, Lockable Yes, Case Included Yes, Length ft. 16, Material Aluminum,
Sides Dual sided, Measures Feet, tenths, inches, Color Silver/hi-vis Yellow.
Customer Questions & Answers: | {"url":"http://answers.northerntool.com/answers/0394/product/436323/johnson-level-tool-aluminum-grade-rod-16ft-model-40-6320-questions-answers/questions.htm?sort=affiliationaStaff","timestamp":"2014-04-18T10:39:27Z","content_type":null,"content_length":"61858","record_id":"<urn:uuid:02e9a5b8-67bb-40c9-b055-6e38dbbdead9>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
North Hills Algebra 1 Tutor
Find a North Hills Algebra 1 Tutor
...I really enjoy educating and mentoring students of all ages. I have taught and tutored the following subjects: Elementary Math, Pre-Algebra, Algebra I, Algebra II, Geometry, Trigonometry,
Pre-Calculus, English, Literature, Writing, the Sciences, Biology, Intro to Chemistry, Social Studies, World...
27 Subjects: including algebra 1, reading, English, geometry
...I also love my two Portuguese water dogs, weather, video games, exercising, and having a good time with friends and family.I have taken quite a few courses in not just differential equations
but also in partial differential equations. You also learn the fundamentals in Calculus, but I feel comfo...
18 Subjects: including algebra 1, chemistry, calculus, geometry
...Examples: How can we use details from examining rocks to determine the type of rock they may be? Why do frogs and birds lay eggs, but different kinds and they hatch them in different ways in
different places. I am a Credentialed Teacher in the State of CA.
50 Subjects: including algebra 1, English, reading, writing
...First, a bit of background; I'm a self-described "knowledge hoarder" in that I enjoy learning for the sake of learning. I have a vast pool of knowledge, ranging from obscure physics
information, to a working knowledge of mechanics, to in-depth psychological study and the application of appropria...
27 Subjects: including algebra 1, English, reading, writing
...As a student I attended to the Autonomous University of Sinaloa, and graduated as Civil Engineer in Mexico in 1973. Recently I successfully passed both State Board exams for Engineering in
Training--EIT--and Land Surveyor in Training--LSIT--certification in California, USA. I trained my own son...
7 Subjects: including algebra 1, Spanish, ACT Math, elementary math | {"url":"http://www.purplemath.com/north_hills_ca_algebra_1_tutors.php","timestamp":"2014-04-19T05:25:08Z","content_type":null,"content_length":"24049","record_id":"<urn:uuid:915d9192-dacf-4bb3-9cda-7cfb9c95ea93>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
The following article by Charles Woods appeared in the June 1996 Bulletin. Charles Woods is a retired engineer who lives in Ridgecrest, California. He has been a SCAVM member for ten years.
WOOD TESTING
by Charles Woods
As an engineer I am more comfortable when I can assign numbers to things rather than guess and go by trial and error.
There have been a number of articles in the literature on wood properties and how to measure them, but most of them are not easily applied by the average violin maker. I have been making simple tests
on all of the wood that I have used since I started making in 1987 and have accumulated a lot of data that helps me sort out the good wood from the bad.
Rocky Awalt wrote an article for the SCAVM Bulletin in April, 1985, about his experiences in trying to measure some of the properties that determine good wood; see ref. 1.
One of the most important measurements that you can make to avoid a poor instrument is to measure the cross grain stiffness relative to the longitudinal stiffness of the spruce top. This can be done
by making test samples from the wood billet before carving or bending the plate. My experience has been that spruce with a stiffness ratio of greater than 30 to 1 (longitudinal stiffness 30 times
stiffer than across the grain) will result in plate that will not tune very well. If you tune your plates using glitter patterns with a sound generator, you will find that this wood will have a low
mode 2 frequency and mode 5 will be distorted. Above 50 to 1 ratio, mode 5 may not even be recognizable. I had to scrap a cello top that had a ratio of 51 because it performed so poorly. A top with a
ratio of 27 made a tremendous difference. I would have not wasted my time on the first one if I had heeded my own advice! Excellent spruce will have a stiffness ratio of 15 or less. I had a piece of
red spruce that measured 11.
There are several ways to measure the stiffness of a wood sample depending how much information you want on the wood quality. My sample beam tester is shown in the sketch below:
The beam samples are laid across the support blocks and calibrated weights are added to deflect the beam until the gage block will just slide under. I use the 10 cm spacing for the cross grain beam.
If you make your test beams exactly the same dimensions and use the same support length, the weights required to bend the beams an equal amount will represent the stiffness of each and the ratio of
the stiffness will be Rs = W/Wc, where Wc is the cross grain beam weight. You could also use the same weight on each beam and measure the deflection with a dial indicator.
My method is to bend the beams exactly 2 mm using the gage block and measuring the beam dimensions accurately. Using the simply supported beam deflection formula and solving for E, the modulus of
elasticity, we have:
E = W l^3/ 4bt^3y
W = weight applied
l = beam length between supports
b = beam width
t = beam thickness
y = deflection (2 mm)
The stiffness ratio is then, E / Ec = Rs.
The speed of sound in both directions in the wood, which is a very important quality parameter, can be calculated once you have determined E and Ec.
IMPORTANT: In the beam formula above, the distance between the supports, l, and the thickness of the beam, t, is a cubed parameter and must be cut or measured very carefully to obtain accurate
results. The beam should be measured several places along its length and an average thickness determined.
If you have a balance scale that weighs accurately to milligrams, you can calculate the density of the wood using the dimensions you have measured:
Density, d = w / b t L
w = beam weight
t = beam thickness
L = beam length (not distance between supports)
b = beam width
We are trying to find wood that is light and stiff, and since the speed of sound is a function of stiffness and density, it is another measure of wood quality. Speed of sound, C = sqrt (E g / d) (g
is the gravitational constant).
The article in reference 2 suggests a single number of overall quality using all of these parameters plus the damping factor. Since I don t have the equipment to measure damping, I have simplified
the formula to:
Quality factor Q = sqrt(C × Cc) / d where C and Cc are the speed of sound longitudinally and across the grain, and d is the density of the wood. A Q of 6 or more represents very good wood.
Longitudinal speed of sound in spruce will vary between 5000 and 6000 meters per second (m/s). Maple will be 4000 to 5000 m/s.
A word about dimensions: you have to be consistent in the use of metric or English units in the formulas for the numbers to make sense. The following are the units to use for each parameter if you
are working in the metric system:
Beam length, L; width, b; thickness, t; and deflection, y centimeters
Distance between supports, l centimeters
Weight to deflect the beam, W kilograms
Beam weight, w grams
Density, d = 1000 w / b t L (kilograms / cubic meter)
Modulus of elasticity, E =W l^3 /4 b t^3 y (kg / sq. cm)
Speed of sound, C = sqrt (498,070 E / d) (meters/sec.)
A few practical suggestions: I make my sample beams about 2 mm thick for the longitudinal grain and 4 mm thick for the cross grain. The beams are about 25 mm wide. I use two sets of supports for the
beam tests; l = 100 mm for the cross grain and 160 mm for the longitudinal. The beams can be any length, as long as they are longer than the support length. It is more accurate to measure the density
using a larger block of the wood, since the beams will weigh less than 10 grams, but if you have a very accurate weighing balance, use the beams.
If any of you have done work similar to this, I would like to hear from you also if you have questions about any of this that is unclear.
Ref. 1. Awalt, Rocky Violin Top Wood Cross Grain Stiffness Considerations. SCAVM Bulletin, vol. 21 April, 1985.
Ref. 2. Meyer, Hajo G. A Practical Approach To The Choice of Tone Wood For Instruments of The Violin Family." Catgut Acoustical Society, vol. 2 no. 7 (series II) May, 1995.
Ref. 3. Schelling, J. C. Wood for Violins. Catgut Acoustical Society, no. 37, May, 1982.
Mr. Woods' E-mail address is woods@ridgecrest.ca.us.
All Bulletin articles are copyrighted ©1997 by the Southern California Association of Violin Makers. Contact Bulletin editor John Gilson, at the address given on our home page, for permission to
reproduce Bulletin material.
Return to home page of the Southern California Association of Violin Makers . | {"url":"http://www.scavm.com/Woods.htm","timestamp":"2014-04-20T06:24:36Z","content_type":null,"content_length":"8780","record_id":"<urn:uuid:99d9a678-fe88-489d-855a-b3b73c9ab7d2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
Section: C Library Functions (3) Updated: Thu Apr 7 2011 Local index Up
DenseMatrices -
typedef DenseMatrix< Real > RealDenseMatrix
typedef DenseMatrix< Complex > ComplexDenseMatrix
Detailed Description
Provide Typedefs for dense matrices
Typedef Documentation
typedef DenseMatrix<Complex> DenseMatrices::ComplexDenseMatrixNote that this typedef may be either a real-only matrix, or a truly complex matrix, depending on how Number was defined in
libmesh_common.h. Be also aware of the fact that DenseMatrix<T> is likely to be more efficient for real than for complex data.
Definition at line 414 of file dense_matrix.h.
typedef DenseMatrix<Real> DenseMatrices::RealDenseMatrixConvenient definition of a real-only dense matrix.
Definition at line 403 of file dense_matrix.h.
Generated automatically by Doxygen for libMesh from the source code.
typedef DenseMatrix<Complex> DenseMatrices::ComplexDenseMatrixNote that this typedef may be either a real-only matrix, or a truly complex matrix, depending on how Number was defined in
libmesh_common.h. Be also aware of the fact that DenseMatrix<T> is likely to be more efficient for real than for complex data.
typedef DenseMatrix<Real> DenseMatrices::RealDenseMatrixConvenient definition of a real-only dense matrix.
This document was created by man2html, using the manual pages.
Time: 21:44:51 GMT, April 16, 2011 | {"url":"http://www.makelinux.net/man/3/D/DenseMatrices","timestamp":"2014-04-19T07:24:06Z","content_type":null,"content_length":"9540","record_id":"<urn:uuid:f46bd9b4-0c47-4d04-8a9b-2ac69491834a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00083-ip-10-147-4-33.ec2.internal.warc.gz"} |
Can someone help me understand these subjects a little better?
December 9th 2008, 11:45 AM #1
Dec 2008
Can someone help me understand these subjects a little better?
I'm having difficulties in two areas with Linear Algebra:
- Using undetermined coefficients or variation of parameters with a linear system
- finding equilibrium points for a non linear system:
X' = 6x - 2xy
Y' = 2xy - 4y
Any help would be appreciated!
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/calculus/64162-can-someone-help-me-understand-these-subjects-little-better.html","timestamp":"2014-04-17T04:04:16Z","content_type":null,"content_length":"29456","record_id":"<urn:uuid:1a6a6de6-2b2d-4250-852b-52f232234cf5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00414-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex analysis problem regarding the maximum principle
March 8th 2009, 05:48 PM #1
Mar 2009
Complex analysis problem regarding the maximum principle
I have a problem in comple analysis which regards the maximum principle that I would love to get some help with. The question is :
Let g be an analytic function in a ring S = {z | a < |Z|< b }which is continuous on the circle {z | |z| = b}. Also f(z) = 0 for every z on that circle.
Prove that f(z) = 0 for every z in S.
As I mentioned above it has to do with the maximum principle.
Looking forward for your help, anyone.
will anybody help me please ?
I've posted this question more than one week with no reply since.
Will anybody please help/ give an idea ?
Thanks !
March 16th 2009, 08:03 AM #2
Mar 2009 | {"url":"http://mathhelpforum.com/differential-geometry/77598-complex-analysis-problem-regarding-maximum-principle.html","timestamp":"2014-04-19T17:44:09Z","content_type":null,"content_length":"32468","record_id":"<urn:uuid:9c19f199-683c-4a36-9d34-a998573e3aa1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00333-ip-10-147-4-33.ec2.internal.warc.gz"} |
The n-Category Café
Progic IV
Posted by David Corfield
We’ve discussed matrix mechanics over rigs in many places over the years. I remember us toying with the idea that morphisms between rigs would allow us to pass in one direction or another between the
corresponding mechanics. Perhaps this might give us some link between, say, the quantum mechanics supported by a space and its topology, the latter being all about path integrals with truth values.
For an easy example, if there’s a non-zero possibility of a particle propagating from A to B within a space then there must be a path from A to B within that space. The fun would really begin if we
could reach higher homotopy. Can we couch the Bohm-Aharonov effect in these terms?
But if we wanted to get probabilities in on the act, we appear to be blocked by the fact that probability theory is not matrix mechanics over a rig. On the other hand, as John points out, at least in
the case of finite probability spaces, we can invoke Durov’s generalized rings or algebraic monads. So why not look at morphisms between generalized rings?
Who knows what fun might be had passing along such morphisms, given that for the generalized ring known as the field with one element, $\mathbb{F}_1$, it is claimed that
…a lot of statements in algebraic topology become statements about homological algebra over $\mathbb{F}_1$.
What is homological algebra over the other generalized rings? And if
…the higher K-theory of $\mathbb{F}_1$ must be the homotopy groups of spheres (p. 1),
what of the higher K-theory of other generalized rings?
Now a morphism $\alpha$ between monads, $P$ and $Q$, gives rise to a functor, $F$, between Kleisli categories. This functor is the identity on objects, and if $f$ is in $Hom_{Kleisli P}(X, Y)$, so
that it is a map between $X$ and $P(Y)$, then $F(f) = \alpha(Y) \cdot f$, which is between $X$ and $Q(Y)$, and so in $Hom_{Kleisli Q}(X, Y)$.
So we might want to look at morphisms between entries either of the right or left hand columns:
$\array{ \boldsymbol{Monad} & \boldsymbol {Kleisli category} \\ Identity & Set \\ +1 & Partial function \\ Powerset & Rel \\ Probability & Conditional distributions \\ R-module & Matrices over R}$
But where does the geometry, e.g., Fisher information metric, get in on the act? And how is it passed between different Kleisli categories? Well, there is a distance (or better, family of
divergences) for unnormalised densities (eqn. (2), page 5 of this), which passes to the ordinary one for normalised densities on restriction. And there is a Fisher metric in quantum information
geometry. It would be worth seeing how this relates to the probabilistic case.
Changing tack, to give an example where the geometry may do some work in Progic, imagine we have a large data base containing incidence of disease. We have someone who smokes over 40 a day, but who
is also vegetarian. Now we don’t have figures for heavy smoking vegetarians, but we do have them for vegetarians and for heavy smokers individually. Now we learn the probabilities for each class of
person of the four conditions {$\pm$ heart disease $\& \pm$ cancer}. So we have (estimates) for $Pr(\pm H \& \pm C | S)$ and $Pr(\pm H \& \pm C | V)$. The question is what should we say about $Pr(\pm
H \& \pm C | S \& V)$?
The temptation is to draw a ‘straight line’ between the extreme points of the distributions for vegetarian and smoker. But what counts as straight? Here the idea of geodesics in the space of
probability distributions appears. In these case, probability distributions over our two binary variables form a three dimensional surface, satisfying $\sum p(\pm H \& \pm C) = 1$. How would we join
two points in it? There’s a good argument for taking logarithms of the coordinates, $(log p(+H \& +C), log p(+H \& -C), log p(-H \& +C), log p(-H \& -C),$ and looking for straight lines there.
Let me hint at one reason. If heart disease and lung cancer are independent conditional on being a vegetarian, and they are also independent conditional on being a heavy smoker, then you might think
that intermediate distributions should continue possessing this independence property. But independence is expressed using a product, e.g $Pr(+H \& +C | V) = Pr(+H | V) \cdot Pr(+C | V)$, so we get
$log Pr(+H \& +C | V) = log Pr(+H | V) + log Pr(+C | V), etc.$
Drawing a straight line between two independent distributions represented by log coordinates, gives us a family of independent distributions.
Posted at October 9, 2007 8:56 AM UTC
Re: Progic IV
David Corfield wrote:
But if we wanted to get probabilities in on the act, we appear to be blocked by the fact that probability theory is not matrix mechanics over a rig.
That’s a provocative comment. Based on ideas from James Dolan, I’ve always espoused the opposite view. Yes, the rig $[0,\infty)$ does not in itself incorporate the constraint that probabilities
should sum to 1. But neither does the rig $\mathbb{C}$ incorporate the constaint that amplitudes should have absolute values whose squares sum to 1! So, if probability theory isn’t matrix mechanics
over $[0,\infty)$, then why do you think quantum theory is matrix mechanics over $\mathbb{C}$?
Jim’s resolution to this puzzle is that numbers in $[0,\infty)$ represent relative probabilities, just as numbers in $\mathbb{C}$ represent relative amplitudes. We have to normalize these numbers to
get actual probabilities. That’s what the partition function is for, in both statistical and quantum physics: it’s the normalizing factor.
I believe this is a consistent and sensible approach, though one would need to expand on what I’ve just said to really prove that.
It’s only much more recently, due to your progic project, that I noticed an alternate approach where we use one of Durov’s ‘generalized rings’ to incorporate — right from the start — the constraint
that probabilities sum to 1.
I haven’t figured out how to do something similar in the quantum case.
(Here’s a puzzle for fans of dagger compact categories: when we think of quantum mechanics as matrix mechanics over $\mathbb{C}$, we first consider a dagger compact category where morphisms are $\
mathbb{C}$-valued matrices. Then we note that unitary morphisms — those with
$U^{\dagger} U = 1, \qquad U^{\dagger} U = 1$
are especially important, because they ‘preserve probability’. What happens when we work with the rig $[0,\infty)$? Do we get doubly stochastic matrices?)
Posted by: John Baez on October 9, 2007 4:48 PM | Permalink | Reply to this
Re: Progic IV
Now that’s close to the options I listed back here:
Thinking about kinds of morphism from $X$ to $Y$ in {0,1}-valued matrices, we have:
□ a) Unnormalised: Rel;
□ b) Row normalised: Set;
□ c) Column normalised: $Set^{op}$;
□ d) Row and column normalised: Permutations.
Permutations, doubly stochastic matrices, and unitary transformations are avatars of the same idea.
So maybe an interesting question is whether there’s an interesting notion of row normalisation for complex matrices.
Posted by: David Corfield on October 9, 2007 6:33 PM | Permalink | Reply to this
Re: Progic IV
Is there any call in quantum mechanics for $m$ by $n$ matrices sending $\mathbb{C}^n$ to $\mathbb{C}^m$ such that vectors of amplitudes, the squares of whose absolute values sum to 1, are sent to
similar vectors in $\mathbb{C}^m$? These would have normalised rows.
Posted by: David Corfield on October 10, 2007 9:43 AM | Permalink | Reply to this | {"url":"http://golem.ph.utexas.edu/category/2007/10/progic_iv.html","timestamp":"2014-04-21T02:01:09Z","content_type":null,"content_length":"33155","record_id":"<urn:uuid:76db30ab-fe66-4647-8c01-bb9493bd5727>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00176-ip-10-147-4-33.ec2.internal.warc.gz"} |
Infections Prevented by Increasing HIV Serostatus Awareness... : JAIDS Journal of Acquired Immune Deficiency Syndromes
Increasing the proportion of persons living with HIV (PLWH) who are aware of their HIV-positive serostatus is a key objective of the Centers for Disease Control and Prevention's (CDC's) “serostatus
approach to fighting the epidemic,” which was initiated in 2001.^1 HIV-infected persons who are aware of their serostatus are less likely to engage in transmission risk behaviors than are
HIV-infected persons who are unaware of their serostatus^2 and, consequently, are less likely to infect at-risk sex partners.^3 Moreover, infected persons who are aware of their HIV status can
benefit from the availability of highly effective antiretroviral therapies and opportunistic infection prophylaxis, which can increase the length and quality of their lives.^4 Effective
antiretroviral therapy also decreases viral load, which, in turn, reduces the likelihood of HIV transmission.^5
The CDC estimates indicate that the number of PLWH who were unaware of their serostatus decreased between 2001 and 2004 despite an overall increase in the number of PLWH in the United States.^6,7 The
proportion of PLWH who were aware of their HIV status increased from approximately 70.5% at the start of 2001^6 to approximately 74.2% at the start of 2004.^7 According to CDC estimates, the number
of incident infections remained constant at 40,000 new infections per year during this period.^4
The failure of HIV prevention initiatives to reduce HIV incidence to lower than the 40,000 plateau is disappointing, but considering the counterfactual of increasing incidence as the number of PLWH
increased, the steady incidence of recent years could instead be viewed as an indicator of successful prevention efforts.^8 We demonstrate here that the increase in serostatus awareness between 2001
and 2004 can be credited, in part, with maintaining incidence at 40,000 new infections per year.
The present analysis addresses the following questions. First, what would the incidence in 2004 have been had the proportion of PLWH who were aware of their HIV status remained constant at the 2001
level rather than increasing between 2001 and 2004? Second, how many incident infections were prevented from 2002 to 2004 as a consequence of the increase in serostatus awareness?
We used methods developed by CDC analysts to model the growth of the HIV epidemic between 2001 and 2005.^6,7 Let N(t) denote the number of PLWH at the start of year t. The increase in prevalent
infections from 2001 and 2005 was modeled using the following equation:
where I(t) and D(t) are the numbers of incident infections and deaths among persons with AIDS during year t.
For the present analysis, it is useful to distinguish between incident infections attributable to the transmission risk behaviors of serostatus-aware and serostatus-unaware PLWH. Let x(t) denote the
proportion of PLWH who were aware of their HIV status at time t. The number of serostatus-aware PLWH equals x(t) · N(t), and the number of serostatus-unaware PLWH equals (1 − x(t)) · N(t). The
incidence in year t equals the sum of the numbers of infections transmitted by these 2 groups:
where γ[A] and γ[U] are the annual HIV transmission rates for serostatus-aware and -unaware PLWH, respectively. (The HIV transmission rate for a particular group of PLWH is defined as the average
[fractional] number of new infections per PLWH per year^9). Consistent with previous analyses,^3,10 our model assumed that γ[A] and γ[U] were constant between 2001 and 2004.
Marks et al^3 and Holtgrave and Pinkerton^10 estimate that γ[U] is 3.5 to 3.7 times larger than γ[A]. Exact values of γ[A] and γ[U] (or the ratio γ[U]/γ[A]) are not needed for the present analyses,
which assume only that γ[A] < γ[U] and, consequently, that increasing the proportion of PLWH who are aware of their HIV status decreases overall incidence.
If, contrary to the available evidence,^6,7 the proportion of PLWH who were aware of their HIV status had not increased from 2001 to 2004 but instead had remained at 2001 levels (ie, x(t) = x(2001)
for t = 2001 to 2004), then I(t) = (γ[U] − (γ[U] − γ[A]) · x(2001)) · N(t) for all years t. This would imply that I(t)/N(t) is a constant equal to γ[U] − (γ[U] − γ[A]) · x(2001) and, in particular,
that I(t)/N(t) = I(2001)/N(2001) for all years t. Therefore, in this counterfactual scenario,
The total number of incidence infections between 2002 and 2004 then would equal the following:
This total can be compared with the CDC's estimate of 120,000 incident infections in this period (40,000 per year) to determine the number of infections prevented by the increase in the serostatus
awareness from 2001 to 2004.
Importantly, the total number of incident infections in the counterfactual model, I[Total], depends only on the number of PLWH and the incidence in 2001 [N(2001) and I(2001), respectively] and on the
number of deaths in 2001 to 2003. Specifically, because I(t) = I(2001)/N(2001) · N(t) in the counterfactual model,
Equations 3 and 5 completely determine the counterfactual model.
In the base-case analysis, N(2001) and I(2001) were set to 950,000^6 and 40,000,^4 respectively. D(t) values were drawn from the 2005 CDC HIV/AIDS Surveillance Report.^11 The total number of incident
infections in the counterfactual scenario, I[Total], was calculated using Equation 4. Equation 5 was used to model the progression of the epidemic in the counterfactual scenario, and Equation 1 was
used for the “factual” model, which assumed an annual incidence of 40,000 new infections between 2001 and 2004.^4
In the base-case model, the number of PLWH increased from 950,000 at the start of 2001 to 1,041,522 at the start of 2005, as shown in Table 1. There were 160,000 incident infections and 68,478 deaths
among persons with AIDS between 2001 and 2004.
If the proportion of PLWH who were aware of their HIV status had not increased from 2001 to 2004, the incidence of infection would have increased to 43,029 during 2004 and the total number of PLWH
would have grown to 1,047,514 at the start of 2005. A total of 5992 additional incident infections would have occurred in 2002 to 2004. Thus, the increase in serostatus awareness between 2001 and
2004 can be credited with preventing nearly 6000 incident infections in the 3-year period from 2002 to 2004.
The base-case findings were not especially sensitive to the number of PLWH at the start of 2001, assuming a constant incidence of 40,000 new infections during 2001 to 2004. As indicated in Table 2,
the number of infections prevented by the increase in serostatus awareness ranged from 5407 to 6719 when the number of people living with HIV at the start of 2001 was varied from 850,000 to
1,050,000, with larger values of N(2001) associated with smaller numbers of prevented infections.
Additional sensitivity analyses were conducted to examine how the results might have changed if the incidence were 10% larger or smaller than the base-case value of 40,000 new infections per year. If
the annual incidence during 2001 to 2004 were 36,000, 4445 infections would have been prevented by the increase in serostatus awareness, whereas if the incidence equaled 44,000 infections per year,
7756 new infections would have been prevented (these results assume that there were 950,000 PLWH at the start of 2001; see Table 2). Overall, the number of prevented infections ranged from a minimum
of 4012 to a maximum of 8700 across various combinations of N(2001) (850,000 to 1,050,000) and annual incidence values (36,000 to 44,000).
Consistent with CDC modeling techniques,^6,7 the base-case model considered only deaths among persons with AIDS when updating the number of PLWH (Equations 1 and 4). We conducted a final sensitivity
analysis that assumed an average 2% annual mortality rate for all PLWH.^12 This increased the number of deaths in 2001 to 2003 by 14.2%, to 58,252. The estimated number of infections prevented by the
increase in serostatus awareness from 2001 to 2004 was reduced from 5992 to 5384 when deaths among all persons with HIV were included in the model.
Our analyses indicate that the increase in serostatus awareness between 2001 and 2004 can be credited with preventing nearly 6000 incident HIV infections during the 3-year period from 2002 to 2004.
These prevented infections are associated with a savings of more than $5 billion in averted lifetime economic productivity losses and HIV/AIDS-related medical care costs.^13 Additionally, of course,
each infection that is prevented in the present averts new infections in the future.
The results were not especially sensitive to differing assumptions about the number of PLWH at the start of 2001. They were somewhat more sensitive to the assumption that the annual incidence of HIV
has remained constant at 40,000 new infections per year. A plausible range of 4000 to 8700 prevented infections was obtained across a broad spectrum of potential values of the initial number of PLWH
and the annual incidence. Including deaths among all persons with HIV rather than just persons with AIDS did not substantially affect the estimated number of prevented infections.
Importantly, the analyses did not require specific assumptions regarding the magnitude of the increase in serostatus awareness between 2001 and 2004 or estimates of the transmission rates for
serostatus-aware and -unaware PLWH. Simply stated, the results indicate that if there were 950,000 PLWH at the start of 2001 and the incidence of infection during 2001 equaled 40,000, 125,992
infections would have occurred in 2002 to 2004 if the proportion of PLWH who were aware of their serostatus had remained at its 2001 level. Compared with the CDC's estimate of 120,000 incident
infections in 2002 to 2004, 5992 infections were prevented as a consequence of the increase in serostatus awareness.
Several limitations of this analysis should be noted. First, data for the epidemiologic model were derived from CDC estimates and, as such, are subject to the same limitations as those estimates. Of
particular note, the CDC's estimate of 40,000 new infections per year was based on “informal methods”^14 rather than on demonstrably sound empiric data or careful modeling. Second, in accordance with
Marks et al^3 and Holtgrave and Pinkerton,^10 our model assumed that the annual transmission rates for serostatus-aware and -unaware PLWH have remained constant in recent years and that the
transmission rate was the same for persons living with AIDS and for other persons who were aware of their HIV status. Persons with AIDS might be more likely than others to transmit HIV because of
increased infectiousness in late-stage disease^15 or less likely because of reductions in HIV transmission risk activities with deteriorating health. Likewise, the annual transmission rates did not
distinguish between sexually transmitted HIV and HIV transmitted through the sharing of drug injection equipment. Third, our mathematic analysis cannot determine why serostatus awareness rates
increased from 2001 to 2004. Overall increases in testing rates and increased use of testing services by at-risk subgroups may have contributed to the reported increase in serostatus awareness.^16
Our analyses suggest that the increase in serostatus awareness helped to maintain HIV incidence at a relatively stable level between 2002 and 2004, but other prevention initiatives likely helped as
In summary, one can argue that the incidence of HIV in the United States remains unacceptably high and that this indicates a failure of HIV prevention efforts. The first assertion may be true, but
the second fails to consider the counterfactual. The present analysis indicates that had serostatus awareness levels remained constant from 2001 to 2004, the incidence of HIV would have increased to
more than 43,000 infections in 2004 and nearly 6000 additional infections would have occurred in the 3-year period from 2002 to 2004. Although additional prevention activities are needed to reduce
HIV incidence to lower than current levels, the successes of past prevention efforts should not be overlooked. | {"url":"http://journals.lww.com/jaids/Fulltext/2008/03010/Infections_Prevented_by_Increasing_HIV_Serostatus.14.aspx","timestamp":"2014-04-19T09:52:48Z","content_type":null,"content_length":"222265","record_id":"<urn:uuid:15a33a2b-fca5-4021-87f5-4c528175231c>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00105-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> Latent Growth Curve without the same observations
Somtawin posted on Wednesday, April 25, 2001 - 7:24 pm
Can I employ latent growth curve model to study the academic growth of schools taking the past mean of achievement score of each school, not using the same student for five consecutive years ?
Linda K. Muthen posted on Thursday, April 26, 2001 - 8:33 am
In principle, that is possible, although you would need to consider whether the school can be considered the same across years when the student body changes. This is a substantive issue that other
more substantive researchers might have better comments on.
Anonymous posted on Wednesday, July 07, 2004 - 7:32 am
I have data on children spanning two years and six waves, three in the last year of kindergarten, three in the first year of elementary school. Data are about the quality of the teacher-child
relationships. I want to estimate a model that captures all data waves, despit the fact that the teacher in the first three waves (kindergarten) is someone else than the teacher in elementary school
(the last three waves). So, though the same measures are used at all waves, the scores at first and last waves might not be compared directly. Nevertheless, I guess there might be a solution... How
can I include all six waves of data in a single LGC model framework?
Linda K. Muthen posted on Wednesday, July 07, 2004 - 9:04 am
You could have two sequential growth processes --one for each teacher. Or you could do one growth process. I would do both and then compare them.
Anonymous posted on Wednesday, July 07, 2004 - 11:52 pm
OK, that's of course what we could do, do both an compare. However, what's on my mind since last night, is whether a piecewise LGC model (maybe with differential slopes also, like in Willet, Singer &
Martin, 1998) also makes sense in this context? That is, when measures (scales, observers, etc.) are the same, but only target persons (teachers) differ from kindergarten to elementary school. So, we
allow the intercept and slope to vary in the second part of the model, compared to the first. Does this make sense to you, looking at the transition from kindergarten to elementary school as an
Linda K. Muthen posted on Thursday, July 08, 2004 - 6:39 am
I think this is the same as having two sequential growth processes.
Gareth Morgan posted on Tuesday, March 25, 2008 - 12:51 pm
I have data on children's weekly vocabulary scores from a curriculum based assessment and i would like to model their growth in vocabulary over time. However, while the number of vocabulary words is
the same for each week, the actual vocabulary words are different for each week. Thus, the scores go up and down from week to week. I was wondering if i could use a running total from one week to
next of number of correct vocabulary words and use that in LGM framework. Would this work? If not, do you have another suggestion?
Linda K. Muthen posted on Tuesday, March 25, 2008 - 1:49 pm
Growth curve modeling assumes the same outcome is measured repeatedly. I don't think a running total would work. Is there some standard test you can use each week in addition to their regular tests?
See the paper, The Metric Matters, by Mike Seltzer.
Annie Desrosiers posted on Thursday, July 03, 2008 - 8:35 am
I have 25 time points from 1975 to 2000 with different cases each year. Thus, it's not the same subjects at each year. I have approximately 3000 subjects each year. Year after year, the same set of
questions (on delinquency) was assigned at 3000 students. I want to see how delinquency progresses across time.
Can I do LGM on these subjects like example 6.1 even if it’s not the same cases at each time? Each year represents an independent sample. If not, what would be the best strategy of analysis in MPlus?
Thank you.
Linda K. Muthen posted on Thursday, July 03, 2008 - 10:33 am
You need to have the same subjects across time to estimate a growth model. I'm not sure how people use data such as yours.
Eugenia Parrett Gwynn posted on Thursday, July 19, 2012 - 6:37 am
I have data on children's relationships with their teachers from kindergarten to sixth grade. I did individual growth modeling to determine if relationsihp quality declines over time--it did. Then I
tried to use those growth models (with all 7 time points--ie., kindergarten to 6th graade) to predict an outcome at age 15 using latent growth curve modeling but the models would not converge. I
finally dropped the first 3 time points of the growth models (i.e., k-2nd grade) and I was able to get the lgcm to fit. When doing the latent growth curve models, did just the intercept and slope for
each child predict the outcome or did the model also include the values of teacher-child relationship quality at each time point as well as each child's intercept and slope?
Linda K. Muthen posted on Thursday, July 19, 2012 - 9:19 am
If you want help with the non-convergence, please send the output and your license number to support@statmodel.com.
Typically a distal outcome is regressed on the growth factors.
Eugenia Parrett Gwynn posted on Thursday, July 19, 2012 - 1:42 pm
I use the computer lab on campus so I am not sure of the license number. I guess what I am getting at is when the outcome is regressed on the growth factors, are the growth factors based on an
average intercept and an average slope for each child or do the growth parameters include data values for each child at each time point?
I'm sorry. Someone keeps asking me this and I don't know the answer.
Linda K. Muthen posted on Friday, July 20, 2012 - 10:43 am
Growth factors are random effects. They have means and variances. They include values for each child.
RuoShui posted on Monday, November 18, 2013 - 4:56 pm
Dear Dr. Muthen,
I am looking at emotional well-being over 4 waves. The subjects are the same across waves but the informants (teachers) are different at each wave. I am wondering whether I can justify using latent
growth curve modeling?
Thank you very much.
Linda K. Muthen posted on Tuesday, November 19, 2013 - 8:58 am
This question is more appropriate for a general discussion forum like SEMNET.
Back to top | {"url":"http://www.statmodel.com/discussion/messages/14/125.html?1384880296","timestamp":"2014-04-18T18:15:57Z","content_type":null,"content_length":"37419","record_id":"<urn:uuid:03651146-6864-42a5-af8b-47817747fe95>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
§ ¶Introducing "warp resize"
In an earlier blog entry on video shaders I introduced an algorithm called warpedge as an example. It's a hybrid warpsharp+resize algorithm that attempts to make edges as crisp as possible. The
algorithm's output has proven interesting enough that I made a VirtualDub video filter out of it:
Warp resize, version 1.1
Warp resize, version 1.1 source code
Only works when enlarging a video, and requires a lot of CPU power. Read on for the algorithm description.
The basic "warp sharp" algorithm
Warp sharp is the name of an algorithm that I originally found coded as a filter for an image editing application called "The GIMP." It's based on the idea that if you can identify the edges in an
image, you can warp the image to shrink the edges and thus make them appear sharper. I don't remember how the original code worked, but here's one way to implement such an algorithm:
• Convert the original channel to grayscale.
• Compute a gradient map from the grayscale image. For each pixel in the original image, the gradient map encodes a vector that indicates the direction and rate of fastest ascent in luminance. The
usual way to do this is through a pair of edge detection filters that compute horizontal and vertical gradient. In a 3x3 window, subtracting (right-left) and (bottom-top) is one way to do this.
Sobel filters are also popular:
-1 0 +1 -1 -2 -1
-2 0 +2 0 0 0
-1 0 +1 +1 +2 +1
(Both filters can be computed together in 10 adds; how this is done is left as an exercise to the reader.)
• Convert the gradient map to a bump map by taking the length of each gradient vector.
• Optionally, apply a blur to the bump map.
• Compute a second gradient map from the bump map.
• Use the gradient map as a displacement map for warping the original image — that is, you use a scaled version of the gradient vector at each pixel to perturb the sampling location in the original
image. In other words, given a gradient (du, dv), you retrieve (u+du*k, v+dv*k) from the source, where k is usually set such that the displacement is under a single pixel. I usually use a
cardinal bicubic interpolator for the warp.
Problems with warp sharp
The first problem with warp sharp is that the length of the displacement map vectors is dependent upon the height of the bump map, which is in turn dependent upon the contrast of the original image.
This means that the amount of narrowing of edges varies with local contrast, which produces uneven edges.
A second problem has to do with the warp. If the warp is done using straight interpolation, the algorithm will never output a pixel that is not in the infinitely interpolated version of the original
image. In practice this means you can get "bumpiness" in edges the warped image, since the warp interpolator doesn't do edge-directed interpolation. You can often see this in near-horizontal or
near-vertical lines, where the interpolator creates gray pixels when the line straddles a scan line boundary. Unfortunately, while there are edge-directed interpolation algorithms, they usually don't
produce continuous output. For example, the Xin-Li New Edge Directed Interpolation (NEDI) algorithm only produces a rigid 2:1 enlargement. This doesn't help with a displacement-map based warp, which
requires fine perturbations in the sampling location.
A third problem is that since warp sharp essentially narrows transition regions, it has a tendency to crystallize images into discrete regions such that it resembles a Voronoi diagram. Blurring the
bump map decreases the filter's sensitivity to noise and helps this somewhat.
The warp resize algorithm
To solve the luminance sensitivity, warp resize normalizes the gradient vectors produced in the second pass. This has the downsides of amplifying noise and failing on (0,0) vectors, so the
normalization scale factor is lerped toward 1.0 as the vector becomes very short. Also, the gradient vectors are computed on the full-size version of the image, so that the normalization occurs after
resampling. Otherwise, the resampling operation would denormalize gradient vectors, which would introduce artifacts into the warp on original pixel boundaries.
The issue with the warp interpolation is corrected using the following ugly hack: We know that the problem becomes worst between pixel boundaries, where the interpolator is "most inaccurate" with
regard to edges. So what we do is compute the difference between the warped pixel and the interpolated pixel, and re-add some of that difference to emphasize it. The difference is greater where the
slope of the edge is higher, and thus this tends to flatten out edges and make borders crisper.
Throw in a liberal amount of high-powered, slightly-expensive interpolation to taste.
Thus, the algorithm used by the CPU version of the algorithm is as follows:
• Convert the image to grayscale.
• Compute a gradient map using Sobel filters.
• Resample the gradient map to the desired size using a B-spline bicubic filter. A B-spline bicubic filter has no negative lobes and thus introduces no ringing into the gradient map, at the cost of
some blurriness compared to a cardinal spline filter. Mild blurring is desirable anyway to improve the smoothness of the gradients. (Note that since both the gradient determination and B-spline
resample are linear operations, grad(resample(x)) is the same as resample(grad(x)).)
• Convert the gradient map to a bump map by taking the length of each vector. This bump map shows the areas of highest intensity change — in a way, the "edges of the edges."
• Compute a displacement map from the bump map, again using Sobel filters.
• Normalize the displacement map.
• Resample the original image to the desired size using a separable cardinal spline bicubic filter. Call the resultant image R.
• Warp the original image by sampling the original image at (u+du*k, v+dv*k) using a cardinal bicubic spline filter, where (u,v) is the location of the current pixel, (du,dv) is the vector at that
position in the displacement map, and k is a displacement scale factor. k is chosen such that the warp narrows regions of high intensity change (its sign depends on the sign of the gradient
filters). Call the resultant image D.
• Compute D + e(D-R) as the final image, where e is the factor for emphasizing changes in the image. For the CPU filter I just used 1.0.
The source code is a bit of a mess because it interleaves the first six passes together as a bank of row filters in order to increase cache locality. If you are looking into reimplementing the
algorithm, I highly recommend either working from the description above or starting with the HLSL code for the original GPU filter, which is much easier to understand.
As you might guess, the warp resize algorithm is better suited to cartoon-style imagery than natural images, although it can help on the latter too. There is also an issue with the filter sometime
introducing a gap in the center of edges, which splits edges into two; I haven't figured exactly what causes this, but it is sometimes caused by the displacement vectors being too strong. What's
weird is that the effect is sometimes very limited, such as a hair-width gap being seen in a 20-pixel wide edge after 4x enlargement. I think this may have to do with the gradient vector swinging
very close to (0,0) as the current pixel position crosses the true center of an edge, at which point the filter can't do anything to the image.
The CPU version is approximately one-sixth of the speed of the GPU version, even with MMX optimizations. This is comparing a Pentium M 1.86GHz against a GeForce Go 6800, so it's a little bit
unbalanced; a Pentium 4 or Athlon 64 would probably narrow this gap a bit. Also, the GPU version does nasty hacks with bilinear texture sampling hardware to get the final three passes running fast
enough and with a low enough instruction count to fit in pixel shader 2.0 limits, so its accuracy suffers compared to the CPU version. If you look closely, you can see some speckling and fine
checkerboard patterns that aren't present in the output of the CPU filter. Increasing the precision of the GPU filter to match would narrow the gap further. | {"url":"http://www.virtualdub.org/blog/pivot/entry.php?id=79","timestamp":"2014-04-16T16:29:16Z","content_type":null,"content_length":"31306","record_id":"<urn:uuid:eaee9485-85b6-4464-81eb-de59d5d3b679>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00266-ip-10-147-4-33.ec2.internal.warc.gz"} |
In this section we describe two more involved examples of using an IPython cluster to perform a parallel computation. In these examples, we will be using IPython’s “pylab” mode, which enables
interactive plotting using the Matplotlib package. IPython can be started in this mode by typing:
at the system command line.
150 million digits of pi
In this example we would like to study the distribution of digits in the number pi (in base 10). While it is not known if pi is a normal number (a number is normal in base 10 if 0-9 occur with equal
likelihood) numerical investigations suggest that it is. We will begin with a serial calculation on 10,000 digits of pi and then perform a parallel calculation involving 150 million digits.
In both the serial and parallel calculation we will be using functions defined in the pidigits.py file, which is available in the docs/examples/parallel directory of the IPython source distribution.
These functions provide basic facilities for working with the digits of pi and can be loaded into IPython by putting pidigits.py in your current working directory and then doing:
Serial calculation
For the serial calculation, we will use SymPy to calculate 10,000 digits of pi and then look at the frequencies of the digits 0-9. Out of 10,000 digits, we expect each digit to occur 1,000 times.
While SymPy is capable of calculating many more digits of pi, our purpose here is to set the stage for the much larger parallel calculation.
In this example, we use two functions from pidigits.py: one_digit_freqs() (which calculates how many times each digit occurs) and plot_one_digit_freqs() (which uses Matplotlib to plot the result).
Here is an interactive IPython session that uses these functions with SymPy:
In [7]: import sympy
In [8]: pi = sympy.pi.evalf(40)
In [9]: pi
Out[9]: 3.141592653589793238462643383279502884197
In [10]: pi = sympy.pi.evalf(10000)
In [11]: digits = (d for d in str(pi)[2:]) # create a sequence of digits
In [12]: run pidigits.py # load one_digit_freqs/plot_one_digit_freqs
In [13]: freqs = one_digit_freqs(digits)
In [14]: plot_one_digit_freqs(freqs)
Out[14]: [<matplotlib.lines.Line2D object at 0x18a55290>]
The resulting plot of the single digit counts shows that each digit occurs approximately 1,000 times, but that with only 10,000 digits the statistical fluctuations are still rather large:
It is clear that to reduce the relative fluctuations in the counts, we need to look at many more digits of pi. That brings us to the parallel calculation.
Parallel calculation
Calculating many digits of pi is a challenging computational problem in itself. Because we want to focus on the distribution of digits in this example, we will use pre-computed digit of pi from the
website of Professor Yasumasa Kanada at the University of Tokyo (http://www.super-computing.org). These digits come in a set of text files (ftp://pi.super-computing.org/.2/pi200m/) that each have 10
million digits of pi.
For the parallel calculation, we have copied these files to the local hard drives of the compute nodes. A total of 15 of these files will be used, for a total of 150 million digits of pi. To make
things a little more interesting we will calculate the frequencies of all 2 digits sequences (00-99) and then plot the result using a 2D matrix in Matplotlib.
The overall idea of the calculation is simple: each IPython engine will compute the two digit counts for the digits in a single file. Then in a final step the counts from each engine will be added
up. To perform this calculation, we will need two top-level functions from pidigits.py:
def compute_two_digit_freqs(filename):
Read digits of pi from a file and compute the 2 digit frequencies.
d = txt_file_to_digits(filename)
freqs = two_digit_freqs(d)
return freqs
def reduce_freqs(freqlist):
Add up a list of freq counts to get the total counts.
allfreqs = np.zeros_like(freqlist[0])
for f in freqlist:
allfreqs += f
return allfreqs
We will also use the plot_two_digit_freqs() function to plot the results. The code to run this calculation in parallel is contained in docs/examples/parallel/parallelpi.py. This code can be run in
parallel using IPython by following these steps:
1. Use ipcluster to start 15 engines. We used 16 cores of an SGE linux cluster (1 controller + 15 engines).
2. With the file parallelpi.py in your current working directory, open up IPython in pylab mode and type run parallelpi.py. This will download the pi files via ftp the first time you run it, if they
are not present in the Engines’ working directory.
When run on our 16 cores, we observe a speedup of 14.2x. This is slightly less than linear scaling (16x) because the controller is also running on one of the cores.
To emphasize the interactive nature of IPython, we now show how the calculation can also be run by simply typing the commands from parallelpi.py interactively into IPython:
In [1]: from IPython.parallel import Client
# The Client allows us to use the engines interactively.
# We simply pass Client the name of the cluster profile we
# are using.
In [2]: c = Client(profile='mycluster')
In [3]: v = c[:]
In [3]: c.ids
Out[3]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
In [4]: run pidigits.py
In [5]: filestring = 'pi200m.ascii.%(i)02dof20'
# Create the list of files to process.
In [6]: files = [filestring % {'i':i} for i in range(1,16)]
In [7]: files
# download the data files if they don't already exist:
In [8]: v.map(fetch_pi_file, files)
# This is the parallel calculation using the Client.map method
# which applies compute_two_digit_freqs to each file in files in parallel.
In [9]: freqs_all = v.map(compute_two_digit_freqs, files)
# Add up the frequencies from each engine.
In [10]: freqs = reduce_freqs(freqs_all)
In [11]: plot_two_digit_freqs(freqs)
Out[11]: <matplotlib.image.AxesImage object at 0x18beb110>
In [12]: plt.title('2 digit counts of 150m digits of pi')
Out[12]: <matplotlib.text.Text object at 0x18d1f9b0>
The resulting plot generated by Matplotlib is shown below. The colors indicate which two digit sequences are more (red) or less (blue) likely to occur in the first 150 million digits of pi. We
clearly see that the sequence “41” is most likely and that “06” and “07” are least likely. Further analysis would show that the relative size of the statistical fluctuations have decreased compared
to the 10,000 digit calculation.
Parallel options pricing
An option is a financial contract that gives the buyer of the contract the right to buy (a “call”) or sell (a “put”) a secondary asset (a stock for example) at a particular date in the future (the
expiration date) for a pre-agreed upon price (the strike price). For this right, the buyer pays the seller a premium (the option price). There are a wide variety of flavors of options (American,
European, Asian, etc.) that are useful for different purposes: hedging against risk, speculation, etc.
Much of modern finance is driven by the need to price these contracts accurately based on what is known about the properties (such as volatility) of the underlying asset. One method of pricing
options is to use a Monte Carlo simulation of the underlying asset price. In this example we use this approach to price both European and Asian (path dependent) options for various strike prices and
The code for this example can be found in the docs/examples/parallel/options directory of the IPython source. The function price_options() in mckernel.py implements the basic Monte Carlo pricing
algorithm using the NumPy package and is shown here:
def price_options(S=100.0, K=100.0, sigma=0.25, r=0.05, days=260, paths=10000):
Price European and Asian options using a Monte Carlo method.
S : float
The initial price of the stock.
K : float
The strike price of the option.
sigma : float
The volatility of the stock.
r : float
The risk free interest rate.
days : int
The number of days until the option expires.
paths : int
The number of Monte Carlo paths used to price the option.
A tuple of (E. call, E. put, A. call, A. put) option prices.
import numpy as np
from math import exp,sqrt
h = 1.0/days
const1 = exp((r-0.5*sigma**2)*h)
const2 = sigma*sqrt(h)
stock_price = S*np.ones(paths, dtype='float64')
stock_price_sum = np.zeros(paths, dtype='float64')
for j in range(days):
growth_factor = const1*np.exp(const2*np.random.standard_normal(paths))
stock_price = stock_price*growth_factor
stock_price_sum = stock_price_sum + stock_price
stock_price_avg = stock_price_sum/days
zeros = np.zeros(paths, dtype='float64')
r_factor = exp(-r*h*days)
euro_put = r_factor*np.mean(np.maximum(zeros, K-stock_price))
asian_put = r_factor*np.mean(np.maximum(zeros, K-stock_price_avg))
euro_call = r_factor*np.mean(np.maximum(zeros, stock_price-K))
asian_call = r_factor*np.mean(np.maximum(zeros, stock_price_avg-K))
return (euro_call, euro_put, asian_call, asian_put)
To run this code in parallel, we will use IPython’s LoadBalancedView class, which distributes work to the engines using dynamic load balancing. This view is a wrapper of the Client class shown in the
previous example. The parallel calculation using LoadBalancedView can be found in the file mcpricer.py. The code in this file creates a LoadBalancedView instance and then submits a set of tasks using
LoadBalancedView.apply() that calculate the option prices for different volatilities and strike prices. The results are then plotted as a 2D contour plot using Matplotlib.
# <nbformat>2</nbformat>
# <markdowncell>
# # Parallel Monto-Carlo options pricing
# <markdowncell>
# ## Problem setup
# <codecell>
from __future__ import print_function
import sys
import time
from IPython.parallel import Client
import numpy as np
from mckernel import price_options
from matplotlib import pyplot as plt
# <codecell>
cluster_profile = "default"
price = 100.0 # Initial price
rate = 0.05 # Interest rate
days = 260 # Days to expiration
paths = 10000 # Number of MC paths
n_strikes = 6 # Number of strike values
min_strike = 90.0 # Min strike price
max_strike = 110.0 # Max strike price
n_sigmas = 5 # Number of volatility values
min_sigma = 0.1 # Min volatility
max_sigma = 0.4 # Max volatility
# <codecell>
strike_vals = np.linspace(min_strike, max_strike, n_strikes)
sigma_vals = np.linspace(min_sigma, max_sigma, n_sigmas)
# <markdowncell>
# ## Parallel computation across strike prices and volatilities
# <markdowncell>
# The Client is used to setup the calculation and works with all engines.
# <codecell>
c = Client(profile=cluster_profile)
# <markdowncell>
# A LoadBalancedView is an interface to the engines that provides dynamic load
# balancing at the expense of not knowing which engine will execute the code.
# <codecell>
view = c.load_balanced_view()
# <codecell>
print("Strike prices: ", strike_vals)
print("Volatilities: ", sigma_vals)
# <markdowncell>
# Submit tasks for each (strike, sigma) pair.
# <codecell>
t1 = time.time()
async_results = []
for strike in strike_vals:
for sigma in sigma_vals:
ar = view.apply_async(price_options, price, strike, sigma, rate, days, paths)
# <codecell>
print("Submitted tasks: ", len(async_results))
# <markdowncell>
# Block until all tasks are completed.
# <codecell>
t2 = time.time()
t = t2-t1
print("Parallel calculation completed, time = %s s" % t)
# <markdowncell>
# ## Process and visualize results
# <markdowncell>
# Get the results using the `get` method:
# <codecell>
results = [ar.get() for ar in async_results]
# <markdowncell>
# Assemble the result into a structured NumPy array.
# <codecell>
prices = np.empty(n_strikes*n_sigmas,
for i, price in enumerate(results):
prices[i] = tuple(price)
prices.shape = (n_strikes, n_sigmas)
# <markdowncell>
# Plot the value of the European call in (volatility, strike) space.
# <codecell>
plt.contourf(sigma_vals, strike_vals, prices['ecall'])
plt.title('European Call')
plt.ylabel("Strike Price")
# <markdowncell>
# Plot the value of the Asian call in (volatility, strike) space.
# <codecell>
plt.contourf(sigma_vals, strike_vals, prices['acall'])
plt.title("Asian Call")
plt.ylabel("Strike Price")
# <markdowncell>
# Plot the value of the European put in (volatility, strike) space.
# <codecell>
plt.contourf(sigma_vals, strike_vals, prices['eput'])
plt.title("European Put")
plt.ylabel("Strike Price")
# <markdowncell>
# Plot the value of the Asian put in (volatility, strike) space.
# <codecell>
plt.contourf(sigma_vals, strike_vals, prices['aput'])
plt.title("Asian Put")
plt.ylabel("Strike Price")
# <codecell>
To use this code, start an IPython cluster using ipcluster, open IPython in the pylab mode with the file mckernel.py in your current working directory and then type:
In [7]: run mcpricer.py
Submitted tasks: 30
Once all the tasks have finished, the results can be plotted using the plot_options() function. Here we make contour plots of the Asian call and Asian put options as function of the volatility and
strike price:
In [8]: plot_options(sigma_vals, strike_vals, prices['acall'])
In [9]: plt.figure()
Out[9]: <matplotlib.figure.Figure object at 0x18c178d0>
In [10]: plot_options(sigma_vals, strike_vals, prices['aput'])
These results are shown in the two figures below. On our 15 engines, the entire calculation (15 strike prices, 15 volatilities, 100,000 paths for each) took 37 seconds in parallel, giving a speedup
of 14.1x, which is comparable to the speedup observed in our previous example. | {"url":"http://ipython.org/ipython-doc/rel-0.13/parallel/parallel_demos.html","timestamp":"2014-04-20T18:23:29Z","content_type":null,"content_length":"51801","record_id":"<urn:uuid:f1202b97-6480-4903-a4ab-644d22b5672a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
• What do you do if the denominator c + dx might be zero?
• What if z makes a mistake and outputs the wrong number?
• What if you want z to output decimal digits instead of continued fraction terms?
• What if you want z to guess what is coming up without seeing it?
• How can you represent a number like π, whose continued fraction is unknown?
• How do you get around the need to represent arbitrarily large terms?
• How do you extract a square root? | {"url":"http://perl.plover.com/classes/cftalk/TALK/slide035.html","timestamp":"2014-04-19T15:05:14Z","content_type":null,"content_length":"2583","record_id":"<urn:uuid:ba648dd5-5f7b-4c12-b4f2-f97f2d23853d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00462-ip-10-147-4-33.ec2.internal.warc.gz"} |
Expressivity of coalgebraic modal logic: The limits and beyond
Results 1 - 10 of 34
- IN LICS’06 , 2006
"... For lack of general algorithmic methods that apply to wide classes of logics, establishing a complexity bound for a given modal logic is often a laborious task. The present work is a step
towards a general theory of the complexity of modal logics. Our main result is that all rank-1 logics enjoy a sh ..."
Cited by 26 (15 self)
Add to MetaCart
For lack of general algorithmic methods that apply to wide classes of logics, establishing a complexity bound for a given modal logic is often a laborious task. The present work is a step towards a
general theory of the complexity of modal logics. Our main result is that all rank-1 logics enjoy a shallow model property and thus are, under mild assumptions on the format of their axiomatisation,
in PSPACE. This leads to a unified derivation of tight PSPACE-bounds for a number of logics including K, KD, coalition logic, graded modal logic, majority logic, and probabilistic modal logic. Our
generic algorithm moreover finds tableau proofs that witness pleasant prooftheoretic properties including a weak subformula property. This generality is made possible by a coalgebraic semantics,
which conveniently abstracts from the details of a given model class and thus allows covering a broad range of logics in a uniform way.
"... In recent years, a tight connection has emerged between modal logic on the one hand and coalgebras, understood as generic transition systems, on the other hand. Here, we prove that (finitary)
coalgebraic modal logic has the finite model property. This fact not only reproves known completeness result ..."
Cited by 24 (16 self)
Add to MetaCart
In recent years, a tight connection has emerged between modal logic on the one hand and coalgebras, understood as generic transition systems, on the other hand. Here, we prove that (finitary)
coalgebraic modal logic has the finite model property. This fact not only reproves known completeness results for coalgebraic modal logic, which we push further by establishing that every coalgebraic
modal logic admits a complete axiomatization of rank 1; it also enables us to establish a generic decidability result and a first complexity bound. Examples covered by these general results include,
besides standard Hennessy-Milner logic, graded modal logic and probabilistic modal logic.
- IN AUTOMATA, LANGUAGES AND PROGRAMMING, ICALP 07, VOL. 4596 OF LNCS , 2007
"... State-based systems and modal logics for reasoning about them often heterogeneously combine a number of features such as non-determinism and probabilities. Here, we show that the combination of
features can be reflected algorithmically and develop modular decision procedures for heterogeneous modal ..."
Cited by 16 (11 self)
Add to MetaCart
State-based systems and modal logics for reasoning about them often heterogeneously combine a number of features such as non-determinism and probabilities. Here, we show that the combination of
features can be reflected algorithmically and develop modular decision procedures for heterogeneous modal logics. The modularity is achieved by formalising the underlying state-based systems as
multi-sorted coalgebras and associating both a logical and an algorithmic description to a number of basic building blocks. Our main result is that logics arising as combinations of these building
blocks can be decided in polynomial space provided that this is the case for the components. By instantiating the general framework to concrete cases, we obtain PSPACE decision procedures for a wide
variety of structurally different logics, describing e.g. Segala systems and games with uncertain information.
- IN STACS 2007, 24TH ANNUAL SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE, PROCEEDINGS , 2007
"... Coalgebras provide a unifying semantic framework for a wide variety of modal logics. It has previously been shown that the class of coalgebras for an endofunctor can always be axiomatised in
rank 1. Here we establish the converse, i.e. every rank 1 modal logic has a sound and strongly complete coal ..."
Cited by 14 (11 self)
Add to MetaCart
Coalgebras provide a unifying semantic framework for a wide variety of modal logics. It has previously been shown that the class of coalgebras for an endofunctor can always be axiomatised in rank 1.
Here we establish the converse, i.e. every rank 1 modal logic has a sound and strongly complete coalgebraic semantics. As a consequence, recent results on coalgebraic modal logic, in particular
generic decision procedures and upper complexity bounds, become applicable to arbitrary rank 1 modal logics, without regard to their semantic status; we thus obtain purely syntactic versions of these
results. As an extended example, we apply our framework to recently defined deontic logics.
- In MFPS XXIII , 2007
"... Replace this file with prentcsmacro.sty for your meeting, or with entcsmacro.sty for your meeting. Both can be ..."
- In Proc. CALCO 2005, volume 3629 of LNCS , 2005
"... and relation-preserving functions. In this paper, the least (fibre-wise) of such liftings, L(B), is characterized for essentially any B. The lifting has all the useful properties of the relation
lifting due to Jacobs, without the usual assumption of weak pullback preservation; if B preserves weak pu ..."
Cited by 8 (1 self)
Add to MetaCart
and relation-preserving functions. In this paper, the least (fibre-wise) of such liftings, L(B), is characterized for essentially any B. The lifting has all the useful properties of the relation
lifting due to Jacobs, without the usual assumption of weak pullback preservation; if B preserves weak pullbacks, the two liftings coincide. Equivalence relations can be viewed as Boolean algebras of
subsets (predicates, tests). This correspondence relates L(B) to the least test suite lifting T (B), which is defined in the spirit of predicate lifting as used in coalgebraic modal logic. Properties
of T (B) translate to a general expressivity result for a modal logic for B-coalgebras. In the resulting logic, modal operators of any arity can appear. 1
- Electronic Notes in Theoretical Computer Science , 2007
"... Bialgebraic semantics, invented a decade ago by Turi and Plotkin, is an approach to formal reasoning about well-behaved structural operational semantics (SOS). An extension of algebraic and
coalgebraic methods, it abstracts from concrete notions of syntax and system behaviour, thus treating various ..."
Cited by 8 (3 self)
Add to MetaCart
Bialgebraic semantics, invented a decade ago by Turi and Plotkin, is an approach to formal reasoning about well-behaved structural operational semantics (SOS). An extension of algebraic and
coalgebraic methods, it abstracts from concrete notions of syntax and system behaviour, thus treating various kinds of operational descriptions in a uniform fashion. In this paper, bialgebraic
semantics is combined with a coalgebraic approach to modal logic in a novel, general approach to proving the compositionality of process equivalences for languages defined by structural operational
semantics. To prove compositionality, one provides a notion of behaviour for logical formulas, and defines an SOS-like specification of modal operators which reflects the original SOS specification
of the language. This approach can be used to define SOS congruence formats as well as to prove compositionality for specific languages and equivalences. Key words: structural operational semantics,
coalgebra, bialgebra, modal logic, congruence format 1
, 2010
"... Coalgebra is an abstract framework for the uniform study of different kinds of dynamical systems. An endofunctor F determines both the type of systems (F-coalgebras) and a notion of behavioral
equivalence (∼F) amongst them. Many types of transition systems and their equivalences can be captured by a ..."
Cited by 7 (4 self)
Add to MetaCart
Coalgebra is an abstract framework for the uniform study of different kinds of dynamical systems. An endofunctor F determines both the type of systems (F-coalgebras) and a notion of behavioral
equivalence (∼F) amongst them. Many types of transition systems and their equivalences can be captured by a functor F. For example, for deterministic automata the derived equivalence is language
equivalence, while for non-deterministic automata it is ordinary bisimilarity. The powerset construction is a standard method for converting a nondeterministic automaton into an equivalent
deterministic one as far as language is concerned. In this paper, we lift the powerset construction on automata to the more general framework of coalgebras with structured state spaces. Examples of
applications include partial Mealy machines, (structured) Moore automata, and Rabin probabilistic automata.
"... Abstract. We define an out-degree for F-coalgebras and show that the coalgebras of outdegree at most κ form a covariety. As a subcategory of all F-coalgebras, this class has a terminal object,
which for many problems can stand in for the terminal F-coalgebra, which need not exist in general. As exam ..."
Cited by 4 (1 self)
Add to MetaCart
Abstract. We define an out-degree for F-coalgebras and show that the coalgebras of outdegree at most κ form a covariety. As a subcategory of all F-coalgebras, this class has a terminal object, which
for many problems can stand in for the terminal F-coalgebra, which need not exist in general. As examples, we derive structure theoretic results about minimal coalgebras, showing that, for instance
minimization of coalgebras is functorial, that products of finitely many minimal coalgebras exist and are given by their largest common subcoalgebra, that minimal subcoalgebras have no inner
endomorphisms and show how minimal subcoalgebras can be constructed from Moore-automata. Since the elements of minimal subcoalgebras must correspond uniquely to the formulae of any logic
characterizing observational equivalence, we give in the last section a straightforward and self-contained account of the coalgebraic logic of D. Pattinson and L. Schröder, which we believe is
simpler and more direct than the original exposition. For every automaton A there exists a minimal automaton ∇(A), which displays
- ALGEBRA AND COALGEBRA IN COMPUTER SCIENCE , 2005
"... Recently, various process calculi have been introduced which are suited for the modelling of mobile computation and in particular the mobility of program code; a prominent example is the ambient
calculus. Due to the complexity of the involved spatial reduction, there is — in contrast to the situatio ..."
Cited by 4 (2 self)
Add to MetaCart
Recently, various process calculi have been introduced which are suited for the modelling of mobile computation and in particular the mobility of program code; a prominent example is the ambient
calculus. Due to the complexity of the involved spatial reduction, there is — in contrast to the situation in standard process algebra — up to now no satisfying coalgebraic representation of a mobile
process calculus. Here, we discuss a coalgebraic denotational semantics for the ambient calculus, viewed as a step towards a generic coalgebraic framework for modelling mobile systems. Crucial
features of our modelling are a set of GSOS style transition rules for the ambient calculus, a hardwiring of the so-called hardening relation in the functorial signature, and a set-based treatment of
hidden name sharing. The formal representation of this framework is cast in the algebraic-coalgebraic specification language CoCasl. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=295229","timestamp":"2014-04-21T11:45:55Z","content_type":null,"content_length":"38058","record_id":"<urn:uuid:1ac9c26b-67ec-4905-95a3-dc6235f6da81>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating mean absorbtion coefficient
Gd Day,
I have absorbtion coefficients of a material available to me at thickness of 25mm , 50 mm and 100 mm respectively.
Is there a way to calculate the absorbtion co-efficient of the same material at 150mm using the data available ?
No Gear .. No Fear .. | {"url":"http://www.gearslutz.com/board/bass-traps-acoustic-panels-foam-etc/846459-calculating-mean-absorbtion-coefficient.html","timestamp":"2014-04-17T00:53:18Z","content_type":null,"content_length":"50887","record_id":"<urn:uuid:9ce977d2-d8a2-499c-b35a-268cb46ce52a>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
Derivation of the Interest Rate Parity (IRP)
Suppose that you consider investing in the home or foreign country for one period. It means that you have some amount of money now (present value or PV) and, given an interest rate, you want to make
some amount of money in the future (future value or FV). The basic relation between PV and FV for one period is
Because you know how much money you have (PV) and what the interest rate (R) is now, the unknown is how much money you will make in the future (FV). You rewrite the preceding formula to have the
unknown variable in the left-hand side and get:
FV[H] = PV(1 + R[H])
Here, R[H] and (1 + R[H]) are the nominal interest rate and the interest factor (1 + R[H]) in the home country (H), respectively. For simplicity, assume a $1 investment so that you can simplify your
(dollar) earnings to the following:
FV[H] = (1 + R[H])
Similarly, your (euro) earnings in the foreign country by investing 1 in Eurozone are shown here:
FV[F] = (1 + R[F])
Here, R[F] and (1 + R[F]) imply the foreign country’s (F) nominal interest rate and interest factor (in this case, Eurozone’s), respectively.
You can’t directly compare R[H] and R[F ]because the home and foreign country’s interest rates are denominated in different currencies. Therefore, you need a conversion mechanism.
You can convert your earnings in euro into dollars by multiplying the interest factor in foreign currency with the percent change in exchange rate. But in order to calculate the percent change in the
exchange rate, you need to know the current exchange rate and the expected exchange rate.
While the current exchange rate is observable, there is no explicit series called expected exchange rate. Therefore, you need a measure for the expected exchange rate. The exchange rate on a forward
contract (namely, the forward rate) would be a good proxy for the expected exchange rate.
Therefore, express the nominal version of the MBOP’s parity condition as follows:
In this equation, F and S are the forward rate and spot rate, respectively. You can further write the forward rate (F) in a way that shows the relationship between F and S:
F[t] = S[t](1 + ρ)
This equation states that the difference between the forward rate and the spot rate is related to a factor ρ (rho). The variable ρ can be interpreted as the percentage difference between the forward
rate and the spot rate. Inserting the previous definition of the forward rate
and eliminating the spot rate in the bracket of the equation, you have:
(1 + R[H]) = (1 + R[F]) x (1 + ρ)
This equation is a different way of expressing interest rate parity. It implies that investors are indifferent between home and foreign securities denominated in home and foreign currencies if the
nominal return in the home country equals the nominal return in a foreign country, including the change in the exchange rate.
Look at this equation also from the viewpoint of which variables are known and which variable should be calculated. In the equation, you observe the home and foreign nominal interest rates and want
to know what ρ is. Therefore, you divide both sides by (1 + R[F]) and find
Conceptually, ρ implies the percent change in the exchange rate. Because the previous derivation was based on the change between the forward rate and the spot rate, you refer to ρ as a forward
premium or forward discount.
The terms forward premium and forward discount refer to the other currency. You can explain this by considering the sign of ρ. Clearly, ρ can be positive or negative. If the home nominal interest
rate (R[H]) is larger than the foreign nominal interest rate (R[F]), the ratio of the home and foreign interest factor [(1 + R[H]) / (1 + R[F])] becomes larger than 1, which makes ρ positive.
Because higher nominal interest rate in a country is consistent with higher inflation rates, a positive ρ is forward premium on the foreign currency.
If the home nominal interest rate (R[H]) is lower than the foreign nominal interest rate (R[F]), the ratio of the home and foreign interest factor [(1 + R[H]) / (1 + R[F])] becomes less than 1, which
makes ρ negative. Because lower nominal interest rates in a country is consistent with lower inflation rates, a negative ρ is forward discount on the foreign currency. | {"url":"http://www.dummies.com/how-to/content/derivation-of-the-interest-rate-parity-irp.navId-816997.html","timestamp":"2014-04-17T02:24:55Z","content_type":null,"content_length":"57264","record_id":"<urn:uuid:ea4e5b05-99cc-450a-bb5a-df9ea7d0a779>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00086-ip-10-147-4-33.ec2.internal.warc.gz"} |
Other versions of the parsing problem
Next: Post Correspondence problem Up: Parsing and generation Previous: A restricted version of
Other definitions of the parsing and generation problems have been defined as well. For example, in [99] two relaxations of the foregoing definition of the p-parsing problem are discussed. Such
relaxations may in certain applications allow for simpler grammars.
The first relaxation assumes that it is possible to make a distinction between cyclic and non-cyclic labels. A non-cyclic label will be a label with a finite number of possible values (i.e. it is not
recursive). For example the labels arg1 and arg2 may be cyclic whereas the label number may be non-cyclic. The completeness and coherence condition can be restricted to the values of cyclic labels.
If the proof procedure can only further instantiate acyclic labels no termination problems occur because there are only a finite number of possibilities to do this.
For certain applications this may be useful. For example, consider the case where monolingual grammars define semantic structures which are annotated with some syntactic information as well. If the
completeness and coherence conditions are restricted to cyclic labels, the input to the generator may be under-specified with respect to these syntactic decorations. These syntactic labels can then
be filled in by the generator on the basis of the monolingual grammar.
The second relaxation has to do with reentrancies in feature structures. It is possible to define a version of the parsing problem that does not take into account such reentrancies. As will be
explained in more detail in section 5.4.2, it turned out that in using p-parsing problem was investigated that did not require completeness and coherence of such reentrancies. The possible usefulness
of this conception of the parsing problem will be discussed in section 5.4.2.
Another possibility is investigated in [92]. The basic intuition of his approach is that the parser (or generator) should come up with those signs that are as close as possible to the input
structure. That is, answers to the parsing and generation problem consist of those signs that `minimally extend' the input and `maximally overlap' the input. The notions `minimally extend' and
`maximally overlap' are defined with respect to other possible answers to the parsing problem.
The problem with this approach seems to be that, although interesting, the implementation is far from straightforward. The difficulty is increased by the fact that in this approach for the proof
procedure to know whether something is an answer to a goal it is necessary to take into account all other possible answers. In the other versions of the parsing problem answers are independent of
each other.
Next: Post Correspondence problem Up: Parsing and generation Previous: A restricted version of Noord G.J.M. van | {"url":"http://www.let.rug.nl/~vannoord/papers/diss/diss/node31.html","timestamp":"2014-04-18T08:03:29Z","content_type":null,"content_length":"7361","record_id":"<urn:uuid:b035d512-d286-4691-8b61-26cab86d4470>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pi approximation games
On Tue, 01 May 2012 18:16:25 -0500, Tim Wescott <(E-Mail Removed)>
>Instead of doing productive work, I just spent a few enjoyable minutes
>with Scilab finding approximations to pi of the form m/n.
>Because I'm posting to a couple of nerd groups, I can be confident that
>most of you probably know 22/7 off the tops of your heads.
>What interested me is how spotty things are -- after 22/7, the error
>drops for a bit until you get down to 355/113 (which, if you're at an
>equal level of nerdiness to me will ring a bell, but not have been
>swimming around in your brain to be found).
>But what's _really_ interesting, is that the next better fit isn't found
>until you get up to 52163/16604. Then things get steadily better until
>you hit 104348/33215 -- at which point the next lowest ratio which
>improves anything is 208341/66317, then 312689/99532. At this point I
>decided that I would post my answers for your amusement, and get back to
>being productive.
>Discrete math is so fun. And these newfangled chips are just destroying
>the joy, by making floating point efficient and cheap enough that you
>don't need to know little tricks like pi = (almost) 355/113.
My old HP35 calculators have a key for pi. The newer ones hide it, a
tiny pastel shift key thing. So I just key in 3.14. Rob down the hall
uses 3.
We are increasingly using floats in embedded stuff. Our ARM LPC3250
has SIMD hardware FP operations.
John Larkin Highland Technology, Inc
jlarkin at highlandtechnology dot com
Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom laser drivers and controllers
Photonics and fiberoptic TTL data links
VME thermocouple, LVDT, synchro acquisition and simulation | {"url":"http://www.motherboardpoint.com/pi-approximation-games-t254690.html","timestamp":"2014-04-16T05:08:11Z","content_type":null,"content_length":"67730","record_id":"<urn:uuid:96abfcce-664e-467f-9d21-4b803e8d6348>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |