Entropy (information theory)
Information entropy is the average rate at which information is produced by a stochastic source of data.
The measure of information entropy associated with each possible data value is the negative logarithm of the probability mass function for the value:
When the data source produces a low-probability value (i.e., when a low-probability event occurs), the event carries more "information" ("surprisal") than when the source data produces a
high-probability value. The amount of information conveyed by each event defined in this way becomes a random variable whose expected value is the information entropy. Generally, entropy refers to
disorder or uncertainty, and the definition of entropy used in information theory is directly analogous to the definition used in statistical thermodynamics. The concept of information entropy was
introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication".^[2]
The basic model of a data communication system is composed of three elements: a source of data, a communication channel, and a receiver, and – as expressed by Shannon – the "fundamental problem of
communication" is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel.^[3] The entropy provides an absolute limit on
the shortest possible average length of a lossless compression encoding of the data produced by a source, and if the entropy of the source is less than the channel capacity of the communication
channel, the data generated by the source can be reliably communicated to the receiver (at least in theory, possibly neglecting some practical considerations such as the complexity of the system
needed to convey the data and the amount of time it may take for the data to be conveyed).
Information entropy is typically measured in bits (alternatively called "shannons") or sometimes in "natural units" (nats) or decimal digits (called "dits", "bans", or "hartleys"). The unit of the
measurement depends on the base of the logarithm that is used to define the entropy.
The logarithm of the probability distribution is useful as a measure of entropy because it is additive for independent sources. For instance, the entropy of a fair coin toss is 1 bit, and the entropy
of m tosses is m bits. In a straightforward representation, log2(n) bits are needed to represent a variable that can take one of n values if n is a power of 2. If these values are equally probable,
the entropy (in bits) is equal to log2(n). If one of the values is more probable than the others, an observation that this value occurs is less informative than if some less common outcome had
occurred. Conversely, rarer events provide more information when observed. Since observation of less probable events occurs more rarely, the net effect is that the entropy (thought of as average
information) received from non-uniformly distributed data is always less than or equal to log2(n). Entropy is zero when one outcome is certain to occur. The entropy quantifies these considerations
when a probability distribution of the source data is known. The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account
the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.
The basic idea of information theory is that the "news value" of a communicated message depends on the degree to which the content of the message is surprising. If an event is very probable, it is no
surprise (and generally uninteresting) when that event happens as expected. However, if an event is unlikely to occur, it is much more informative to learn that the event happened or will happen. For
instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win.
However, knowledge that a particular number will win a lottery has high value because it communicates the outcome of a very low probability event. The information content (also called the surprisal) of an event E is an increasing function of the reciprocal of the probability p(E) of the event, precisely I(E) = log2(1/p(E)) = −log2(p(E)). Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that casting a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability (about 1/6) than each outcome of a coin toss (1/2).
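This comparison is easy to check numerically; the sketch below (plain Python, standard library only, with an illustrative function name) computes the entropy of a fair coin and a fair die from their probability distributions:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution, in units set by the log base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

coin = [1/2] * 2   # fair coin: two outcomes, each with probability 1/2
die = [1/6] * 6    # fair die: six outcomes, each with probability 1/6

print(entropy(coin))  # 1.0 bit
print(entropy(die))   # log2(6), about 2.585 bits
```

As expected, the die toss carries more entropy per trial than the coin toss.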
Entropy is a measure of the unpredictability of the state, or equivalently, of its average information content. To get an intuitive understanding of these terms, consider the example of a political
poll. Usually, such polls happen because the outcome of the poll is not already known. In other words, the outcome of the poll is relatively unpredictable, and actually performing the poll and
learning the results gives some new information; these are just different ways of saying that the a priori entropy of the poll results is large. Now, consider the case that the same poll is performed
a second time shortly after the first poll. Since the result of the first poll is already known, the outcome of the second poll can be predicted well and the results should not contain much new
information; in this case the a priori entropy of the second poll result is small relative to that of the first.
Consider the example of a coin toss. If the probability of heads is the same as the probability of tails, then the entropy of the coin toss is as high as it could be for a two-outcome trial. There is
no way to predict the outcome of the coin toss ahead of time: if one has to choose, there is no average advantage to be gained by predicting that the toss will come up heads or tails, as either
prediction will be correct with probability 1/2. Such a coin toss has one bit of entropy since there are two possible outcomes that occur with equal probability, and learning the actual outcome contains
one bit of information. In contrast, a coin toss using a coin that has two heads and no tails has zero entropy since the coin will always come up heads, and the outcome can be predicted perfectly.
Analogously, a binary event with equiprobable outcomes has a Shannon entropy of 1 bit. Similarly, one trit with equiprobable values contains log2(3) (about 1.58496) bits of information because it can have one of three values.
English text, treated as a string of characters, has fairly low entropy, i.e., is fairly predictable. If we do not know exactly what is going to come next, we can be fairly certain that, for example,
'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q',
or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message.^[4]
If a compression scheme is lossless - one in which you can always recover the entire original message by decompression - then a compressed message has the same quantity of information as the
original, but communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. Shannon's source coding theorem states a lossless
compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message, but that any value less than one bit of information per bit of message can be
attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains.
If one were to transmit sequences comprising the 4 characters 'A', 'B', 'C', and 'D', a transmitted message might be 'ABADDCAB'. Information theory gives a way of calculating the smallest possible
amount of information that will convey this. If all 4 letters are equally likely (25%), one can't do better (over a binary channel) than to have 2 bits encode (in binary) each letter: 'A' might code
as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. If 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable-length codes, so that receiving a '1'
says to look at another bit unless 2 bits of sequential 1s have already been received. In this case, 'A' would be coded as '0' (one bit), 'B' as '10', and 'C' and 'D' as '110' and '111'. It is easy
to see that 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to
the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect.
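The arithmetic above can be verified with a short script; the probabilities and the prefix code are taken directly from the example, while the variable names are illustrative:

```python
import math

probs = {'A': 0.70, 'B': 0.26, 'C': 0.02, 'D': 0.02}
code = {'A': '0', 'B': '10', 'C': '110', 'D': '111'}  # prefix-free code from the text

# Average code length: 0.7*1 + 0.26*2 + 0.02*3 + 0.02*3 = 1.34 bits per character
avg_len = sum(p * len(code[s]) for s, p in probs.items())

# Entropy: the lower bound on average bits per character for any lossless code
entropy = -sum(p * math.log2(p) for p in probs.values())

print(avg_len)   # 1.34
print(entropy)   # about 1.09
```

The code averages 1.34 bits per character, already well under 2, and the entropy of about 1.09 bits shows how much further an optimal scheme could go.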
Shannon's theorem also implies that no lossless compression scheme can shorten all messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In
practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or
digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger.
Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable X with possible values {x1, …, xn} and probability mass function P(X) as:

Η(X) = E[I(X)] = E[−log(P(X))]

Here E is the expected value operator, and I is the information content of X.^[5]:11 ^[6]:19–20 I(X) is itself a random variable.
The entropy can explicitly be written as

Η(X) = −∑i P(xi) logb(P(xi))

where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10.^[7]
In the case of P(xi) = 0 for some i, the value of the corresponding summand 0 logb(0) is taken to be 0, which is consistent with the limit:

lim p→0+ p log(p) = 0
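The convention 0·logb(0) := 0 can be checked numerically; this small sketch (assuming base 2) shows the summand vanishing as p → 0+ and that a zero-probability outcome leaves the entropy unchanged:

```python
import math

# The summand p * log2(p) tends to 0 as p -> 0+, justifying the 0*log(0) := 0 convention.
for p in (1e-3, 1e-6, 1e-9):
    print(p, p * math.log2(p))

def H(probs):
    """Entropy in bits, skipping zero-probability terms per the convention."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A distribution with a zero-probability outcome has the same entropy
# as the distribution with that outcome removed.
print(H([0.5, 0.5, 0.0]) == H([0.5, 0.5]))  # True
```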
One may also define the conditional entropy of two events X and Y taking values xi and yj respectively, as

Η(X|Y) = −∑i,j p(xi, yj) log( p(xi, yj) / p(yj) )

where p(xi, yj) is the probability that X = xi and Y = yj. This quantity should be understood as the amount of randomness in the random variable X given the random variable Y.
Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modelled as a Bernoulli process.
The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum
uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because

Η(X) = −(1/2) log2(1/2) − (1/2) log2(1/2) = 1
However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more likely to come
up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if p = 0.7, then

Η(X) = −0.7 log2(0.7) − 0.3 log2(0.3) ≈ 0.8813 bits
Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a
double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new
information as the outcome of each coin toss is always certain.
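These three cases (fair, biased, and double-headed coin) can be computed with a minimal binary-entropy function; the name `binary_entropy` is an illustrative choice, not a standard API:

```python
import math

def binary_entropy(p):
    """Entropy in bits of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # no uncertainty: the outcome is always certain
    q = 1.0 - p
    return -(p * math.log2(p) + q * math.log2(q))

print(binary_entropy(0.5))  # 1.0: fair coin, maximum uncertainty
print(binary_entropy(0.7))  # about 0.8813: biased coin, less than one bit per toss
print(binary_entropy(1.0))  # 0.0: double-headed coin, no new information
```

The function is maximized at p = 1/2 and falls to zero at either extreme, matching the discussion above.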
Entropy can be normalized by dividing it by information length. This ratio is called metric entropy and is a measure of the randomness of the information.
To understand the meaning of −∑ pi log(pi), first define an information function I in terms of an event i with probability pi. The amount of information acquired due to the observation of event i follows from Shannon's solution of the fundamental properties of information:^[8]
1. I(p) is monotonically decreasing in p : an increase in the probability of an event decreases the information from an observed event, and vice versa.
2. I(p) ≥ 0 : information is a non-negative quantity.
3. I(1) = 0 : events that always occur do not communicate information.
4. I(p1·p2) = I(p1) + I(p2) : information due to independent events is additive.
The last is a crucial property. It states that the joint probability of independent sources of information communicates as much information as the two individual events separately. Particularly, if the
first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes then there are mn possible outcomes of the joint event. This means that if log2(n) bits are needed
to encode the first value and log2(m) to encode the second, one needs log2(mn) = log2(m) + log2(n) to encode both. Shannon discovered that the proper choice of function to quantify information,
preserving this additivity, is logarithmic, i.e.,
Let I(p) be the information function, which one assumes to be twice continuously differentiable; differentiating the additivity condition I(p1·p2) = I(p1) + I(p2) with respect to each argument, one has:

p·I″(p) + I′(p) = 0

This differential equation leads to the solution I(p) = k log(p) for any constant k. Condition 2 leads to k < 0, and in particular I(p) can be chosen of the form I(p) = −logb(p) with b > 1, which is equivalent to choosing a specific base for the logarithm. The different units of information (bits for the binary logarithm log2, nats for the natural logarithm ln, bans for the decimal logarithm log10, and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides log2(2) = 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, n tosses provide n bits of information, which is approximately 0.693n nats or 0.301n decimal digits.
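The unit conversions quoted above follow directly from the change-of-base formula for logarithms; a quick numerical check:

```python
import math

bits = 1.0  # information from one fair coin toss

# Converting units is multiplication by a constant (change of logarithm base).
nats = bits * math.log(2)     # about 0.6931 nats
bans = bits * math.log10(2)   # about 0.3010 decimal digits (bans/hartleys)

print(nats, bans)
```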
If there is a distribution where event i can happen with probability pi, and it is sampled N times with an outcome i occurring ni = N pi times, the total amount of information we have received is

∑i ni I(pi) = −∑i N pi log(pi) = N·Η(X)

so the average amount of information received per sample is the entropy Η(X).
Relationship to thermodynamic entropy
The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics.
In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy,

S = −kB ∑ pi ln(pi)

where kB is the Boltzmann constant, and pi is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Boltzmann (1872).^[9]
The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy, introduced by John von Neumann in 1927,

S = −kB Tr(ρ ln ρ)

where ρ is the density matrix of the quantum mechanical system and Tr is the trace.
At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a
system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of
Boltzmann's constant kB indicates, the changes in S / kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to
anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which
is central to the definition of information entropy.
The connection between thermodynamics and what is now known as information theory was first made by Ludwig Boltzmann and expressed by his famous equation:

S = kB ln(W)

where S is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), W is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and kB is Boltzmann's constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is pi = 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently kB times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate.
In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted
as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the
macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it
increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See
article: maximum entropy thermodynamics). Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as
Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first
acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to
process a given amount of information, though modern computers are far less efficient.
Entropy as information content
Entropy is defined in the context of a probabilistic model. Independent fair coin flips have an entropy of 1 bit per flip. A source that always generates a long string of B's has an entropy of 0,
since the next character will always be a 'B'.
The entropy rate of a data source means the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per
character in English;^[10] the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text.
From the preceding example, note the following points:
1. The amount of entropy is not always an integer number of bits.
2. Many data bits may not convey information. For example, data structures often store information redundantly, or have identical sections regardless of the information in the data structure.
Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits (see caveat below
in italics). The formula can be derived by calculating the mathematical expectation of the amount of information contained in a digit from the information source. See also Shannon–Hartley theorem.
Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language
structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. See Markov chain.
Entropy as a measure of diversity
Entropy is one of several ways to measure diversity. Specifically, Shannon entropy is the logarithm of ¹D, the true diversity index with its parameter set equal to 1.
Entropy effectively bounds the performance of the strongest lossless compression possible, which can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or
arithmetic coding. See also Kolmogorov complexity. In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors.
World's technological capacity to store and communicate information
A 2011 study in Science estimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the
year 2007, therefore estimating the entropy of the technologically available sources.^[11]
All figures are in entropically compressed exabytes:

| Type of information | 1986 | 2007 |
|---|---|---|
| Storage | 2.6 | 295 |
| Broadcast | 432 | 1900 |
| Telecommunications | 0.281 | 65 |
The authors estimate humankind's technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories: storing information on a medium, receiving information through one-way broadcast networks, and exchanging information through two-way telecommunication networks.^[11]
Limitations of entropy as information content
There are a number of entropy-related concepts that mathematically quantify information content in some way:
• the self-information of an individual message or symbol taken from a given probability distribution,
• the entropy of a given probability distribution of messages or symbols, and
• the entropy rate of a stochastic process.
(The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case
of a stationary process.) Other quantities of information are also used to compare or relate different sources of information.
It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per
character, they are actually modeling the English language as a stochastic process and talking about its entropy rate. Shannon himself used the term in this way.
If very large blocks were used, the estimate of the per-character entropy rate may become artificially low, because the probability distribution of the sequence is not knowable exactly; it is only an estimate. For example, consider the text of every book ever published as a sequence, with each symbol being the text of a complete book. If there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book a unique identifier and
using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information
content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of
all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of
the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the
entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.
The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, …. Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, …, with F(1) = 1, F(2) = 1], and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.
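The rough figure of 7 bits/symbol can be reproduced by treating each of the first 128 Fibonacci numbers as a symbol and computing the empirical entropy of their frequencies (only the value 1 repeats, so nearly all symbols are unique):

```python
import math
from collections import Counter

# First 128 Fibonacci numbers, each treated as one symbol of the message.
fib = [1, 1]
while len(fib) < 128:
    fib.append(fib[-1] + fib[-2])

counts = Counter(fib)  # the value 1 occurs twice; the other 126 symbols are unique
n = len(fib)
H = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(H)  # just under log2(128) = 7 bits/symbol
```

The empirical entropy comes out slightly below 7 because the repeated symbol 1 makes the distribution marginally non-uniform; the short recurrence formula, by contrast, describes the whole sequence in a few dozen characters.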
Limitations of entropy in cryptography
In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) 2^127 guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly.^[12]^[13] Instead, a measure called guesswork can be used to measure the effort required for a brute force attack.^[14]
Other problems may arise from non-uniform distributions used in cryptography. Consider, for example, a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is
perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy) it may provide good security. But if the pad has 999,999 bits of
entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.
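The leakage caused by a fixed pad bit is easy to demonstrate; this sketch uses an 8-bit message for brevity, and the helper name `xor_bits` is illustrative:

```python
import secrets

def xor_bits(msg, pad):
    """One-time-pad encryption: XOR each message bit with the corresponding pad bit."""
    return [m ^ k for m, k in zip(msg, pad)]

msg = [secrets.randbits(1) for _ in range(8)]

# Flawed pad: the first bit is fixed to 0, only the remaining bits are random.
pad = [0] + [secrets.randbits(1) for _ in range(7)]
cipher = xor_bits(msg, pad)

print(cipher[0] == msg[0])  # True: the first plaintext bit leaks unencrypted
```

Because XOR with a fixed 0 bit is the identity, the first ciphertext bit always equals the first plaintext bit, regardless of how much entropy the rest of the pad carries.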
A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independently of the previous characters), the binary entropy is:

Η(S) = −∑i pi log2(pi)

where pi is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is:

Η(S) = −∑i pi ∑j pi(j) log2(pi(j))

where i is a state (certain preceding characters) and pi(j) is the probability of j given i as the previous character.
For a second-order Markov source, the entropy rate is

Η(S) = −∑i pi ∑j pi(j) ∑k pi,j(k) log2(pi,j(k))
In general, the b-ary entropy of a source 𝒮 = (S, P) with source alphabet S = {a1, …, an} and discrete probability distribution P = {p1, …, pn}, where pi is the probability of ai (say pi = p(ai)), is defined by:

Ηb(𝒮) = −∑i pi logb(pi)
Note: the b in "b-ary entropy" is the number of different symbols of the ideal alphabet used as a standard yardstick to measure source alphabets. In information theory, two symbols are necessary and
sufficient for an alphabet to encode information. Therefore, the default is to let b = 2 ("binary entropy"). Thus, the entropy of the source alphabet, with its given empiric probability distribution,
is a number equal to the number (possibly fractional) of symbols of the "ideal alphabet", with an optimal probability distribution, necessary to encode each symbol of the source alphabet. Also
note: "optimal probability distribution" here means a uniform distribution: a source alphabet with n symbols has the highest possible entropy (for an alphabet with n symbols) when the probability
distribution of the alphabet is uniform. This optimal entropy turns out to be logb(n).
A source alphabet with non-uniform distribution will have less entropy than if those symbols had uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency:

η(X) = Η(X) / logb(n) = −∑i pi logb(pi) / logb(n)

Applying the basic properties of the logarithm, this quantity can also be expressed as:

η(X) = −∑i pi logn(pi)

Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy logb(n). Furthermore, the efficiency is indifferent to the choice of (positive) base b, since changing the base rescales the numerator and denominator by the same factor, as the final expression in base n indicates.
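A sketch of the efficiency (normalized entropy) computation, illustrating its independence from the base; the function name is an illustrative assumption, not a standard API:

```python
import math

def efficiency(probs, base=2):
    """Normalized entropy: H(X) divided by the maximum entropy log(n)."""
    n = len(probs)
    H = -sum(p * math.log(p, base) for p in probs if p > 0)
    return H / math.log(n, base)

skewed = [0.70, 0.26, 0.02, 0.02]
print(efficiency(skewed))           # well below 1: the distribution is far from uniform
print(efficiency(skewed, base=10))  # same value: efficiency is base-independent
print(efficiency([0.25] * 4))       # 1.0: a uniform distribution has maximum efficiency
```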
Shannon entropy is characterized by a small number of criteria, listed below. Any definition of entropy satisfying these assumptions has the form

−K ∑i pi log(pi)

where K is a positive constant corresponding to a choice of measurement units.
In the following, pi = Pr(X = xi) and Ηn(p1, …, pn) = Η(X).
The measure should be continuous, so that changing the values of the probabilities by a very small amount should only change the entropy by a small amount.
The measure should be unchanged if the outcomes x**i are re-ordered.
The measure should be maximal if all the outcomes are equally likely (uncertainty is highest when all possible events are equiprobable).
For equiprobable events the entropy should increase with the number of outcomes.
(For continuous random variables with a given covariance, the multivariate Gaussian is the distribution with maximum differential entropy.)
The amount of entropy should be independent of how the process is regarded as being divided into parts.
This last functional relationship characterizes the entropy of a system with sub-systems. It demands that the entropy of a system can be calculated from the entropies of its sub-systems if the
interactions between the sub-systems are known.
Given an ensemble of n uniformly distributed elements that are divided into k boxes (sub-systems) with b1, …, bk elements each, the entropy of the whole ensemble should be equal to the sum of the entropy of the system of boxes and the individual entropies of the boxes, each weighted with the probability of being in that particular box.
For positive integers bi where b1 + … + bk = n,

Ηn(1/n, …, 1/n) = Ηk(b1/n, …, bk/n) + ∑i (bi/n) Ηbi(1/bi, …, 1/bi)

Choosing k = n, b1 = … = bn = 1, this implies that the entropy of a certain outcome is zero: Η1(1) = 0. This implies that the efficiency of a source alphabet with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory).
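The grouping property can be verified numerically for a small example (12 equiprobable elements split into boxes of 3, 4, and 5; these particular numbers are arbitrary):

```python
import math

def H(probs):
    """Shannon entropy in bits, skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

n, boxes = 12, [3, 4, 5]  # 12 equiprobable elements divided into boxes of 3, 4 and 5

# Entropy of the whole ensemble: log2(12)
lhs = H([1 / n] * n)

# Entropy of the system of boxes, plus each box's internal entropy
# weighted by the probability of landing in that box.
rhs = H([b / n for b in boxes]) + sum((b / n) * H([1 / b] * b) for b in boxes)

print(abs(lhs - rhs) < 1e-12)  # True: the grouping property holds
```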
The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the amount of information learned (or uncertainty eliminated) by revealing the value of
a random variable X:
• Adding or removing an event with probability zero does not contribute to the entropy: Ηn+1(p1, …, pn, 0) = Ηn(p1, …, pn).
• The entropy of a discrete random variable is a non-negative number: Η(X) ≥ 0.
• It can be confirmed using the Jensen inequality that

Η(X) = E[logb(1/p(X))] ≤ logb(E[1/p(X)]) = logb(n)

This maximal entropy of logb(n) is effectively attained by a source alphabet having a uniform probability distribution: uncertainty is maximal when all possible events are equiprobable.
• The entropy or the amount of information revealed by evaluating (X, Y) (that is, evaluating X and Y simultaneously) is equal to the information revealed by conducting two consecutive experiments: first evaluating the value of Y, then revealing the value of X given that you know the value of Y. This may be written as

Η(X, Y) = Η(Y) + Η(X|Y) = Η(X) + Η(Y|X)
• If Y = f(X) where f is a function, then Η(f(X)|X) = 0. Applying the previous formula to Η(X, f(X)) yields

Η(X) + Η(f(X)|X) = Η(f(X)) + Η(X|f(X))

so Η(f(X)) ≤ Η(X): the entropy of a variable can only decrease when the latter is passed through a function.
• If X and Y are two independent random variables, then knowing the value of Y doesn't influence our knowledge of the value of X (since the two don't influence each other by independence): Η(X|Y) = Η(X).
• The entropy of two simultaneous events is no more than the sum of the entropies of each individual event, with equality if and only if the two events are independent. More specifically, if X and Y are two random variables on the same probability space, and (X, Y) denotes their Cartesian product, then

Η(X, Y) ≤ Η(X) + Η(Y)
• The entropy is concave in the probability mass function, i.e.

Η(λp1 + (1 − λ)p2) ≥ λΗ(p1) + (1 − λ)Η(p2)

for all probability mass functions p1 and p2 and all 0 ≤ λ ≤ 1.^[15]:32
Extending discrete entropy to the continuous case
The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function f(x) with finite or infinite support on the real line is defined by analogy, using the above form of the entropy as an expectation:

h[f] = E[−log(f(X))] = −∫ f(x) log(f(x)) dx

This formula is usually referred to as the continuous entropy, or differential entropy. A precursor of the continuous entropy h[f] is the expression for the functional Η in the H-theorem of Boltzmann.
Although the analogy between both functions is suggestive, the following question must be posed: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably the limiting density of discrete points.
To answer this question, a connection must be established between the two functions, in order to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by $p_n$. As the continuous domain is generalised, the width must be made explicit.

To do this, start with a continuous function f discretized into bins of size $\Delta$. By the mean-value theorem there exists a value $x_i$ in each bin such that

$f(x_i) \Delta = \int_{i\Delta}^{(i+1)\Delta} f(x)\, dx$

and so the integral of the function f can be approximated (in the Riemannian sense) by

$\int_{-\infty}^{\infty} f(x)\, dx = \lim_{\Delta \to 0} \sum_{i=-\infty}^{\infty} f(x_i) \Delta$

where this limit and "bin size goes to zero" are equivalent. Denoting

$\mathrm{H}^{\Delta} := -\sum_{i=-\infty}^{\infty} f(x_i) \Delta \log\big(f(x_i) \Delta\big)$

and expanding the logarithm, we have

$\mathrm{H}^{\Delta} = -\sum_{i=-\infty}^{\infty} f(x_i) \Delta \log f(x_i) - \sum_{i=-\infty}^{\infty} f(x_i) \Delta \log \Delta$

Since the second term approaches $-\log \Delta$, and log(Δ) → −∞ as Δ → 0, this requires a special definition of the differential or continuous entropy:

$h[f] = \lim_{\Delta \to 0} \left( \mathrm{H}^{\Delta} + \log \Delta \right) = -\int_{-\infty}^{\infty} f(x) \log f(x)\, dx$

which is, as said before, referred to as the differential entropy. This means that the differential entropy is not a limit of the Shannon entropy for n → ∞; rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension).
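The infinite offset is concrete for the uniform density on [0, 1], where everything is available in closed form (this toy discretization is ours): the differential entropy is 0, while the entropy of the discretized distribution grows like −log Δ.

```python
import math

# Uniform density f(x) = 1 on [0, 1]: differential entropy h(f) = log(1) = 0.
n_bins = 1024
delta = 1.0 / n_bins
# Each bin carries probability p_i = f(x_i) * delta = delta.
discrete_H = -sum(delta * math.log(delta) for _ in range(n_bins))  # = log(n_bins)
differential_h = 0.0
# The discrete entropy exceeds the differential entropy by exactly -log(delta),
# which diverges as delta -> 0.
offset = discrete_H + math.log(delta)
```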
Limiting density of discrete points
It turns out that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also, it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when x is a dimensioned variable: f(x) will then have the units of 1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so the differential entropy as given above will be improper. If Δ is some "standard" value of x (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as $\mathrm{H} = -\int f(x) \log\big(f(x)\, \Delta\big)\, dx$, and the result will be the same for any choice of units for x. In fact, the limit of discrete entropy as $N \to \infty$ would also include a term of $\log N$, which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme.
Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the
distribution to a reference measure m as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e. is of the form p(dx) = f(x)m(dx) for some
non-negative m-integrable function f with m-integral 1, then the relative entropy can be defined as $D_{\mathrm{KL}}(p \,\|\, m) = \int f(x) \log f(x)\, m(dx)$.
In this form the relative entropy generalises (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the
Lebesgue measure. If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate
independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure m. The relative entropy, and implicitly entropy and differential
entropy, do depend on the "reference" measure m.
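With the counting measure as reference, the relative entropy reduces to the familiar discrete Kullback–Leibler divergence. A small sketch (the distributions are illustrative); relative to a uniform reference over n points it equals log n minus the Shannon entropy in nats:

```python
import math

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i log(p_i / q_i), in nats; terms with p_i = 0 vanish."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
m = [1 / 3, 1 / 3, 1 / 3]   # reference measure: a uniform probability distribution
d = kl_divergence(p, m)     # non-negative, and zero iff p == m
```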
Entropy has become a useful quantity in combinatorics.
Loomis–Whitney inequality
A simple example of this is an alternate proof of the Loomis–Whitney inequality: for every subset A ⊆ Z^d, we have

$|A|^{d-1} \le \prod_{i=1}^{d} |P_i(A)|$

where $P_i$ is the orthogonal projection in the i-th coordinate:

$P_i(A) = \{ (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d) : (x_1, \ldots, x_d) \in A \}$

The proof follows as a simple corollary of Shearer's inequality: if $X_1, \ldots, X_d$ are random variables and $S_1, \ldots, S_n$ are subsets of $\{1, \ldots, d\}$ such that every integer between 1 and d lies in exactly r of these subsets, then

$\mathrm{H}(X_1, \ldots, X_d) \le \frac{1}{r} \sum_{i=1}^{n} \mathrm{H}\big( (X_j)_{j \in S_i} \big)$

where $(X_j)_{j \in S_i}$ is the Cartesian product of random variables $X_j$ with indexes $j$ in $S_i$ (so the dimension of this vector is equal to the size of $S_i$).
We sketch how Loomis–Whitney follows from this: let X be a uniformly distributed random variable with values in A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) $\mathrm{H}(X) = \log |A|$, where $|A|$ denotes the cardinality of A. Let $S_i = \{1, 2, \ldots, i-1, i+1, \ldots, d\}$. The range of $(X_j)_{j \in S_i}$ is contained in $P_i(A)$, and hence $\mathrm{H}\big( (X_j)_{j \in S_i} \big) \le \log |P_i(A)|$. Now use this to bound the right side of Shearer's inequality and exponentiate both sides of the resulting inequality.
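The inequality is easy to sanity-check numerically for d = 3 (the helper name and the random test set below are ours): for any finite A ⊆ Z^3, |A|^2 should not exceed the product of the sizes of its three coordinate projections, with equality for a full box.

```python
import random

def projections_3d(A):
    """The three orthogonal projections of A ⊆ Z^3, dropping one coordinate each."""
    return [{(y, z) for (x, y, z) in A},
            {(x, z) for (x, y, z) in A},
            {(x, y) for (x, y, z) in A}]

random.seed(0)
A = {(random.randrange(5), random.randrange(5), random.randrange(5))
     for _ in range(40)}
lhs = len(A) ** 2                      # |A|^(d-1) with d = 3
rhs = 1
for P in projections_3d(A):
    rhs *= len(P)                      # product of projection sizes
```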
Approximation to binomial coefficient
For integers 0 < k < n let q = k/n. Then

$\frac{2^{n \mathrm{H}(q)}}{n+1} \le \binom{n}{k} \le 2^{n \mathrm{H}(q)}$

where $\mathrm{H}(q) = -q \log_2 q - (1-q) \log_2 (1-q)$ is the binary entropy.

Here is a sketch proof. Note that $\binom{n}{k} q^k (1-q)^{n-k}$ is one term of the expression

$\sum_{i=0}^{n} \binom{n}{i} q^i (1-q)^{n-i} = \big(q + (1-q)\big)^n = 1$

Rearranging gives the upper bound. For the lower bound one first shows, using some algebra, that it is the largest term in the summation. But then

$\binom{n}{k} q^k (1-q)^{n-k} \ge \frac{1}{n+1}$

since there are n + 1 terms in the summation. Rearranging gives the lower bound.

A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately $2^{n \mathrm{H}(k/n)}$.^[17]
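Both bounds can be checked directly; this sketch (n and k below are illustrative) compares $\log_2 \binom{n}{k}$ against $n \mathrm{H}(q)$:

```python
import math

def binary_entropy(q):
    """H(q) = -q log2 q - (1-q) log2 (1-q), for 0 < q < 1."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

n, k = 100, 30
q = k / n
log_binom = math.log2(math.comb(n, k))   # log2 of the binomial coefficient
upper = n * binary_entropy(q)            # log2 of the upper bound 2^(n H(q))
lower = upper - math.log2(n + 1)         # log2 of the lower bound 2^(n H(q))/(n+1)
```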
• Conditional entropy
• Cross entropy – a measure of the average number of bits needed to identify an event from a set of possibilities between two probability distributions
• Diversity index – alternative approaches to quantifying diversity in a probability distribution
• Entropy (arrow of time)
• Entropy encoding – a coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols.
• Entropy estimation
• Entropy power inequality
• Entropy rate
• Fisher information
• Graph entropy
• Hamming distance
• History of entropy
• History of information theory
• Information geometry
• Joint entropy – a measure of how much entropy is contained in a joint system of two random variables.
• Kolmogorov–Sinai entropy in dynamical systems
• Levenshtein distance
• Mutual information
• Negentropy
• Perplexity
• Qualitative variation – other measures of statistical dispersion for nominal distributions
• Quantum relative entropy – a measure of distinguishability between two quantum states.
• Rényi entropy – a generalization of Shannon entropy; it is one of a family of functionals for quantifying the diversity, uncertainty or randomness of a system.
• Randomness
• Shannon index
• Theil index
• Typoglycemia
References
1. Pathria, R. K.; Beale, Paul (2011). Statistical Mechanics (3rd ed.). Academic Press. p. 51. ISBN 978-0123821881.
2. Shannon, Claude E. (July–October 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x. hdl:11858/00-001M-0000-002C-4314-2.
3. Shannon, Claude E. (1948). "A Mathematical Theory of Communication" (PDF). Bell System Technical Journal. 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x. July and October.
4. Schneier, B. Applied Cryptography (2nd ed.). John Wiley and Sons.
5. Borda, Monica (2011). Fundamentals in Information Theory and Coding. Springer. ISBN 978-3-642-20346-6.
6. Han, Te Sun; Kobayashi, Kingo (2002). Mathematics of Information and Coding. American Mathematical Society. ISBN 978-0-8218-4256-0.
7. Schneider, T. D. Information Theory Primer with an Appendix on Logarithms. National Cancer Institute, 14 April 2007.
8. Carter, Tom (March 2014). An Introduction to Information Theory and Entropy (PDF). Santa Fe. Retrieved 4 August 2017.
9. Compare: Boltzmann, Ludwig (1896, 1898). Vorlesungen über Gastheorie: 2 volumes. Leipzig 1895/98. English version: Lectures on Gas Theory, translated by Stephen G. Brush (1964), Berkeley: University of California Press; (1995) New York: Dover. ISBN 0-486-68455-5.
10. Nelson, Mark (24 August 2006). "The Hutter Prize". Retrieved 27 November 2008.
11. Hilbert, Martin; López, Priscila (2011). "The World's Technological Capacity to Store, Communicate, and Compute Information". Science. 332 (6025).
12. Massey, James (1994). "Guessing and Entropy". Proc. IEEE International Symposium on Information Theory. Retrieved 31 December 2013.
13. Malone, David; Sullivan, Wayne (2005). "Guesswork is not a Substitute for Entropy". Proceedings of the Information Technology & Telecommunications Conference. Retrieved 31 December 2013.
14. Pliam, John (1999). "Guesswork and Variation Distance as Measures of Cipher Security". International Workshop on Selected Areas in Cryptography. doi:10.1007/3-540-46513-8_5.
15. Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. Hoboken, NJ: Wiley. ISBN 978-0-471-24195-9.
16. Aoki. New Approaches to Macroeconomic Modeling.
17. Mitzenmacher, M.; Upfal, E. Probability and Computing. Cambridge University Press.
18. Information Theory, Inference, and Learning Algorithms.
19. Mathematical Theory of Entropy.
20. Information Theory: A Tutorial Introduction.
The Theory of Parallel Universes is Not Just Math – it is Science That Can Be Tested
The multiverse view is not actually a theory, it is rather a consequence of our current understanding of theoretical physics.
This stunning group of galaxies is far, far away – about 450 million light-years from planet Earth – cataloged as galaxy cluster Abell S0740. Credit: NASA, ESA, Hubble Heritage Team (STScI / AURA)
The existence of parallel universes may seem like something cooked up by science fiction writers, with little relevance to modern theoretical physics. But the idea that we live in a “multiverse” made
up of an infinite number of parallel universes has long been considered a scientific possibility – although it is still a matter of vigorous debate among physicists. The race is now on to find a way
to test the theory, including searching the sky for signs of collisions with other universes.
It is important to keep in mind that the multiverse view is not actually a theory, it is rather a consequence of our current understanding of theoretical physics. This distinction is crucial. We have
not waved our hands and said: "Let there be a multiverse". Instead the idea that the universe is perhaps one of infinitely many is derived from current theories like quantum mechanics and string theory.
The many-worlds interpretation
You may have heard the thought experiment of Schrödinger’s cat, a spooky animal who lives in a closed box. The act of opening the box allows us to follow one of the possible future histories of our
cat, including one in which it is both dead and alive. The reason this seems so impossible is simply because our human intuition is not familiar with it.
But it is entirely possible according to the strange rules of quantum mechanics. The reason that this can happen is that the space of possibilities in quantum mechanics is huge. Mathematically, a
quantum mechanical state is a sum (or superposition) of all possible states. In the case of the Schrödinger’s cat, the cat is the superposition of “dead” and “alive” states.
But how do we interpret this to make any practical sense at all? One popular way is to think of all these possibilities as book-keeping devices so that the only “objectively true” cat state is the
one we observe. However, one can just as well choose to accept that all these possibilities are true, and that they exist in different universes of a multiverse.
Miaaaaultiverse. Credit: Robert Couse-Baker/Flickr, CC BY-SA
The string landscape
String theory is one of our most, if not the most promising avenue to be able to unify quantum mechanics and gravity. This is notoriously hard because gravitational force is so difficult to describe
on small scales like those of atoms and subatomic particles – which is the science of quantum mechanics. But string theory, which states that all fundamental particles are made up of one-dimensional
strings, can describe all known forces of nature at once: gravity, electromagnetism and the nuclear forces.
However, for string theory to work mathematically, it requires at least ten physical dimensions. Since we can only observe four dimensions: height, width, depth (all spatial) and time (temporal), the
extra dimensions of string theory must therefore be hidden somehow if it is to be correct. To be able to use the theory to explain the physical phenomena we see, these extra dimensions have to be
“compactified” by being curled up in such a way that they are too small to be seen. Perhaps for each point in our large four dimensions, there exists six extra indistinguishable directions?
A problem, or some would say a feature, of string theory is that there are many ways of performing this compactification: 10^500 possibilities is one number usually quoted. Each of these
compactifications will result in a universe with different physical laws – such as different masses of electrons and different constants of gravity. However there are also vigorous objections to the
methodology of compactification, so the issue is not quite settled.
But given this, the obvious question is: which of these landscape of possibilities do we live in? String theory itself does not provide a mechanism to predict that, which makes it useless as we can’t
test it. But fortunately, an idea from our study of early universe cosmology has turned this bug into a feature.
The early universe
In the very early universe, before the Big Bang, the universe underwent a period of accelerated expansion called inflation. Inflation was originally invoked to explain why the current observational universe is almost uniform in temperature. However, the theory also predicted a spectrum of temperature fluctuations around this equilibrium, which was later confirmed by several spacecraft such as the Cosmic Background Explorer, the Wilkinson Microwave Anisotropy Probe and the Planck spacecraft.
While the exact details of the theory are still being hotly debated, inflation is widely accepted by physicists. A consequence of this theory is that there must be other parts of the universe that are still accelerating. Due to the quantum fluctuations of space-time, some parts of the universe never actually reach the end state of inflation. This means that the universe is, at least according to our current understanding, eternally inflating. Some parts can therefore end up becoming other universes, which could in turn spawn other universes, and so on. This mechanism generates an infinite number of universes.
By combining this scenario with string theory, there is a possibility that each of these universes possesses a different compactification of the extra dimensions and hence has different physical laws.
The cosmic microwave background. Scoured for gravitational waves and signs of collisions with other universes. Credit: NASA / WMAP Science Team/wikimedia
Testing the theory
Since the universes predicted by string theory and inflation live in the same physical space (unlike the many universes of quantum mechanics, which live in a mathematical space), they can overlap or collide. Indeed, they inevitably must collide, leaving possible signatures in the cosmic sky which we can try to search for.
The exact details of the signatures depends intimately on the models – ranging from cold or hot spots in the cosmic microwave background to anomalous voids in the distribution of galaxies.
Nevertheless, since collisions with other universes must occur in a particular direction, a general expectation is that any signatures will break the uniformity of our observable universe.
These signatures are actively being pursued by scientists. Some are looking for them directly through imprints in the cosmic microwave background, the afterglow of the Big Bang; however, no such signatures have yet been seen. Others are looking for indirect support such as gravitational waves, which are ripples in space-time generated as massive objects pass through. Such waves could directly prove the existence of inflation, which would ultimately strengthen the support for the multiverse theory.
Whether we will ever be able to prove their existence is hard to predict. But given the massive implications of such a finding it should definitely be worth the search.
Eugene Lim is Lecturer in theoretical particle physics & cosmology at King’s College London
This article was originally published on The Conversation.
Improved Solutions to the Linearized Boussinesq Equation with Temporally Varied Rainfall Recharge for a Sloping Aquifer
Department of Soil and Water Conservation, National Chung Hsing University, Taichung 40227, Taiwan
Author to whom correspondence should be addressed.
Submission received: 27 March 2019 / Revised: 16 April 2019 / Accepted: 17 April 2019 / Published: 19 April 2019
Sloping unconfined aquifers are commonly seen and well investigated in the literature. In this study, we propose a generalized integral transformation method to solve the linearized Boussinesq
equation that governs the groundwater level in a sloping unconfined aquifer with an impermeable bottom. The groundwater level responses of this unconfined aquifer under temporally uniform recharge or
nonuniform recharge events are discussed. After comparison with a numerical solution to the nonlinear Boussinesq equation, the proposed solution appears more accurate than that of a previous study. In addition, the proposed solutions reached the convergence criterion much faster than the Laplace transform solution did. Moreover, the application of the proposed solution to temporally changing rainfall recharge is proposed to improve on the previous quasi-steady-state treatment of an unsteady recharge rate.
1. Introduction
Groundwater levels have been widely investigated through experimental or field data collection, numerical methods, and analytical approaches. In general, the groundwater level is difficult to estimate and predict compared with surface water. Groundwater level monitoring equipment is highly expensive; thus, substantial financial support is required. Therefore, some researchers study groundwater problems mainly by employing numerical methods, while others prefer analytical approaches.
It is necessary to quantify the hydrological processes under the hillside and develop appropriate approaches to describe these processes. Regarding this subject, many models have been developed over the past 40 years. Paniconi and Wood [ ] developed a three-dimensional finite element numerical model based on the Richards equation to deal with catchment-scale simulations. Such a large numerical code consumes much computer time and memory, and it is very hard to examine the validation of the code. Brutsaert [ ] derived an analytical solution to the linearized Boussinesq equation and studied the response of the groundwater flow per unit width of the slope with consideration of zero water depth at the downstream boundary condition, corresponding to the free drainage of the unconfined aquifer. The analytical method provides a powerful framework for analyzing the effects of different features of the slope on its hydrological response shape.

Chapman [ ] used a simple empirical method to build a power–law relationship between storage and discharge in a hillside. Later on, Berne et al. [ ] converted the linearized Boussinesq equation into a hillslope-storage Boussinesq equation and presented the moments of the characteristic response function (CRF) to study the latter equation with fixed recharge, following the research of Troch et al. [ ]. In fact, the thickness of the free seepage surface is known to vary with the configuration of the hillside, as noted by Chapman [ ]. Recently, Dralle et al. [ ] derived a new analytical solution to the linearized hillslope Boussinesq equation with spatially variable recharge by the method of eigenfunction expansion, and discussed the hydrologic response of topography to base-flow discharge properties. In their study, they claimed that their solutions exactly reproduce previous results, e.g., Verhoest and Troch [ ] and Troch et al. [ ], for the case of spatially uniform recharge, and perfectly match the numerical solutions by a finite difference scheme for the case of spatially variable recharge. However, the linearization constant, $\varepsilon = 2/3$ in Verhoest and Troch [ ] but $\varepsilon = 1$ in Dralle et al. [ ], is different, and in their modeling scenarios they hypothesized spatially and temporally variable recharge and simplified its distribution into two intervals only.
Because the groundwater level is mainly influenced by flow seepage and external recharge, some researchers have recently considered rainfall recharge. Verhoest and Troch [ ] performed the Laplace transformation to solve the linearized Boussinesq equation to estimate the groundwater level under the effect of rainfall recharge. In their study, a complex inversion formula (see Arfken and Weber [ ]) was applied to obtain the inverse Laplace transform by using a Bromwich integral. Moreover, the transient groundwater level was approximated by a steady-state condition that generated the same outflow. Zissis et al. [ ] discussed a groundwater table that was affected by a river's constant recharge and the variation in the water level of that river. They also linearized the nonlinear Boussinesq equation. The results proved that under the same conditions and when not considering rainfall recharge, the solution of the linearized equation is very similar to that of the nonlinear equation. However, the discrepancy between these two solutions becomes apparent for a case that has a high amount of rainfall recharge and a mild slope. Bansal and Das [ ] proposed a groundwater model to discuss the water table in an unconfined sloping aquifer under constant recharge and seepage from a stream in which the water level varied. In their model, the linearized Boussinesq equation was also employed as a governing equation.
Most published studies on groundwater problems have focused on uniform recharge, but this is not sufficient to delineate various real conditions such as rainfall recharge. Kazezyilmaz-Alhan [ ] used the Heaviside function (also known as the unit step function) to represent temporally changing rainfall events, and the transient variation in the overland flow was discussed by employing the diffusion wave theory. Such techniques are adopted to treat the source term in this study.
On the basis of Chapman's study [ ], when the angle of the impermeable bottom slope is less than 30°, the flow in the aquifer appropriately conforms to Dupuit's assumptions. Therefore, a modified one-dimensional Boussinesq equation is presented for groundwater flow in a sloping aquifer. In this text, the first section explains the research background, motivation, purpose, content and basic structure of the paper. The second section describes the mathematical derivation of the presented problem and its analytical solution, as well as the introduction of the generalized integral transformation method (GITM). In the third section, the differences among the present analytical solution, the previous analytical solution and the nonlinear numerical solution are discussed, and the groundwater level and flow fluctuations under different conditions are simulated. Finally, the results of this research are concluded.
2. Mathematical Formulation
2.1. Conceptual and Mathematical Models
The groundwater flow in a sloping unconfined aquifer (Figure 1) based on Darcy's law is governed by (see Childs [ ])

$q = -K H_w \left( \cos\theta \, \frac{\partial H_w}{\partial x} + \sin\theta \right)$ (1)

and the flow satisfies the following continuity equation according to the law of mass conservation:

$n \, \frac{\partial H_w}{\partial t} + \frac{\partial q}{\partial x} = r$ (2)

where $n$ is the drainable or effective porosity (-), $q$ is the flow rate in the $x$ direction per unit width of the aquifer (L^2/T), $K$ is the hydraulic conductivity (L/T), $H_w$ is the elevation of the groundwater table measured perpendicularly to the underlying impermeable layer (L), $\theta$ is the inclined angle of the aquifer bottom (-), and $r = r(t)$ is the rainfall recharge rate (L/T).
To investigate the groundwater flow problem in a sloping unconfined aquifer, we substituted Equation (1) into Equation (2) and obtained a Boussinesq equation for a sloping aquifer by assuming no spatial variability in $K$ and $n$:

$\frac{\partial H_w}{\partial t} = \frac{K}{n} \left[ \cos\theta \, \frac{\partial}{\partial x} \left( H_w \frac{\partial H_w}{\partial x} \right) + \sin\theta \, \frac{\partial H_w}{\partial x} \right] + \frac{r}{n}$ (3)

Brutsaert [ ] stated that the nonlinear term $H_w \, \partial H_w / \partial x$ on the right-hand side of Equation (3) can be linearized by changing the first $H_w$ to $\varepsilon D$, where $D$ is the thickness of the initially saturated aquifer and $\varepsilon$ is a linearization constant with $0 < \varepsilon < 1$. Thus, Equation (3) can be given as follows:

$\frac{\partial H_w}{\partial t} = \frac{K}{n} \left( \varepsilon D \cos\theta \, \frac{\partial^2 H_w}{\partial x^2} + \sin\theta \, \frac{\partial H_w}{\partial x} \right) + \frac{r}{n}$ (4)
Verhoest and Troch [ ] improved on the study by Brutsaert [ ] by adding a constant recharge to the aquifer. In their study, they assumed that water initially filled a rectangular aquifer to a depth of $D - h$, as displayed in Figure 1, where $h$ is the distance from the ground surface to the average groundwater level. Moreover, they assumed that a sudden drawdown at the outlet ($x = 0$) of the aquifer caused the depth of the water level to be zero, and a zero-inflow boundary existed at the hilltop ($x = L$). Hence, the initial condition was

$H_w = D - h, \quad 0 < x < L, \quad t = 0$ (5)

and the boundary conditions were

$H_w = 0, \quad x = 0, \quad t > 0$ (6)

$\varepsilon D \cos\theta \, \frac{\partial H_w}{\partial x} + H_w \sin\theta = 0, \quad x = L, \quad t > 0$ (7)
In our present study, we utilized the Heaviside function $u(t)$ to represent the temporally changing rainfall recharge rate:

$r(t) = \sum_{i=1}^{N} r_i \left[ u(t - t_{i-1}) - u(t - t_i) \right]$ (8)

Moreover, by substituting $\alpha = K \varepsilon D \cos\theta / n$ and $U = K \sin\theta / n$ into Equation (4), we obtained the following:

$\frac{\partial H_w}{\partial t} = \alpha \, \frac{\partial^2 H_w}{\partial x^2} + U \, \frac{\partial H_w}{\partial x} + \frac{1}{n} \sum_{i=1}^{N} r_i \left[ u(t - t_{i-1}) - u(t - t_i) \right]$ (9)
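The piecewise-constant recharge series built from Heaviside functions is straightforward to evaluate; a sketch in Python (the interval edges and rates are illustrative, not from the paper):

```python
def heaviside(t):
    """Unit step u(t): 0 for t < 0, 1 for t >= 0."""
    return 1.0 if t >= 0 else 0.0

def recharge(t, rates, edges):
    """Piecewise-constant r(t) = sum_i r_i [u(t - t_{i-1}) - u(t - t_i)].
    `edges` holds the interval boundaries t_0 < t_1 < ... < t_N;
    `rates` holds the N rates r_1 ... r_N."""
    return sum(r * (heaviside(t - edges[i]) - heaviside(t - edges[i + 1]))
               for i, r in enumerate(rates))

# Example: 3 mm/h for the first 6 h, then 1 mm/h until hour 12, then zero.
edges = [0.0, 6.0, 12.0]
rates = [3.0, 1.0]
```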
To eliminate the first-order derivative of $H_w$ with respect to $x$, we set

$H_w(x, t) = e^{-\frac{U}{2\alpha} x} \, e^{-\frac{U^2}{4\alpha} t} \, H_v(x, t)$ (10)

By substituting Equation (10) into Equation (9), the initial condition provided by Equation (5), and the boundary conditions given in Equations (6) and (7), we obtained the following:

$\frac{\partial^2 H_v(x, t)}{\partial x^2} + \frac{1}{\alpha n} \sum_{i=1}^{N} r_i \left[ u(t - t_{i-1}) - u(t - t_i) \right] e^{\frac{U}{2\alpha} x} \, e^{\frac{U^2}{4\alpha} t} = \frac{1}{\alpha} \, \frac{\partial H_v(x, t)}{\partial t}$ (11)

$H_v(x, 0) = e^{\frac{U}{2\alpha} x} (D - h), \quad 0 < x < L$ (12)

$H_v(0, t) = 0, \quad t > 0$ (13)

$2\alpha \, \frac{\partial H_v(L, t)}{\partial x} + U H_v(L, t) = 0, \quad t > 0$ (14)
2.2. Present Improved Solutions
In the present study, we employed the generalized integral transformation (GITM) of Özisik [
] to solve Equation (11) in terms of the following formulas. The GITM is usually employed to solve boundary value problems of heat conduction, which eliminates the spatially quadratic differential
term of the governing equation by inserting a kernel function with a space variable only, and then the partial differential equation is transformed into an ordinary differential equation with a time
variable. The ordinary differential equation is of a first-order type and easily solved. In the generalized integral inverse transformation, an infinite series with corresponding eigenvalues is
included. In theory, while the infinite series is calculated, its eigenvalues need to be evaluated and summed to a maximum number of terms to obtain a more accurate solution. In fact, the
integral-transform technique helps to reach a fast convergence of the infinite series.
Transform formula:

$\bar{H}_v(\beta_m, t) = \int_{x'=0}^{L} \xi(\beta_m, x') \, H_v(x', t) \, dx'$ (15)

The inverse transform formula can be given as

$H_v(x, t) = \sum_{m=1}^{\infty} \xi(\beta_m, x) \, \bar{H}_v(\beta_m, t)$ (16)

where

$\xi(\beta_m, x) \equiv \left[ \frac{2 (\beta_m^2 + \gamma^2)}{L (\beta_m^2 + \gamma^2) + \gamma} \right]^{1/2} \sin(\beta_m x)$ (17)

and, for the presented problem, $\beta_m$ is the $m$-th positive root of the eigencondition $\beta_m \cot(\beta_m L) = -\gamma$ with $\gamma = U / 2\alpha$. The solution to Equation (11) is
$H_v(x, t) = \sum_{m=1}^{\infty} e^{-\alpha \beta_m^2 t} \, B_m \, \eta_m \sin(\beta_m x) \left[ (D - h) + \frac{1}{n} \sum_{i=1}^{N} r_i \int_{t_{i-1}}^{t_i} e^{\frac{4\alpha^2 \beta_m^2 + U^2}{4\alpha} t'} \, dt' \right]$ (20)

where

$B_m = \frac{2 (\beta_m^2 + \gamma^2)}{L (\beta_m^2 + \gamma^2) + \gamma}$ (21)

$\eta_m = \int_0^L e^{\frac{U}{2\alpha} x} \sin(\beta_m x) \, dx = \frac{2\alpha \left[ 2\alpha \beta_m - 2\alpha \beta_m e^{\frac{U}{2\alpha} L} \cos(\beta_m L) + U e^{\frac{U}{2\alpha} L} \sin(\beta_m L) \right]}{U^2 + 4\alpha^2 \beta_m^2}$ (22)

By substituting Equation (20) into Equation (10), we obtained

$H_w(x, t) = e^{-\frac{U}{2\alpha} x} \, e^{-\frac{U^2}{4\alpha} t} \sum_{m=1}^{\infty} e^{-\alpha \beta_m^2 t} \, B_m \, \eta_m \sin(\beta_m x) \left[ (D - h) + \frac{1}{n} \sum_{i=1}^{N} r_i \int_{t_{i-1}}^{t_i} e^{\frac{4\alpha^2 \beta_m^2 + U^2}{4\alpha} t'} \, dt' \right]$ (23)

After the groundwater level has been estimated, the flow discharge at the outlet can be obtained by integrating Equation (2) as follows:

$q = -L \sum_{i=1}^{N} r_i \left[ u(t - t_{i-1}) - u(t - t_i) \right] + n \int_0^L \frac{\partial H_w}{\partial t} \, dx = -L \sum_{i=1}^{N} r_i \left[ u(t - t_{i-1}) - u(t - t_i) \right] - \sum_{m=1}^{\infty} B_m \, \eta_m \, \lambda_m \left( \alpha \beta_m^2 + \frac{U^2}{4\alpha} \right) e^{-\frac{4\alpha^2 \beta_m^2 + U^2}{4\alpha} t} \left[ n (D - h) + \sum_{i=1}^{N} r_i \int_{t_{i-1}}^{t_i} e^{\frac{4\alpha^2 \beta_m^2 + U^2}{4\alpha} t'} \, dt' \right]$ (24)

where

$\lambda_m = \int_0^L e^{-\frac{U}{2\alpha} x} \sin(\beta_m x) \, dx = \frac{2\alpha \left[ 2\alpha \beta_m - 2\alpha \beta_m e^{-\frac{U}{2\alpha} L} \cos(\beta_m L) - U e^{-\frac{U}{2\alpha} L} \sin(\beta_m L) \right]}{U^2 + 4\alpha^2 \beta_m^2}$ (25)
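Evaluating these series requires the eigenvalues β_m. The source omits the explicit eigencondition; assuming the sin-kernel condition β cos(βL) + γ sin(βL) = 0 with γ = U/2α (consistent with the Robin boundary condition on H_v at x = L), each root lies on a separate branch and can be found by bisection. The parameter values below are illustrative, not from the paper:

```python
import math

def eigenvalues(L, gamma, M, tol=1e-12):
    """First M positive roots of f(beta) = beta*cos(beta*L) + gamma*sin(beta*L) = 0.
    The m-th root lies in ((m - 1/2)*pi/L, m*pi/L), where f changes sign."""
    f = lambda b: b * math.cos(b * L) + gamma * math.sin(b * L)
    roots = []
    for m in range(1, M + 1):
        lo = (m - 0.5) * math.pi / L + 1e-9
        hi = m * math.pi / L
        while hi - lo > tol:              # plain bisection on the bracket
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

alpha, U, L = 0.5, 0.1, 100.0             # hypothetical aquifer parameters
gamma = U / (2.0 * alpha)
betas = eigenvalues(L, gamma, 5)
```

A root-finding library (e.g. Brent's method) would also work; bisection is used here only to keep the sketch dependency-free.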
The generalized integral transformation method differs from the Laplace and Fourier transform methods in that it can directly perform the integral operation with respect to the space variable over a finite field, a semi-infinite domain or an infinite domain. In reality, the surface replenishment intensity changes with time; therefore, we used the Heaviside function to represent temporally varying recharge rates, as shown in Equations (23) and (24), to analyze the groundwater level and flow, respectively.
3. Results and Discussions
3.1. Comparison of Analytical and Numerical Solutions
To validate the present analytical solutions, we followed the hypothetical case proposed by Verhoest and Troch [ ] with $D - h$ = 1.5 m, $K$ = 0.001 m/s, $n$ = 0.34, $r$ = 3 mm/h, and $\varepsilon = 2/3$. For comparison, a numerical solution to the nonlinear Boussinesq Equation (3), subjected to the conditions of Equations (5)–(7), was obtained. In the numerical method, we employed the central difference and the upwind scheme of Swanson and Turke [ ] with respect to space. The time derivative was solved by the third-order Total Variation Diminishing Runge–Kutta scheme proposed by Shu and Osher [ ].
After the parameters had been substituted into these solutions, the proposed analytical solution matched the numerical solution better than the solution of Verhoest and Troch [], as depicted in Figure 2, which illustrates the spatial changes of the groundwater levels for various bottom slopes. Figure 2 demonstrates that there is a large discrepancy between the curve of Verhoest and Troch [] and the curve of the numerical solution for constant recharge. However, the curve of the present solution is closer to the numerical solution. The shift between both solutions is displayed in Figure 2a,b; it decreases as the bottom slope increases. Furthermore, for the case of a simulation time of 3 days, the peak value of the present solution is close to that of the numerical solution, while the solution of Verhoest and Troch [] shifts to the right, as indicated in Figure 3. Because they solved the linearized governing equation by the Laplace transform method instead of the fully nonlinear one, their solution responded to the sloping effect more slowly than the numerical solution. Similar results can be found in Figure 4 for the case of a simulation time of 5 days. On the contrary, the present solution by GITM responded to the sloping effect a little faster than the nonlinear solution owing to the linearization, as shown in Figure 4.
To quantitatively assess the difference between analytical and numerical solutions, we proceeded to an error analysis by evaluating the relative percentage difference (RPD), which is defined as
$\mathrm{RPD} = \dfrac{H_w^{num} - H_w^{ana}}{H_w^{num}}$
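The RPD is a pointwise relative error between the two water-table profiles. A minimal sketch (the function and array names are ours, and the values are hypothetical, not the paper's data):

```python
def rpd(h_num, h_ana):
    """Pointwise relative percentage difference (H_num - H_ana) / H_num, in %."""
    return [100.0 * (n - a) / n for n, a in zip(h_num, h_ana)]

# Hypothetical water-table values [m] at a few x locations (illustrative only)
numerical = [1.50, 1.62, 1.71, 1.55]
analytical = [1.48, 1.58, 1.65, 1.54]
print(max(abs(v) for v in rpd(numerical, analytical)))  # worst-case RPD in %
```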
The error analysis of the groundwater level between the analytical solutions and the numerical solution is shown in Figure 5, Figure 6 and Figure 7 for different durations and different slopes. Figure 5 illustrates that the maximum RPD value of the present solution is 12% for 20 < $x$ < 80 m, but the maximum RPD value of the solution of Verhoest and Troch [] is 44%. This indicates that the present solution is much better. Comparing Figure 5a,b, we also found that the results of the present solution for $θ = 6°$ were better than those for $θ = 2°$. This implies that the accuracy might increase with the slope. A similar tendency can also be found in Figure 6 and Figure 7.
Moreover, while inspecting Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 carefully, we found that as the dip angle of the aquifer increases, the difference between Verhoest and Troch [] and the present solution becomes smaller, and it is speculated that $ε$ is the key factor affecting this difference. Verhoest and Troch [] used a constant linearization parameter $ε$ of 2/3, but Koussis [] argued that $ε$ should be affected by the net infiltration, slope, and hydraulic conductivity, and Brutsaert [] suggested that $H_w$ in the nonlinear term could be replaced by $εD$. Such a linearization will not create too much error in the solutions if the variation of the groundwater table is small. Based on the foregoing statements, this study adopts $ε$ = 0.17 in the case of $θ = 2°$ and $ε$ = 0.3 in the case of $θ = 6°$ to obtain better results.
Figure 8 displays the temporal change of the flow rate at the outlet for various bottom slopes. As can be seen from the figure, the trend of all three solutions is consistent. However, as the bottom slope increases, the present solution matches the numerical solution much better than the solution of Verhoest and Troch []. Although there is still a little discrepancy between the analytical linearized solution and the numerical nonlinear solution, the present solution improves the analytical results of the previous study. To sum up, the present solution is closer to the numerical solution of the nonlinear Boussinesq equation within a short period and tends to become constant and overlap with the numerical results over a long time. Therefore, the present solutions seem to be more feasible.
3.2. Comparison of Unsteady State and Quasi-Steady State
Moreover, Verhoest and Troch [] adopted a quasi-steady state method to calculate the groundwater response of a hillslope for a temporally changing recharge rate. However, the proposed unsteady state solutions can be directly applied to the same hillslope case without requiring any extra treatment, as depicted in Figure 9. Note that the discrepancy between both solutions is not large, and the slight difference primarily arises from the temporal treatment conducted in the study of Verhoest and Troch [].
In summary, Verhoest and Troch [] used the Laplace transform method to solve the partial differential equation specified by Equation (4); however, the inverse Laplace transform is extremely difficult to obtain, even by applying a complex inversion formula. Arfken and Weber [] used a Bromwich integral to overcome the problem; however, convergence could only be approached after a lengthy calculation. When the inclined angle $θ$ is equal to 2° or 6°, the summation in their solution requires the first 999 terms to reach convergence. However, the proposed solutions obtained by the generalized integral transformation method only require 15 terms to obtain convergence; a reasonable example would be 10 m for the groundwater level and 10 /day for the outflow.
4. Conclusions
A generalized integral transformation method can provide an improved solution to a linearized Boussinesq equation for a sloping unconfined aquifer. The presented analytical results combine the effect
of the bottom slope and the time-varying recharge pattern on the water table fluctuations. Owing to the limitations and difficulties of directly measuring the groundwater level, we developed a
mathematical model such that we can predict or simulate the variation in the groundwater level that can be affected by any rainfall recharge rates. Some conclusions are proposed as follows.
• According to the error analysis, in the case of a constant recharge rate for a sloping aquifer, the results of the proposed solution are better than those of Verhoest and Troch [] when compared with the numerical solutions; therefore, the present analytical solution appears to be more feasible than that proposed in the previous study.
• The proposed solutions reach the convergence criteria faster than the solutions of Verhoest and Troch [], thus saving computation time.
• The present solution can be directly applied to unsteady recharge rate cases without the requirement of the quasi-steady state method, which was employed in the study of Verhoest and Troch [].
Author Contributions
Conceptualization, P.-C.H.; Methodology, P.-C.H.; Software, M.-C.W.; Validation, M.-C.W.; Formal Analysis, M.-C.W.; Investigation, P.-C.H.; Resources, P.-C.H.; Data Curation, M.-C.W.;
Writing-Original Draft Preparation, M.-C.W.; Writing-Review & Editing, P.-C.H.; Visualization, P.-C.H.; Supervision, P.-C.H.; Project Administration, P.-C.H.; Funding Acquisition, P.-C.H.
This research was funded by the Ministry of Science and Technology of Taiwan, grant number MOST 106-2313-B-005-007-MY2, and the APC was funded by the Ministry of Science and Technology of Taiwan.
This study was financially supported by the Ministry of Science and Technology of Taiwan under Grant No. MOST 106-2313-B-005-007-MY2. In addition, this manuscript was mostly edited by Wallace Academic Editing.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the
decision to publish the results.
Figure 2. Spatial variation in the groundwater table under constant recharge for (a) $θ = 2 °$ (b) $θ = 6 °$ (t = 1 day).
Figure 3. Spatial variation in the groundwater table under constant recharge for (a) $θ = 2 °$ (b) $θ = 6 °$ (t = 3 days).
Figure 4. Spatial variation in the groundwater table under constant recharge for (a) $θ = 2 °$ (b) $θ = 6 °$ (t = 5 days).
Figure 5. Relative percentage difference (RPD) between analytical and numerical solutions. (a) $θ = 2 °$ (b) $θ = 6 °$ (t = 1 day).
Figure 9. Variation in the outflow corresponding to varying recharge rates, as illustrated in the study of Verhoest and Troch [].
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Wu, M.-C.; Hsieh, P.-C. Improved Solutions to the Linearized Boussinesq Equation with Temporally Varied Rainfall Recharge for a Sloping Aquifer. Water 2019, 11, 826. https://doi.org/10.3390/w11040826
Kuratowski's Theorem
Kuratowski's Theorem is critically important in determining if a graph is planar or not and we state it below.
Theorem 1 (Kuratowski's Theorem): A graph is planar if and only if it does not contain any subdivisions of the graphs $K_5$ or $K_{3,3}$.
We will not provide a formal proof; however, we will apply this theorem extensively. It turns out that this theorem is true since every non-planar graph is obtained by adding vertices and/or edges to a subdivision of either $K_5$ or $K_{3,3}$. Hence if we can identify that a subdivision of either of these graphs exists within our graph, then we can easily determine if the graph is planar or not.
For example, the following graph:
Notice that this graph is simply a subdivision of $K_{3,3}$ as shown:
Hence the graph is NOT planar.
It turns out that if a graph $G$ contains no subgraph that is itself a contraction of $K_5$ or $K_{3,3}$ then the graph itself is also planar. We will omit the proof.
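Kuratowski's criterion requires finding a subdivision, but a cheaper necessary condition derived from Euler's formula already convicts both forbidden graphs: any simple connected planar graph satisfies $E \leq 3V - 6$, and if it is also triangle-free (e.g. bipartite), $E \leq 2V - 4$. A small sketch (the function name is ours):

```python
def violates_planar_bounds(v, e, triangle_free=False):
    """Return True if the edge count alone already proves the graph non-planar."""
    if triangle_free:
        return e > 2 * v - 4   # bound for triangle-free planar graphs
    return e > 3 * v - 6       # bound for general simple planar graphs

# K_5: 5 vertices, C(5,2) = 10 edges; 10 > 3*5 - 6 = 9
print(violates_planar_bounds(5, 10))                     # True -> non-planar
# K_{3,3}: 6 vertices, 9 edges, bipartite hence triangle-free; 9 > 2*6 - 4 = 8
print(violates_planar_bounds(6, 9, triangle_free=True))  # True -> non-planar
```

Note the converse fails in general (the Petersen graph passes both bounds yet is non-planar), which is exactly why the full subdivision criterion of the theorem is needed.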
Confusing Christmas with Halloween
Understanding how computers work is essential in allowing us to use them as tools. We therefore need to translate some of their codes and numbers to things that we more naturally understand.
Unfortunately, there’s no escaping the mathematics of it – converting binary numbers to decimal and vice versa is an algebraic operation. Though it can be done with a scientific calculator, the
process is straightforward and can be done simply enough with a pen and some paper.
To convert from binary to decimal is the easiest base transformation there is. We’ll walk through an example by converting \(1001 \space 1010\) (the space is included for much the same reason we use
commas to partition decimal numbers into groups of three, to make it easier to read). Begin by writing down the binary powers. Starting from the far right, the powers begin with \(1\) and double in
size. \(1, 2, 4, 8, 16, 32, 64, 128\). Calculate as many powers as there are digits in the number, in this case there are eight. Next line up the powers with the binary digits.
Every power next to a zero is discarded. The rest of the remaining powers are added together. \(128 + 16 + 8 + 2 = 154\). To be technically correct, the number in each column is multiplied by the
base. As we’re dealing with binary, the only non-zero number is a one which doesn’t change the original number under multiplication. To give an example of converting a number from a different base to
decimal, we will look at the previously mentioned base sixteen: hexadecimal.
This is a useful base for Computer Science because sixteen is a power of two; specifically, it's two to the power of four i.e. \(2^{4} = 2 \times 2 \times 2 \times 2 = 16\). One of the most crucial parts of the performance of a modern computing device is how much memory it has, a figure usually measured in gigabytes. Giga (pronounced with a hard 'g' and not a 'j', Doc Brown) is an engineering term which is shorthand for a billion. A byte is a standard small measurement of a binary number with eight digits. You may think that a gigabyte would therefore be a billion bytes but alas, it's \(1,073,741,824\) bytes, just over 7% larger than a round billion. More on why that is later.
A single byte can be used to store a number from \(0000 \space 0000\) up to \(1111 \space 1111\) which is… well, we should probably work this out.
It’s exactly the sum of the first eight base two powers: \(128 + 64 + 32 +16 + 8 + 4 + 2 + 1\), which is \(255\). Notice that \(255\) is one less than the next base two power, \(256\). This is
similar to \(99\) in decimal – the largest two digit number – being one less than the third decimal power, \(100\).
Say we wanted to represent a byte (eight binary digits) as a decimal number, how many decimal digits would that require? Seeing as a byte can represent any number from \(0\) to \(255\), three digits
would be required. But that’s rather wasteful because those three digits can represent more than just one byte. Three decimal digits could also represent 256 which uses nine binary digits \(1 \space
0000 \space 0000\). Or \(999\) which requires ten binary digits \(11 \space 1110 \space 0111\).
Hexadecimal isn’t wasteful however. The fact that sixteen is a power of two means that we can model some binary number lengths exactly without any leftover digits. After mastering the Mayan base
twenty, this number system should be less tricky because it shares the ten digits from decimal, and six letter from the Latin alphabet. Decimal is abbreviated to DEC and hexadecimal to HEX.
DEC HEX DEC HEX DEC HEX DEC HEX
0 0 4 4 8 8 12 C
1 1 5 5 9 9 13 D
2 2 6 6 10 A 14 E
3 3 7 7 11 B 15 F
And to complete the cycle \(16_{DEC} = 10_{HEX}\).
The same technique for translating binary to decimal works for transforming hexadecimal to decimal. Begin with \(123_{HEX}\). This number has three digits so we need the first three powers of
sixteen, namely, \(16^{0}\), \(16^{1}\) and \(16^{2}\). Or \(1, 16\) and \(256\).
This is addition of all the individual digits multiplied by their powers: \((256 \times 1) + (16 \times 2) + (1 \times 3) = 256 + 32 + 3 = 291_{DEC}\).
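The same positional rule works for any base. A compact sketch using Horner's rule, which is algebraically equivalent to the digit-by-power sum above (names are ours):

```python
DIGITS = "0123456789ABCDEF"

def to_decimal(number: str, base: int) -> int:
    """Fold in each digit left to right: total = total * base + digit (Horner's rule)."""
    total = 0
    for ch in number.replace(" ", "").upper():
        total = total * base + DIGITS.index(ch)
    return total

print(to_decimal("123", 16))       # 291
print(to_decimal("1001 1010", 2))  # 154 -- the same routine covers binary
```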
One silly pastime of Computer Scientists is to write words using the letters that make up hexadecimal. Remember, though the letters A to F are part of our Latin alphabet, in hexadecimal they're numbers. Look back at the table above to remind yourself what each represents.
Face Off, a ludicrous facial-surgery high-jinks film, is also a hexadecimal number – if we replace the letter 'O' with the zero digit.
16,777,216 1,048,576 65,536 4,096 256 16 1
F A C E 0 F F
A calculator is required certainly but the steps to break down the conversion are still the same:
\[15 \times 16,777,216 +\]
\[10 \times 1,048,576 +\]
\[12 \times 65,536 +\]
\[14 \times 4,096 +\]
\[0 \times 256 +\]
\[15 \times 16 +\]
\[15 \times 1\]
\[= 262,988,031\]
So \(FACE0FF_{HEX} = 262,988,031_{DEC}\).
Now we’re more comfortable with hexadecimal, let’s consider representing a byte i.e. eight binary digits, as a hexadecimal number. \(0_{BIN}\) is similarly \(0_{HEX}\). What about a larger byte value
though? \(1010 \space 1100_{BIN}\) is \(172_{DEC}\), but in hexadecimal it’s \(AC_{HEX}\) (incidentally for fans of Aussie rock, \(ACDC_{HEX} = 44,252_{DEC}\), a good name for a tribute band that
hits that crucial classic rock / Computer Science crossover movement). The lack of wastefulness becomes apparent when considering \(1111 \space 1111_{BIN}\) in hexadecimal: \(FF_{HEX}\). Note that
the maximum binary value in eight digits is the maximum hexadecimal value in two digits. Adding one to this number pushes the binary form into the next column as does the hexadecimal form: \(1 \space
0000 \space 0000_{BIN} = 100_{HEX}\), or \(256_{DEC}\). This relationship occurs again and again with these two number systems. It’s for this reason that hexadecimal is often used to neatly capture a
binary number in a shorter form that humans are less likely to mistranscribe.
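These correspondences are easy to confirm with Python's built-in conversions; each hexadecimal digit maps to exactly one four-bit nibble:

```python
byte = 0b10101100           # the example byte from the text
print(hex(byte))            # 0xac
print(f"{byte:08b}")        # 10101100 -- each hex digit is one nibble
print(int("FACE0FF", 16))   # 262988031
print(int("ACDC", 16))      # 44252
print(int("11111111", 2), int("FF", 16), int("100", 16))  # 255 255 256
```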
Digression over, let’s return to binary number conversion – this time from decimal to binary.
Converting the original example \(154_{DEC}\) back to binary is a little more complex but not unreasonably so. We discover the binary version by writing down one digit at a time from right to left. The digit written down is the remainder after dividing by the base. In this example we're repeatedly dividing by two – if the number is divided by two exactly i.e. it's an even number, then a \(0\) is written, otherwise if the number is divided by two with a remainder of \(1\) i.e. it's an odd number, then a \(1\) is written.
154 divided by 2 is 77 with no remainder. A zero is written down.
\[0\]
77 divided by 2 is 38 with remainder one. A one is written to the left.
\[10\]
38 divided by 2 is 19 with no remainder. Another zero is written.
\[010\]
19 divided by 2 is 9 with remainder one. The pattern continues.
\[1010\]
9 divided by 2 is 4 with remainder one.
\[1 \space 1010\]
4 divided by 2 is 2 with no remainder.
\[01 \space 1010\]
2 divided by 2 is 1 with no remainder.
\[001 \space 1010\]
And the final number left over is a one, which when divided by two yields zero remainder one.
\[1001 \space 1010\]
Arriving at zero is the terminating case for this list of steps. Indeed, \(1001 \space 1010_{BIN}\) is the number we initially converted to \(154_{DEC}\).
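The repeated-division recipe translates directly into code (the function name is ours; `bin(154)` is the built-in shortcut):

```python
def decimal_to_binary(n: int) -> str:
    """Emit remainders after division by two, right to left, until zero is reached."""
    if n == 0:
        return "0"
    digits = ""
    while n > 0:                      # reaching zero is the terminating case
        digits = str(n % 2) + digits  # each remainder goes to the left
        n //= 2
    return digits

print(decimal_to_binary(154))  # 10011010
```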
One last example and we’ll conclude this extremely important topic. Another number system (admittedly used less frequently than binary or hexadecimal) is called Octal (OCT) which only has eight
digits. Like binary, it uses the digits from the decimal system but just the first eight: \(0, 1, 2, 3, 4, 5, 6\) and \(7\). Similar patterns abound: \(7_{OCT} + 1_{OCT} = 10_{OCT} = 8_{DEC}\).
Using the same rule for converting decimal numbers to binary, we will now convert \(25_{DEC}\) to octal.
\(25\) divided by \(8\) is \(3\) with remainder one. We write down the remainder at the far right.
\(3\) divided by \(8\) is zero (the terminating case) with remainder three. The remainder is written down to the left of the previous digit.
And already we’re done. \(25_{DEC} = 31_{OCT}\). Which explains why Computer Scientists get Christmas and Halloween confused.
Quantitative mixing for locally Hamiltonian flows with saddle loops on compact surfaces
Given a compact surface M with a smooth area form ω, we consider an open and dense subset of the set of smooth closed 1-forms on M with isolated zeros which admit at least one saddle loop homologous
to zero and we prove that almost every element in the former induces a mixing flow on each minimal component. Moreover, we provide an estimate of the speed of the decay of correlations for smooth
functions with compact support on the complement of the set of singularities. This result is achieved by proving a quantitative version for the case of finitely many singularities of a theorem by
Ulcigrai (Ergod Theory Dyn Syst 27(3):991–1035, 2007), stating that any suspension flow with one asymmetric logarithmic singularity over almost every interval exchange transformation is mixing. In
particular, the quantitative mixing estimate we prove applies to asymmetric logarithmic suspension flows over rotations, which were shown to be mixing by Sinai and Khanin.
Exceptionally Beautiful Symmetries | Tamás Görbe
The classification of (semi)simple Lie algebras over the field of complex numbers is regarded by many as a jewel of mathematics. It was first described by German mathematician Wilhelm Killing in a
series of papers published between 1888-1890. A more rigorous proof (and the case of real Lie algebras) was presented by Élie Cartan in his 1894 PhD thesis. In 1947 the 22-year old Eugene Dynkin
worked out a modern, streamlined proof of the classification theorem.
The theorem states that every semisimple complex Lie algebra is a "sum of building blocks", most of which belong to one of four infinite families. These are denoted by A$_n$, B$_n$, C$_n$, D$_n$,
where $n$ is an arbitrary positive integer. Surprisingly, there exist five exceptional "building blocks" that don't fit into the above families. They are named E$_6$, E$_7$, E$_8$, F$_4$, G$_2$.
The classification theorem heavily relies on abstract mathematical objects called root systems, which are symmetric configurations of vectors (usually) sitting in higher dimensional space. The
dimension of this space is indicated by the subscripts, e.g. E$_8$ lives in an $8$-dimensional Euclidean space.
How it's made
The frame of a cube can cast shadows of different shapes, but only a few orientations lead to the most symmetric shadows, namely the projections through one of the four body diagonals. See the image
on the right. The method of finding the "most symmetric shadows" can be generalized to higher dimensions, and this is how the string arts of the exceptional root systems E$_6$, E$_7$, E$_8$, F$_4$,
G$_2$ were made. The needles of the string arts point where the end points of root vectors land after projection.
The connections are obtained by connecting every vector with their nearest neighbours before the projection.$^\dagger$ A fun fact is that due to the left-right symmetry of the connections we have an even number of threads meeting at every needle (in graph theory such a structure is called an Eulerian circuit), meaning that the connections with different colours can be drawn using a single, but really long, piece of thread without cutting.$^\ddagger$
$^\dagger$ There are some extra connections in the case of G$_2$.
$^\ddagger$ There's only one exception. Can you guess where?
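The "most symmetric shadow" along a body diagonal can be checked numerically. The following sketch (our construction, not the site's) projects the cube's eight corners onto the plane perpendicular to (1,1,1) and confirms the shadow is a centre point plus a regular hexagon:

```python
from itertools import product
from math import sqrt

# u and v form an orthonormal basis of the plane perpendicular to (1,1,1).
u = (1 / sqrt(2), -1 / sqrt(2), 0.0)
v = (1 / sqrt(6), 1 / sqrt(6), -2 / sqrt(6))

shadow = set()
for corner in product((0, 1), repeat=3):
    x = sum(c * ui for c, ui in zip(corner, u))
    y = sum(c * vi for c, vi in zip(corner, v))
    shadow.add((round(x, 6), round(y, 6)))

# Both ends of the chosen diagonal land on the origin, so 8 corners give
# 7 shadow points; the outer 6 form a regular hexagon of radius sqrt(2/3).
print(len(shadow))  # 7
hexagon = [p for p in shadow if p != (0.0, 0.0)]
print(all(abs(px * px + py * py - 2 / 3) < 1e-5 for px, py in hexagon))  # True
```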
Click the image to download a poster. [PDF, 3.7 MB]
Parameter Estimation Based on Censored Data under Partially Accelerated Life Testing for Hybrid Systems due to Unknown Failure Causes
Computer Modeling in Engineering & Sciences
DOI: 10.32604/cmes.2022.017532
Department of Basic Sciences, College of Science and Theoretical Studies, Saudi Electronic University, Dammam, 32256, Saudi Arabia
*Corresponding Author: Mustafa Kamal. Email: m.kamal@seu.edu.sa; kamal19252003@gmail.com
Received: 18 May 2021; Accepted: 11 October 2021
Abstract: In general, simple subsystems like series or parallel are integrated to produce a complex hybrid system. The reliability of a system is determined by the reliability of its constituent
components. It is often extremely difficult or impossible to get specific information about the component that caused the system to fail. Unknown failure causes are instances in which the actual
cause of system failure is unknown. On the other side, thanks to current advanced technology based on computers, automation, and simulation, products have become incredibly dependable and
trustworthy, and as a result, obtaining failure data for testing such exceptionally reliable items have become a very costly and time-consuming procedure. Therefore, because of its capacity to
produce rapid and adequate failure data in a short period of time, accelerated life testing (ALT) is the most utilized approach in the field of product reliability and life testing. Based on
progressively hybrid censored (PrHC) data from a three-component parallel series hybrid system that failed to owe to unknown causes, this paper investigates a challenging problem of parameter
estimation and reliability assessment under a step stress partially accelerated life-test (SSPALT). Failures of components are considered to follow a power linear hazard rate (PLHR), which can be
used when the failure rate displays linear, decreasing, increasing or bathtub failure patterns. The Tempered random variable (TRV) model is considered to reflect the effect of the high stress level
used to induce early failure data. The maximum likelihood estimation (MLE) approach is used to estimate the parameters of the PLHR distribution and the acceleration factor. A variance covariance
matrix (VCM) is then obtained to construct the approximate confidence intervals (ACIs). In addition, studentized bootstrap confidence intervals (ST-B CIs) are also constructed and compared with ACIs
in terms of their respective interval lengths (ILs). Moreover, a simulation study is conducted to demonstrate the performance of the estimation procedures and the methodology discussed in this paper.
Finally, real failure data from the air conditioning systems of an airplane is used to illustrate further the performance of the suggested estimation technique.
Keywords: Step stress partially accelerated life test; progressive hybrid censoring; data masking; power linear hazard rate distribution; hybrid system
Computers, cellphones, and many other systems of competitive innovation and modernization are getting increasingly complex, with numerous subsystems and sub-assemblies in each. Furthermore, these
subsystems and subassemblies are made up of several components, making the life testing procedure for such systems more complicated. The study of reliability and life testing of systems, subsystems,
or components depends entirely on lifetime data, which is a combination of two important pieces of information. The first is to determine the product's failure time, and the second is to determine
the reasons for its failure. The failure times of the system can easily be recorded, but the cause of the failure is not always recognized due to a variety of factors, including a lack of adequate
diagnosis, time and expense constraints for comprehensive failure analysis, and numerous component failures with destructive consequences. As a result, in the literature of reliability and life
testing analysis, such data in which the true cause of system failure is unknown and only a minimum random subset of the reasons that are responsible for system failure can be recognized is referred
to as masked data. See [1,2] for more details.
Thanks to existing advanced technology based on computers, automation, and simulation, the overall quality of today's products has improved drastically, which makes them extremely reliable and trustworthy. Consequently, gathering failure data for testing such extremely reliable products using ordinary reliability tests (ORTs) has become a very costly and time-consuming process, making
the use of ORTs impractical. Hence, ALTs are a more advanced approach for obtaining fast failure data by testing items under higher stress than normal, and then, a life stress relationship is used to
get the product’s life characteristics under normal usage settings. According to [3], stress loading in ALT may be performed in a variety of ways, although the most commonly used stress loadings are
constant, step, and progressive stress loadings. Many scholars so far have looked at the ALT models, including [4–10]. Assuming a lognormal lifetime distribution, Li et al. [5] proposed two types of
Bayesian accelerated acceptance sampling plans for illustrating product reliability based on the product’s operating characteristic curve under Type-I censoring. The first plan addresses both
producer and customer risks at the same time, whereas the second exclusively considers consumer risk. Rahman et al. [9] used MLE methods for estimating the parameters of Burr-X life distribution
parameters assuming that failure under arithmetically increasing stress levels of CSALT forms a geometric process. Ma et al. [10] proposed an optimum hybrid accelerated test plan under many
experimental design restrictions by combining ALT with accelerated degradation testing and modeling the degradation process with an inverse Gaussian process.
However, there are situations when these sorts of relationships are not possible. Because of the prevalence of this issue in ALTs, PALTs are considered to be a preferable option and are usually
implemented in two ways. The first is constant stress PALT, while the second is SSPALT. In SSPALT, a sample of components or systems is first tested under normal usage circumstances for a certain
amount of time, and then systems or components that have not failed are allocated to testing at accelerated conditions until all items fail or a predetermined censorship scheme is met. SSPALT
analysis has been considered by many authors since it was first suggested by [11,12] as a TRV model. Bhattacharyya et al. [13] developed a tampered failure rate model using the TRV model. Bai et al.
[14–16] are others who considered SSPALT using the same concept of the TRV model for different distributions and censoring schemes. Assuming the TRV model, Zhang et al. [17] discussed the MLEs of the
unknown parameters for the extended Weibull distribution. Ismail et al. [18,19] investigated the MLEs of the parameters of the Weibull and Burr Type-XII distributions, respectively, and compared the
results based on two different PrHC schemes. Mahmoud et al. [20–22] deals with SSPALT based on an adaptive type PrHC scheme to obtain estimates of parameters using MLEs and Bayesian estimates (BEs)
of generalized Pareto, two-parameter exponentiated Weibull and Lindley distributions, respectively.
A considerable number of studies have been carried out on parameter estimation using masked data based on ORTs since it was first introduced by [1,2]. Considering different failure distributions, Guess et al. [23–33] utilized the MLE technique for estimating model parameters for a single component or for series or parallel systems of two or three components, whereas Reiser et al. [34–43] considered the
BE technique based on different priors. As it was discussed earlier, many real-life systems or machines these days are made of hybrid structures which are a combination of series and parallel
subsystems. More complex systems can have many hybrid subsystems which are generally connected in parallel-series, series-parallel, series-parallel-series, and parallel-series-parallel configurations
and so on. Peng et al. [44] developed a Bayesian approach for system reliability evaluation and prediction in which pass-fail data, lifespan data, and deterioration data are integrated coherently at
multiple system levels. Yang et al. [45] developed an Adaptive Bayesian Melding reliability evaluation technique for analyzing and assessing the reliability of a hierarchical system with imperfect
prior knowledge. They also expanded the concept to a broader multi-level hierarchical structure by employing a more effective method of pooling inconsistent priors. Yang et al. [46] proposed a
Bayesian reliability approach for complex systems with dependent life metrics and developed a likelihood decomposition method to convert the overall likelihood into a product of explicit and implicit
evidence-based likelihood functions. As of now, only a limited number of research papers considering hybrid systems based on ORTs have been published. Considering three component parallel-series and
series-parallel systems, Wang et al. [47] obtained the MLEs of the parameters based on masked data assuming constant and linear hazard rate of independent components. Sha et al. [48] considered a
hybrid system of three dependent components and obtained the MLEs based on masked data assuming a bivariate exponential distribution. Cai et al. [49] considered the same system as in [48] and derived the MLEs based on a copula function under masked causes of failure. Recently, Rodrigues et al. [50,51] used more complex structures based on four or five components and obtained MLEs and BEs under incomplete data.
So far, only a few ALT studies focusing on hybrid systems and masked data have been published. Under SSALT, Reference [52] describes the procedure for obtaining MLEs of the parameters of the Weibull distribution for a series system based on an expectation-maximization algorithm assuming symmetric masking. Considering masked interval data in SSALT, Fan et al. [53]
obtained estimates of the parameters of the exponential distribution. Xu et al. [54,55] described the general Bayesian analysis of the series system masked failure data for the log-location-scale and
Weibull distributions, respectively, under SSALT. Assuming the same exponential hazard rate for a four-component hybrid system, Shi et al. [56,57] obtained the MLEs of the unknown parameters of the model
based on the masked data under SSPALT and CSPALT respectively. Shi et al. [58] considered two different hybrid systems of three components and then obtained MLEs of the modified Weibull distribution.
To the best of our knowledge, no article has yet been published that considers SSALT for the PLHR distribution for a hybrid system under masked data. Most of the sources listed above considered monotonic hazard rates, but there are cases where the hazard rate is not monotonic. The primary goal of this work is to describe SSPALT using a more flexible PLHR distribution that can be used
when failure rates indicate non-monotonic characteristics for hybrid systems. To illustrate the considered estimation procedure under SSPALT using PrHC masked data, a three-component hybrid system is
considered. The rest of the paper is organized as follows. Section 2 addresses the formulation of the SSPALT model for PrHC masked data from a hybrid system and some useful assumptions. The MLEs and the corresponding ACIs and ST-B CIs of the parameters and the acceleration factor are discussed in Section 3. In Section 4, we conduct a simulation study to demonstrate the performance of the estimation procedures and the methodology discussed in this paper. Section 5 addresses the suggested approach’s applicability to real-life data. Finally, we conclude our study in Section 6.
2 Design and Assumptions of the Model
In reliability and life testing experiments, the hazard rate is the most important function since it characterizes the aging process of systems and hence helps in classifying failure time distributions; more details can be found in [59]. Commonly used hazard rates are the constant, linear and power hazard rates. Distributions derived from these hazard rates are very useful and popular in reliability and life testing theory when the failure rate exhibits monotonic behaviour, but they cannot be used to fit non-monotonic failure rates. Therefore, in
this paper, the more flexible PLHR distribution introduced by [60], derived from a combination of linear and power hazard rates, is considered. Different shapes of the hazard rate and density function of the PLHR distribution for different parameter values are given in Figs. 1 and 2, respectively. The probability density function (PDF), cumulative distribution function (CDF), reliability function (RF) and hazard rate (HR) of the PLHR distribution are given respectively by the following equations:
f(t, γ, κ) = (t + γt^κ) exp(−(1/2)t² − (γ/(κ+1)) t^(κ+1)); t > 0, γ > 0, κ > −1 (1)
F(t) = 1 − exp(−(1/2)t² − (γ/(κ+1)) t^(κ+1)) (2)
R(t) = exp(−(1/2)t² − (γ/(κ+1)) t^(κ+1)) (3)
h(t) = t + γt^κ (4)
In this paper, the following assumptions are made in order to describe the SSPALT model for a hybrid structure under PrHC masked data:
1. Xξ, ξ = 1, 2 represents the two stress levels, i.e., X1 and X2 are the normal and accelerated stress levels, respectively, used in SSPALT.
2. The system under consideration is a series-parallel system containing three independent components j = 1, 2, 3, as described in Fig. 3.
3. The lifetimes Tξi, ξ = 1, 2; i = 1, 2, …, n of the systems tested under SSPALT are i.i.d. at both normal and accelerated test conditions.
4. The data masking mechanism is statistically independent of the various stress conditions used in the experiment and of the actual cause of failure of the system.
5. The lifetime of the jth component at the normal stress X1 in the considered hybrid system follows the PLHR distribution, with PDF, CDF, RF and HR given by Eqs. (1)–(4), respectively.
6. The total lifetime T of a component in the hybrid system under SSPALT is explained by the TRV model and can be written as follows:
T = { X,                 if X ≤ τ
      τ + ϑ⁻¹(X − τ),    if X > τ     (5)

where X represents the lifetime of the component at X1, τ is the stress change time at which items are switched from X1 to X2, and ϑ > 1 is the acceleration factor reflecting the effect of the stress change. The lifetime distribution of the component at X2 can now be obtained using Eqs. (1)–(5) as
f2(t, γ, κ, ϑ) = ϑ{(τ + ϑ(t−τ)) + γ(τ + ϑ(t−τ))^κ} exp{−(1/2)(τ + ϑ(t−τ))² − (γ/(κ+1))(τ + ϑ(t−τ))^(κ+1)} (6)
F2(t, γ, κ, ϑ) = 1 − exp{−(1/2)(τ + ϑ(t−τ))² − (γ/(κ+1))(τ + ϑ(t−τ))^(κ+1)} (7)
R2(t, γ, κ, ϑ) = exp{−(1/2)(τ + ϑ(t−τ))² − (γ/(κ+1))(τ + ϑ(t−τ))^(κ+1)} (8)
h2(t, γ, κ, ϑ) = ϑ{(τ + ϑ(t−τ)) + γ(τ + ϑ(t−τ))^κ} (9)
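As a quick sanity check of Eqs. (2), (5) and (7), the following Python sketch (the paper's own computations use R; the parameter values here are illustrative, not from the paper) evaluates the PLHR CDF at the two stress levels and verifies that they coincide at the stress change time τ:

```python
import math

def cum_hazard(t, gamma, kappa):
    # Cumulative hazard of the PLHR distribution: t^2/2 + gamma*t^(kappa+1)/(kappa+1)
    return 0.5 * t**2 + gamma * t**(kappa + 1) / (kappa + 1)

def cdf_normal(t, gamma, kappa):
    # CDF at the normal stress level X1, Eq. (2)
    return 1.0 - math.exp(-cum_hazard(t, gamma, kappa))

def cdf_accel(t, gamma, kappa, tau, theta):
    # CDF at the accelerated stress level X2, Eq. (7): the lifetime beyond the
    # stress change time tau is rescaled via the TRV model of Eq. (5)
    return 1.0 - math.exp(-cum_hazard(tau + theta * (t - tau), gamma, kappa))

# Illustrative parameter values (not from the paper)
gamma, kappa, tau, theta = 0.5, 0.8, 1.0, 1.5

# The two CDFs must coincide at the stress change time tau
assert abs(cdf_normal(tau, gamma, kappa) - cdf_accel(tau, gamma, kappa, tau, theta)) < 1e-12
```

Because ϑ > 1, the accelerated CDF dominates the use-stress CDF beyond τ, i.e., failures arrive earlier under the accelerated condition.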
Definition 2.1: Masking Probability
Suppose there are n systems tested in the experiment and tξi represents the observed value of Tξi, where ξ = 1, 2 represents the stress level. Suppose also that tξij, i = 1, 2, …, n; j = 1, 2, 3 are realizations of the lifetime Tξij of the jth component of system i. Let ωξi ∈ Ωξi be a masked event corresponding to the jth component of system i that is one of the possible causes of system failure. If ωξi includes only a single component, e.g., ωξi = {1}, then the failure cause of the system is exact; otherwise the cause is called masked [1,47,56]. Let Cξij denote that the jth component is the exact cause of failure of system i with masked event ωξi ∈ Ωξi; then the masking probability can be defined as follows:
P(Ωξi=ωξi|tξi<Tξi<tξi+dtξi,Cξij=j) (10)
Now, using Assumption 4, the above expression for the masking probability can be written as follows:
P(Ωξi=ωξi|tξi<Tξi<tξi+dtξi,Cξij=j)=P(Ωξi=ωξi|Cξij=j)=ζξi (11)
2.3 Formulation of the SSPALT with PrHC Masked Data
The PrHC scheme was first proposed and applied by [61] in traditional life tests. In SSPALT, suppose the experiment starts by randomly choosing n similar systems of three components, as described in Fig. 3, to be tested at the normal stress level X1 with predefined progressive removal patterns e1, e2, …, em, stress change time τ and test termination time t0. The test then progresses by randomly removing e1 systems at the time t1 of the first failure. Similarly, e2 systems are removed randomly at the time t2 of the second failure, and so on. If n1 is the total number of observed failures at the normal use condition X1 and en1 is the total number of systems removed at X1 before the stress change time τ, then the number of remaining surviving systems is (n − n1 − en1). At this point τ of the experiment, all the remaining (n − n1 − en1) surviving systems are removed from the normal condition and assigned to the accelerated test condition X2 (>X1), with hybrid test termination time t0* = min(tm,m,n, t0), following the same procedure as at stress X1. If the mth failure tm,m,n is observed before the pre-set time t0, the test stops by removing all the remaining em = n − m − (e1 + e2 + ⋯ + em−1) test systems from the experiment. If the mth failure tm,m,n is not observed before t0, the test is terminated at t0 by removing all the remaining ed = n − d − (e1 + e2 + ⋯ + ed−1) test systems, where d is the number of system failures before t0. Let n2 be the total number of observed failures at the accelerated test condition X2 during the time interval [τ, t0]; for more details see [62,63]. In many complicated systems these days, the cause exactly responsible for system failure is not known. Let Ωξi = {{1},{2},{3},{1,2},{1,3},{2,3},{1,2,3}} be the set of all possible events that can be the reason for system failure and ωξi be a subset (observed value) of Ωξi. If the observed subset ωξi contains exactly one element, then the cause of system failure is known. In contrast, if ωξi contains more than one element, then the exact reason for system failure is unknown (masked). In this scenario, the following two cases of failure data are observed in general:
Case I: (tξ,1,m,n, ωξ,1,m,n), (tξ,2,m,n, ωξ,2,m,n), …, (tξ,n1,m,n, ωξ,n1,m,n) < τ < (tξ,n1+1,m,n, ωξ,n1+1,m,n), …, (tξ,m,m,n, ωξ,m,m,n),  if tξ,m,m,n ≤ t0
Case II: (tξ,1,m,n, ωξ,1,m,n), (tξ,2,m,n, ωξ,2,m,n), …, (tξ,n1,m,n, ωξ,n1,m,n) < τ < (tξ,n1+1,m,n, ωξ,n1+1,m,n), …, (tξ,(n1+n2),m,n, ωξ,(n1+n2),m,n),  if tξ,m,m,n > t0  (12)
Now, using Eqs. (10) and (12) and the PrHC masked data, we can write the likelihood function for the hybrid system under SSPALT as follows:
L(ϑ, γ, κ) ∝ ∏_{i=1}^{n1} [(∑_{j∈ωξi} ζξi fξij){R1(ti)}^{ri}] ∏_{i=n1+1}^{r} [(∑_{j∈ωξi} ζξi fξij){R2(ti)}^{ri}] [R2(t0)]^{r*}  (13)
where ti = tξ,i,m,n; r = m and r* = 0 for the first data set; and r = d = n1 + n2 and r* = n − d − (e1 + e2 + ⋯ + ed−1) for the second data set given in Eq. (12).
Theorem 2.1: For a hybrid system consisting of j independent components as described in Fig. 3, having lifetime Tξi with masking probability P(Ωξi = ωξi | Cξij = j), the density function of the hybrid system due to the masked event ωξi ∈ Ωξi with exact failure component j at time tξi is given by the following expression:
P(tξi < Tξi < tξi + dtξi, Ωξi = ωξi) = ∑_{j∈ωξi} ζξi fξij  (14)
Proof: The probability that system i fails due to component j under the masked occurrence ωξi at time tξi can be obtained as follows [47,56]:
P(tξi < Tξi < tξi + dtξi, Ωξi = ωξi, Cξij = j) = P(Ωξi = ωξi | tξi < Tξi < tξi + dtξi, Cξij = j) P(tξi < Tξi < tξi + dtξi, Cξij = j)  (15)
Now, the lifetime of the hybrid system i given in Fig. 3 is Tξi = min(Tξi1, max(Tξi2, Tξi3)), and therefore the RF of system i can be obtained as follows:
P(Tξi > tξi) = P(Tξi1 > tξi) P(max(Tξi2, Tξi3) > tξi) = P(Tξi1 > tξi)[1 − P(Tξi2 ≤ tξi) P(Tξi3 ≤ tξi)] = Rξi1(tξi)[1 − Fξi2(tξi) Fξi3(tξi)]  (16)
Similarly, the probability densities of hybrid system i failing due to component j at time tξi can be calculated from Eq. (16). To simplify notation, denote
fξi1(tξi) = fξ1(tξi)[1 − Fξ2(tξi) Fξ3(tξi)],  fξi2(tξi) = Rξ1(tξi) fξ2(tξi) Fξ3(tξi),  fξi3(tξi) = Rξ1(tξi) Fξ2(tξi) fξ3(tξi)  (17)
Now, using Eqs. (11), (15) and (17) and Assumption 4, we obtain
P(tξi < Tξi < tξi + dtξi, Ωξi = ωξi) = ∑_{j∈ωξi} P(tξi < Tξi < tξi + dtξi, Ωξi = ωξi, Cξij = j) = ∑_{j∈ωξi} ζξi fξij,
which completes the proof.
Applying Eqs. (11), (14) and (16) in Eq. (13), we obtain the following form of the likelihood function [56]:
L(ϑ, γ, κ) ∝ ∏_{i=1}^{n1} [f1(ti){R1(ti)}^{ri}] ∏_{i=n1+1}^{r} [f2(ti){R2(ti)}^{ri}] × [exp(−(1/2)(τ + ϑ(t0−τ))² − (γ/(κ+1))(τ + ϑ(t0−τ))^(κ+1)) × {1 − (1 − exp(−(1/2)(τ + ϑ(t0−τ))² − (γ/(κ+1))(τ + ϑ(t0−τ))^(κ+1)))²}]^{r*}  (18)
where, at the use stress level X1, f1(ti) and R1(ti) are the failure density and reliability of system i caused by each component, obtained from Eqs. (1), (3), (16), (17) and Assumption 4 as
f1(ti) = (ti + γti^κ) exp(−(1/2)ti² − (γ/(κ+1)) ti^(κ+1)) {1 − (1 − exp(−(1/2)ti² − (γ/(κ+1)) ti^(κ+1)))²},
R1(ti) = exp(−(1/2)ti² − (γ/(κ+1)) ti^(κ+1)) {1 − (1 − exp(−(1/2)ti² − (γ/(κ+1)) ti^(κ+1)))²};
similarly, at the accelerated stress level X2, f2(ti) and R2(ti) are the failure density and reliability of system i, obtained from Eqs. (6), (8), (16), (17) and Assumption 6 as
f2(ti) = ϑ{(τ + ϑ(ti−τ)) + γ(τ + ϑ(ti−τ))^κ} exp(−(1/2)(τ + ϑ(ti−τ))² − (γ/(κ+1))(τ + ϑ(ti−τ))^(κ+1)) × {1 − (1 − exp(−(1/2)(τ + ϑ(ti−τ))² − (γ/(κ+1))(τ + ϑ(ti−τ))^(κ+1)))²},
R2(ti) = exp(−(1/2)(τ + ϑ(ti−τ))² − (γ/(κ+1))(τ + ϑ(ti−τ))^(κ+1)) × {1 − (1 − exp(−(1/2)(τ + ϑ(ti−τ))² − (γ/(κ+1))(τ + ϑ(ti−τ))^(κ+1)))²}.
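Since the three components are assumed i.i.d., the system reliability of Eq. (16) reduces to R_sys(t) = R_c(t){1 − (1 − R_c(t))²}, where R_c is the common component reliability. The closed form can be cross-checked by simulation; the Python sketch below uses exponential component lifetimes purely for illustration (the paper's components follow the PLHR distribution):

```python
import math
import random

def system_reliability(rc):
    # Reliability of the Fig. 3 hybrid system (component 1 in series with the
    # parallel pair 2-3) when all three i.i.d. components have reliability rc,
    # cf. Eq. (16): R_sys = rc * (1 - (1 - rc)^2)
    return rc * (1.0 - (1.0 - rc) ** 2)

# Monte Carlo cross-check: the system lifetime is min(T1, max(T2, T3))
random.seed(1)
t, rate, n = 1.0, 1.0, 200_000
rc = math.exp(-rate * t)                      # exponential component reliability at t
alive = sum(
    1 for _ in range(n)
    if min(random.expovariate(rate),
           max(random.expovariate(rate), random.expovariate(rate))) > t
)
assert abs(alive / n - system_reliability(rc)) < 0.01
```

The same min/max structure underlies the {1 − (1 − exp(·))²} factors appearing in the likelihood.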
3 Estimation of Model Parameters
To obtain the estimates of the parameters, the MLE technique is used. Now, taking the logarithm of Eq. (18) and making some adjustments, the log-likelihood function ℓ = log L(ϑ, γ, κ) is derived in the form of the following equation:
ℓ ∝ r*(r − nu) log ϕ0 + r*(r − nu) log(1 − (1 − ϕ0)²) + ∑_{i=1}^{nu} log(ti + γti^κ) + ∑_{i=1}^{nu} (1 + ri) log Ψ1 + ∑_{i=1}^{nu} (1 + ri) log(1 − (1 − Ψ1)²) + ∑_{i=nu+1}^{r} log ϑ + ∑_{i=nu+1}^{r} log(Ψ2 + γΨ2^κ) + ∑_{i=nu+1}^{r} (1 + ri) log ϕ1 + ∑_{i=nu+1}^{r} (1 + ri) log(1 − (1 − ϕ1)²)  (19)
where, to simplify the equations, we write Ψ1 = exp(−(1/2)ti² − (γ/(κ+1)) ti^(κ+1)), Ψ2 = τ + ϑ(ti − τ), Ψ0 = τ + ϑ(t0 − τ), ϕ1 = exp(−(1/2)Ψ2² − (γ/(κ+1)) Ψ2^(κ+1)), ϕ0 = exp(−(1/2)Ψ0² − (γ/(κ+1)) Ψ0^(κ+1)), and nu = n1 is the number of failures observed at the use stress level.
Now, differentiating Eq. (19) partially with respect to the model parameters ϑ, γ and κ, we obtain the following likelihood equations:
∂ℓ/∂ϑ = −(t0 − τ) r*(r − nu)(Ψ0 + γΨ0^κ){1 + 2(1 − ϕ0)ϕ0 / (1 − (1 − ϕ0)²)} + (r − nu)/ϑ + ∑_{i=nu+1}^{r} (ti − τ)(1 + γκΨ2^(κ−1)) / (Ψ2 + γΨ2^κ) − ∑_{i=nu+1}^{r} (ti − τ)(1 + ri)(Ψ2 + γΨ2^κ){1 + 2(1 − ϕ1)ϕ1 / (1 − (1 − ϕ1)²)} = 0  (20)
∂ℓ/∂γ = −r*(r − nu)(Ψ0^(κ+1)/(κ+1)){1 + 2(1 − ϕ0)ϕ0 / (1 − (1 − ϕ0)²)} + ∑_{i=1}^{nu} ti^κ / (ti + γti^κ) − ∑_{i=1}^{nu} (1 + ri)(ti^(κ+1)/(κ+1)){1 + 2(1 − Ψ1)Ψ1 / (1 − (1 − Ψ1)²)} + ∑_{i=nu+1}^{r} Ψ2^κ / (Ψ2 + γΨ2^κ) − ∑_{i=nu+1}^{r} (1 + ri)(Ψ2^(κ+1)/(κ+1)){1 + 2(1 − ϕ1)ϕ1 / (1 − (1 − ϕ1)²)} = 0  (21)
∂ℓ/∂κ = r*(r − nu)(γΨ0^(κ+1)(1 − (κ+1) log Ψ0)/(κ+1)²){1 + 2(1 − ϕ0)ϕ0 / (1 − (1 − ϕ0)²)} + ∑_{i=1}^{nu} γti^κ log ti / (ti + γti^κ) + ∑_{i=1}^{nu} (1 + ri)(γti^(κ+1)(1 − (κ+1) log ti)/(κ+1)²){1 + 2(1 − Ψ1)Ψ1 / (1 − (1 − Ψ1)²)} + ∑_{i=nu+1}^{r} γΨ2^κ log Ψ2 / (Ψ2 + γΨ2^κ) + ∑_{i=nu+1}^{r} (1 + ri)(γΨ2^(κ+1)(1 − (κ+1) log Ψ2)/(κ+1)²){1 + 2(1 − ϕ1)ϕ1 / (1 − (1 − ϕ1)²)} = 0  (22)
The MLEs ϑ̂, γ̂ and κ̂ of the parameters ϑ, γ and κ can be obtained by solving Eqs. (20)–(22) simultaneously; however, these are highly non-linear equations that cannot be solved analytically, so the Newton–Raphson technique is used to solve them numerically.
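The Newton–Raphson idea can be illustrated on a deliberately reduced problem. The Python sketch below (the paper's computations use R; the data are illustrative) solves the score equation in γ for a complete, uncensored PLHR sample with κ held fixed, rather than the full three-parameter system of Eqs. (20)–(22):

```python
import numpy as np

def score(gamma, t, kappa):
    # d(log L)/d(gamma) for a complete PLHR sample, from Eq. (1):
    # sum t^k/(t + gamma*t^k) - sum t^(k+1)/(k+1)
    return np.sum(t**kappa / (t + gamma * t**kappa)) - np.sum(t**(kappa + 1)) / (kappa + 1)

def score_deriv(gamma, t, kappa):
    # Derivative of the score w.r.t. gamma; always negative, so the root is unique
    return -np.sum(t**(2 * kappa) / (t + gamma * t**kappa) ** 2)

def newton_gamma(t, kappa, gamma0=0.5, tol=1e-10, max_iter=100):
    # One-dimensional Newton-Raphson iteration on the score equation
    g = gamma0
    for _ in range(max_iter):
        step = score(g, t, kappa) / score_deriv(g, t, kappa)
        g -= step
        if abs(step) < tol:
            break
    return g

t = np.array([0.5, 1.0, 1.5, 0.8, 1.2])   # illustrative failure times
gamma_hat = newton_gamma(t, kappa=2.0)
assert abs(score(gamma_hat, t, 2.0)) < 1e-8
```

In the full problem, the scalar derivative is replaced by the 3×3 Hessian of ℓ and the update becomes a linear solve at each iteration.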
One of the most redeeming features of the MLE is its large-sample properties. For large data sets, the distribution of the MLEs of the parameters is approximately normal. As discussed above, the likelihood equations for finding the MLEs are virtually impossible to solve in closed form, and hence the exact distribution of the MLEs is nearly impossible to find for a situation as complex as the one we are dealing with. Therefore, we utilize the asymptotic properties of the MLEs to construct the ACIs. The distribution of the MLEs ϑ̂, γ̂ and κ̂ of the unknown parameters ϑ, γ and κ is asymptotically normal and can be described as follows:
(ϑ̂, γ̂, κ̂) ∼ N((ϑ, γ, κ), V̂)  (23)
where V̂ is the asymptotic variance-covariance matrix, which can be calculated by inverting the observed information matrix F and replacing the parameters ϑ, γ and κ with their corresponding MLEs ϑ̂, γ̂ and κ̂:
V̂ = ( V̂11 V̂12 V̂13
      V̂21 V̂22 V̂23
      V̂31 V̂32 V̂33 ) = F⁻¹ = ( −∂²ℓ/∂ϑ²    −∂²ℓ/∂ϑ∂γ   −∂²ℓ/∂ϑ∂κ
                                  −∂²ℓ/∂γ∂ϑ   −∂²ℓ/∂γ²    −∂²ℓ/∂γ∂κ
                                  −∂²ℓ/∂κ∂ϑ   −∂²ℓ/∂κ∂γ   −∂²ℓ/∂κ² )⁻¹  (24)
where the elements of F are given in Appendix A. Now the 100(1 − α)% ACIs for the model parameters ϑ, γ and κ can be obtained as follows:
ϑ̂ ± z_{α/2}√V̂11;  γ̂ ± z_{α/2}√V̂22;  κ̂ ± z_{α/2}√V̂33  (25)
where V̂11 = var(ϑ̂), V̂22 = var(γ̂), V̂33 = var(κ̂) and z_{α/2} represents the (1 − α/2) quantile of the standard normal distribution.
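Given the MLEs and the observed information matrix F, the ACIs of Eq. (25) are mechanical to compute. A Python sketch follows; the MLE vector and the information matrix below are illustrative placeholders, not values from the paper:

```python
import numpy as np
from statistics import NormalDist

def wald_cis(mle, observed_info, alpha=0.05):
    # 100(1-alpha)% ACIs, Eq. (25): estimate +/- z_{alpha/2} * sqrt(V_kk),
    # where V = F^{-1} is the inverse observed information matrix, Eq. (24)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    se = np.sqrt(np.diag(np.linalg.inv(np.asarray(observed_info, dtype=float))))
    return [(m - z * s, m + z * s) for m, s in zip(mle, se)]

# Illustrative placeholders (not from the paper):
mle = [1.2, 0.4, 0.7]                                        # (theta, gamma, kappa) MLEs
F = [[50.0, 5.0, 2.0], [5.0, 80.0, 3.0], [2.0, 3.0, 40.0]]   # observed information
cis = wald_cis(mle, F)
assert all(lo < m < hi for (lo, hi), m in zip(cis, mle))
```

A larger diagonal entry of F (more information about that parameter) yields a shorter interval, which mirrors the shrinking ILs observed in the simulation tables.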
This subsection deals with the bootstrap re-sampling approach for constructing CIs for the parameters, in which the original data are treated as a population and several samples are generated from them to create the CIs; for more details see [64–66]. We use the following algorithm to construct the ST-B CIs [56]:
1. Find the MLEs ℧^=(ϑ^,γ^,κ^) of the parameters ℧=(ϑ,γ,κ) based on PrHC masked data generated from PLHR distribution under SSPALT which is obtained by following the Steps 1–6 in Section 4.
2. Now, using the MLEs ℧̂ = (ϑ̂, γ̂, κ̂) and the PrHC masked data in Step 1, generate the following bootstrap sample:
Case I: (t*ξ,1,m,n, ω*ξ,1,m,n), (t*ξ,2,m,n, ω*ξ,2,m,n), …, (t*ξ,n1,m,n, ω*ξ,n1,m,n) < τ < (t*ξ,n1+1,m,n, ω*ξ,n1+1,m,n), …, (t*ξ,m,m,n, ω*ξ,m,m,n),  if t*ξ,m,m,n ≤ t0
Case II: (t*ξ,1,m,n, ω*ξ,1,m,n), (t*ξ,2,m,n, ω*ξ,2,m,n), …, (t*ξ,n1,m,n, ω*ξ,n1,m,n) < τ < (t*ξ,n1+1,m,n, ω*ξ,n1+1,m,n), …, (t*ξ,(n1+n2),m,n, ω*ξ,(n1+n2),m,n),  if t*ξ,m,m,n > t0
3. Using the bootstrap sample obtained in Step 2, find the estimate ℧̂* = (ϑ̂*, γ̂*, κ̂*).
4. Repeat Steps 2 and 3 N times, say N = 5000, to obtain a set {℧̂*_i, i = 1, 2, …, N} of bootstrap estimates and the corresponding studentized statistics T*_i = (℧̂*_i − ℧̂)/√var(℧̂*_i).
5. The 100(1 − α)% ST-B CIs for ℧ can then be constructed as (℧̂ − T*_{(1−α/2)}√var(℧̂), ℧̂ − T*_{(α/2)}√var(℧̂)), where T*_{(q)} denotes the q-th empirical quantile of the T*_i.
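Steps 1–5 above can be sketched as follows. To keep the example self-contained, the estimator is simplified to the mean of an exponential sample with its plug-in standard error, rather than the PLHR MLEs with variances from Eq. (24); the studentize-and-invert logic is the same:

```python
import random
import statistics

def st_boot_ci(data, n_boot=2000, alpha=0.05, seed=7):
    # Studentized (parametric) bootstrap CI mirroring Steps 1-5: estimate,
    # resample from the fitted model, studentize, then invert the quantiles
    rng = random.Random(seed)
    n = len(data)
    est = statistics.fmean(data)          # MLE of an exponential mean
    se = est / n ** 0.5                   # plug-in standard error
    t_stats = []
    for _ in range(n_boot):
        boot = [rng.expovariate(1.0 / est) for _ in range(n)]
        b_est = statistics.fmean(boot)
        t_stats.append((b_est - est) / (b_est / n ** 0.5))
    t_stats.sort()
    lo_q = t_stats[int((1 - alpha / 2) * n_boot) - 1]   # upper quantile
    hi_q = t_stats[int(alpha / 2 * n_boot)]             # lower quantile
    return est - lo_q * se, est - hi_q * se

random.seed(7)
sample = [random.expovariate(1.0) for _ in range(50)]
lo, hi = st_boot_ci(sample)
assert lo < hi
```

Studentizing each bootstrap replicate by its own standard error is what distinguishes the ST-B interval from the simpler percentile bootstrap and typically yields the shorter ILs reported in Tables 4 and 5.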
4 Simulation Study
In this section, we perform a simulation study to investigate and compare the performance of the estimates for the hybrid system under SSPALT with PrHC masked data, utilizing the Monte Carlo simulation technique. The performance of the MLEs is assessed through their respective mean square errors (MSEs) and relative absolute biases (RABs). 95% ACIs are also constructed and their performance is investigated in terms of their respective ILs. First, we generate the hybrid progressive censored data from the considered distribution under SSPALT following some of the steps given in [58,64,67]; the complete algorithm is as follows:
1. Specify the values of [τ,n,m,t0,(e1,e2,…,em)] and the values of the parameters ϑ,γ,κ.
2. To obtain a censored sample of size r, first generate a random sample of size r from the uniform distribution U(0, 1); suppose the generated values are (U1, U2, …, Ur).
3. Set Xi = Ui^(1/(i + ∑_{m=r−i+1}^{r} e_m)) for the given censoring scheme (e1, e2, …, em), where i = 1, 2, …, r; for Case I, r = m, and for Case II, r = d.
4. Define Ui,m,n = 1 − ∏_{m=r−i+1}^{r} Xm in order to generate a progressively censored sample U1,m,n, U2,m,n, …, Ur,m,n of size r from the uniform distribution.
5. Set Un1,m,n < F1(τ) ≤ Un1+1,m,n in order to find the sample of size n1 from the PLHR distribution at the normal stress level for fixed values of the parameters γ, κ and ϑ, using the expression log(1 − Ui,m,n) + (1/2)ti² + (γ/(κ+1)) ti^(κ+1) = 0.
6. Similarly, find the sample at the accelerated condition for fixed values of ϑ, γ, κ, τ using the equation log(1 − Ui,m,n) + (1/2)(τ + ϑ(ti − τ))² + (γ/(κ+1))(τ + ϑ(ti − τ))^(κ+1) = 0.
7. Following the above steps, we generate the desired PrHC masked data from the PLHR distribution in the form of Eq. (12).
8. Now, using the data obtained in Step 7, compute the MLEs ℧̂ = (ϑ̂, γ̂, κ̂) of the parameters ℧ = (ϑ, γ, κ) by solving Eqs. (20)–(22), together with their respective MSEs and RABs.
9. Repeat Steps 2–8 10,000 times and obtain the average MLEs, MSEs and RABs of the parameters.
10. Compute the 95% ACIs for the parameters of PLHR distribution and acceleration factor.
11. For different values of [τ,n,m,t0,(e1,e2,…,em)], ϑ,γ,κ and test schemes, the above procedure from Steps 1 to 10 should be repeated.
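Steps 2–4 above (generating the progressively censored uniform sample) can be sketched as follows; the censoring scheme shown is illustrative:

```python
import random

def progressive_censored_uniform(n, scheme, seed=11):
    # Steps 2-4: generate a progressively Type-II censored sample
    # U_{1,m,n} < ... < U_{r,m,n} from U(0, 1)
    r = len(scheme)
    assert n == r + sum(scheme)                 # failures + removals = n
    rng = random.Random(seed)
    w = [rng.random() for _ in range(r)]        # Step 2
    # Step 3: X_i = U_i^(1 / (i + e_r + e_{r-1} + ... + e_{r-i+1}))
    x = [w[i] ** (1.0 / (i + 1 + sum(scheme[r - i - 1:]))) for i in range(r)]
    # Step 4: U_{i,m,n} = 1 - X_r * X_{r-1} * ... * X_{r-i+1}
    u, prod = [], 1.0
    for i in range(r):
        prod *= x[r - i - 1]
        u.append(1.0 - prod)
    return u

u = progressive_censored_uniform(n=10, scheme=[1, 0, 2, 0, 2])   # m = r = 5
assert all(0.0 < v < 1.0 for v in u) and u == sorted(u)
```

Feeding these ordered uniforms through the inverted CDFs of Steps 5 and 6 then yields the PrHC failure times at the two stress levels.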
In our study, we used four different censoring schemes generated from six different combinations of n and m with different fixed values of τ and t0. The censoring schemes, the initial parameter values and the values of τ, n, m, t0 used in this paper are given in Table 1. Averages of the MLEs over the 10,000 replications described in the simulation study, with their respective MSEs and RABs, are reported in Tables 2 and 3. 95% ACIs and ST-B CIs, along with their expected interval lengths, are reported in Tables 4 and 5.
Fig. 4 presents the plots of the simulated samples and the histograms of the parameters to verify convergence. For this demonstration, the plots are made using the Scheme 1 simulation results. The graphs indicate that the parameters converge, and the histograms reveal that they are asymptotically normal over a large number of simulation runs. As a result, the simulation study is consistent with the statistical assumptions for parameter estimation. The step-by-step process of the proposed estimation method is demonstrated through the flow chart given in Fig. 5.
From the results listed in Tables 2 and 3, it can be observed that, for fixed τ and t0, the MSEs and RABs of the model parameters decrease in most test schemes as n and m increase, and the MLEs ϑ̂, γ̂ and κ̂ get closer to their respective true values; this is reasonable because larger samples yield more precise estimates. A decreasing pattern in the MSEs and RABs can also be observed as n and m increase simultaneously in all cases for fixed values of τ and t0. For fixed n and m, the MSEs and RABs also get smaller as τ and t0 get larger, with the MLEs approaching their true values; this is also plausible because a longer experimental time allows more failures and hence a larger effective sample. The results reported in Tables 4 and 5 show that the expected lengths of both the 95% ACIs and the ST-B CIs get shorter with increasing n and m, for fixed τ and t0, in almost all cases except for ϑ, which is acceptable since ϑ is not a parameter of the distribution under study but the acceleration factor. It is also observed that both the 95% ACIs and the ST-B CIs get shorter when n and m increase simultaneously. Comparing the 95% ACIs and the ST-B CIs, the ST-B CIs provide narrower expected ILs in almost every case. Based on these findings, we can conclude that the proposed model and estimation methods have performed very well, and hence all the statistical assumptions regarding model fitting and estimation are satisfied.
5 Real Data Application
In this section, an actual data set is utilized to further illustrate the performance of the suggested estimation technique and to demonstrate the practical application of the PLHR distribution in the field of reliability engineering. The R statistical programming language is used for computation. The data set reported in Table 6 is an uncensored data set consisting of real failure times (in hours) of an airplane’s air conditioning system, first discussed by [68].
The Kolmogorov-Smirnov (K-S) goodness-of-fit test is used to fit the PLHR distribution to the real data. The K-S test compares a given data sample with a reference continuous probability distribution. It is based on the K-S distance, which is the maximum absolute distance between the sample’s empirical distribution function and the reference cumulative distribution function, together with its corresponding p-value. In the current example, the K-S distance was determined to be 0.14146 with a p-value of 0.5856, which is greater than 0.05. Fig. 6 displays the plots of the empirical CDF vs. the fitted CDF of the PLHR distribution and the histogram of the data vs. the fitted PDF of the PLHR distribution. Consequently, it is evident from the K-S distance, the p-value and Fig. 6 that the PLHR distribution fits the sample data tabulated in Table 6 well. Therefore, the given data can be used as an illustration for our model.
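The K-S distance itself is straightforward to compute against the fitted CDF of Eq. (2). A Python sketch follows (the sample and the parameter values below are illustrative; they are not the Table 6 data or the fitted estimates, and the paper's computation uses R):

```python
import math

def plhr_cdf(t, gamma, kappa):
    # Fitted PLHR CDF, Eq. (2)
    return 1.0 - math.exp(-(0.5 * t**2 + gamma * t**(kappa + 1) / (kappa + 1)))

def ks_distance(data, cdf):
    # K-S distance: maximum gap between the empirical CDF (a step function)
    # and the reference CDF, checked just before and just after each jump
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

# Illustrative synthetic sample and parameter values (not the Table 6 data)
sample = [0.3, 0.6, 0.9, 1.1, 1.4, 1.7, 2.0]
d = ks_distance(sample, lambda t: plhr_cdf(t, gamma=0.04, kappa=-0.3))
assert 0.0 < d < 1.0
```

A small K-S distance relative to the critical value (equivalently, a large p-value, as in the 0.5856 reported above) indicates that the fitted distribution is compatible with the sample.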
Now under SSPALT, for illustrative purposes, let’s consider the stress change time τ to be 24, the test termination time t0 to be 130, and the sample size m to be 20 with respect to the progressive
censoring scheme (e1=e2=…=e6=0, e7 = 1, e8 = 0, e9 = e10 = 1, e11 = 0, e12 = e13 = 1, e14 = 0, e15 = 1, e16 = e17 = 0, e18 = e19 = 0, e20 = 1). So, utilizing the data provided in Table 6 with 0%
masking, we have the following PrHC data reported in Table 7 on normal and accelerated stress.
For the PrHC data in Table 7 with 0% masking under SSPALT, the MLEs of the parameters are obtained with initial values γ = 0.042003 and κ = −0.3054115, which are the estimates of the parameters based on the complete data, and the initial value of the acceleration factor ϑ is set to 1.2. The MLEs with their corresponding standard errors are reported in Table 9.
Now, to demonstrate the effect of masking, 20% of the failed systems at each stress level are chosen at random to be masked. Utilizing the data provided in Table 7 with 20% masking, the data obtained after masking with the PrHC scheme under SSPALT at each stress level are reported in Table 8.
For the PrHC data in Table 8 with 20% masking under SSPALT, we choose the same initial values of γ, κ and ϑ as in the case of 0% masking. The MLEs with their corresponding standard errors are obtained and reported in Table 9.
From Table 9, it can be observed that the ML estimates are more accurate, with smaller standard errors, when the sample size is larger or the masking proportion is 0% than when the sample size is smaller or the masking level is 20%.
6 Conclusion
In this article, the SSPALT model has been developed for PrHC data to analyse the lifetime of a hybrid system of three components under masked causes of failure. Assuming that the component failures independently follow the PLHR distribution, estimates of the parameters of the PLHR distribution and the acceleration factor are obtained using the MLE technique. The performance of the MLEs is investigated through their respective MSEs and RABs. 95% ACIs and ST-B CIs are also constructed and their performance is investigated in terms of their respective ILs. A simulation study has been conducted to investigate and compare the performance of the estimates for the hybrid system under SSPALT with PrHC masked data, utilizing the Monte Carlo simulation technique. Additionally, a real-world data application concerning an airplane’s air conditioning system was used to demonstrate the proposed approach. Comparing the 95% ACIs and the ST-B CIs, the ST-B CIs provide narrower expected ILs in almost every case. Based on the findings, it can be concluded that the proposed model and estimation method performed well, and hence all the statistical assumptions regarding model fitting and estimation are satisfied. As a future research project, the present study may be extended to more complex systems using the Bayesian estimation technique with different censored data.
Data Availability: The data used in this paper is available in the paper.
Funding Statement: The authors received no specific funding for this study.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
References
1. Miyakawa, M. (1984). Analysis of incomplete data in competing risks model. IEEE Transactions on Reliability, 33(4), 293–296. DOI 10.1109/TR.1984.5221828. [Google Scholar] [CrossRef]
2. Usher, J. S., Hodgson, T. J. (1988). Maximum likelihood analysis of component reliability using masked system life-test data. IEEE Transactions on Reliability, 37(5), 550–555. DOI 10.1109/24.9880.
[Google Scholar] [CrossRef]
3. Nelson, W. (1990). Accelerated testing: Statistical models, test plans and data analysis. New York: John Wiley & Sons. [Google Scholar]
4. Ma, H., Meeker, W. Q. (2010). Strategy for planning accelerated life tests with small sample sizes. IEEE Transactions on Reliability, 59(4), 610–619. DOI 10.1109/TR.2010.2083251. [Google Scholar]
5. Li, X., Chen, W., Sun, F., Liao, H., Kang, R. et al. (2018). Bayesian accelerated acceptance sampling plans for a lognormal lifetime distribution under Type-I censoring. Reliability Engineering &
System Safety, 171(3), 78–86. DOI 10.1016/j.ress.2017.11.012. [Google Scholar] [CrossRef]
6. Han, D., Bai, T. (2019). On the maximum likelihood estimation for progressively censored lifetimes from constant-stress and step-stress accelerated tests. Electronic Journal of Applied Statistical
Analysis, 12(2), 392–404. DOI 10.1285/i20705948v12n2p392. [Google Scholar] [CrossRef]
7. Bai, X., Shi, Y., Ng, H. K. T. (2020). Statistical inference of Type-I progressively censored step-stress accelerated life test with dependent competing risks. Communications in Statistics-Theory
and Methods, 1–27. DOI 10.1080/03610926.2020.1788081. [Google Scholar] [CrossRef]
8. Kamal, M., Rahman, A., Ansari, S. I., Zarrin, S. (2020). Statistical analysis and optimum step stress accelerated life test design for Nadarajah Haghighi distribution. Reliability: Theory &
Applications, 15(4), 1–9. DOI 10.24411/1932-2321-2020-14005. [Google Scholar] [CrossRef]
9. Rahman, A., Sindhu, T. N., Lone, S. A., Kamal, M. (2020). Statistical inference for Burr Type X distribution using geometric process in accelerated life testing design for time censored data.
Pakistan Journal of Statistics and Operation Research, 16(3), 577–586. DOI 10.18187/pjsor.v16i3.2252. [Google Scholar] [CrossRef]
10. Ma, Z., Liao, H., Ji, H., Wang, S., Yin, F. et al. (2021). Optimal design of hybrid accelerated test based on the inverse Gaussian process model. Reliability Engineering & System Safety, 210(2),
107509. DOI 10.1016/j.ress.2021.107509. [Google Scholar] [CrossRef]
11. Goel, P. K. (1971). Some estimation problems in the study of tampered random variables (Ph.D. Thesis). Department of Statistics, Carnegie Mellon University, Pittsburgh, Pennsylvania. [Google Scholar]
12. DeGroot, M. H., Goel, P. K. (1979). Bayesian estimation and optimal designs in partially accelerated life testing. Naval Research Logistics Quarterly, 26(2), 223–235. DOI 10.1002/(ISSN)1931-9193.
[Google Scholar] [CrossRef]
13. Bhattacharyya, G. K., Soejoeti, Z. (1989). A tampered failure rate model for step-stress accelerated life test. Communications in Statistics-Theory and Methods, 18(5), 1627–1643. DOI 10.1080/
03610928908829990. [Google Scholar] [CrossRef]
14. Bai, D. S., Chung, S. W. (1992). Optimal design of partially accelerated life tests for the exponential distribution under Type-I censoring. IEEE Transactions on Reliability, 41(3), 400–406. DOI
10.1109/24.159807. [Google Scholar] [CrossRef]
15. Bai, D. S., Chung, S. W., Chun, Y. R. (1993). Optimal design of partially accelerated life tests for the lognormal distribution under Type I censoring. Reliability Engineering & System Safety, 40
(1), 85–92. DOI 10.1016/0951-8320(93)90122-F. [Google Scholar] [CrossRef]
16. Ismail, A. A. (2012). Inference in the generalized exponential distribution under partially accelerated tests with progressive Type-II censoring. Theoretical and Applied Fracture Mechanics, 59(1)
, 49–56. DOI 10.1016/j.tafmec.2012.05.007. [Google Scholar] [CrossRef]
17. Zhang, C., Shi, Y. (2016). Estimation of the extended Weibull parameters and acceleration factors in the step-stress accelerated life tests under an adaptive progressively hybrid censoring data.
Journal of Statistical Computation and Simulation, 86(16), 3303–3314. DOI 10.1080/00949655.2016.1166366. [Google Scholar] [CrossRef]
18. Ismail, A. A. (2014). Inference for a step-stress partially accelerated life test model with an adaptive Type-II progressively hybrid censored data from Weibull distribution. Journal of
Elements of F are the second partial derivatives of the log-likelihood ℓ with respect to the model parameters (e.g., ∂²ℓ/∂κ²), expressed in terms of Ψ0, Ψ1, Ψ2, ϕ0, ϕ1, κ and γ,
where (1−(κ+1)logΨ0)=Λ0; (1−(κ+1)logti)=Λ1; (1−(κ+1)logΨ2)=Λ2.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
The traits class Arr_non_caching_segment_traits_2 is a model of the ArrangementTraits_2 concept that allows the construction and maintenance of arrangements of line segments.
It is parameterized with a CGAL-Kernel type, and it is derived from it. This traits class is a thin layer above the parameterized kernel. It inherits the Point_2 from the kernel and its
X_monotone_curve_2 and Curve_2 types are both defined as Kernel::Segment_2. Most traits-class functors are inherited from the corresponding kernel functors, and the traits class only supplies the necessary functors that are not provided by the kernel. The kernel is parameterized with a number type, which should support exact rational arithmetic in order to avoid robustness problems, although other number types
could be used at the user's own risk.
The traits-class implementation is very simple, yet may lead to a cascaded representation of intersection points with exponentially long bit-lengths, especially if the kernel is parameterized with a
number type that does not perform normalization (e.g. Quotient<MP_Float>). The Arr_segment_traits_2 traits class avoids this cascading problem, and should be the default choice for implementing
arrangements of line segments. It is recommended to use Arr_non_caching_segment_traits_2 only for very sparse arrangements of huge sets of input segments.
While Arr_non_caching_segment_traits_2 models the concept ArrangementDirectionalXMonotoneTraits_2, the implementation of the Are_mergeable_2 operation does not enforce the input curves to have the
same direction as a precondition. Moreover, Arr_non_caching_segment_traits_2 supports the merging of curves of opposite directions.
Polynomial-time approximation schemes for piercing and covering with applications in wireless networks
Let D be a set of disks of arbitrary radii in the plane, and let P be a set of points. We study the following three problems: (i) Assuming P contains the set of center points of disks in D, find a
minimum-cardinality subset ^P of P (if exists), such that each disk in D is pierced by at least h points of ^P, where h is a given constant. We call this problem minimum h-piercing. (ii) Assuming P
is such that for each D∈D there exists a point in P whose distance from D's center is at most αr(D), where r(D) is D's radius and 0≤α<1 is a given constant, find a minimum-cardinality subset ^P of P,
such that each disk in D is pierced by at least one point of ^P. We call this problem minimum discrete piercing with cores. (iii) Assuming P is the set of center points of disks in D, and that each
D∈D covers at most l points of P, where l is a constant, find a minimum-cardinality subset ^D of D, such that each point of P is covered by at least one disk of ^D. We call this problem minimum
center covering. For each of these problems we present a constant-factor approximation algorithm (trivial for problem (iii)), followed by a polynomial-time approximation scheme. The polynomial-time
approximation schemes are based on an adapted and extended version of Chan's [T.M. Chan, Polynomial-time approximation schemes for packing and piercing fat objects, J. Algorithms 46 (2003) 178-189]
separator theorem. Our PTAS for problem (ii) enables one, in practical cases, to obtain a (1+ε)-approximation for minimum discrete piercing (i.e., for arbitrary P).
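As a quick illustration of the feasibility condition underlying problem (i), the sketch below (a hypothetical helper, not code from the paper) tests whether a candidate point set pierces every disk at least h times:

```python
import math

def pierces(point, disk):
    """A point pierces a disk if it lies inside the disk or on its boundary."""
    (px, py), ((cx, cy), r) = point, disk
    return math.hypot(px - cx, py - cy) <= r

def is_h_piercing(points, disks, h):
    """Check that every disk is pierced by at least h of the given points."""
    return all(sum(pierces(p, d) for p in points) >= h for d in disks)

# Two disks and three candidate piercing points.
disks = [((0.0, 0.0), 1.0), ((2.0, 0.0), 1.5)]
points = [(0.0, 0.0), (0.5, 0.0), (2.0, 0.5)]
print(is_h_piercing(points, disks, 2))  # each disk contains two of the points -> True
print(is_h_piercing(points, disks, 3))  # no disk contains three points -> False
```

A minimum h-piercing then asks for the smallest subset of P that passes this check.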
• Approximation algorithms
• Covering
• Discrete piercing
• Geometric optimization
• Wireless networks
ASJC Scopus subject areas
• Computer Science Applications
• Geometry and Topology
• Control and Optimization
• Computational Theory and Mathematics
• Computational Mathematics
Dive into the research topics of 'Polynomial-time approximation schemes for piercing and covering with applications in wireless networks'. Together they form a unique fingerprint.
planetary ball mill pm 100 cm
The as-received NMC was ball milled with a Retsch PM 100 planetary ball mill using a 50 mL zirconia jar containing the NMC material and either 5 or 10 mm zirconia balls, according to the details described in Table 1. Prior to ball milling, 10 g batches were loaded into the zirconia jar within an Ar-filled glovebox (O2 and H2O levels ≤ 1 ppm). The ...
WhatsApp: +86 18838072829
Planetary Ball Mill PM 400. zirconium oxide, for PM 100 and PM 400 Counter wrench IQ/OQ Documentation for PM 400 Grinding jars "comfort" PM 100 / PM 200 / PM 400 Hardened steel 50 ml 125 ml 250
ml 500 ml Stainless steel 12 ml 25 ml 50 ml
The PM 100 is a convenient bench top model with 1 grinding station. You may also be interested in the High Energy Ball Mill Emax, an entirely new type of mill for high energy input. The unique
combination of high friction and impact results in extremely fine particles within the shortest amount of time. Add to Quote Description Specification
The PM 100 CM has a speed ratio of 1:1, size reduction is effected by pressure and friction, rather than by impact, which means it is gentler on the material. The PM 400 is a robust compact floor
model on castors with 4 grinding stations for grinding jars with a nominal volume of 12 to 500 ml.
In the planetary ball mill, the balls collide with each other and with the wall of the milling jar, thus creating friction, which helps in size reduction; the created friction grinds the large-size ...
The PM 100 CM is a convenient benchtop model with 1 grinding station. It operates in centrifugal mode, which leads to a more gentle size reduction process with less abrasion. Key features and specifications: powerful and quick grinding down to the nano range; perfect stability on the lab bench thanks to FFCS technology.
The vibratory micromill "Pulverisette 0" (noted: P0) made by FRITSCH is driven by 50 W of power, which is much lower than planetary ball mills (PM 100). This micromill is equipped with a 50 mL agate jar and a 5 cm agate ball (Fig. 1) [38].
Product information, Planetary Ball Mill PM 100, function principle: the grinding jar is arranged eccentrically on the sun wheel of the planetary ball mill. The direction of movement of the sun wheel is opposite to that of the grinding jars in the ratio 1:2.
The samples were milled in the 50 mL steel jar of the Retsch PM 100 planetary ball mill (Retsch PM 100 MA, Retsch GmbH) combined with mm ZrO2 beads as the grinding media. The concentrated (10% w/w) ... a total of 48 scans in the spectral range of 3,300–200 cm⁻¹ with cosmic ray and fluorescence corrections.
Wet milling process was performed using a planetary ball mill ( CM, from Retsch, Haan, Germany), Hardened steel vial (500 cc), Hardened steel balls (5 mm in diameter). Graphite powders (Sigma
Aldrich, <20 µm, Schnelldorf, Germany) were milled at which the weight of the milled graphite powders was 10 g, and the weight of the milling balls ...
The Retsch Planetary Ball Mill / Laboratory Mill Range meets and exceeds all requirements for fast and reproducible grinding down to the nano range.
The Planetary Ball Mill PM 100 CM is a powerful benchtop model with a single grinding station and an easy-to-use counterweight which compensates masses up to 8 kg. It allows for grinding up to 220 ml of sample material per batch. The extremely high centrifugal forces of Planetary Ball Mills result ...
PM 100 and PM 200 planetary ball mill: Material: Zirconium Oxide: Capacity (English) oz. For Use With (Application): ideal for extreme working conditions such as long-term trials, wet grinding, high mechanical loads and maximum speeds as well as mechanical alloying: Includes:
The new Planetary Ball Mill PM 100 CM offers all the performance and convenience of the classic PM 100. It pulverizes and mixes soft, medium-hard to extremely hard, brittle and fibrous materials and is used wherever the highest degree of fineness down to the sub-micron range is required. It is suitable for dry and wet grinding.
RETSCH's comprehensive range of ball mills comprises High Energy Ball Mills, Planetary Ball Mills and Mixer Mills. Whereas the Mixer Mills are used for dry, wet and cryogenic grinding and for homogenizing small sample volumes, the Planetary Ball Mills meet and exceed all requirements for fast and reproducible grinding.
Technical data: Retsch Planetary Ball Mill PM 100, 230 V, 50/60 Hz, with 1 grinding station, speed ratio 1 : 2. Planetary Ball Mills are used wherever the highest degree of fineness is required. In addition to well-proven mixing and size reduction processes, these mills also meet all technical requirements for colloidal grinding and provide ...
A Planetary Ball Mill for rapid fine crushing of soft, hard, brittle and fibrous material to an end fineness of <1 µm. Quick and easy to clean; rapid fine crushing; easy exchange of grinding jars and balls; grinding jars and balls made from a wide range of materials available; grinding jar volume up to 500 cc; programmable control; end fineness <1 µm; CE-certified. Planetary Ball Mills for fine grinding of soft
The Planetary Ball Mill PM 300 is a powerful and ergonomic benchtop model with two grinding stations for grinding jar volumes up to 500 ml. This setup allows for processing up to 2 x 220 ml
sample material per batch. Thanks to the high maximum speed of 800 rpm, extremely high centrifugal forces result in very high pulverization energy and ...
Ball milling of lactose powders was performed in a planetary ball mill (PM 100 CM, Retsch, Germany). The milling operation was carried out in a stainless steel milling jar with a constant volume of 12 cm³ and a diameter of 3 cm, using balls of the same material of 1, 5 and 10 mm in diameter.
The present operating instructions for the ball mills of type PM100/200 provide all the necessary information on the headings contained in the table of contents. They act as a guide for the
target group(s) of readers defined for each topic for the safe use of the PM100/200 in accordance with its intended purpose. Familiarity with the relevant
divided into three groups: tumbler ball mills, vibratory mills and planetary mills (Fig. 2b). A tumbler mill consists of ... for 48 hours in dry and wet conditions with three solvents
# some random data in three variables
c1 <- runif(25)
c2 <- runif(25)
c3 <- runif(25)
# basic plot
par(mfrow=c(1, 2))
PlotTernary(c1, c2, c3, args.grid=NA)
if (FALSE) {
# plot with different symbols and a grid using a dataset from MASS
data(Skye, package="MASS")
PlotTernary(Skye[c(1,3,2)], pch=15, col=DescTools::hred, main="Skye",
            lbl=c("A Sodium", "F Iron", "M Magnesium"))
}
Acquiring Relationships Between Two Volumes
One of the problems people face when dealing with graphs is non-proportional relationships. Graphs can be used for a variety of purposes, but they are often used improperly and paint a misleading picture. Take the example of two sets of data: you have sales figures for a particular month and you want to plot a trend line on the data. But if you plot this line on a y-axis whose data range starts at 100 and ends at 500, you get a very misleading view of the data. How can you tell whether the relationship is non-proportional?
Ratios are proportional when they express the same underlying relationship. One way to tell whether two quantities are proportional is to plot them against each other and examine the line through the points. If the line passes through the origin with a constant slope, the ratios are proportional; if the slope varies across the range, they are not. This is a useful way to plot a trend line, because you can use the values of one variable to establish a trendline on another variable.
However, many people do not realize that the concepts of proportional and non-proportional can be separated further. If the two quantities on the graph, such as the sales volume for one month and the average price for the same month, do not keep a constant ratio, then the relationship between them is non-proportional. In that situation, one quantity will be over-represented on one side of the graph and under-represented on the other. This is called a "lagging" trendline.
Let's look at a real-life example to understand what is meant by a non-proportional relationship: scaling a recipe, for which we want to calculate the amount of a spice required. If we plot a line on the chart representing our desired measurement, such as the amount of garlic we want to add, and we find that our actual cup of garlic holds more than the cup we measured with, we will have over-estimated the amount of spice needed. If the recipe calls for four cups of garlic and our real cup holds six ounces, then the amount of garlic actually required is less than the recipe says it should be, and the line relating the actual cup to the desired cup has a negative slope.
Here is another example. Suppose we know the weight of an object X and its specific gravity G, and we find that the weight of the object is proportional to its specific gravity. Then we have found a direct proportional relationship between the two quantities. We can draw a line from top (G) to bottom (Y), mark the point where the line crosses the x-axis, and then, taking the measurement of a specific section of the body on the x-axis as our new height, read off the corresponding value directly. We can plot a series of boxes on the chart, each box describing a different height as determined by the gravity of the object.
Another way of viewing non-proportional relationships is to view them as being either zero or near zero. For instance, the y-axis in our example could represent the horizontal direction of the earth. If we plot a line from top (G) to bottom (Y), the horizontal distance from the plotted point to the x-axis is zero. This means that when the two quantities are plotted against one another at any given time, they always have the same magnitude (zero). In this case we have a simple non-parallel relationship between the two quantities. This can also be true when the two quantities are not parallel: if, for example, we plot the vertical height of a system above a rectangular box, the vertical height will always exactly match the slope of the rectangular box.
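The distinction between proportional and non-proportional pairs of quantities can be made concrete with a small check (illustrative code, with a hypothetical tolerance parameter): two paired sequences of measurements are proportional exactly when their pointwise ratio is constant.

```python
def is_proportional(xs, ys, tol=1e-9):
    """True if ys == k * xs for a single constant k (assumes nonzero xs)."""
    ratios = [y / x for x, y in zip(xs, ys)]
    return all(abs(r - ratios[0]) <= tol for r in ratios)

# Constant ratio 2.5 throughout: proportional.
print(is_proportional([1, 2, 3], [2.5, 5.0, 7.5]))   # True
# The ratio drifts from 2.5 to 3.0: non-proportional.
print(is_proportional([1, 2, 3], [2.5, 5.0, 9.0]))   # False
```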
Remove Duplicates from Sorted Array | Leetcode #26 | Easy
Given a sorted array nums, remove the duplicates in-place such that each element appears only once and returns the new length.
Do not allocate extra space for another array, you must do this by modifying the input array in-place with O(1) extra memory.
Confused why the returned value is an integer but your answer is an array?
Note that the input array is passed in by reference, which means a modification to the input array will be known to the caller as well.
Internally you can think of this:
// nums is passed in by reference. (i.e., without making a copy)
int len = removeDuplicates(nums);
// any modification to nums in your function would be known by the caller.
// using the length returned by your function, it prints the first len elements.
for (int i = 0; i < len; i++) {
    print(nums[i]);
}
Example 1:
Input: nums = [1,1,2]
Output: 2, nums = [1,2]
Explanation: Your function should return length = 2, with the first two elements of nums being 1 and 2 respectively. It doesn't matter what you leave beyond the returned length.
Example 2:
Input: nums = [0,0,1,1,1,2,2,3,3,4]
Output: 5, nums = [0,1,2,3,4]
Explanation: Your function should return length = 5, with the first five elements of nums being modified to 0, 1, 2, 3, and 4 respectively. It doesn't matter what values are set beyond the returned length.
• 0 <= nums.length <= 3 * 104
• -104 <= nums[i] <= 104
• nums is sorted in ascending order.
There are 2 key pieces of information in the problem statement.
1. The input array is sorted
2. We can not allocate extra space.
If the problem allowed us to allocate another array for storing the output, it would have been a piece of cake and you could solve it with just one iteration. But doing it in place is a really nice catch.
Let’s jump to the solution then.
When we say “doing it in place”, we need to modify the existing input array so that it no longer contains duplicates. That means we need to iterate over the array at least once. In the worst-case scenario, the array does not have any duplicates at all, and we simply return the length of the input array as the output.
What we can do when the array has duplicates is keep moving forward in the array as long as the current element and the previous element are the same, and otherwise store the current element at the write index. Sounds confusing? Let’s look at the steps.
1. As a base condition, if the nums array is empty, we can straightaway return 0 as output.
2. We can iterate the array with 2 pointers. Let’s call them slow and fast and initialize both of them to 1 as we will start the array iteration from the second element of the array.
3. While iterating the input array, we compare the current element with the previous element. Since we are starting at index 1, there is no risk of running out of bounds.
4. If they are not equal, we store the element at index fast at index slow and increment slow by one. Otherwise slow remains the same.
5. Return slow as the output as that is the count of distinct elements in the array.
Here is how the code looks like —
class Solution {
    public int removeDuplicates(int[] nums) {
        if (nums.length == 0) {
            return 0;
        }
        int slow = 1;
        for (int fast = 1; fast < nums.length; fast++) {
            if (nums[fast] != nums[fast - 1]) {
                nums[slow] = nums[fast];
                slow++;
            }
        }
        return slow;
    }
}
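To sanity-check the two-pointer logic end to end, here is a small standalone driver (the class and method names below are illustrative, not from the original post):

```java
// Standalone sketch of the post's two-pointer approach, with a tiny driver.
public class RemoveDuplicatesDemo {

    // slow marks the next write position; fast scans ahead. A value is
    // written whenever nums[fast] differs from the element just before it.
    static int removeDuplicates(int[] nums) {
        if (nums.length == 0) {
            return 0;
        }
        int slow = 1;
        for (int fast = 1; fast < nums.length; fast++) {
            if (nums[fast] != nums[fast - 1]) {
                nums[slow] = nums[fast];
                slow++;
            }
        }
        return slow;
    }

    public static void main(String[] args) {
        int[] nums = {0, 0, 1, 1, 1, 2, 2, 3, 3, 4};
        int len = removeDuplicates(nums);
        System.out.println(len); // 5
        for (int i = 0; i < len; i++) {
            System.out.print(nums[i] + " "); // 0 1 2 3 4
        }
    }
}
```

Note that the elements beyond index len - 1 are left as whatever the scan happened to leave there, which the problem statement explicitly allows.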
Hope this helps! Happy coding! 🙂
If you think the solution can be improved or misses something, feel free to comment. There is always some room for improvement.
Find the solutions to the leetcode problems here — https://github.com/rishikeshdhokare/leetcode-problems
10.2. math — Mathematical functions
This module is always available. It provides access to the mathematical functions defined by the C standard.
These functions cannot be used with complex numbers; use the functions of the same name from the cmath module if you require support for complex numbers. The distinction between functions which
support complex numbers and those which don’t is made since most users do not want to learn quite as much mathematics as required to understand complex numbers. Receiving an exception instead of a
complex result allows earlier detection of the unexpected complex number used as a parameter, so that the programmer can determine how and why it was generated in the first place.
The following functions are provided by this module. Except when explicitly noted otherwise, all return values are floats.
10.2.1. Number-theoretic and representation functions
Note that frexp() and modf() have a different call/return pattern than their C equivalents: they take a single argument and return a pair of values, rather than returning their second return value
through an ‘output parameter’ (there is no such thing in Python).
For the ceil(), floor(), and modf() functions, note that all floating-point numbers of sufficiently large magnitude are exact integers. Python floats typically carry no more than 53 bits of precision
(the same as the platform C double type), in which case any float x with abs(x) >= 2**52 necessarily has no fractional bits.
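Both points can be checked interactively (written against modern CPython; the behavior of these particular calls is unchanged since 2.6):

```python
import math

# frexp() returns the pair (mantissa, exponent) directly,
# rather than writing the exponent through an output parameter as C does
m, e = math.frexp(8.0)
print(m, e)  # 0.5 4, since 8.0 == 0.5 * 2**4

# modf() likewise returns (fractional part, integer part), both as floats
frac, whole = math.modf(3.75)
print(frac, whole)  # 0.75 3.0

# At magnitude >= 2**52, a float's 53 bits of precision leave no room
# for fractional bits, so adding 0.5 is simply rounded away
big = 2.0 ** 52
print(big + 0.5 == big)  # True
print(math.modf(big))    # (0.0, 4503599627370496.0)
```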
10.2.2. Power and logarithmic functions
10.2.3. Trigonometric functions
10.2.4. Angular conversion
math.degrees(x)
Converts angle x from radians to degrees.
math.radians(x)
Converts angle x from degrees to radians.
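For example (whether the conversions round-trip exactly is an implementation detail, so the checks below use a tolerance):

```python
import math

# Round-trip a right angle between the two units
deg = math.degrees(math.pi / 2)
rad = math.radians(90.0)
assert abs(deg - 90.0) < 1e-9
assert abs(rad - math.pi / 2) < 1e-12
```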
10.2.5. Hyperbolic functions
10.2.6. Constants
math.pi
The mathematical constant pi.
math.e
The mathematical constant e.
CPython implementation detail: The math module consists mostly of thin wrappers around the platform C math library functions. Behavior in exceptional cases is loosely specified by the C standards,
and Python inherits much of its math-function error-reporting behavior from the platform C implementation. As a result, the specific exceptions raised in error cases (and even whether some arguments
are considered to be exceptional at all) are not defined in any useful cross-platform or cross-release way. For example, whether math.log(0) returns -Inf or raises ValueError or OverflowError isn’t
defined, and in cases where math.log(0) raises OverflowError, math.log(0L) may raise ValueError instead.
All functions return a quiet NaN if at least one of the args is NaN. Signaling NaNs raise an exception. The exception type still depends on the platform and libm implementation. It’s usually
ValueError for EDOM and OverflowError for errno ERANGE.
Changed in version 2.6: In earlier versions of Python the outcome of an operation with NaN as input depended on platform and libm implementation.
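The quiet-NaN behavior is easy to observe in CPython (a sketch; as noted above, the exact exception raised in error cases remains platform-dependent):

```python
import math

nan = float('nan')  # a quiet NaN

# Quiet NaNs propagate through the math functions rather than raising
print(math.sin(nan))               # nan
print(math.isnan(math.sqrt(nan)))  # True

# An EDOM-style domain error raises ValueError in CPython
try:
    math.sqrt(-1.0)
except ValueError as err:
    print('ValueError:', err)
```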
See also
Module cmath
Complex number versions of many of these functions.
The Problem of the Many
First published Thu Jan 9, 2003; substantive revision Thu Aug 31, 2023
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply
bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of
the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and circumstances permitting rain like a cloud, and generally has as good a claim to be a cloud as any other
object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything
whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans. Although this seems to be a merely technical
puzzle, even a triviality, a surprising range of proposed solutions has emerged, many of them mutually inconsistent. It is not even settled whether a solution should come from metaphysics, or from
philosophy of language, or from logic. Here we survey the options, and provide several links to the many topics related to the Problem.
1. Introduction
In his (1980), Peter Unger introduced the “Problem of the Many”. A similar problem appeared simultaneously in P. T. Geach (1980), but Unger’s presentation has been the most influential over recent
years. The problem initially looks like a special kind of puzzle about vague predicates, but that may be misleading. Some of the standard solutions to Sorites paradoxes do not obviously help here, so
perhaps the Problem reveals some deeper truths involving the metaphysics of material constitution, or the logic of statements involving identity.
The puzzle arises as soon as there is an object without clearly demarcated borders. Unger suggested that clouds are paradigms of this phenomenon, and recent authors such as David Lewis (1993) and
Neil McKinnon (2002) have followed him here. Here is Lewis’s presentation of the puzzle:
Think of a cloud—just one cloud, and around it a clear blue sky. Seen from the ground, the cloud may seem to have a sharp boundary. Not so. The cloud is a swarm of water droplets. At the
outskirts of the cloud, the density of the droplets falls off. Eventually they are so few and far between that we may hesitate to say that the outlying droplets are still part of the cloud at
all; perhaps we might better say only that they are near the cloud. But the transition is gradual. Many surfaces are equally good candidates to be the boundary of the cloud. Therefore many
aggregates of droplets, some more inclusive and some less inclusive (and some inclusive in different ways than others), are equally good candidates to be the cloud. Since they have equal claim,
how can we say that the cloud is one of these aggregates rather than another? But if all of them count as clouds, then we have many clouds rather than one. And if none of them count, each one
being ruled out because of the competition from the others, then we have no cloud. How is it, then, that we have just one cloud? And yet we do. (Lewis 1993: 164)
The paradox arises because in the story as told the following eight claims each seem to be true, but they are mutually inconsistent.
0. There are several distinct sets of water droplets s[k] such that for each such set, it is not clear whether the water droplets in s[k] form a cloud.
1. There is a cloud in the sky.
2. There is at most one cloud in the sky.
3. For each set s[k], there is an object o[k] that the water droplets in s[k] compose.
4. If the water droplets in s[i] compose o[i], and the water droplets in s[j] compose o[j], and the sets s[i] and s[j] are not identical, then the objects o[i] and o[j] are not identical.
5. If o[i] is a cloud in the sky, and o[j] is a cloud in the sky, and o[i] is not identical with o[j], then there are at least two clouds in the sky.
6. If any of these sets s[i] is such that its members compose a cloud, then for any other set s[j], if its members compose an object o[j], then o[j] is a cloud.
7. Any cloud is composed of a set of water droplets.
To see the inconsistency, note that by 1 and 7 there is a cloud composed of water droplets. Say this cloud is composed of the water droplets in s[i], and let s[j] be any other set whose members
might, for all we can tell, form a cloud. (Premise 0 guarantees the existence of such a set.) By 3, the water droplets in s[j] compose an object o[j]. By 4, o[j] is not identical to our original
cloud. By 6, o[j] is a cloud, and since it is transparently in the sky, it is a cloud in the sky. By 5, there are at least two clouds in the sky. But this is inconsistent with 2. A solution to the
paradox must provide a reason for rejecting one of the premises, or a reason to reject the reasoning that led us to the contradiction, or the means to live with the contradiction. Since none of the
motivations for believing in the existence of dialetheia apply here, let us ignore the last possibility. And since 0 follows directly from the way the story is told, let us ignore that option as
well. That leaves open eight possibilities.
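The derivation can be compressed into a short formal sketch (the symbols C and Comp are introduced here for readability; they are not from the article — C(x) abbreviates "x is a cloud in the sky" and Comp(s, o) abbreviates "the water droplets in s compose o"):

```latex
\begin{align*}
&\text{By 1 and 7:} && C(o_i) \wedge \mathrm{Comp}(s_i, o_i)
  && \text{some cloud is composed of droplets}\\
&\text{By 0 and 3:} && \mathrm{Comp}(s_j, o_j), \quad s_j \neq s_i
  && \text{another candidate set composes an object}\\
&\text{By 4:} && o_j \neq o_i\\
&\text{By 6:} && C(o_j)\\
&\text{By 5:} && \#\{x : C(x)\} \geq 2\\
&\text{By 2:} && \#\{x : C(x)\} \leq 1 \quad\Rightarrow\quad \bot
\end{align*}
```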
(The classification of the solutions here is slightly different from that in Chapter One of Hud Hudson’s “A Materialist Metaphysics of the Human Person.” But it has a deep debt to Hudson’s
presentation of the range of solutions, which should be clear from the discussion that follows.)
2. Nihilism
Unger’s original solution was to reject 1. The concept of a cloud involved, he thought, inconsistent presuppositions. Since those presuppositions were not satisfied, there are no clouds. This is a
rather radical move, since it applies not just to clouds, but to any kind of sortal for which a similar problem can be generated. And, Unger pointed out, this includes most sortals. As Lewis puts it,
“Think of a rusty nail, and the gradual transition from steel … to rust merely resting on the nail. Or think of a cathode, and its departing electrons. Or think of anything that undergoes evaporation
or erosion or abrasion. Or think of yourself, or any organism, with parts that gradually come loose in metabolism or excretion or perspiration or shedding of dead skin” (Lewis 1993: 165).
Despite Lewis’s presentation, the Problem of the Many is not a problem about change. The salient feature of these examples is that, in practice, change is a slow process. Hence whenever a cathode, or
a human, is changing, be it by shedding electrons, or shedding skin, there are some things that are not clearly part of the object, nor clearly not part of it. Hence there are distinct sets that each
have a good claim to being the set of parts of the cathode, or of the human, and that is what is important.
It would be profoundly counterintuitive if there were no clouds, or no cathodes, or no humans, and that is probably enough to reject the position, if any of the alternatives are not also equally
counterintuitive. It also, as Unger noted, creates difficulties for many views about singular thought and talk. Intuitively, we can pick out one of the objects composed of water droplets by the
phrase ‘that cloud’. But if it is not a cloud, then possibly we cannot. For similar reasons, we may not be able to name any such object, if we use any kind of reference-fixing description involving
‘cloud’ to pick it out from other objects composed of water droplets. If the Problem of the Many applies to humans as well as clouds, then by similar reasoning we cannot name or demonstrate any
human, or, if you think there are no humans, any human-like object. Unger was happy to take these results to be philosophical discoveries, but they are so counterintuitive that most theorists hold
that they form a reductio of his theory. Bradley Rettler (2018) argues that the nihilist has even more problems than this. Nihilism solves some philosophical problems, such as explaining which of 0–7
is false. But, he argues, for any problem it solves, there is a parallel problem which it does not solve, but rival solutions do solve. For instance, if you think of the problem here as a version of
a Sorites paradox, nihilism does not help with versions of the paradox which concern predicates applied to simples.
It is interesting that some other theories of vagueness have adopted positions resembling Unger’s in some respects, but without the extreme conclusions. Matti Eklund (2002) and Roy Sorensen (2001)
have argued that all vague concepts involve inconsistent presuppositions. Sorensen spells this out by saying that there are some inconsistent propositions that anyone who possesses a vague concept
should believe. In the case of a vague predicate F that is vulnerable to a Sorites paradox, the inconsistent propositions are that some things are Fs, some things are not Fs, any object that closely
resembles (in a suitable respect) something that is F is itself F, and that there are chains of ‘suitably resembling’ objects between an F and a non-F. Here the inconsistent propositions are that a
story like Lewis’s is possible, and in it 0 through 7 are true. Neither Eklund nor Sorensen conclude from this that nothing satisfies the predicates in question; rather they conclude that some
propositions that we find compelling merely in virtue of possessing the concepts from which they are constituted are false. So while they don’t adopt Unger’s nihilist conclusions, two contemporary
theorists agree with him that vague concepts are in some sense incoherent.
3. Overpopulation
A simple solution to the puzzle is to reject premise 2. Each of the relevant fusions of water droplets looks and acts like a cloud, so it is a cloud. As with the first option this leads to some very
counterintuitive results. In any room with at least one person, there are many millions of people. But this is not as bad as saying that there are no people. And perhaps we don’t even have to say the
striking claim. In many circumstances, we quantify over a restricted domain. We can say, “There’s no beer,” even when there is beer in some non-salient locales. With respect to some restricted
quantifier domains, it is true that there is exactly one person in a particular room. The surprising result is that with respect to other quantifier domains, there are many millions of people in that
room. The defender of the overpopulation theory will hold that this shows how unusual it is to use unrestricted quantifiers, not that there really is only one person in the room.
The overpopulation solution is not popular, but it is not without defenders. J. Robert G. Williams (2006) endorses it, largely because of a tension between the supervaluationist solution (that will
be discussed in section 7) and what supervaluationism says about the Sorites paradox. James Openshaw (2021) and Alexander Sandgren (forthcoming) argue that the overpopulation solution is true, and
each offer a theory of how singular thought about the cloud is possible given overpopulation. Sandgren also points out that there might be multiple sources of overpopulation. Even given a particular
set of water droplets, some metaphysical theories will say that there are multiple objects those droplets compose, which differ in their temporal or modal properties.
Hudson (2001: 39–44) draws out a surprising consequence of the overpopulation solution as applied to people. Assume that there are really millions of people just where we’d normally say there was
exactly one. Call that person Charlie. When Charlie raises her arm, each of the millions must also raise their arms, for the millions differ only in whether or not they contain some borderline skin
cells, not in whether their arm is raised or lowered. Normally, if two people are such that whenever one acts a certain way, then so must the other, we would say that at most one of them is acting
freely. So it looks like at most one of the millions of people around Charlie is free. There are a few possible responses here, though whether a defender of the overpopulation view will view this
consequence as being more counter-intuitive than other claims to which she is already committed, and hence whether it needs a special response, is not clear. There are some other striking, though not
always philosophically relevant, features of this solution. To quote Hudson:
Among the most troublesome are worries about naming and singular reference … how can any of us ever hope to successfully refer to himself without referring to his brothers as well? Or how might
we have a little private time to tell just one of our sons of our affection for him without sharing the moment with uncountably many of his brothers? Or how might we follow through on our vow to
practice monogamy? (Hudson 2001: 39)
4. Brutalism
As Unger originally states it, the puzzle relies on a contentious principle of mereology. In particular, it assumes mereological Universalism, the view that for any objects, there is an object that
has all of them as its fusion. (That is, it has each of those objects as parts, and has no parts that do not overlap at least one of the original objects.) Without this assumption, the Problem of the
Many may have an easy solution. The cloud in the sky is the object up there that is a fusion of water droplets. There are many other sets of water droplets, other than the set of water droplets that
compose the cloud, but since the members of those sets do not compose an object, they do not compose a cloud.
There are two kinds of theories that imply that only one of the sets of water droplets is such that there exists a fusion of its atoms. First, there are principled restrictions on composition,
theories that say that the xs compose an object y iff the xs are F, for some natural property F. Secondly, there are brutal theories, which say it’s just a brute fact that in some cases the xs
compose an object, and in others they do not. It would be quite hard to imagine a principled theory solving the Problem of the Many, since it is hard to see what the principle could be. (For a more
detailed argument for this, set against a somewhat different backdrop, see McKinnon 2002.) But a brutal theory could work. And such a theory has been defended. Ned Markosian (1998) argues that not
only does brutalism, the doctrine that there are brute facts about when the xs compose a y, solve the Problem of the Many, the account of composition it implies fits more naturally with our
intuitions about composition.
It seems objectionable, in some not easy-to-pin-down way, to rely on brute facts in just this way. Here is how Terrence Horgan puts the objection:
In particular, a good metaphysical theory or scientific theory should avoid positing a plethora of quite specific, disconnected, sui generis, compositional facts. Such facts would be ontological
danglers; they would be metaphysically queer. Even though explanation presumably must bottom out somewhere, it is just not credible—or even intelligible—that it should bottom out with specific
compositional facts which themselves are utterly unexplainable and which do not conform to any systematic general principles. (Horgan 1993: 694–5)
On the other hand, this kind of view does provide a particularly straightforward solution to the Problem of the Many. As Markosian notes, if we have independent reason to view favourably the idea
that facts about when some things compose an object are brute facts, which he thinks is provided by our intuitions about cases of composition and non-composition, the very simplicity of this solution
to the Problem of the Many may count as an argument in favour of brutalism.
5. Relative Identity
Assume that the brutalist is wrong, and that for every set of water droplets, there is an object those water droplets compose. Since that object looks for all the world like a cloud, we will say it
is a cloud. The fourth solution accepts those claims, but denies that there are many clouds. It is true that there are many fusions of atoms, but these are all the same cloud. This view adopts a
position most commonly associated with P. T. Geach (1980), that two things can be the same F but not the same G, even though they are both Gs. To see the motivation for that position, and a
discussion of its strengths and weaknesses, see the article on relative identity.
Here is one objection that many have felt is telling against the relative identity view: Let w be a water droplet that is in s[1] but not s[2]. The relative identity solution says that the droplets
in s[1] compose an object o[1], and the droplets in s[2] compose an object o[2], and though o[1] and o[2] are different fusions of water droplets, they are the same cloud. Call this cloud c. If o[1]
is the same cloud as o[2], then presumably they have the same properties. But o[1] has the property of having w as a part, while o[2] does not. Defenders of the relative identity theory here deny the
principle that if two objects are the same F, they have the same properties. Many theorists find this denial to amount to a reductio of the view.
6. Partial Identity
Even if o[1] and o[2] exist, and are clouds, and are not the same cloud, it does not immediately follow that there are two clouds. If we analyze “There are two clouds” as “There is an x and a y such
that x is a cloud and y is a cloud, and x is not the same cloud as y” then the conclusion will naturally follow. But perhaps that is not the correct analysis of “There are two clouds.” Or, more
cautiously, perhaps it is not the correct analysis in all contexts. Following some suggestions of D. M. Armstrong’s (Armstrong 1978, vol. 2: 37–8), David Lewis suggests a solution along these lines.
The objects o[1] and o[2] are not the same cloud, but they are almost the same cloud. And in everyday circumstances (AT) is a good-enough analysis of “There is one cloud”
(AT) There is a cloud, and all clouds are almost identical with it.
As Lewis puts it, we ‘count by almost-identity’ rather than by identity in everyday contexts. And when we do, we get the correct result that there is one cloud in the sky. Lewis notes that there are
other contexts in which we count by some criteria other than identity.
If an infirm man wishes to know how many roads he must cross to reach his destination, I will count by identity-along-his-path rather than by identity. By crossing the Chester A. Arthur Parkway
and Route 137 at the brief stretch where they have merged, he can cross both by crossing only one road. (Lewis 1976: 27)
There are two major objections to this theory. First, as Hudson notes, even if we normally count by almost-identity, we know how to count by identity, and when we do it seems that there is one cloud
in the sky, not many millions. A defender of Lewis’s position may say that the only reason this seems intuitive is that it is normally intuitive to say that there is only one cloud in the sky. And
that intuition is respected! More contentiously, it may be argued that it is a good thing to predict that when we count by identity we get the result that there are millions of clouds. After all, the
only time we’d do this is when we’re doing metaphysics, and we have noted that in the metaphysics classroom, there is some intuitive force to the argument that there are millions of clouds in the
sky. It would be a brave philosopher to endorse this as a virtue of the theory, but it may offset some of the costs.
Secondly, something like the Problem of the Many can arise even when the possible objects are not almost identical. Lewis notes this objection, and provides an illustrative example to back it up. A
similar kind of example can be found in W. V. O. Quine’s Word and Object (1960). Lewis’s example is of a house with an attached garage. It is unclear whether the garage is part of the house or an
external attachment to it. So it is unclear whether the phrase ‘Fred’s house’ denotes the basic house, call it the home, or the fusion of the home and the garage. What is clear is that there is
exactly one house here. However, the home might be quite different from the fusion of the home and the garage. It will probably be smaller and warmer, for example. So the home and the home–garage
fusion are not even almost identical. Quine’s example is of something that looks, at first, to be a mountain with two peaks. On closer inspection we find that the peaks are not quite as connected as
first appeared, and perhaps they could be properly construed as two separate mountains. What we could not say is that there are three mountains here, the two peaks and their fusion, but since neither
peak is almost identical to the other, or to the fusion, this is what Lewis’s solution implies.
But perhaps it is wrong to understand almost-identity in this way. Consider another example of Lewis’s, one that Dan López de Sa (2014) argues is central to a Lewisian solution to the problem.
You draw two diagonals in a square; you ask me how many triangles; I say there are four; you deride me for ignoring the four large triangles and counting only the small ones. But the joke is on
you. For I was within my rights as a speaker of ordinary language, and you couldn’t see it because you insisted on counting by strict identity. I meant that, for some w, x, y, z, (1) w, x, y, and
z are triangles; (2) w and x are distinct, and … and so are y and z (six clauses); (3) for any triangle t, either t and w are not distinct, or … or t and z are not distinct (four clauses). And by
‘distinct’ I meant non-overlap rather than non-identity, so what I said was true. (Lewis 1993, fn. 9)
One might think this is the general way to understand counting sentences in ordinary language. So There is exactly one F gets interpreted as There is an F, and no F is wholly distinct from it; and
There are exactly two Fs gets interpreted as There are wholly distinct things x and y that are both F, and no F is wholly distinct from both of them, and so on. Lewis writes as if this is an
explication of the almost-identity proposal, but this is at best misleading. A house with solar panels partially overlaps the city’s electrical grid, but it would be very strange to call them
almost-identical. It sounds like a similar, but distinct, proposed solution.
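On the counting-by-distinctness reading, the counting sentences come out as follows (a schematic rendering, with O(x, y) for "x overlaps y"; the notation is again not from the article):

```latex
\begin{align*}
\text{``There is exactly one } F\text{''} &\;\leadsto\;
  \exists x\,\bigl(Fx \wedge \forall z\,(Fz \rightarrow O(z, x))\bigr)\\
\text{``There are exactly two } F\text{s''} &\;\leadsto\;
  \exists x\,\exists y\,\bigl(Fx \wedge Fy \wedge \neg O(x, y)\\
&\qquad\quad \wedge\; \forall z\,(Fz \rightarrow O(z, x) \vee O(z, y))\bigr)
\end{align*}
```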
However we understand the proposal, López de Sa notes that it has a number of virtues. It seems to account for the puzzle involving the house, what he calls the Problem of the Two. If in general
counting involves distinctness, then we have a good sense in which there is one cloud in the sky, and Fred owns one house.
There still remain two challenges for this view. First, one could still follow Hudson and argue that even if we ordinarily understand counting sentences this way, we do still know how to count by
identity. And when we do, it seems that there is just one cloud, not millions of them. Second, it isn’t that clear that we always count by distinctness, in the way López de Sa suggests. If I say
there are three ways to get from my house to my office, I don’t mean to say that these three are completely distinct. Indeed, they probably all start with going out my door, down my driveway etc.,
and end by walking up the stairs into my office. So the general claim about how to understand counting sentences seems false.
C. S. Sutton (2015) argued that we can get around something like the second of these problems if we do two things. First, the rule isn’t that we don’t normally quantify over things that overlap, just
that we don’t normally quantify over things that substantially overlap. We can look at a row of townhouses and say that there are seven houses there even if the walls of the house overlap. Second,
the notion of overlap here is not sensitive to the quantity of material that the objects have in common, but to their functional role. If two objects play very different functional roles, she argues
that we will naturally count them as two, even if they have a lot of material in common. This could account for a version of the getting to work example where the three different ways of getting to
work only differ on how to get through one small stretch in the middle. That is, if there are three (wholly distinct ways) to get from B to C, and the way to get from A to D is to go A-B-C-D, then
there is a good sense in which there are three ways to get from A to D. Sutton’s theory explains how this could be true even if the B-C leg is a short part of the trip.
David Liebesman (2020) argued for a different way of implementing this kind of theory. He argues that the kinds of constraints that Lewis, López de Sa and Sutton have suggested don’t get incorporated
into our theory of counting, but into the proper interpretation of the nouns involved in counting sentences. It helps to understand Liebesman’s idea with an example.
There is, we’ll presumably all agree, just one colour in Yves Klein’s Blue Monochrome. (It says so in the title.) But Blue Monochrome has blue in it, and it has ultramarine in it, and blue doesn’t
equal ultramarine. What’s gone on? Well, says Liebesman, whenever a noun occurs under a determiner, it needs an interpretation. That interpretation will almost never be maximal. When we ask how many
animals a person has in their house, we typically don’t mean to count the insects. When we say every book is on the shelf, we don’t mean every book in the universe. And typically, the relevant
interpretation of the determiner phrase (like ‘every book’, or ‘one cloud’) will exclude overlapping objects. Typically but not, says Liebesman, always. We can say, for instance, that every shade of
a colour is a colour, and in that sentence ‘colour’ includes both blue and ultramarine.
This offers a new solution to the Problem of the Many. He argues that all of 0 to 7 are true, but they are true for different interpretations of phrases including the word ‘cloud’. When we interpret
it in a way that rules out overlap, then 6 is false. When we interpret it in the maximal way, like the way we interpret ‘colour’ in Every shade of a colour is a colour, then 2 is false. But to get
the inconsistency, we have to equivocate. It’s an easy equivocation to make, since each of the meanings is one that we frequently use.
7. Vagueness
Step 6 in the initial setup of the problem says that if any of the o[i] is a cloud, then they all are. There are three important arguments for this premise, two of them presented explicitly by Unger,
and the other by Geach. Two of the arguments seem to be faulty, and the third can be rejected if we adopt some familiar, though by no means universally endorsed, theories of vagueness.
7.1 Argument from Duplication
The first argument, due essentially to Geach, runs as follows. Geach’s presentation did not involve clouds, but the principles are clearly stated in his version of the argument. (The argument shows
that if an o[k] is a cloud for arbitrary k, we can easily generalize to the claim that for every i, o[i] is a cloud.)
D1. If all the water droplets not in s[k] did not exist, then o[k] would be a cloud.
D2. Whether o[k] is a cloud does not depend on whether things distinct from it exist.
D3. So, o[k] is a cloud.
D2 implies that being a cloud is an intrinsic property. The idea is that by changing the world outside the cloud, we do not change whether or not it is a cloud. There is, however, little reason to
believe this is true. And given that it leads to a rather implausible conclusion, that there are millions of clouds where we think there is one, there is some reason to believe it is false. We can
argue directly for the same conclusion. Assume many more water droplets coalesce around our original cloud. There is still one cloud in the sky, but it determinately includes more water droplets than
the original cloud. The fusion of those water droplets exists, and we may assume that it did not change its intrinsic properties, but it is now a part of a cloud, rather than a cloud. Even if something looks like a cloud, smells like a cloud and rains like a cloud, it need not be a cloud; it may only be a part of a cloud.
7.2 Argument from Similarity
Unger’s primary argument takes a quite different tack.
S1. For some j, o[j] is a typical cloud.
S2. Anything that differs minutely from a typical cloud is a cloud.
S3. o[k] differs minutely from o[j].
S4. o[k] is a cloud.
Since we only care about the conditional if o[j] is a cloud, so is o[k], it is clearly acceptable to assume that o[j] is a cloud for the sake of the argument. And S3 is guaranteed to be true by the
setup of the problem. The main issue then is whether S2 is true. As Hudson notes, there appear to be some clear counterexamples to it. The fusion of a cloud with one of the water droplets in my
bathtub is clearly not a cloud, but by most standards it differs minutely from a cloud, since there is only one droplet of water difference between them.
7.3 Argument from Meaning
The final argument is not set out as clearly, but it has perhaps the most persuasive force. Unger says that if exactly one of the o[i] is a cloud, then there must be a ‘selection principle’ that
picks it over the others. But it is not clear just what kind of selection principle that could be. The underlying argument seems to be something like this:
M1. For some j, o[j] is a cloud.
M2. If o[j] is a cloud and the rest of the o[i] are not, then some principle selects o[j] to be the unique cloud.
M3. There is no principle that selects o[j] to be the unique cloud.
M4. At least one of the other o[i] is a cloud.
The idea behind M2 is that word meanings are not brute facts about reality. As Jerry Fodor put it, “if aboutness is real, it must be really something else” (Fodor 1987: 97). Something makes it the
case that o[j] is the unique thing (around here) that satisfies our term ‘cloud’. Maybe that could be because o[j] has some unique properties that make it suitable to be in the denotation of ordinary
terms. Or maybe it is something about our linguistic practices. Or maybe it is some combination of these things. But something must determine it, and whatever it is, we can (in theory) say what that
is, by giving some kind of principled explanation of why o[j] is the unique cloud.
It is at this point that theories of vagueness can play a role in the debate. Two of the leading theories of vagueness, epistemicism and supervaluationism, provide principled reasons to reject this
argument. The epistemicist says that there are semantic facts that are beyond our possible knowledge. Arguably we can only know where a semantic boundary lies if that boundary was fixed by our use or
by the fact that one particular property is a natural kind. But, say the epistemicists, there are many other boundaries that are not like this, such as the boundary between the heaps and the
non-heaps. Here we have a similar kind of situation. It is vague just which of the o[i] is a cloud. What that means is that there is a fact about which of them is a cloud, but we cannot possibly know
it. The epistemicist is naturally read as rejecting the very last step in the previous paragraph. Even if something (probably our linguistic practices) makes it the case that o[j] is the unique
cloud, that need not be something we can know and state.
The supervaluationist response is worth spending more time on here, both because it engages directly with the intuitions behind this argument and because two of its leading proponents (Vann McGee and
Brian McLaughlin, in their 2001) have responded directly to this argument using the supervaluationist framework. Roughly (and for more detail see the section on supervaluations in the entry on
vagueness) supervaluationists say that whenever some terms are vague, there are ways of making them more precise consistent with our intuitions on how the terms behave. So, to use a classic case,
‘heap’ is vague, which to the supervaluationist means that there are some piles of sand that are neither determinately heaps nor determinately non-heaps, and a sentence saying that that object is a
heap is neither determinately true nor determinately false. However, there are many ways to extend the meaning of ‘heap’ so it becomes precise. Each of these ways of making it precise is called a
precisification. A precisification is admissible iff every sentence that is determinately true (false) in English is true (false) in the precisification. So if a is determinately a heap, b is
determinately not a heap and c is neither determinately a heap nor determinately not a heap, then every precisification must make ‘a is a heap’ true and ‘b is a heap’ false, but some make ‘c is a
heap’ true and others make it false. To a first approximation, to be admissible a precisification must assign all the determinate heaps to the extension of ‘heap’ and assign none of the determinate
non-heaps to its extension, but it is free to assign or not assign things in the ‘penumbra’ between these groups to the extension of ‘heap’. But this is not quite right. If d is a little larger than
c, but still not determinately a heap, then the sentence “If c is a heap so is d” is intuitively true. As it is often put, following Kit Fine (1975), a precisification must respect ‘penumbral
connections’ between the borderline cases. If d has a better case for being a heap than c, then a precisification cannot make c a heap but not d. These penumbral connections play a crucial role in
the supervaluationist solution to the Problem of the Many. Finally, a sentence is determinately true iff it is true on all admissible precisifications, determinately false iff it is false on all
admissible precisifications.
In the original example, described by Lewis, the sentence “There is one cloud in the sky” is determinately true. None of the sentences “o[1] is a cloud”, “o[2] is a cloud” and so on are determinately
true. So a precisification can make each of these either true or false. But, if it is to preserve the fact that “There is one cloud in the sky” is determinately true, it must make exactly one of
those sentences true. McGee and McLaughlin suggest that this combination of constraints lets us preserve what is plausible about M2, without accepting that it is true. The term ‘cloud’ is vague;
there is no fact of the matter as to whether its extension includes o[1] or o[2] or o[3] or …. If there were such a fact, there would have to be something that made it the case that it included o[j]
and not o[k], and as M3 correctly points out, no such facts exist. But this is consistent with saying that its extension does contain exactly one of the o[i]. The beauty of the supervaluationist
solution is that it lets us hold these seemingly contradictory positions simultaneously. We also get to capture some of the plausibility of S2—it is consistent with the supervaluationist position to
say that anything similar to a cloud is not determinately not a cloud.
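The formal core of this machinery can be captured in a few lines. The following Python sketch is a toy model, not anything drawn from the literature: the candidate names and the choice of five candidates are illustrative, and the penumbral connection "there is exactly one cloud" is simply built in by making only single-candidate extensions of ‘cloud’ admissible.

```python
# Toy supervaluationist model of the cloud case. Candidates o1..o5 are
# the massively overlapping fusions of droplets; the penumbral
# connection "there is exactly one cloud" is built in by making only
# single-candidate extensions of 'cloud' admissible.
candidates = ["o1", "o2", "o3", "o4", "o5"]
precisifications = [{c} for c in candidates]

def truth_value(sentence):
    """Determinately true iff true on all admissible precisifications,
    determinately false iff false on all, otherwise indeterminate."""
    values = {sentence(ext) for ext in precisifications}
    if values == {True}:
        return "determinately true"
    if values == {False}:
        return "determinately false"
    return "indeterminate"

# "There is exactly one cloud in the sky": true on every precisification.
exactly_one_cloud = lambda ext: len(ext) == 1
print(truth_value(exactly_one_cloud))   # determinately true

# But no instance "o_i is a cloud" is determinately true or false.
for c in candidates:
    print(c, truth_value(lambda ext, c=c: c in ext))   # each: indeterminate
```

The model reproduces the distinctive supervaluationist combination: the existential sentence is determinately true even though every one of its instances is indeterminate.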
Penumbral connections also let us explain some other puzzling situations. Imagine I point cloudwards and say, “That is a cloud.” Intuitively, what I have said is true, even though ‘cloud’ is vague,
and so is my demonstrative ‘that’. (To see this, note that there’s no determinate answer as to which of the o[i] it picks out.) On different precisifications, ‘that’ picks out different o[i]. But on
every precisification it picks out the o[i] that is in the extension of ‘cloud’, so “That is a cloud” comes out true as desired. Similarly, if I named the cloud ‘Edgar’, then a similar trick lets it
be true that “Edgar” is vague, while “Edgar is a cloud” is determinately true. So the supervaluationist solution lets us preserve many of the intuitions about the original case, including the
intuitions that seemed to underwrite M2, without conceding that there are millions of clouds. But there are a few objections to this package.
• Objection: There are many telling objections to supervaluationist theories of vagueness.
□ Reply: This may be true, but it would take us well beyond the scope of this entry to outline them all. See the entries on vagueness (the section on supervaluation) and the Sorites Paradox for
more detail.
• Objection: The supervaluationist solution makes some existentially quantified sentences, like "There is a cloud in the sky", determinately true even though no instance of them is determinately true.
□ Reply: As Lewis says, this is odd, but no odder than things that we learn to live with in other contexts. Lewis compares this to “I owe you a horse, but there is no particular horse that I
owe you.”
• Objection: The penumbral connections here are not precisely specified. What is the rule that says how much overlap is required before two objects cannot both be clouds?
□ Reply: It is true that the connections are not precisely specified. It would be quite hard to carefully analyze ‘cloud’ to work out exactly what they are. But that we cannot say exactly what
the rules are is no reason for saying no such rules exist, any more than our inability to say exactly what knowledge is provides a reason for saying that no one ever knows anything.
Scepticism can’t be proven that easily.
• Objection: The penumbral connections appealed to here are left unexplained. At a crucial stage in the explanation, it seems to be just assumed that the problem can be solved, and that it is a
determinate truth that there is one cloud in the sky.
□ Reply 1: We have to start somewhere in philosophy. This kind of reply can be spelled out in two ways. There is a ‘Moorean’ move that says that the premise that there is one cloud in the sky
is more plausible than the premises that would have to be used in an argument against supervaluationism. Alternatively, it might be claimed that the main argument for supervaluationism is an
inference to the best explanation. In that case, the intuition that there is exactly one cloud in the sky, but it is indeterminate just which object it is, is something to be explained, not
something that has to be proven. This is the kind of position defended by Rosanna Keefe in her book Theories of Vagueness. Although Keefe does not apply this directly to the Problem of the
Many, the way to apply her position to the Problem seems clear enough.
□ Reply 2: The penumbral connections we find for most words are generated by the inferential role provided by the meaning of the terms. It is because the inference from “This pile of sand is a
heap”, and “That pile of sand is slightly larger than this one, and arranged roughly the same way,” to “That pile of sand is a heap” is generally acceptable that precisifications which make
the premises true and the conclusions false are inadmissible. (We have to restrict this inferential rule to the case where ‘this’ and ‘that’ are ordinary demonstratives, and not used to pick
out arbitrary fusions of grains, or else we get bizarre results for reasons that should be familiar by this point in the story.) And it isn’t too hard to specify the inferential rule here.
The inference from “o[j] is a cloud” and “o[j] and o[k] massively overlap” to “o[k] is a cloud” is just as acceptable as the above inference involving heaps. Indeed, it is part of the meaning
of ‘cloud’ that this inference is acceptable. (Much of this reply is drawn from the discussion of ‘maximal’ predicates in Sider 2001 and 2003, though since Sider is no supervaluationist, he
would not entirely endorse this way of putting things.)
• Objection: The second reply cannot explain the existence of penumbral connections between ‘cloud’ and demonstratives like ‘that’ and names like ‘Edgar’. It would only explain the existence of
those penumbral connections if it was part of the meaning of names and demonstratives that they fill some kind of inferential role. But this is inconsistent with the widely held view that
demonstratives and names are directly referential. (For more details on debates about the meanings of names, see the entries on propositional attitude reports and singular propositions.)
□ Reply: One response to this would be to deny the view that names and demonstratives are directly referential. Another would be to deny that inferential roles provide the only penumbral
constraints on precisifications. Weatherson 2003b sketches a theory that does exactly this. The theory draws on David Lewis’s response to some quite different work on semantic indeterminacy.
As many authors (Quine 1960, Putnam 1981, Kripke 1982) showed, the dispositions of speakers to use terms are not fine-grained enough to make the language as precise as we ordinarily think it
is. As far as our usage dispositions go, ‘rabbit’ could mean undetached rabbit part, ‘vat’ could mean vat image, and ‘plus’ could mean quus. (Quus is a function defined over pairs of numbers
that yields the sum of the two numbers when they are both small, and 5 when one is sufficiently large.) But intuitively our language is not that indeterminate: ‘plus’ determinately does not
mean quus.
Lewis (1983, 1984) suggested the way out here is to posit a notion of ‘naturalness’. Sometimes a term t denotes the concept C[1] rather than C[2] not because we are disposed to use t as if it
meant C[1] rather than C[2], but simply because C[1] is a more natural concept. ‘Plus’ means plus rather than quus simply because plus is more natural than quus. Something like the same story
applies to names and demonstratives. Imagine I point in the direction of Tibbles the cat and say, “That is Edgar’s favourite cat.” There is a way of systematically (mis)interpreting all my
utterances so ‘that’ denotes the Tibbles-shaped region of space-time exactly one metre behind Tibbles. (We have to reinterpret what ‘cat’ means to make this work, but the discussions of
semantic indeterminacy in Quine and Kripke make it clear how to do this.) So there’s nothing in my usage dispositions that makes ‘that’ mean Tibbles, rather than the region of space-time that
‘follows’ him around. But because Tibbles is more natural than that region of space-time, ‘that’ does pick out Tibbles. It is the very same naturalness that makes ‘cat’ denote a property that
Tibbles (and not the trailing region of space-time) satisfies that makes ‘that’ denote Tibbles, a fact that will become important below.
The same kind of story can be applied to the cloud. It is because the cloud is a more natural object than the region of space-time a mile above the cloud that our demonstrative ‘that’ denotes
the cloud and not the region. However, none of the o[i] are more natural than any other, so there is still no fact of the matter as to whether ‘that’ picks out o[j] or o[k]. Lewis’s theory
does not eliminate all semantic indeterminacy; when there are equally natural candidates to be the denotation of a term, and each of them is consistent with our dispositions to use the term,
then the denotation of the term is simply indeterminate between those candidates.
Weatherson’s theory is that the role of each precisification is to arbitrarily make one of the o[i] more natural than the rest. Typically, it is thought that the denotation of a term
according to a precisification is determined directly. It is a fact about a precisification P that, according to it, ‘cloud’ denotes property c[1]. On Weatherson’s theory this is not the
case. What the precisification does is provide a new, and somewhat arbitrary, standard of naturalness, and the content of the terms according to the precisification is then determined by
Lewis’s theory of content. The denotations of ‘cloud’ and ‘that’ according to a precisification P are those concepts and objects that are the most natural-according-to-P of the concepts and
objects that we could be denoting by those terms, for all one can tell from the way the terms are used. The coordination between the two terms, the fact that on every precisification ‘that’
denotes an object in the extension of ‘cloud’ is explained by the fact that the very same thing, naturalness-according-to-P, determines the denotation of ‘cloud’ and of ‘that’.
• Objection (From Stephen Schiffer 1998): The supervaluationist account cannot handle speech reports involving vague names. Imagine that Alex points cloudwards and says, “That is a cloud.” Later
Sam points towards the same cloud and says, “Alex said that that is a cloud.” Intuitively, Sam’s utterance is determinately true. But according to the supervaluationist, it is only determinately
true if it is true on every precisification. So it must be true that Alex said that o[1] is a cloud, that Alex said that o[2] is a cloud, and so on, since these are all precisifications of “Alex
said that that is a cloud.” But Alex did not say all of those things, for if she did she would be committed to saying that there are millions of clouds in the sky, and of course she is not, as
the supervaluationists have been arguing.
□ Reply: There is a little logical slip here. Let P[i] be a precisification of Sam’s word ‘that’ that makes it denote o[i]. All the supervaluationist who holds that Sam’s utterance is
determinately true is committed to is that for each i, according to P[i], Alex said that o[i] is a cloud. And this will be true if the denotation of Alex’s word ‘that’ is also o[i] according
to P[i]. So as long as there is a penumbral connection between Sam’s word ‘that’ and Alex’s word ‘that’, the supervaluationist avoids the objection. Such a connection may seem mysterious at
first, but note that Weatherson’s theory predicts that just such a penumbral connection obtains. So if that theory is acceptable, then Schiffer’s objection misfires.
• Objection (From Neil McKinnon 2002 and Thomas Sattig 2013): It is part of our notion of a mountain that facts about mountain-hood are not basic. If something is a mountain, there are facts in
virtue of which it is a mountain, and nearby things are not. Yet on each precisification of ‘mountain’, this won’t be true; it will be completely arbitrary which fusion of rocks is a mountain.
□ Reply: It is true that on any precisification there will be no principled reason why this fusion of rocks is a mountain, and another is not. And it is true that there should be such a
principled reason; mountainhood facts are not basic. But that problem can be avoided by the theory that denies that “the supervaluationist rule [applies] to any statement whatever, never mind
that the statement makes no sense that way” (Lewis 1993, 173). Lewis’s idea, or at least the application of Lewis’s idea to this puzzle, is that we know how to understand the idea that
mountainhood facts are non-arbitrary: we understand it as a claim that there is some non-arbitrary explanation of which precisifications of ‘mountain’ are and are not admissible. If we must
apply the supervaluationist rule to every statement, including the statement that it is not arbitrary which things are mountains, this understanding is ruled out. Lewis’s response is to deny
that the rule must always be applied. As long as there is some sensible way to understand the claim, we don’t have to insist on applying the supervaluationist machinery to it.
That said, this is likely to be a problem for everyone (even a theorist like Lewis, who uses the supervaluationist machinery only when it is helpful). Sattig
himself claims to avoid the problem by making the mountain be a maximal fusion of candidates. But for any plausible mountain, it will be vague and somewhat arbitrary what the boundary is
between being a mountain-candidate and not being one. The lower boundaries of mountains are not, in practice, clearly marked. Similarly, there will be some arbitrariness in the boundaries
between the admissible and inadmissible precisifications of ‘mountain’. We may have to live with some arbitrariness.
• Objection (From J. Robert G. Williams 2006): When we look at an ordinary mountain, there definitely is (at least) one mountain in front of us. That seems clear. The vagueness solution respects
this fact. But in many contexts, people tend to systematically confuse “Definitely, there is an F” with “There is a definite F.” Indeed, the standard explanation of why Sorites arguments seem,
mistakenly, to be attractive is that this confusion gets made. Yet on the vagueness account, there is no definite mountain; all the candidates are borderline cases. So by parity of reasoning, we
should expect intuition to deny that there is definitely a mountain. And intuition does not deny that, it loudly confirms it. At the very least, this shows a tension between the standard account
of the Sorites paradox, and the vagueness solution to the Problem of the Many.
□ Reply: This is definitely a problem for the views that many philosophers have put forward. As Williams stresses, it isn’t on its own a problem for the vagueness solution to the Problem of the
Many, but it is a problem for the conjunction of that solution with a widely endorsed, and independently plausible, explanation of the Sorites paradox. In his dissertation, Nicholas K. Jones
(2010) argues that the right response is to give up the idea that speakers typically confuse “Definitely, there is an F” with “There is a definite F”, and instead use a different resolution
of the Sorites.
Space prevents a further discussion of all possible objections to the supervaluationist account, but interested readers are particularly encouraged to look at Neil McKinnon’s objection to the account
(see the Other Internet Resources section), which suggests that distinctive problems arise for the supervaluationist when there really are two or more clouds involved.
Even if the supervaluationist solution to the Problem of the Many has responses to all of the objections that have been levelled against it, some of those objections rely on theories that are
contentious and/or underdeveloped. So it is far from clear at this stage how well the supervaluationist solution, or indeed any solution based on vagueness, to the Problem of the Many will do in
future years.
8. Rethinking Parthood
Some theorists have argued that the underlying cause of the problem is that we have the wrong theory about the relation between parts and wholes. Peter van Inwagen (1990) argues that the problem is
that we have assumed that the parthood relation is determinate. We have assumed that it is always determinately true or determinately false that one object is a part of another. According to van
Inwagen, sometimes neither of these options applies. He thinks that we need to adopt some kind of fuzzy logic when we are discussing parts and wholes. It can be true to degree 0.7, for example, that
one object is part of another. Given these resources, van Inwagen says, we are free to conclude that there is exactly one cloud in the sky, and that some of the ‘outer’ water droplets are part of it
to a degree strictly between 0 and 1. This lets us keep the intuition that it is indeterminate whether these outlying water droplets are members of the cloud without accepting that there are millions
of clouds. Note that this is not what van Inwagen would say about this version of the paradox, since he holds that some simples only constitute an object when that object is alive. For van Inwagen,
as for Unger, there are no clouds, only cloud-like swarms of atoms. But van Inwagen recognises that a similar problem arises for cats, or for people, two kinds of things that he does believe exist,
and he wields this vague constitution theory to solve the problems that arise there.
Traditionally, many philosophers thought that such a solution was downright incoherent. A tradition stretching back to Bertrand Russell (1923) and Michael Dummett (1975) held that vagueness was
always and everywhere a representational phenomenon. From this perspective, it didn’t make sense to talk about it being vague or indeterminate whether a particular droplet was part of a particular
cloud. But this traditional view has come under a lot of pressure in recent years; see Barnes (2010) for one of the best challenges, and Sorensen (2013, section 8) for a survey of more work. So let
us assume here it is legitimate to talk about the possibility that parthood itself, and not just our representation of it, is vague. As Hudson (2001) notes though, it is far from clear just how the
appeal to fuzzy logic is meant to help here. Originally it was clear for each of the n water droplets whether it was a member of the cloud to degree 1 or degree 0. So there were 2^n candidate clouds, and the Problem of the Many is the problem of finding a way to preserve the intuition that there is just one cloud when faced with all these objects. It is unclear how increasing the range of possible relationships between each particle and the cloud from 2 to continuum-many should help here, for now it seems there are at least continuum-many cloud-like objects to choose between, one for each function from the n droplets to [0, 1], and we need a way of saying exactly one of them is a cloud. Assume that some droplet is part of the cloud to degree 0.7. Now consider the object (or perhaps possible object) that is just like the
cloud, except this droplet is only part of it to degree 0.6. Does that object exist, and is it a cloud? Van Inwagen says, in a way reminiscent of Markosian’s brutal composition solution, that such an
‘object’ does not even exist.
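The counting point behind Hudson's objection can be made concrete. The sketch below is purely illustrative (the number of borderline droplets, the degree-grid sizes, and the droplet names d1, d2 are all made up for the example): crisp parthood already gives 2^n candidates, and admitting degrees of parthood only multiplies them, since even a finite grid of k degree-values yields k^n degreed candidates, and the full interval [0, 1] yields continuum-many.

```python
from fractions import Fraction

# Hudson's worry, in numbers. With crisp parthood, each of n borderline
# droplets is either in or out of the cloud, giving 2**n candidates.
n = 10                       # illustrative number of borderline droplets
print(2 ** n)                # 1024 crisp candidates

# Degrees of parthood enlarge, not shrink, the candidate space: even a
# finite grid of k degree-values between 0 and 1 gives k**n candidates.
for k in (2, 5, 11):         # k = 11 is the grid {0, 0.1, ..., 1.0}
    print(k, k ** n)         # candidate count grows as k**n

# And a favoured degreed cloud has arbitrarily close rivals, e.g. one
# that differs only by giving droplet d1 degree 0.6 rather than 0.7.
favoured = {"d1": Fraction(7, 10), "d2": Fraction(1, 2)}
rival = dict(favoured, d1=Fraction(6, 10))
print(favoured != rival)     # True: a distinct degreed candidate
```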
A different kind of solution is offered by Mark Johnston (1992) and E. J. Lowe (1982, 1995). Both of them suggest that the key to solving the Problem is to distinguish cloud-constituters from clouds.
They say it is a category mistake to identify clouds with any fusion of water droplets, because they have different identity conditions. The cloud could survive the transformation of half its
droplets into puddles on the footpath (or whatever kind of land it happens to be raining over); it would just be a smaller cloud. The fusion could not. As Johnston says, “Hence Unger’s insistent and
ironic question ‘But which of o[1], o[2], o[3], … is our paradigm cloud c?’ has as its proper answer ‘None’” (1992: 100, numbering slightly altered).
Lewis (1993) listed several objections to this position, and Lowe (1995) responds to them. (Lewis and Lowe discuss a version of the problem using cats not clouds, and we will sometimes follow them in this.)
Lewis’s first objection is that positing clouds as well as cloud-constituting fusions of atoms is metaphysically extravagant. As Lowe (and, for separate reasons, Johnston) point out, these extra
objects are arguably needed to solve puzzles to do with persistence. Hence it is no objection to a solution to the Problem of the Many that it posits such objects. Resolving these debates would take
us too far afield, so let us assume (as Lewis does) that we have reason to believe that these objects exist.
Secondly, Lewis says that even with this move, we still have a Problem of the Many applied to cloud-constituters, rather than to clouds. Lowe responds that since ‘cloud-constituter’ is not a folk
concept, we don’t really have any philosophically salient intuitions here, so this cannot be a way in which the position is unintuitive.
Finally, Lewis says that each of the constituters is so like the object it is meant to merely constitute (be it a cloud, or a cat, or whatever) that it satisfies the same sortals as that object. So if we
were originally worried that there were 1001 cats (or clouds) where we thought there was one, now we should be worried that there are 1002. But as Lowe points out, this argument seems to assume that
being a cat, or being a cloud, is an intrinsic property. If we assume that it is extrinsic, if it turns on the history of the object, perhaps its future or its possible future, and on which object it
is embedded in, then the fact that a cloud-constituter looks, when considered in isolation, to be a cloud is little reason to think it actually is a cloud.
Johnston provides an argument that the distinction between clouds and cloud-constituting fusions of water droplets is crucial to solving the Problem. He thinks that the following principle is sound,
and not threatened by examples like our cloud.
(9′) If y is a paradigm F, and x is an entity that differs from y in any respect relevant to being an F only very minutely, and x is of the right category, i.e. is not a mere quantity or piece of
matter, then x is an F. (Johnston 1992: 100)
The theorist who thinks that clouds are just fusions of water droplets cannot accept this principle, or they will conclude that every o[i] is a cloud, since for them each o[i] is of the right
category. On the other hand, Johnston himself cannot accept it either, unless he denies there can be another object c′ which is in a similar position to c, and is of the same category as c, but
differs with respect to which water droplets constitute it. It seems that what is doing the work in Johnston’s solution is not just the distinction between constitution and identity, but a tacit
restriction on when there is a ‘higher-level’ object constituted by certain ‘lower-level’ objects. To that extent, his theory also resembles Markosian’s brutal composition theory, though since
Johnston can accept that every set of atoms has a fusion his theory has different costs and benefits to Markosian’s theory.
A recent version of this kind of view comes from Nicholas K. Jones (2015), though he focusses on constitution, not composition. (Indeed, a distinctive aspect of his view is that he takes constitution
to be metaphysically prior to composition.) Jones rejects the following principle, which is similar to 4 in the original inconsistent set.
• If the water droplets in s[i] constitute o[i], and the water droplets in s[k] constitute o[k], and the sets s[i] and s[k] are not identical, then the objects o[i] and o[k] are not identical.
He argues that some water droplets can constitute a cloud, and some other water droplets can constitute the very same cloud. On this view, the predicate constitute x behaves a
bit like the predicate surround the building. It can be true that the Fs surround the building, and the Gs surround the building, without the Fs being the Gs. And on Jones’s view, it can be true that
the Fs constitute x, and the Gs constitute x, without the Fs being the Gs. This resembles Lewis’s solution in terms of almost-identity, since both Jones and Lewis say that there is one cloud, yet
both s[i] and s[k] can be said to compose it. But for Lewis, this is possible because he rejects the inference from There is one cloud, to If a and b are clouds, they are identical. Jones accepts
this inference, and rejects the inference from the premise that s[i] and s[k] are distinct, and each compose a cloud, to the conclusion that they compose non-identical clouds.
9. Rethinking Location
After concluding that all of these kinds of solutions face serious difficulties, Hudson (2001: Chapter 2) outlines a new solution, one which rejects so many of the presuppositions of the puzzle that
it is best to count him as rejecting the reasoning, rather than rejecting any particular premise. (Hudson is somewhat tentative about endorsing this view, as opposed to merely endorsing the claim
that it looks better than its many rivals, but for expository purposes let us refer to it here as his view.) To see the motivation behind Hudson’s approach, consider a slightly different case, a
variant of one discussed in Wiggins 1968. Tibbles is born at midnight Sunday, replete with a splendid tail, called Tail. An unfortunate accident involving a guillotine sees Tibbles lose his tail at
midday Monday, though the tail is preserved for posterity. Then midnight Monday, Tibbles dies. Now consider the timeless question, “Is Tail part of Tibbles?” Intuitively, we want to say the question
is underspecified. Outside of Monday, the question does not arise, for Tibbles does not exist. Before midday Monday, the answer is “Yes”, and after midday the answer is “No”. This suggests that there
is really no proposition that Tail is part of Tibbles. There is a proposition that Tail is part of Tibbles on Monday morning (that’s true) and that Tail is part of Tibbles on Monday afternoon (that’s
false), but no proposition involving just the parthood relation and two objects. Parthood is a three-place relation between two objects and a time, not a two-place relation between two objects.
Hudson suggests that this line of reasoning is potentially on the right track, but that the conclusion is not quite right. Parthood is a three-place relation, but the third place is not filled by a
time, but by a region of space-time. To a crude approximation, x is part of y at s is true if (as we’d normally say) x is a part of y and s is a region of space-time containing no region not occupied
by y and all regions occupied by x. But this should be taken as a heuristic guide only, not as a reductive definition, since parthood is really a three-place relation, so the crude approximation does
not even express a proposition according to Hudson.
To see how this applies to the Problem of the Many, let’s simplify the case a little bit so there are only two water droplets, w[1] and w[2], that are neither determinately part of the cloud nor
determinately not a part of it. As well there is the core of the cloud, call it a. On an orthodox theory, there are four proto-clouds here, a, a + w[1], a + w[2] and a + w[1] + w[2]. On Hudson’s
theory the largest and the smallest proto-clouds still exist, but in the middle there is a quite different kind of object, which we’ll call c. Let r[1] be the region occupied by a and w[1], and r[2]
the region occupied by a and w[2]. Then the following claims are all true according to Hudson:
• c exactly occupies r[1];
• c exactly occupies r[2];
• c does not occupy the region consisting of the union of r[1] and r[2];
• c has w[1] as a part at r[1], but not at r[2];
• c has w[2] as a part at r[2], but not at r[1];
• c has no parts at the region consisting of the union of r[1] and r[2].
Hudson defines “x exactly occupies s” as follows:
• x has a part at s,
• there is no region of space-time, s*, such that s* has s as a subregion, while x has a part at s*, and
• for every subregion of s, s′, x has a part at s′. (Hudson 2001: 63)
At first, it might look like not much has been accomplished here. All that we did was turn a Problem of 4 clouds into a Problem of 3 clouds, replacing the fusions a + w[1] and a + w[2] with the new,
and oddly behaved, c. But that is to overlook a rather important feature of the remaining proto-clouds. The three remaining proto-clouds can be strictly ordered by the ‘part of’ relation. This was
not previously possible, since neither a + w[1] nor a + w[2] were part of the other. If we adopt the principle that ‘cloud’ is a maximal predicate, so no cloud can be a proper part of another cloud,
we now get the conclusion that exactly one of the proto-clouds is a cloud, as desired.
This is a quite ingenious approach, and it deserves some attention in the future literature. It is hard to say what will emerge as the main costs and benefits of the view in advance of that
literature, but the following two points seem worthy of attention. First, if we are allowed to appeal to the principle that no cloud is a proper part of another, why not appeal to the principle that
no two clouds massively overlap, and get from 4 proto-clouds to one actual cloud that way? Secondly, why don’t we have an object that is just like the old a + w[1], that is, an object that has w[1]
as a part at r[1], and does not have w[2] (or anything else) as a part at r[2]? If we get it back, as well as a + w[2], then all of Hudson’s tinkering with mereology will just have converted a
problem of 4 clouds into a problem of 5 clouds.
Neither of these points should be taken to be conclusive refutations. As things stand now, Hudson’s solution joins the ranks of the many and varied proposed solutions to the Problem of the Many. For
such a young problem, the variety of these solutions is rather impressive. Whether the next few years will see these ranks whittled down by refutation, or swelled by imaginative theorising, remains
to be seen.
The numbers after each entry refer to the sections to which that book or article is relevant.
• Armstrong, D. M., 1978, Universals and Scientific Realism. 2 vols. Cambridge: Cambridge University Press. [6]
• Barnes, Elizabeth, 2010, “Ontic Vagueness: A Guide for the Perplexed,” Noûs, 44: 601–627. [8]
• Dummett, Michael, 1975, “Wang’s Paradox,” Synthese, 30: 301–324. [8]
• Eklund, Matti, 2002, “Inconsistent Languages,” Philosophy and Phenomenological Research, 64: 251–275. [2, 7]
• Fine, Kit, 1975, “Vagueness, Truth and Logic,” Synthese, 30: 265–300. [7]
• Fodor, Jerry, 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: MIT Press. [7]
• Geach, P. T., 1980, Reference and Generality, 3rd edn., Ithaca: Cornell University Press. [1, 5]
• Horgan, Terrence, 1993, “On What There Isn’t,” Philosophy and Phenomenological Research, 53: 693–700. [4]
• Hudson, Hud, 2001, A Materialist Metaphysics of the Human Person, Ithaca: Cornell University Press [1, 2, 3, 4, 5, 6, 7, 8, 9]
• Johnston, Mark, 1992, “Constitution is Not Identity,” Mind, 101: 89–105. [6, 8]
• Jones, Nicholas J., 2010, Too Many Cats: The Problem of the Many and the Metaphysics of Vagueness, Ph.D. Dissertation, Birkbeck College, University of London.[7]
• Jones, Nicholas J., 2015, “Multiple Constitution“, in Oxford Studies in Metaphysics 9, Karen Bennett and Dean W. Zimmerman (eds.), Oxford: Oxford University Press, 217–261. [8]
• Keefe, Rosanna, 2000, Theories of Vagueness. Cambridge: Cambridge University Press. [7]
• Kripke, Saul, 1982, Wittgenstein on Rules and Private Language. Cambridge, MA: Harvard University Press. [7]
• Lewis, David, 1983, “New Work for a Theory of Universals,” Australasian Journal of Philosophy, 61: 343–77. [7]
• Lewis, David, 1984, “Putnam’s Paradox,” Australasian Journal of Philosophy, 62: 221–36. [7]
• Lewis, David, 1993, “Many, but Almost One,” in Ontology, Causality and Mind: Essays in Honour of D M Armstrong, John Bacon (ed.), New York: Cambridge University Press. [3, 6, 7, 8]
• Liebesman, David, 2020, “Double-counting and the problem of the many,” Philosophical Studies, first online 13 February 2020; doi:10.1007/s11098-020-01428-9 [6]
• López de Sa, Dan, 2014, “Lewis vs Lewis on the Problem of the Many,” Synthese, 191: 1105–1117. [6]
• Lowe, E. J., 1982, “The Paradox of the 1,001 Cats,” Analysis, 42: 27–30. [8]
• Lowe, E. J., 1995, “The Problem of the Many and the Vagueness of Constitution,” Analysis, 55: 179–82. [8]
• Markosian, Ned, 1998, “Brutal Composition,” Philosophical Studies, 92: 211–49. [4]
• McGee, Vann and Brian McLaughlin, 2000, “The Lessons of the Many,” Philosophical Topics, 28: 129–51. [7]
• McKinnon, Neil, 2002, “Supervaluations and the Problem of the Many,” Philosophical Quarterly, 52: 320–39. [7]
• Openshaw, James, 2021, “Thinking about many,” Synthese, 199: 2863–2882. [3]
• Putnam, Hilary, 1981, Reason, Truth and History, Cambridge: Cambridge University Press. [7]
• Quine, W. V. O., 1960, Word and Object, Cambridge, MA: Harvard University Press. [6, 7]
• Rettler, Bradley, 2018, “Mereological Nihilism and Puzzles About Material Objects,” Pacific Philosophical Quarterly, 99: 842–868. [2]
• Russell, Bertrand, 1923, “Vagueness,” Australasian Journal of Philosophy and Psychology, 1: 84–92. [8]
• Sandgren, Alexander, forthcoming, “Thought and Talk in a Generous World,” Ergo. [3]
• Sattig, Thomas, 2013, “Vague Objects and the Problem of the Many,” Metaphysica, 14: 211–223. [7]
• Schiffer, Stephen, 1998, “Two Issues of Vagueness,” The Monist, 81: 193–214. [7]
• Sorensen, Roy, 2001, Vagueness and Contradiction, Oxford: Oxford University Press. [2, 7]
• Sorensen, Roy, 2013, “Vagueness,” The Stanford Encyclopedia of Philosophy (Winter 2013 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2013/entries/vagueness/>.
• Sutton, C.S., 2015, “Almost One, Overlap and Function,” Analysis, 75: 45–52. [6]
• Unger, Peter, 1980, “The Problem of the Many,” Midwest Studies in Philosophy, 5: 411–67. [1, 2, 7]
• van Inwagen, Peter, 1990, Material Beings, Ithaca: Cornell University Press. [4, 8]
• Williams, J. Robert G., 2006, “An Argument for the Many,” Proceedings of the Aristotelian Society, 106: 411–419. [7]
• Weatherson, Brian, 2003a, “Epistemicism, Parasites and Vague Names” Australasian Journal of Philosophy, 81(2): 276–279. [7]
• Weatherson, Brian, 2003b, “Many Many Problems,” Philosophical Quarterly, 53(213): 481–501. [7]
Here is an introduction to various types of inductors used at microwave frequencies. This is a companion page to our pages on microwave capacitors and microwave resistors.
Inductor background and definitions
Inductor mathematics (separate page)
Spiral inductors (wire)
Wirebond inductance rule of thumb
Coming: how to make your own
Inductor modeling software (if someone steps up to sponsor this topic!)
Inductor background and definitions
What is inductance? Inductance is the opposite of capacitance: it is a property that opposes an instantaneous shift in current. Inductance has no effect at DC (an inductor passes direct current), but as frequency increases, an ideal inductor starts to look like an open circuit.
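That frequency dependence can be illustrated with a short Python sketch (the function name and component values below are our own illustration, not from this page), computing the reactance magnitude of an ideal inductor, |Z| = 2πfL:

```python
import math

def inductor_reactance(l_henries, freq_hz):
    """Reactance magnitude |Z| = 2*pi*f*L of an ideal (lossless) inductor."""
    return 2 * math.pi * freq_hz * l_henries

# A 1 nH inductor is nearly a short at low frequency and looks
# increasingly like an open circuit as frequency rises:
x_1ghz = inductor_reactance(1e-9, 1e9)    # ~6.28 ohms at 1 GHz
x_10ghz = inductor_reactance(1e-9, 10e9)  # ~62.8 ohms at 10 GHz
```

Note this ignores the parasitic capacitance and resistance that make real microwave inductors misbehave.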
The units of inductance are Henries, named after Microwave Hall of Famer Joseph Henry, who was the first curator of the Smithsonian among other achievements. At microwave frequencies, inductors are
usually specified in nano-Henries (10^-9 Henries).
Inductors are the problem step-child of microwave circuits. They are harder to model than capacitors, and cut off earlier in frequency. They also have limited current carrying capability, low quality
factor (and are lossy), and can radiate. But you'll need them anyway, so learn more about them here.
Wirewound inductors
two types: air core, and other core
Ferrite beads
Spiral inductors
See a formula for wire spiral inductors here:
Microstrip spiral inductors are commonplace on MMICs, and are offered as discrete components as well. Some day they will get their own Microwaves101 page (as soon as a sponsor steps up!)
Most often spiral inductors are rectangular because this is easier to generate and to analyze with CAD software. True circular spiral inductors have better performance at higher frequencies.
Spiral inductors are notoriously lossy, especially for large values. This is because the inductor is basically a very skinny line made up of many squares, all of which add resistance. Q-factors for spiral inductors can be quite low.
Computing the DC resistance of a spiral inductor is simple, and is often overlooked by designers until they build an amplifier circuit and the part doesn't bias up correctly on the first iteration.
First you need to know the sheet resistance of your metalization, in ohms per square; then it is easy to approximate the number of squares to get the resistance. When computing the RF resistance, you may have to consider the skin depth effect.
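The squares-counting estimate can be sketched in a few lines of Python (the numbers below are hypothetical illustrative values, not a real process):

```python
def spiral_dc_resistance(sheet_res_ohms_per_sq, length_um, width_um):
    """Approximate DC resistance of a spiral inductor trace.

    A 'square' is a length of trace equal to its width; the total
    resistance is (number of squares) x (sheet resistance).
    """
    squares = length_um / width_um
    return sheet_res_ohms_per_sq * squares

# Hypothetical example: 0.02 ohm/sq metalization, 5000 um of
# 10-um-wide line -> 500 squares -> 10 ohms of DC resistance.
r_dc = spiral_dc_resistance(0.02, 5000, 10)
```

Ten ohms in a bias line is exactly the kind of surprise that keeps an amplifier from biasing up on the first iteration.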
One word of caution, spiral inductors can radiate. The telltale sign is when you measure them in both directions using a network analyzer, and S11 and S22 magnitude differ greatly.
Distributed inductance (T-line)
More to come!
Vendors for inductors
Silly Rabbit, recommendations are reserved for paying sponsors... you'll have to go find your own inductor vendors for now!
Attention inductor vendors... consider sponsoring this page, it will soon get more hits per month than your company web site does!
The stability of an ellipsoidal vortex in a background shear flow
We consider the motion of a single quasi-geostrophic ellipsoid of uniform potential vorticity in equilibrium with a linear background shear flow. This motion depends on four parameters: the
height-to-width aspect ratio of the vortex, h/r, and three parameters characterizing the background shear flow, namely the strain rate, gamma, the ratio of the background rotation rate to the strain, beta, and the angle from which the shear is applied, theta. We generate the equilibria over a large range of these parameters and analyse their linear stability. For the second-order (m = 2) modes
which preserve the ellipsoidal form, we are able to derive equations for the eigenmodes and growth rates. For the higher-order modes we use a numerical method to determine the full linear stability
to general disturbances (m > 2).
Overall we find that the equilibria are stable over most of the parameter space considered, and where instability does occur the marginal instability is usually ellipsoidal. From these results, we
determine the parameter values for which the vortex is most stable, and conjecture that these are the vortex characteristics which would be the most commonly observed in turbulent flows.
• STRATIFIED FLUID
• VORTICES
• MOTION
Leetcode 1673. Find the Most Competitive Subsequence | Video Summary and Q&A | Glasp
6.2K views • December 2, 2020
This video explains how to find the most competitive subsequence using a monotonic stack approach.
Key Insights
• The problem revolves around selecting a subsequence based on competitiveness defined by lexicographical order, which is essential in algorithmic challenges.
• Utilizing a monotonic stack simplifies the complexity of maintaining order and selection criteria by providing structural integrity as elements are added or removed.
• The process emphasizes the balance between greedy selections and future planning to ensure enough elements remain for subsequent choices.
• It is important to tailor solutions to meet computational constraints, as brute-force methods often yield inefficiencies in coding challenges.
• Understanding the fundamentals of stacks can enhance problem-solving techniques in competitive programming.
• The video includes detailed explanations of both the problem and its solution strategy, making it accessible for learners at different levels.
• Future tutorials are promised, indicating ongoing engagement and educational support for the audience seeking to enhance their coding skills.
hey there everyone welcome back to lead coding in this video we will be looking at the solution to the problem number two of lead code weekly contest 217 name of the problem is find the most comparative subsequence given an integer array nums and a positive integer k return the most comparative subsequence of nums of size k and arrays subsequence i...
Questions & Answers
Q: What is the definition of a "most competitive subsequence" in this context?
A most competitive subsequence is defined as a subsequence of a given array that is lexicographically smallest among all subsequences of a specific length. It prioritizes the smallest numbers at
their respective indices while ensuring that the length condition is satisfied.
Q: Can you explain the naive O(n^2) approach for this problem?
The naive O(n^2) approach involves iterating through the array and at each position selecting the smallest possible element while ensuring that enough remaining elements are left for future
selections. This method can lead to excessive complexity and is inefficient for larger datasets.
Q: What is a monotonic stack, and how is it used in this problem?
A monotonic stack is a data structure that maintains its elements in a sorted order (either increasing or decreasing) as new elements are added. In this problem, it is used to efficiently manage
element selections while maintaining the order needed for the competitive subsequence, allowing for an optimal O(n) solution.
Q: How does the selection process in the monotonic stack work?
The selection process involves iterating through the array and using the stack to hold the current subsequence. If the current element is smaller than the top of the stack and if it's possible to remove elements without violating the subsequence conditions, elements are popped from the stack, ensuring that the final subsequence remains lexicographically smallest.
Q: What is the significance of leaving elements for future selection when building the subsequence?
Leaving elements for future selection ensures that there are enough remaining elements to reach the required length of the subsequence. This consideration is critical for maintaining the
lexicographical order and staying within the constraints of the problem, which requires a fixed number of total elements in the subsequence.
Q: What are the time and space complexities of the discussed solution?
The O(n) solution using the monotonic stack has a time complexity of O(n) and a space complexity of O(n), as each element can only be added and removed from the stack once. In contrast, the naive O(n^2) method has a time complexity of O(n^2) and O(1) space complexity.
Summary & Key Takeaways
• The content focuses on a coding problem from LeetCode involving the selection of the most competitive subsequence from an integer array based on lexicographical order.
• It explains the naive O(n^2) solution and introduces a more efficient O(n) solution using a monotonic stack to manage elements while maintaining order.
• The presenter walks through step-by-step examples and showcases coding strategies, emphasizing the importance of leaving elements for future selection to maintain competitiveness.
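The monotonic-stack approach summarized above can be sketched in Python (this is our reconstruction of the idea, not the code shown in the video):

```python
def most_competitive(nums, k):
    """LeetCode 1673: lexicographically smallest subsequence of length k."""
    stack = []
    n = len(nums)
    for i, x in enumerate(nums):
        # Pop a larger top only if the remaining suffix (n - i elements)
        # can still refill the stack up to k elements.
        while stack and stack[-1] > x and len(stack) - 1 + (n - i) >= k:
            stack.pop()
        if len(stack) < k:
            stack.append(x)
    return stack

# most_competitive([3, 5, 2, 6], 2) -> [2, 6]
```

Each element is pushed and popped at most once, which is where the O(n) bound comes from.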
Interactive Bingo Slides for Algebra 1
Slides shown: opening screen, Bingo board, sample question slides, and sample answer slides.
Possible Ways to Use This Activity in the Classroom:
To Play traditional Bingo Game as a Class:
1. Hand out the matching BINGO card -- Algebot Bingo Card
2. Ask students to number the card in any random fashion (of their choosing) from 1 to 24 (assuming a "Free Space") or 1 to 25. (This game has 25 slides if you want to use ALL squares.) The students
are making their own cards.
3. Project the Bingo Game slides and stop on the Bingo Board.
4. As the "BINGO caller", you need to pick a number from 1 to 24. You can do this by rolling a die (game stores have multi-sided die), using a spinner, picking numbers from a bag, or generating the
random number on your graphing calculator.
TI-84+:
1. From the HOME screen, MATH -- PRB -- #5 randInt(1,24). This function will generate random numbers from 1 to 24, but the numbers may repeat.
2. (OS 2.55MP) From the HOME screen, MATH -- PRB -- #8 randIntNoRep(1,24). This function will generate random numbers from 1 to 24 with NO repetition.
TI-Nspire: From a Calculator Page -- Menu -- #5 Probability -- #4 Random -- #2 Integer -- randInt(1,24).
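If you would rather draw numbers on a computer than on a calculator, the same no-repeat draw takes only a couple of lines of Python (a stand-in for randIntNoRep, not part of the original activity):

```python
import random

# Shuffle the call numbers 1..24 once; popping from the list gives
# random numbers with no repetition, like randIntNoRep(1,24).
call_order = random.sample(range(1, 25), 24)

def next_call():
    """Return the next BINGO number, or None when all 24 are used."""
    return call_order.pop() if call_order else None
```

Call next_call() once per round as the "BINGO caller."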
5. When you announce the number, click on that number from the Bingo Board to show the question. Students solve the problem and place their answer on their BINGO card in the box containing the number
of the question.
6. BINGO is achieved in the normal manner – horizontally, diagonally, and vertically (or 4 corners if you wish).
7. Students raise their hands if they have BINGO. You check the answers to see if the student has won.
8. When a student wins, EVERYONE continues playing on the SAME card, including the winning student (he/she could win again). Place a red line through the win line so the student can continue playing.
9. You can play until all the problems have been solved, or you can stop after 5 or 6 students win. Stopping after 5 or 6 students usually covers approximately 70-85% of the problems. Tell the
students in advance that you may be ending early, so they do not BEG you to continue saying "I only need one more to win!"
10. Collecting student work at the end of the game will ensure everyone's participation.
1. If there is a particular problem among the slides that you want the students to solve, you may "throw" the die in the direction of that number. Smiles! Do not, however, do this too often.
2. Offering "prizes" is a wonderful motivator, but is not a mandatory aspect of this activity. Prizes can be candy, "free" homework cards, exemption from an assignment or short quiz, holiday
pencils, trinkets (whatever is easily available) or a simple posting of the BINGO winners on the classroom wall. Candy seems to be the most popular prize. : )
3. If you think the students will have trouble solving the problems, you can have students work in pairs (but each student should maintain his/her own card), offer hints while the students are
working, ask for class discussions of the solutions, and/or even show the answer screens to be sure everyone is understanding the concepts.
4. This is a "fun" activity that is a wonderful motivator. We have used it with classes ranging from "general" math to AP Calculus and it is always successful.
Other Options:
1. Short Class Activity: Set an activity "number" ("This activity will require that we answer five questions from the Bingo Board.") Let volunteers choose questions. The task is for each student
(working alone or in groups) to show HOW to solve each question. None of the work needed to arrive at the final answer is shown on the slides, for this purpose.
2. Quiz: Pick the questions that you wish to show for a quiz. Do not click on the "See answer" button. Show only your questions. Answers can be shown after the quiz is completed.
3. Assign as Independent Review: Students can work independently using the On-Line activity for additional practice. Students must supply "all the work" needed to arrive at the answer, or be ready to
explain the answer if called on in class.
Used in a Computer Lab or Laptop Setting:
1. Lab Activity: Set an activity "number" ("This activity will require that you answer 15 of the 25 questions.") Let students choose their own questions. The task is for each student (working alone
or in a group) to show HOW to solve each question. None of the work needed to arrive at the final answer is shown on the slides, for this purpose.
2. Lab Quiz: Pick the questions that you wish to use for the quiz. Put the question numbers on a worksheet with sufficient work space next to each question for the work to be shown. The task is for
each student, working alone, to show HOW to solve each question. Yes, they can look at the answer -- but as we all know, the "good stuff" is in the trip TO the answer.
3. Assign as Independent Review: Students can work independently using the On-Line activity for additional practice.
Reverse Mortgage Rate Calculator - Certified Calculator
Reverse Mortgage Rate Calculator
Are you considering a reverse mortgage and wondering about the associated rates? Our Reverse Mortgage Rate Calculator is here to simplify the process for you. This user-friendly tool allows you to
quickly estimate your monthly payments, providing valuable insights into your financial planning.
Formula: The reverse mortgage calculation is based on the formula for calculating a fixed-rate mortgage payment. The monthly payment is determined using the formula:
M = PV \cdot \frac{r(1+r)^{n}}{(1+r)^{n}-1}

where:
• M is the monthly payment,
• PV is the present value or home value,
• r is the monthly interest rate (annual rate divided by 12),
• n is the total number of payments (loan term multiplied by 12).
How to Use:
1. Enter your home value.
2. Input the loan term in years.
3. Provide the annual interest rate.
4. Click the “Calculate” button to obtain your estimated monthly payment.
Example: Suppose your home value is $300,000, the loan term is 15 years, and the interest rate is 5%. The calculator would display an estimated monthly payment of about $2,372.38.
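That example can be checked with a few lines of Python implementing the payment formula above (the function name is ours, and real quotes will differ once fees, taxes, and insurance are included):

```python
def monthly_payment(pv, annual_rate, years):
    """Fixed-rate payment: M = PV * r(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate / 12      # monthly interest rate
    n = years * 12            # total number of payments
    return pv * r * (1 + r) ** n / ((1 + r) ** n - 1)

# $300,000 home value, 15-year term, 5% annual rate
m = monthly_payment(300_000, 0.05, 15)  # about 2372.38 per month
```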
1. Q: How accurate is the calculator’s result? A: The calculator provides a close estimate; however, actual rates may vary based on additional factors.
2. Q: Can I use the calculator for adjustable-rate mortgages? A: No, this calculator is designed for fixed-rate reverse mortgages.
3. Q: Is the result inclusive of insurance and property taxes? A: No, the result reflects the principal and interest only.
4. Q: Can I input my own formula for calculation? A: The calculator uses a predefined formula for accuracy and consistency.
5. Q: Are there any fees associated with using the calculator? A: No, the calculator is free to use.
Conclusion: Empower yourself with the knowledge of reverse mortgage rates using our Reverse Mortgage Rate Calculator. Make informed decisions about your financial future by understanding your
potential monthly payments. Use this tool to plan effectively and confidently for a secure retirement.
How To Calculate Planetary Gear Ratio
Planetary gear systems, also known as epicyclic gear systems, are important components in modern engineering. They are useful for speed variation and can be found in everything from automatic car
transmissions and industrial food mixers to operating tables and solar arrays. With four core components – the ring gear, the sun gear and the planetary gears connected to the carrier – the idea of
calculating the gear ratio of a planetary system may sound daunting. However, the single-axis nature of the system makes it easy. Just be sure to note the state of the carrier in the gear system.
TL;DR (Too Long; Didn't Read)
When calculating planetary or epicyclic gear ratios, first note the number of teeth on the sun and ring gears. Add them together to calculate the number of planetary gear teeth. Following this step,
the gear ratio is calculated by dividing the number of driven teeth by the number of driving teeth – there are three combinations possible, depending on whether the carrier is moving, being moved or
standing still. You may require a calculator to determine the final ratio.
First Steps
To make calculating planetary gear ratios as simple as possible, note the number of teeth on the sun and ring gears. Next, add the two numbers together: The sum of the two gears' teeth equals the
number of teeth on the planetary gears connected to the carrier. For example, if the sun gear has 20 teeth and the ring gear has 60, the planetary gear has 80 teeth. The next steps depend on the
state of the planetary gears connected to the carrier, although all use the same formula. Calculate gear ratio by dividing the number of teeth on the driven gear by the number of teeth on the driving
Carrier as Input
If the carrier is acting as the input in the planetary gear system, rotating the ring gear while the sun gear is still, divide the number of teeth on the ring gear (the driven gear) by the number of
teeth on the planetary gears (the driving gears). According to the first example:
for a ratio of 3:4.
Carrier as Output
If the carrier is acting as the output in the planetary gear system, being rotated by the sun gear while the ring gear stays still, divide the number of teeth on the planetary gears (the driven gear)
by the number of teeth on the sun gear (the driving gear). According to the first example, 80 ÷ 20 = 4, for a ratio of 4:1.
Carrier Standing Still
If the carrier is standing still in the planetary gear system while the ring gear rotates the sun gear, divide the number of teeth on the sun gear (the driven gear) by the number of teeth on the ring
gear (the driving gear). According to the first example, 20 ÷ 60 = 1/3, for a ratio of 1:3.
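The three cases above can be collected into one small Python helper (our own sketch, following the article's tooth-count convention that planet teeth = sun teeth + ring teeth):

```python
def planetary_ratios(sun_teeth, ring_teeth):
    """Driven-teeth / driving-teeth ratios for the three configurations."""
    planet_teeth = sun_teeth + ring_teeth
    return {
        "carrier_as_input": ring_teeth / planet_teeth,   # ring driven by planets
        "carrier_as_output": planet_teeth / sun_teeth,   # planets driven by sun
        "carrier_still": sun_teeth / ring_teeth,         # sun driven by ring
    }

# Sun = 20 teeth, ring = 60 teeth (so planet = 80, as in the example):
ratios = planetary_ratios(20, 60)  # 0.75 (3:4), 4.0 (4:1), 1/3 (1:3)
```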
Cite This Article
Flournoy, Blake. "How To Calculate Planetary Gear Ratio" sciencing.com, https://www.sciencing.com/calculate-planetary-gear-ratio-6002241/. 8 November 2020.
How to Perform Multiple Linear Regression Analysis on Time Series Data Using R Studio - KANDA DATA
How to Perform Multiple Linear Regression Analysis on Time Series Data Using R Studio
Multiple linear regression analysis on time series data, along with its assumption tests, can be performed using R Studio. In a previous article, I explained how to conduct multiple linear regression
analysis and assumption tests for cross-sectional data.
To read the complete article, please refer to my earlier post titled: “How to Perform Multiple Linear Regression Analysis Using R Studio: A Complete Guide.” In this tutorial, Kanda Data will
specifically outline how to conduct analysis and interpretation using time series data.
The fundamental difference lies in the assumption tests, which slightly differ between cross-sectional data and time series data. For time series data, multiple linear regression requires an
autocorrelation test, whereas for cross-sectional data, the autocorrelation test is not needed.
I would like to emphasize that multiple linear regression analysis typically employs the Ordinary Least Squares (OLS) method. Therefore, we need to perform several assumption tests required for OLS
linear regression. In this article, we will practice using a sample case study involving time series data.
In this guide, Kanda Data will thoroughly explain how to conduct multiple linear regression analysis on time series data using R Studio, including various diagnostic tests or OLS assumption tests
such as normality test, homoscedasticity test, multicollinearity test, linearity test, and autocorrelation test, to ensure that the regression model is valid and scientifically justifiable.
Case Study: Multiple Linear Regression Analysis on Time Series Data
Before diving into the technical details of multiple linear regression analysis and its assumption tests, it is important to understand that multiple linear regression is used when there is more than
one independent variable affecting a dependent variable.
Establishing the regression equation specification is the initial step we need to take when conducting multiple linear regression analysis on time series data. To facilitate understanding of this
fundamental theory, let's create a sample research case.
A researcher observed that, according to theory and previous research, inflation and unemployment rates are determinants of economic growth. The researcher wants to verify whether inflation and
unemployment rates negatively affect economic growth.
Therefore, the researcher conducted observations over 30 quarterly periods for inflation, unemployment rates, and economic growth in country ABC. Based on the collected data, the researcher
successfully gathered 30 observations for each of the variables studied.
Based on this case example, the first step we need to take is to construct a multiple linear regression equation. The multiple linear regression equation for the given case study can be formulated as
Y = β0 + β1X1 + β2X2 + … + βnXn + ε

where:
Y is economic growth (%) as the dependent variable,
X1 is the inflation rate (%) as the 1st independent variable,
X2 is the unemployment rate (%) as the 2nd independent variable,
β0 is the intercept (constant),
β1 and β2 are the regression coefficients indicating the change in Y based on changes in X1 and X2,
ε is the error or residual.
After constructing the specification of the multiple linear regression equation, the next step is to tabulate the collected data over the 30 observed time periods. The input data results from the
study, in accordance with the specified multiple linear regression equation, can be seen in the table below:
Period Inflation_Rate (X1) Unemployment_Rate (X2) Economic_Growth (Y)
1 2.5 5.0 3.1
2 2.7 4.8 3.3
3 3.0 4.6 3.0
4 3.1 4.7 2.9
5 3.3 5.1 2.8
6 2.9 5.0 3.1
7 3.4 5.2 2.7
8 3.5 5.1 2.8
9 3.2 4.9 3.2
10 2.8 4.7 3.4
11 2.6 4.6 3.5
12 2.5 4.4 3.7
13 2.9 4.3 3.4
14 3.2 4.5 3.1
15 3.6 4.7 2.8
16 3.8 5.0 2.5
17 3.9 5.2 2.3
18 4.1 5.3 2.1
19 4.0 5.1 2.4
20 3.7 5.0 2.6
21 3.6 4.8 2.9
22 3.3 4.7 3.0
23 3.1 4.6 3.2
24 2.9 4.5 3.3
25 2.7 4.4 3.4
26 2.8 4.3 3.5
27 2.6 4.5 3.7
28 2.5 4.6 3.8
29 2.4 4.7 3.9
30 2.3 4.9 4.0
Multiple Linear Regression Analysis Command in R Studio and Interpretation of the Results
Once we have the data to be used for the analysis in this article, you can download and install R and RStudio on your laptop. If RStudio has been successfully installed, the next step is to
conduct a multiple linear regression analysis.
After opening R Studio, the next step is to input the data into R Studio for analysis. There are two ways to do this: importing the data directly from Excel or typing it directly into the command
line in R Studio. In this article, I will demonstrate the second method.
Please copy all the data for each variable from Excel, then paste it and separate it using a comma (,). Next, enter the following command:
# Inputting data
data <- data.frame(
Inflation_Rate = c(2.5, 2.7, 3.0, 3.1, 3.3, 2.9, 3.4, 3.5, 3.2, 2.8, 2.6, 2.5, 2.9, 3.2, 3.6, 3.8, 3.9, 4.1, 4, 3.7, 3.6, 3.3, 3.1, 2.9, 2.7, 2.8, 2.6, 2.5, 2.4, 2.3),
Unemployment_Rate = c(5.0, 4.8, 4.6, 4.7, 5.1, 5.0, 5.2, 5.1, 4.9, 4.7, 4.6, 4.4, 4.3, 4.5, 4.7, 5, 5.2, 5.3, 5.1, 5, 4.8, 4.7, 4.6, 4.5, 4.4, 4.3, 4.5, 4.6, 4.7, 4.9),
Economic_Growth = c(3.1, 3.3, 3.0, 2.9, 2.8, 3.1, 2.7, 2.8, 3.2, 3.4, 3.5, 3.7, 3.4, 3.1, 2.8, 2.5, 2.3, 2.1, 2.4, 2.6, 2.9, 3.0, 3.2, 3.3, 3.4, 3.5, 3.7, 3.8, 3.9, 4.0))
The next step is to perform multiple linear regression analysis using R Studio. To conduct a multiple linear regression analysis, enter the command below:
# Performing multiple linear regression analysis
model <- lm(Economic_Growth ~ Inflation_Rate + Unemployment_Rate, data = data)

# Viewing the summary of the results
summary(model)
After pressing Enter or clicking "Run," the analysis output will appear as follows:
Call:
lm(formula = Economic_Growth ~ Inflation_Rate + Unemployment_Rate,
    data = data)

Residuals:
     Min       1Q   Median       3Q      Max
-0.39920 -0.05473  0.00262  0.08124  0.31354

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)
(Intercept)        7.07429    0.50282  14.069 6.00e-14 ***
Inflation_Rate    -0.77174    0.06999 -11.026 1.68e-11 ***
Unemployment_Rate -0.32915    0.12634  -2.605   0.0148 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.1523 on 27 degrees of freedom
Multiple R-squared: 0.9054, Adjusted R-squared: 0.8984
F-statistic: 129.1 on 2 and 27 DF, p-value: 1.503e-14
Based on the analysis results, there are at least three key values that need to be interpreted, which include the R-squared value, the F-statistic value, and the T-statistic values. The R-squared
value of 0.9054 can be interpreted as indicating that 90.54% of the variation in the economic growth variable can be explained by variations in the inflation rate and unemployment rate variables,
while the remaining 9.46% is explained by other variables not included in this regression equation.
Next, the F-statistic value of 129.1 with a p-value of 1.503e-14 (p-value < 0.05) indicates that, simultaneously, the inflation rate and unemployment rate significantly affect economic growth.
The regression coefficient for the inflation rate variable is -0.77174, with a t value of -11.026 and a p-value of 1.68e-11 (p-value < 0.05), which can be interpreted as the inflation rate having a significant negative partial effect on economic growth (assuming the unemployment rate variable is constant).
Similarly, the regression coefficient for the unemployment rate variable is -0.32915, with a t value of -2.605 and a p-value of 0.0148 (p-value < 0.05), indicating that the unemployment rate has a significant negative partial effect on economic growth (assuming the inflation rate variable is constant).
To ensure that the analysis results and interpretation yield the best linear unbiased estimator, it is necessary to conduct assumption tests, which will be discussed in the next section.
Command for Residual Normality Test in R Studio
The first assumption that needs to be tested is to ensure that the residuals from the multiple linear regression equation in the above case study follow a normal distribution. The residual normality
test can be performed using the Shapiro-Wilk test or by inspecting the QQ plot.
In this article, we will use both methods. Type the following command to perform the residual normality test:
# Residual normality test using Shapiro-Wilk
shapiro.test(residuals(model))

# QQ plot of the residuals
qqnorm(residuals(model))
qqline(residuals(model), col = "red")
The output from the Shapiro-Wilk test for the analysis we conducted is as follows:
Shapiro-Wilk normality test

data:  residuals(model)
W = 0.96792, p-value = 0.4839
The analysis results show that the W value is 0.96792 with a p-value of 0.4839 (p-value > 0.05), indicating that the residuals follow a normal distribution. Additionally, we can check the graph from
the QQ plot as shown below:
The QQ plot shows that the points follow a straight line, from which it can be concluded that the residuals are normally distributed.
Heteroskedasticity Analysis Command in R Studio
The next assumption test we need to perform is to detect the presence of heteroskedasticity. In multiple linear regression for time series data using the OLS method, it is assumed that the residual
variance is constant (homoscedasticity). Therefore, we need to ensure that there is no heteroskedasticity in our regression equation.
Heteroskedasticity can be detected using the Breusch-Pagan test in R Studio. First, we need to install the "lmtest" package. Use the command below to perform the heteroskedasticity test on the regression equation.
# Using the lmtest package for Breusch-Pagan test
install.packages("lmtest")  # run once if the package is not yet installed
library(lmtest)
bptest(model)
The output of the Breusch-Pagan test analysis can be seen as follows:
studentized Breusch-Pagan test

data:  model
BP = 11.09, df = 2, p-value = 0.003906
The analysis results show that the Breusch-Pagan value is 11.09 with a p-value of 0.003906 (p-value < 0.05), indicating that there is a heteroskedasticity problem, which means that the residual
variance is not constant across the range of independent variable values. We need to take further action if our regression equation experiences heteroskedasticity.
Multicollinearity Analysis Command in R Studio
The next assumption test is to check for multicollinearity in our time series regression equation. In multiple linear regression equations, it is assumed that there is no strong correlation between
independent variables. One way to check for multicollinearity is by using the Variance Inflation Factor (VIF).
To obtain the Variance Inflation Factor (VIF) values, we need to install the "car" package in R Studio. Please enter the command below for the multicollinearity test in the regression analysis.
# Using the car package to calculate VIF
install.packages("car")  # run once if the package is not yet installed
library(car)
vif(model)
The output VIF values based on the analysis results are as follows:
Inflation_Rate Unemployment_Rate
       1.58248           1.58248
The analysis results show that the VIF values for the inflation rate and unemployment rate are both 1.58248, which is less than 10. Thus, it can be concluded that there is no multicollinearity between the independent variables in the multiple linear regression equation in the above case study using time series data.
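As background on where this number comes from: VIF_j = 1 / (1 − R_j²), where R_j² is obtained by regressing predictor j on the remaining predictors. With exactly two predictors, R_j² is simply the squared Pearson correlation between them, which is why both VIFs coincide. A sketch in Python using the case-study data (the helper functions are ours, for illustration only, not part of the R workflow):

```python
# Sketch: with two predictors, each VIF equals 1 / (1 - r^2),
# where r is the Pearson correlation between the two predictors.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def vif_two_predictors(x1, x2):
    r = pearson(x1, x2)
    return 1 / (1 - r ** 2)

inflation = [2.5, 2.7, 3.0, 3.1, 3.3, 2.9, 3.4, 3.5, 3.2, 2.8, 2.6, 2.5, 2.9,
             3.2, 3.6, 3.8, 3.9, 4.1, 4.0, 3.7, 3.6, 3.3, 3.1, 2.9, 2.7, 2.8,
             2.6, 2.5, 2.4, 2.3]
unemployment = [5.0, 4.8, 4.6, 4.7, 5.1, 5.0, 5.2, 5.1, 4.9, 4.7, 4.6, 4.4,
                4.3, 4.5, 4.7, 5.0, 5.2, 5.3, 5.1, 5.0, 4.8, 4.7, 4.6, 4.5,
                4.4, 4.3, 4.5, 4.6, 4.7, 4.9]

vif = vif_two_predictors(inflation, unemployment)  # should match car::vif output
```

Since the two predictors are only moderately correlated, the VIF stays well below the common cutoff of 10.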
Linearity Analysis Command in R Studio
The linearity test can be performed by plotting the residuals against the fitted values. Please enter the following command for the linearity test in R Studio.
# Plot residuals vs fitted values
plot(fitted(model), residuals(model))
abline(h = 0, col = "red")
The resulting plot output is as follows:
The analysis results show that the points are randomly scattered around the horizontal line, indicating that the multiple linear regression model meets the linearity assumption.
Autocorrelation Analysis Command in R Studio
The final assumption test that needs to be performed in multiple linear regression analysis for time series data is the autocorrelation test. This test is not necessary for cross-section data. The
purpose of the autocorrelation test is to examine whether there is a correlation between residuals at time period t and residuals at time period t-1. One commonly used test for this is the
Durbin-Watson test.
Since we have already installed the lmtest package during the heteroskedasticity test, we only need to write the following command in R Studio:
# Load package lmtest
library(lmtest)

# Perform the Durbin-Watson autocorrelation test
dwtest(model)
After entering the command and pressing "Enter", the output of the autocorrelation test will appear as follows:
Durbin-Watson test
data: model
DW = 0.58787, p-value = 5.165e-07
alternative hypothesis: true autocorrelation is greater than 0
For accurate interpretation, we need to refer to the DW table to find the values of dL and dU according to the number of observations we used. However, generally, a DW value of 0.58787 is very low,
indicating a potential presence of positive autocorrelation. This means that the residuals of the previous period positively influence the residuals of the following period. The p-value of 5.165e-07
(p-value < 0.05) indicates that significant positive autocorrelation is present.
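For intuition about what dwtest computes: the Durbin-Watson statistic is DW = Σ(e_t − e_{t−1})² / Σ e_t² over the residuals e_t; values near 2 suggest no autocorrelation, values near 0 positive autocorrelation, and values near 4 negative autocorrelation. A minimal language-neutral sketch in Python (the toy residual series below are made up for illustration, not the case-study residuals):

```python
def durbin_watson(residuals):
    # DW = sum of squared successive differences / sum of squared residuals
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e ** 2 for e in residuals)
    return num / den

# A smoothly trending residual series (positive autocorrelation) gives DW near 0;
# an alternating series (negative autocorrelation) gives DW near 4.
trending = [0.1, 0.12, 0.14, 0.16, 0.18, 0.2]
alternating = [1, -1, 1, -1, 1, -1]
```

The case study's DW of 0.58787 sits far below 2, consistent with the trending pattern above.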
Autocorrelation is an undesirable condition in OLS linear regression, so researchers need to consider steps to address it in the multiple linear regression equation.
Conducting multiple linear regression analysis in R Studio involves several steps, starting from model analysis to assumption testing. The validity of the results depends on assumption tests such as
residual normality, heteroskedasticity, multicollinearity, linearity, and autocorrelation tests.
This is an article that Kanda Data can write and share with you. Hopefully, this article is useful and provides solutions for those conducting multiple linear regression analysis on time series data
using R Studio. Happy learning!
2.2: Atomic Orbitals and Quantum Numbers
Skills to Develop
• Understand the general idea of the quantum mechanical description of electrons in an atom, and that it uses the notion of three-dimensional wave functions, or orbitals, that define the
distribution of probability to find an electron in a particular part of space
• List and describe traits of the four quantum numbers that form the basis for completely specifying the state of an electron in an atom
Understanding Quantum Theory of Electrons in Atoms
Video \(\PageIndex{1}\): A preview of electrons in orbitals.
The goal of this section is to understand the electron orbitals (location of electrons in atoms), their different energies, and other properties. The use of quantum theory provides the best
understanding to these topics. This knowledge is a precursor to chemical bonding.
As was described previously, electrons in atoms can exist only on discrete energy levels but not between them. It is said that the energy of an electron in an atom is quantized, that is, it can be
equal only to certain specific values and can jump from one energy level to another but not transition smoothly or stay between these levels.
The energy levels are labeled with an n value, where n = 1, 2, 3, …. Generally speaking, the energy of an electron in an atom is greater for greater values of n. This number, n, is referred to as the
principal quantum number. The principal quantum number defines the location of the energy level. It is essentially the same concept as the n in the Bohr atom description. Another name for the
principal quantum number is the shell number. The shells of an atom can be thought of as concentric circles radiating out from the nucleus. The electrons that belong to a specific shell are most likely
to be found within the corresponding circular area. The further we proceed from the nucleus, the higher the shell number, and so the higher the energy level (Figure \(\PageIndex{1}\)). The positively
charged protons in the nucleus stabilize the electronic orbitals by electrostatic attraction between the positive charges of the protons and the negative charges of the electrons. So the further away
the electron is from the nucleus, the greater the energy it has.
Figure \(\PageIndex{1}\): Different shells are numbered by principal quantum numbers.
This quantum mechanical model for where electrons reside in an atom can be used to look at electronic transitions, the events when an electron moves from one energy level to another. If the
transition is to a higher energy level, energy is absorbed, and the energy change has a positive value. To obtain the amount of energy necessary for the transition to a higher energy level, a photon
is absorbed by the atom. A transition to a lower energy level involves a release of energy, and the energy change is negative. This process is accompanied by emission of a photon by the atom. The
following equation summarizes these relationships and is based on the hydrogen atom:
\[ \begin{align*} ΔE &=E_\ce{final}−E_\ce{initial} \\[5pt] &=−2.18×10^{−18}\left(\dfrac{1}{n^2_\ce f}−\dfrac{1}{n^2_\ce i}\right)\:\ce J \end{align*} \]
The values n[f] and n[i] are the final and initial energy states of the electron.
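As a quick numerical illustration of this expression (a Python sketch; the function name is ours), a drop from n = 2 to n = 1 releases energy, while the reverse jump absorbs the same amount:

```python
def delta_E(n_i, n_f):
    # Energy change (J) for a hydrogen-atom electron moving from level n_i to n_f,
    # using the constant -2.18e-18 J from the expression above
    return -2.18e-18 * (1 / n_f**2 - 1 / n_i**2)

dE_emission = delta_E(2, 1)    # drop to a lower level: negative, photon emitted
dE_absorption = delta_E(1, 2)  # jump to a higher level: positive, photon absorbed
```

Here delta_E(2, 1) = −2.18 × 10⁻¹⁸ × (1 − 1/4) ≈ −1.635 × 10⁻¹⁸ J, and the two transitions are exact opposites.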
The principal quantum number is one of three quantum numbers used to characterize an orbital. An atomic orbital, which is distinct from an orbit, is a general region in an atom within which an
electron is most probable to reside. The quantum mechanical model specifies the probability of finding an electron in the three-dimensional space around the nucleus and is based on solutions of the
Schrödinger equation. In addition, the principal quantum number defines the energy of an electron in a hydrogen or hydrogen-like atom or an ion (an atom or an ion with only one electron) and the
general region in which discrete energy levels of electrons in a multi-electron atoms and ions are located.
Another quantum number is l, the angular momentum quantum number. It is an integer that defines the shape of the orbital, and takes on the values, l = 0, 1, 2, …, n – 1. This means that an orbital
with n = 1 can have only one value of l, l = 0, whereas n = 2 permits l = 0 and l = 1, and so on. The principal quantum number defines the general size and energy of the orbital. The l value
specifies the shape of the orbital. Orbitals with the same value of l form a subshell. In addition, the greater the angular momentum quantum number, the greater is the angular momentum of an electron
at this orbital.
Orbitals with l = 0 are called s orbitals (or the s subshells). The value l = 1 corresponds to the p orbitals. For a given n, p orbitals constitute a p subshell (e.g., 3p if n = 3). The orbitals with
l = 2 are called the d orbitals, followed by the f-, g-, and h-orbitals for l = 3, 4, 5, and there are higher values we will not consider.
There are certain distances from the nucleus at which the probability density of finding an electron located at a particular orbital is zero. In other words, the value of the wavefunction ψ is zero
at this distance for this orbital. Such a value of radius r is called a radial node. The number of radial nodes in an orbital is n – l – 1.
Figure \(\PageIndex{2}\): The graphs show the probability (y axis) of finding an electron for the 1s, 2s, 3s orbitals as a function of distance from the nucleus.
Video \(\PageIndex{2}\): Looking into the probability of finding electrons.
Consider the examples in Figure \(\PageIndex{3}\). The orbitals depicted are of the s type, thus l = 0 for all of them. It can be seen from the graphs of the probability densities that there are 1 –
0 – 1 = 0 places where the density is zero (nodes) for 1s (n = 1), 2 – 0 – 1 = 1 node for 2s, and 3 – 0 – 1 = 2 nodes for the 3s orbitals.
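The node-counting rule used in these examples can be stated as a one-liner (a sketch; the function name is ours):

```python
def radial_nodes(n, l):
    # Number of radii at which the wavefunction of an (n, l) orbital is zero
    return n - l - 1
```

For the s orbitals (l = 0) discussed above, this reproduces 0 nodes for 1s, 1 node for 2s, and 2 nodes for 3s.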
The s subshell electron density distribution is spherical and the p subshell has a dumbbell shape. The d and f orbitals are more complex. These shapes represent the three-dimensional regions within
which the electron is likely to be found.
Figure \(\PageIndex{3}\): Shapes of s, p, d, and f orbitals.
If an electron has an angular momentum (l ≠ 0), then this vector can point in different directions. In addition, the z component of the angular momentum can have more than one value. This means that
if a magnetic field is applied in the z direction, orbitals with different values of the z component of the angular momentum will have different energies resulting from interacting with the field.
The magnetic quantum number, called m[l], specifies the z component of the angular momentum for a particular orbital. For example, for an s orbital, l = 0, and the only value of m[l] is zero. For p
orbitals, l = 1, and m[l] can be equal to –1, 0, or +1. Generally speaking, m[l] can be equal to –l, –(l – 1), …, –1, 0, +1, …, (l – 1), l. The total number of possible orbitals with the same value
of l (a subshell) is 2l + 1. Thus, there is one s orbital for l = 0, there are three p orbitals for l = 1, five d orbitals for l = 2, seven f orbitals for l = 3, and so forth. The principal
quantum number defines the general value of the electronic energy. The angular momentum quantum number determines the shape of the orbital. And the magnetic quantum number specifies orientation of
the orbital in space, as can be seen in Figure \(\PageIndex{3}\).
Figure \(\PageIndex{4}\): The chart shows the energies of electron orbitals in a multi-electron atom.
Figure \(\PageIndex{4}\) illustrates the energy levels for various orbitals. The number before the orbital name (such as 2s, 3p, and so forth) stands for the principal quantum number, n. The letter
in the orbital name defines the subshell with a specific angular momentum quantum number: l = 0 for s orbitals, 1 for p orbitals, 2 for d orbitals. Finally, there is more than one possible orbital
for l ≥ 1, each corresponding to a specific value of m[l]. In the case of a hydrogen atom or a one-electron ion (such as He^+, Li^2+, and so on), energies of all the orbitals with the same n are the
same. This is called a degeneracy, and the energy levels for the same principal quantum number, n, are called degenerate energy levels. However, in atoms with more than one electron, this degeneracy
is eliminated by the electron–electron interactions, and orbitals that belong to different subshells have different energies. Orbitals within the same subshell (for example, the three 2p orbitals) are still degenerate and have the same energy.
While the three quantum numbers discussed in the previous paragraphs work well for describing electron orbitals, some experiments showed that they were not sufficient to explain all observed results.
It was demonstrated in the 1920s that when hydrogen-line spectra are examined at extremely high resolution, some lines are actually not single peaks but, rather, pairs of closely spaced lines. This
is the so-called fine structure of the spectrum, and it implies that there are additional small differences in energies of electrons even when they are located in the same orbital. These observations
led Samuel Goudsmit and George Uhlenbeck to propose that electrons have a fourth quantum number. They called this the spin quantum number, or m[s].
The other three quantum numbers, n, l, and m[l], are properties of specific atomic orbitals that also define in what part of the space an electron is most likely to be located. Orbitals are a result
of solving the Schrödinger equation for electrons in atoms. The electron spin is a different kind of property. It is a completely quantum phenomenon with no analogues in the classical realm. In
addition, it cannot be derived from solving the Schrödinger equation and is not related to the normal spatial coordinates (such as the Cartesian x, y, and z). Electron spin describes an intrinsic
electron “rotation” or “spinning.” Each electron acts as a tiny magnet or a tiny rotating object with an angular momentum, even though this rotation cannot be observed in terms of the spatial coordinates.
The magnitude of the overall electron spin can only have one value, and an electron can only “spin” in one of two quantized states. One is termed the α state, with the z component of the spin being
in the positive direction of the z axis. This corresponds to the spin quantum number \(m_s=\dfrac{1}{2}\). The other is called the β state, with the z component of the spin being negative and
\(m_s=−\dfrac{1}{2}\). Any electron, regardless of the atomic orbital it is located in, can only have one of those two values of the spin quantum number. The energies of electrons having
\(m_s=−\dfrac{1}{2}\) and \(m_s=\dfrac{1}{2}\) are different if an external magnetic field is applied.
Figure \(\PageIndex{5}\): Electrons with spin values \(±\ce{1/2}\) in an external magnetic field.
Figure \(\PageIndex{5}\) illustrates this phenomenon. An electron acts like a tiny magnet. Its moment is directed up (in the positive direction of the z axis) for the \(\dfrac{1}{2}\) spin quantum
number and down (in the negative z direction) for the spin quantum number of \(−\ce{1/2}\). A magnet has a lower energy if its magnetic moment is aligned with the external magnetic field (the left
electron) and a higher energy for the magnetic moment being opposite to the applied field. This is why an electron with \(m_s=\dfrac{1}{2}\) has a slightly lower energy in an external field in the
positive z direction, and an electron with \(m_s=−\dfrac{1}{2}\) has a slightly higher energy in the same field. This is true even for an electron occupying the same orbital in an atom. A spectral
line corresponding to a transition for electrons from the same orbital but with different spin quantum numbers has two possible values of energy; thus, the line in the spectrum will show a fine
structure splitting.
Video \(\PageIndex{3}\): The uncertainty of the location of electrons.
The Pauli Exclusion Principle
An electron in an atom is completely described by four quantum numbers: n, l, m[l], and m[s]. The first three quantum numbers define the orbital and the fourth quantum number describes the intrinsic
electron property called spin. The Austrian physicist Wolfgang Pauli formulated a general principle that gives the last piece of information that we need to understand the general behavior of
electrons in atoms. The Pauli exclusion principle can be formulated as follows: No two electrons in the same atom can have exactly the same set of all the four quantum numbers. What this means is
that electrons can share the same orbital (the same set of the quantum numbers n, l, and m[l]), but only if their spin quantum numbers m[s] have different values. Since the spin quantum number can
only have two values \(\left(±\dfrac{1}{2}\right)\), no more than two electrons can occupy the same orbital (and if two electrons are located in the same orbital, they must have opposite spins).
Therefore, any atomic orbital can be populated by only zero, one, or two electrons. The properties and meaning of the quantum numbers of electrons in atoms are briefly summarized in Table \(\PageIndex{1}\).
Table \(\PageIndex{1}\): Quantum Numbers, Their Properties, and Significance
Name Symbol Allowed values Physical meaning
principal quantum number n 1, 2, 3, 4, …. shell, the general region for the value of energy for an electron on the orbital
angular momentum or azimuthal quantum number l 0 ≤ l ≤ n – 1 subshell, the shape of the orbital
magnetic quantum number m[l] – l ≤ m[l] ≤ l orientation of the orbital
spin quantum number m[s] \(\dfrac{1}{2},\:−\dfrac{1}{2}\) direction of the intrinsic quantum “spinning” of the electron
Example \(\PageIndex{1}\): Working with Shells and Subshells
Indicate the number of subshells, the number of orbitals in each subshell, and the values of l and m[l] for the orbitals in the n = 4 shell of an atom.
For n = 4, l can have values of 0, 1, 2, and 3. Thus, s, p, d, and f subshells are found in the n = 4 shell of an atom. For l = 0 (the s subshell), m[l] can only be 0. Thus, there is only one 4s
orbital. For l = 1 (p-type orbitals), m can have values of –1, 0, +1, so we find three 4p orbitals. For l = 2 (d-type orbitals), m[l] can have values of –2, –1, 0, +1, +2, so we have five 4d
orbitals. When l = 3 (f-type orbitals), m[l] can have values of –3, –2, –1, 0, +1, +2, +3, and we can have seven 4f orbitals. Thus, we find a total of 16 orbitals in the n = 4 shell of an atom.
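This counting follows mechanically from the rules l = 0, …, n − 1 and m[l] = −l, …, +l; a Python sketch (the function name and letter table are ours):

```python
def shell_orbitals(n):
    # Map each subshell label in shell n to its allowed m_l values
    letters = "spdfghik"  # conventional subshell letters for l = 0, 1, 2, ...
    return {f"{n}{letters[l]}": list(range(-l, l + 1)) for l in range(n)}

orbitals_n4 = shell_orbitals(4)
total = sum(len(mls) for mls in orbitals_n4.values())  # 1 + 3 + 5 + 7 = 16
```

Each subshell contributes 2l + 1 orbitals, giving the 16 orbitals found in the example.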
Exercise \(\PageIndex{1}\)
Identify the subshell in which electrons with the following quantum numbers are found:
1. n = 3, l = 1;
2. n = 5, l = 3;
3. n = 2, l = 0.
Answer a
3p subshell
Answer b
5f subshell
Answer c
2s subshell
Example \(\PageIndex{2}\): Maximum Number of Electrons
Calculate the maximum number of electrons that can occupy a shell with (a) n = 2, (b) n = 5, and (c) n as a variable. Note you are only looking at the orbitals with the specified n value, not those
at lower energies.
(a) When n = 2, there are four orbitals (a single 2s orbital, and three orbitals labeled 2p). These four orbitals can contain eight electrons.
(b) When n = 5, there are five subshells of orbitals that we need to sum:
\[\begin{align*}
&\phantom{+}\textrm{1 orbital labeled }5s\\
&\phantom{+}\textrm{3 orbitals labeled }5p\\
&\phantom{+}\textrm{5 orbitals labeled }5d\\
&\phantom{+}\textrm{7 orbitals labeled }5f\\
&\underline{+\textrm{9 orbitals labeled }5g}\\
&\,\textrm{25 orbitals total}
\end{align*}\]
Again, each orbital holds two electrons, so 50 electrons can fit in this shell.
(c) The number of orbitals in any shell n will equal n^2. There can be up to two electrons in each orbital, so the maximum number of electrons will be 2 × n^2.
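The counting in parts (a)–(c) follows from two rules — each subshell l contributes 2l + 1 orbitals, and each orbital holds two electrons — which a short script can verify (illustrative sketch, not part of the original text):

```python
# Count orbitals and electron capacity for a shell with principal quantum number n.
def orbitals_in_shell(n):
    # Each subshell l = 0 .. n-1 contributes 2l + 1 orbitals; the sum is n^2.
    return sum(2 * l + 1 for l in range(n))

def max_electrons(n):
    # Two electrons per orbital (Pauli exclusion principle) gives 2 * n^2.
    return 2 * orbitals_in_shell(n)

print(orbitals_in_shell(5), max_electrons(5))  # 25 orbitals, 50 electrons
print(max_electrons(2))                        # 8
```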
Exercise \(\PageIndex{2}\)
If a shell contains a maximum of 32 electrons, what is the principal quantum number, n?
n = 4
Example \(\PageIndex{3}\): Working with Quantum Numbers
Complete the following table for atomic orbitals:
Orbital n l m[l] degeneracy Radial nodes (no.)
The table can be completed using the following rules:
• The orbital designation is nl, where l = 0, 1, 2, 3, 4, 5, … is mapped to the letter sequence s, p, d, f, g, h, …,
• The m[l] degeneracy is the number of orbitals within an l subshell, and so is 2l + 1 (there is one s orbital, three p orbitals, five d orbitals, seven f orbitals, and so forth).
• The number of radial nodes is equal to n – l – 1.
Orbital n l m[l] degeneracy Radial nodes (no.)
4f 4 3 7 0
4p 4 1 3 2
7f 7 3 7 3
5d 5 2 5 2
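The three rules translate directly into code; a short illustrative script (the letter mapping assumes the s, p, d, f, g, h, … sequence from the first rule):

```python
# Reproduce a table row from the rules: degeneracy = 2l + 1, radial nodes = n - l - 1.
LETTERS = "spdfgh"  # l = 0, 1, 2, 3, 4, 5 maps to s, p, d, f, g, h

def orbital_row(name):
    # name is an orbital designation such as "4f": n followed by the subshell letter.
    n = int(name[:-1])
    l = LETTERS.index(name[-1])
    return (name, n, l, 2 * l + 1, n - l - 1)

for name in ["4f", "4p", "7f", "5d"]:
    print(orbital_row(name))
```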
Exercise \(\PageIndex{3}\)
How many orbitals have l = 2 and n = 3?
The five degenerate 3d orbitals
Video \(\PageIndex{4}\): An overview of orbitals.
An atomic orbital is characterized by three quantum numbers. The principal quantum number, n, can be any positive integer. The general region for value of energy of the orbital and the average
distance of an electron from the nucleus are related to n. Orbitals having the same value of n are said to be in the same shell. The angular momentum quantum number, l, can have any integer value
from 0 to n – 1. This quantum number describes the shape or type of the orbital. Orbitals with the same principal quantum number and the same l value belong to the same subshell. The magnetic quantum
number, m[l], with 2l + 1 values ranging from –l to +l, describes the orientation of the orbital in space. In addition, each electron has a spin quantum number, m[s], that can be equal to \(±\dfrac{1}{2}\). No two electrons in the same atom can have the same set of values for all the four quantum numbers.
angular momentum quantum number (l)
quantum number distinguishing the different shapes of orbitals; it is also a measure of the orbital angular momentum
wavefunction (ψ)
mathematical description of an atomic orbital that describes the shape of the orbital; it can be used to calculate the probability of finding the electron at any given location in the orbital, as
well as dynamical variables such as the energy and the angular momentum
subshell
set of orbitals in an atom with the same values of n and l
spin quantum number (m[s])
number specifying the electron spin direction, either \(+\dfrac{1}{2}\) or \(−\dfrac{1}{2}\)
shell
set of orbitals with the same principal quantum number, n
s orbital
spherical region of space with high electron density, describes orbitals with l = 0. An electron in this orbital is called an s electron
quantum mechanics
field of study that includes quantization of energy, wave-particle duality, and the Heisenberg uncertainty principle to describe matter
principal quantum number (n)
quantum number specifying the shell an electron occupies in an atom
Pauli exclusion principle
specifies that no two electrons in an atom can have the same value for all four quantum numbers
p orbital
dumbbell-shaped region of space with high electron density, describes orbitals with l = 1. An electron in this orbital is called a p electron
magnetic quantum number (m[l])
quantum number signifying the orientation of an atomic orbital around the nucleus; orbitals having different values of m[l] but the same subshell value of l have the same energy (are degenerate),
but this degeneracy can be removed by application of an external magnetic field
Heisenberg uncertainty principle
rule stating that it is impossible to exactly determine both certain conjugate dynamical properties such as the momentum and the position of a particle at the same time. The uncertainty principle
is a consequence of quantum particles exhibiting wave–particle duality
f orbital
multilobed region of space with high electron density, describes orbitals with l = 3. An electron in this orbital is called an f electron
electron density
a measure of the probability of locating an electron in a particular region of space, it is equal to the squared absolute value of the wave function ψ
d orbital
region of space with high electron density that is either four lobed or contains a dumbbell and torus shape; describes orbitals with l = 2. An electron in this orbital is called a d electron
atomic orbital
mathematical function that describes the behavior of an electron in an atom (also called the wavefunction), it can be used to find the probability of locating an electron in a specific region
around the nucleus, as well as other dynamical variables
• Paul Flowers (University of North Carolina - Pembroke), Klaus Theopold (University of Delaware) and Richard Langley (Stephen F. Austin State University) with contributing authors. Textbook content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/85abf193-2bd...a7ac8df6@9.110).
• Adelaide Clark, Oregon Institute of Technology
• Crash Course Physics: Crash Course is a division of Complexly and videos are free to stream for educational purposes.
• Crash Course Chemistry: Crash Course is a division of Complexly and videos are free to stream for educational purposes.
• TED-Ed’s commitment to creating lessons worth sharing is an extension of TED’s mission of spreading great ideas. Within TED-Ed’s growing library of TED-Ed animations, you will find carefully
curated educational videos, many of which represent collaborations between talented educators and animators nominated through the TED-Ed website.
Bathtubs | Advanced Competitive Strategies
What bathtubs can teach us about business creativity, by Mark Chussil
You have a bathtub in your house. It’s filling up with water. How many ways can you imagine to stop it from overflowing? They don’t have to be practical; use your creativity. Seriously: conjure up as
many ideas as you can before you continue reading.
How many ideas did you come up with?
I ran a two-day workshop on business war gaming, and asked the attendees to come up with scenarios that might cause the CEO of an airline to lose sleep. One by one, the attendees said they’d
exhausted their imaginations and written down every idea they could think of. I then asked each person to come up with five more ideas. All of them did.
Now, come up with five more ideas for the bathtub.
How many ideas do you have now?
I’ve heard about 50 solutions in discussing bathtubs with many audiences.
Knowing that, can you devise even more ideas?
True, strategists aren’t often called upon to imaginatively prevent bathtubs from overflowing at corporate headquarters. The concept matters, though, because it illustrates availability bias. Imagine
asking people to create a fault tree, a diagram of paths to some specific fault or problem. According to professors Jay Russo and Paul Schoemaker, availability bias makes people “assume that the
causes they have listed will account for almost everything that could go wrong, and they dramatically underestimate the impact of events in the final category of ‘all other’ causes.” (“Dramatically”
underestimate is right. Their experiments are stunning. See pages 112 and 114-5 of their book Winning Decisions.)
There are three bottom lines.
One: You probably came up with lots more ideas than you thought you could. Have more confidence in yourself and your creativity.
Two: You probably found that one idea would lead to others. Even impractical ideas can lead to useful ideas.
Three: The farther you go, the farther you get. More ideas mean more opportunities and fewer surprises.
So: How many new ideas can you generate for your business’ strategy?
This post was adapted from an exercise in Nice Start, by Mark Chussil, forthcoming.
No Comments
What Makes Stocks Go Up?
If you read the title of this and are hoping for a magic formula that mints money, then read no further. You’re not going to find what you’re looking for in here.
Both stock prices and changes in stock prices can be deconstructed into simple mathematical terms. In this post we’ll walk through these basics.
Outline for this post:
1. What is a stock price?
2. What causes stock prices to change?
3. What matters more - EPS growth, or multiple contraction?
4. How can investors tell what expectations are 'priced-in' to a stock?
5. What can investors do about this?
6. Summary
1: What is a stock price?
The value of a financial asset, such as a stock, is the present value of future cash flows available to the owners of that asset.
A stock price is the market’s current estimate of the net present value (“NPV”) of all the future cash flows available to the owners of that business (i.e. shareholders).
Note the difference between price and value: value is the actual present value of a stock’s future cash flows (which is unknowable with precision), whereas price is the market’s current estimate of
that value.
The buy & sell actions of investors set prices for publicly traded companies on the stock market – this is supply & demand at work. Market participants individually assess the magnitude and timing of
cash flows, as well as the appropriate rate to discount those cash flows. The market price informs us of investors’ aggregate expectations for those factors (i.e. the magnitude and timing of cash
flows as well as the discount rate).
If the market believes that the net present value of Company X’s cash flows is $100,000, then the stock’s market capitalization will be $100,000. If there are 100 shares outstanding, then the stock’s
price per share is $1,000 (i.e. $100,000 market cap divided by 100 shares outstanding).
This discounted cash flow (“DCF”) value can be expressed as the product of a company’s earnings per share (“EPS” i.e. total earnings divided by the number of shares outstanding) and the multiple of
those earnings that investors are currently prepared to pay (the “price/earnings” or “P/E” multiple)[i].
If Company X’s EPS next year is expected to be $100, the P/E multiple is 10x (i.e. $1,000 stock price divided by $100 of earnings).
The inverse of the P/E multiple is the E/P, which is the earnings yield. Company X’s earnings yield is 10% ($100 of EPS divided by $1,000 share price).
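The Company X arithmetic can be written out directly; a minimal sketch of the definitions above:

```python
# Company X example: market cap, price per share, P/E multiple, and earnings yield.
market_cap = 100_000       # market's estimate of NPV of future cash flows ($)
shares_outstanding = 100
eps = 100                  # expected earnings per share next year ($)

price = market_cap / shares_outstanding   # price per share
pe_multiple = price / eps                 # multiple investors pay per $1 of earnings
earnings_yield = eps / price              # inverse of the P/E multiple

print(price, pe_multiple, earnings_yield)  # 1000.0 10.0 0.1
```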
2: What causes stock prices to change?
A stock goes up when EPS rises, the P/E multiple expands, or both[ii].
What makes EPS rise?
EPS rises when one or more of the following occurs:
• Revenue grows
• Margins expand
• The number of shares outstanding is reduced (due to the company repurchasing its shares)
Revenue growth and margin expansion are both a function of return on incrementally invested capital (“ROIIC”) and the reinvestment rate. I’ll elaborate on these points further below.
Share repurchases occur at the discretion of a company’s board of directors and management team.
What makes the P/E multiple rise?
P/E multiples rise when the perceived DCF value of a business increases relative to nearer-term earnings. Investors’ DCF projections of a business rise when one or more of the following occurs:
• Investors believe the rate of earnings growth in the future will be higher than they had previously expected
• Investors believe that ROIIC will be higher in the future than they had previously expected (either due to expectations of higher future margins or expectations of higher ‘capital velocity’)
• The expected timeframe during which the company will earn ROIIC above its cost of capital (“weighted average cost of capital” or “WACC”) increases. This is known as the competitive advantage
period (“CAP”)
• Broad-market discount rates fall due to a lower risk-free rate (US Treasury bonds are typically referenced as the ‘risk free rate’)
• Company-specific discount rates fall due to investors having more confidence in the company’s future earnings streams than they previously had. Investors believe that uncertainty has been reduced.
We can break down stock returns visually:
In a future post I’ll demonstrate the relationship between multiples, returns on capital, reinvestment rate, growth and value in more depth. But for now let’s stay focused on the top part of that breakdown.
3: What matters more – EPS growth, or multiple expansion/contraction?
Let’s use a simplified DCF analysis to see what IRR (internal rate of return) and MOIC (multiple on invested capital) an investor generates investing in a company whose multiple gets cut in half
(from 30x to 15x), with the following characteristics:
• EPS growth is 20%
• 10% of EPS is distributed annually as dividends
• The holding period is 3 years
The 3-year IRR is -9.9% and MOIC of 0.7x.
Now let’s look at the IRR for the same company when the investor’s holding period is 20 years, rather than 3 years:
The IRR is 15.4%... The IRR is closing in on the EPS growth.
And now for a 99 year holding period:
For a 99 year holding period, the IRR is 19.9% - virtually identical to the EPS growth. Of course, it would be very impressive if a business was able to sustain 20% EPS growth for a century, but you
get the point!
The longer an investor owns a business, the closer the investor’s IRR approximates the EPS growth rate of the business and the less the impact of the change in the company’s P/E multiple.
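Since the post's spreadsheet isn't shown, the sketch below assumes specific cash-flow conventions (buy at the entry multiple times current EPS, collect the 10% payout annually, sell at the exit multiple), so the exact IRRs differ from the figures quoted above; the qualitative result — the halved multiple dominates a 3-year hold, while long holds converge toward the 20% EPS growth rate — comes through:

```python
# Illustrative sketch only: cash-flow conventions are assumptions, not the
# author's exact model.
def irr(cashflows):
    # Bisection on NPV; assumes NPV is positive at the low end of the bracket
    # and negative at the high end (true for buy-then-receive flows).
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    lo, hi = -0.99, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def scenario(years, eps0=1.0, growth=0.20, payout=0.10, entry_pe=30, exit_pe=15):
    eps = [eps0 * (1 + growth) ** t for t in range(years + 1)]
    flows = [-entry_pe * eps[0]]                               # purchase
    flows += [payout * eps[t] for t in range(1, years)]        # annual dividends
    flows.append(payout * eps[years] + exit_pe * eps[years])   # final dividend + sale
    return irr(flows)

# Short hold: the multiple cut dominates. Long hold: IRR approaches EPS growth.
print(round(scenario(3), 3), round(scenario(20), 3), round(scenario(99), 3))
```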
Predicting what multiple market participants will slap onto a company’s earnings is a form of Keynesian Beauty Contest[iii]. However, it’s an activity you’re forced to engage in if you plan to only
hold a stock for a few years.
EPS growth is not forecastable with any precision either, although this activity involves more ‘fundamental’ research than predicting multiple changes. This is the area that business analysts with a
long-term horizon should focus on.
4: How can investors tell what expectations are ‘priced-in’ to a stock?
Market prices contain information about investors expectations for the following:
• Growth in EPS (which is a function of returns on incrementally invested capital and the reinvestment rate)
• CAP
• Discount rate
I’ve written two posts that cover this in more detail, Unit Economics and Cohort Retention Curves and What's Priced In.
You can also check out these great resources:
5: What can investors do about this?
The one job of an equity investor is to take advantage of gaps between expectations and fundamentals. Expectations reflect the future free cash flows a company must deliver to justify today’s
stock price. Fundamentals capture the company’s actual results. Tomorrow’s outcomes that are different than today’s perceptions lead to revisions in expectations that are the source of excess returns.
Expectations are like the odds on the tote board that a racehorse will win. Fundamentals are the result of the race. Handicappers know that you don’t make money by picking favorites. You make money by spotting mispriced odds and investing accordingly.”
6: Summary
Economists, it is said, know the price of everything but the value of nothing[v]. Stock prices contain information about investors’ expectations for the future cash flow generation of a business,
among other factors. Investors’ one job is to take advantage of gaps between expectations and fundamentals.
If an investor plans to hold a stock for less than a handful of years, the investor’s fortunes will be tied to multiple re-ratings of the stock (absent astronomical growth/decline rates). If an
investor plans to hold a stock forever, the investor’s fortunes will be tied to the growth in free cash flow per share of the company (absent bubble-like starting/ending points).
Pick your research focus accordingly.
Business quality is the most important factor in growth investing. By ‘business quality’ I mean the ability for a company to earn high returns on capital, which can be achieved by high sales turnover
or high margins (or some combination thereof). Sales velocity aka asset turnover is a measure of how productively a company can use its assets to generate sales. Margin is a function of the
difference between total revenues and total costs, where revenues = price * volume. I’ll explore all of this and more in a future post.
[i] For simplicity I use EPS and P/E multiples in this post. The same principle applies for all valuation measures.
[ii] EPS growth is the largest determinant of dividend potential in the long run. For simplicity we’ll be ignoring dividends explicitly in this post. Dividends are a critical component of the value
equation (shareholders believe that they will eventually be able to take their percentage ownership of the business’s cash generation out of the business in the form of dividends).
[iii] For a fun read on Keynesian Beauty Contests, read this article by Richard Thaler: https://www.ft.com/content/6149527a-25b8-11e5-bd83-71cb60e8f08c
[iv] https://www.morganstanley.com/im/publication/insights/articles/articles_onejob.pdf
[v] A search on Google informs me that this is an adapted quote from Oscar Wilde, who was referring to cynics, rather than economists.
Sudoku P
Sue Gleason's Sudoku Puzzle Tutorial Page
One general Sudoku solution approach:
Never write a number into a spot unless you are sure that's where it must go. There are only two possible ways that you can be sure. Either all other numbers are blocked from that spot, or that
number is blocked from all other spots in an enclosing region: either column, row or square.
The individual SPOTs are designated by labels like R2C3, for the spot in row 2, column 3. Spot R2C3 is part of SQUARE TL, the top left corner. TL, TC, TR, ML, MC, MR, BL, BC, and BR refer to the nine
3x3 square REGIONs. T, M and B refer to the three horizontal BLOCKs; L, C and R refer to the three vertical blocks. Each block contains 3 squares in a line, or 27 individual spots. Besides the SQUARE
regions, we also consider each COLUMN and ROW as a REGION. A STRIP is a line of 3 SPOTS within one SQUARE.
Each row, column and 3x3 square must contain exactly one of each digit, 1-9. Four strategies for this example:
If any number appears exactly twice in a block, often the third spot may be deduced. In our example, a 2 appears twice in the top third. So in square TL, rows 1 and 3 may not have a 2. There is
only one free spot in TL row 2; thus R2C1 must have a 2.
Look for completely filled strips of 3 spots within one 3x3 square. Consider the LEFT third of the puzzle. The ML square has a completely full strip in column 1. The ML square has no value 1. The 1
is blocked from column 1 because column 1 is full, and from column 3 because the TL square has a 1 in column 3 at R2C3. So the 1 must go in column 2. Row 6 already contains a 1 and R5C2 is filled.
Therefore the 1 must go in R4C2
Look for the fullest regions. If a row, column or square contains at least 5 values, consider the whole set of missing numbers and see whether any of their spots can be deduced. In our example, now
square ML has 5 spots full. The missing values are 2,4,5 and 7. The strip of C3 within square ML is completely empty. A 7 appears in C3 in the square below, so the ML 7 must go in the only non-C3
empty spot of ML, namely R6C2.
Scan the grid, concentrating on one digit at a time. Square TC needs a 1. The 1 is blocked from row 2 and columns 4 and 6, (because a 1 already appears in those regions) so the only possible spot is
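All four strategies rest on the same elimination step: for each empty spot, compute which digits are not yet used in its row, column, and 3x3 square. A minimal sketch of that step (illustrative code, not from the original page; 0 marks an empty spot):

```python
# Candidate digits for the empty spot at row r, column c: the digits 1-9 not
# already present in that spot's row, column, or 3x3 square region.
def candidates(grid, r, c):
    used = set(grid[r])                                  # row
    used |= {grid[i][c] for i in range(9)}               # column
    br, bc = 3 * (r // 3), 3 * (c // 3)                  # top-left of the square
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used

# A spot whose candidate set holds a single digit can be filled immediately;
# a digit blocked from every other spot of a region must go in the remaining one.
```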
http://www.doublecrostic.com © 2006 Sue Gleason Last updated: Feb. 28, 2006
How to generate unique numbers using Fisher-Yates Algorithm with Java
In this article, we will be writing a java program that implements the paper and pencil method of the Fisher-Yates algorithm to generate nth unique numbers.
You can also use any list of numbers (or anything else it doesn't matter ) to shuffle their sequence.
But first ...
What is this fish?
The Fisher-Yates shuffle is an algorithm named after Ronald Fisher and Frank Yates, and it is used to shuffle a sequence. The modern in-place variant runs in O(n) time; the paper-and-pencil variant implemented in this article runs in O(n²), because removing an element from the middle of an ArrayList takes O(n). The main idea: imagine you have ordered numbers written on a piece of scratch paper. You randomly strike out a number and write it down on another piece of paper, and repeat until no unstruck number remains. The order in which the numbers are written down is your shuffled sequence.
Here we go!
1. Start by declaring the variables you will use
throughout the program
int n = Integer.parseInt(args[0]) ; // amount of numbers to generate
int k; // random index of unstruckNums
ArrayList<Integer> unstruckNums = new ArrayList<Integer>();
ArrayList<Integer> results = new ArrayList<Integer>();
2. Fill the UnstruckNums list with numbers
from 0 to n (exclusive)
for (int i = 0; i < n; i++) {
    unstruckNums.add(i);
}
Now, what we want to do is strike out random numbers from the unstruckNums list and add them to a separate list which is the results. To achieve this we
3. generate a random index k
which is between 0 and the amount of unstruckNums remaining
for (int i = 0; i < n; i++) {
    // k represents the index of the number we want to strike out from the unstruckNums
    k = (int) Math.floor(Math.random() * (unstruckNums.size()));
    // (step 4 uses k to move the struck number into results)
}
4. Add the number at index k to our result List
and strike it out from the unstruckNums list.
for (int i = 0; i < n; i++) {
    k = (int) Math.floor(Math.random() * (unstruckNums.size()));
    results.add(unstruckNums.remove(k));
}
That's it, you're done. Print out the results:
An example:
@tebza> javac UniqueNums.java
@tebza> java UniqueNums.java 12
[6, 1, 0, 5, 4, 10, 2, 11, 3, 8, 9, 7]
Here is the full code:
import java.util.ArrayList;

public class UniqueNums {
    public static void main(String[] args) {
        int n = Integer.parseInt(args[0]); // amount of numbers to generate
        int k; // random index of unstruckNums
        ArrayList<Integer> unstruckNums = new ArrayList<Integer>();
        ArrayList<Integer> results = new ArrayList<Integer>();

        // Fill the unstruckNums list with numbers from 0 to n (exclusive)
        for (int i = 0; i < n; i++) {
            unstruckNums.add(i);
        }

        // Strike out random numbers and collect them until none remain
        for (int i = 0; i < n; i++) {
            // k represents the index of the number we want to strike out from the unstruckNums
            k = (int) Math.floor(Math.random() * (unstruckNums.size()));
            results.add(unstruckNums.remove(k));
        }

        System.out.println(results);
    }
}
The fisher-yates shuffle is a simple algorithm used to shuffle the sequence of lists. We have used it to shuffle an ordered list of numbers, to generate a list of unique numbers.
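For comparison (not part of the original tutorial), the modern in-place variant swaps elements instead of removing them, which brings the whole shuffle down to O(n); a sketch in Python:

```python
import random

def fisher_yates(items):
    # Modern in-place variant: walk from the end, swapping each position
    # with a uniformly chosen index at or below it. O(n) time, O(1) extra space.
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)  # inclusive bounds
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(range(12)))  # a random permutation of 0..11
```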
Schematic Reading notes
Notes on Reading Schematics
First edition 02/23/01
Tube design information
Basic electrical training (NEETS)
Schematic Reading Help
First edition 02/23/01 Last update 02/23/01
1. A dot where two wires cross means there is a connection (solder joint) there.
A three wire connection that looks like "T" means there is a connection there even if there is no dot. It is good practice to use a dot on a "T" connection.
2. The standards say that two wires crossing in a + are not supposed to be connected unless there is a dot in the middle of the +.
It is good practice to never use a + connection with a dot. Why? The dot can disappear when the schematic is copied for the 12th time. Caution: not everyone follows the standards.
3. Generally, inputs are supposed to be on the left, outputs on the right and the current flow from top of the page down to the bottom of the page. Few people follow this standard.
4. When a wire goes to a name, like B+, you can write the name of the wire on all locations the wire appears instead of drawing lots of wires to show this wire is hooked up. Good practice is to only
do this on shared signals like power supply wires. The purpose of this is to make the schematic more readable.
It is good practice in a schematic to always use the same character for a "space" in a name to avoid mistakes. Do not use INPUT_FILTERED and INPUT-FILTERED. Stick with underlines where ever
possible to avoid confusion with "-" minus signs.
All Those Greek Letters
First edition 02/23/01 Last update 03/02/01
p means pico or 1/1,000,000,000,000; this is "mm" or "uu" in some old designs, for (1/1,000,000)/1,000,000
n means nano or 1/1,000,000,000
"u" or on old parts "m" means micro or 1/1,000,000
m on new designs means milli or 1/1,000
k means kilo or 1,000
M or meg means mega or 1,000,000. I prefer meg to M.
G or giga means 1,000,000,000
A 2.0 K and a 2k0 resistor are the same value and both are 2000 ohms.
A 1.0 u and a 1u0 capacitor are the same value and are 1 microfarad.
I prefer the 1u0 over 1.0u because the "." can easily get erased and become 10u capacitor.
w (omega) usually means 2 * PI * frequency.
j (or i) is the square root of -1 (don't sweat this, you'll go nuts)
XC is the impedance of a capacitor = 1/( jwC). Current occurs before voltage on a capacitor.
XL is the impedance of an inductor = jwL. Voltage occurs before current on an inductor.
On a pure sinewave, the peak voltage is sqrt( 2) times the RMS.
Beware, the average RMS and true RMS are only the same on a pure sinewave. True RMS is always higher than average RMS.
RMS Root Mean Squared.
Voltages on transformers in schematics are usually loaded RMS voltages. Unloaded RMS voltages can be 1 to 50% higher than the loaded RMS voltages. Expect the unloaded voltage to be 5% higher than the
loaded RMS voltage on fairly good transformers.
The voltages on transformers in schematics ARE NOT loaded RMS voltages when you are using a PSPICE schematics. PSPICE transformers use the unloaded peak voltage. PSPICE outputs also usually read out
in peak voltage, not RMS voltage. It is easy to get mixed up in PSPICE whether or not you have peak or RMS voltages. So when you see a PSPICE output or schematic, be very careful on whether the
voltage is peak or RMS. If you are doing relative measurements (using dB) in PSPICE, as long as you consistently use peak or consistently use RMS in the equations, you will get the right answer.
Those Log Beasties
First edition 03/02/01 Last update 03/02/01
LOG or log means the logarithm of a number in base 10. The inverse of LOG is 10^x.
LN or ln means the logarithm of a number in base "e". The inverse of LN is e^ x.
When calculating gain in dB, the impedance must stay the same for numbers to be legal.
When calculating the ratio of two voltages we use:
20 * log( V1/ V2) V1 and V2 are the absolute value of the voltage, leave the "-" signs off front of the number.
20 * log( -10/ 20) is not right.
20 * log( 10/ 20) is right and equals -6.020599913. We normally say -6 dB for this or 6 dB down.
20 * log( 10E-6/ 20E-6) is right. The "-" after the "E" indicates that 10E-6 is (10 * 10^-6) or 0.00001 or (10/1,000,000).
When calculating the ratio of two power we use:
10 * log( P1/ P2)
This works is because the resistance used is the same for both measurements.
P1 = V1^2 / Rload
P2 = V2^2 / Rload
P1/P2 = (V1/V2)^2
10 Log (P1/P2) = dB
10 Log (V1/V2)^2 = 10 * 2 * log(V1/V2) = 20 Log (V1/V2) = dB
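The ratio formulas above are easy to sanity-check numerically (illustrative sketch):

```python
import math

def db_voltage(v1, v2):
    # 20 * log10 of a voltage ratio (absolute values; same impedance at both points).
    return 20 * math.log10(abs(v1) / abs(v2))

def db_power(p1, p2):
    # 10 * log10 of a power ratio.
    return 10 * math.log10(p1 / p2)

print(round(db_voltage(10, 20), 3))        # -6.021 dB, i.e. "6 dB down"
print(round(db_power(10**2, 20**2), 3))    # same answer via power into equal loads
```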
First version 19 Jan 01. Last change 1/26/02.
Phase factors $e^{i \phi(\vec x,t)}$, as they appear in quantum mechanics, are just complex numbers with amplitude $1$. Therefore, we can picture them as points on a circle with radius $1$:
This collection of all complex numbers with amplitude $1$ is what we call the group $U(1)$.
The Lie algebra corresponding to the group U(1) is usually identified with the set of pure imaginary numbers $Im \mathbb{C} = \{ i \theta : \theta \in \mathbb{R} \}$.
Take note that the tangent space of a circle is, of course, just a copy of $\mathbb{R}$ but the isomorphic space $Im \mathbb{C}$ is more convenient because its elements can be "exponentiated" to give
the elements $e^{i \theta}$ of $U(1)$.
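Numerically, exponentiating a Lie-algebra element $i \theta$ does land on the unit circle, and multiplying two phases adds their angles; a quick check (illustrative sketch):

```python
import cmath

theta = 0.7                   # any real number
g = cmath.exp(1j * theta)     # element of U(1)
print(abs(g))                 # 1, up to floating-point error

# Group law: multiplying phases adds the angles.
h = cmath.exp(1j * 0.5)
assert cmath.isclose(g * h, cmath.exp(1j * (theta + 0.5)))
```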
The diagram below shows the defining representation of $U(1)$ in its upper branch and the conjugate representations of the same group in its lower branch. For a more detailed explanation of this
diagram and more representations of $U(1)$ see Fun with Symmetry.
MQTT: retain not working properly
I found an issue with data retention and it took me quite a while to figure out when exactly it happens.
Using MQTT my data used to be stored just fine until an update earlier this year (not sure when exactly, probably between march and june 2017, maybe earlier).
Before the update, all data would be retained automatically. Now the ‘retain’ flag in the MQTT header has to be set. So far so good. It works when I test it using MQTT.fx to manually submit messages.
BUT: only if the channel in the topic is an integer. If I use any other name for the channel number it gets published and displayed fine but the data is not retained. This used to work fine prior to
the upgrade.
topic: “v1/username/things/clientID/data/2” data: “temp,c=12” → works fine (if retain flag is set)
topic: “v1/username/things/clientID/data/Sensor1” data: “temp,c=12” → does not work (channel shows in dashboard but data is not retained)
The only workaround is to only use integer channel names. Can this be fixed?
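Until alphanumeric channels are supported, the workaround can be enforced in client code by validating the channel before building the topic. The helper below is a hypothetical sketch, not part of any Cayenne library; with a real client such as paho-mqtt you would pass the resulting topic to publish() with retain=True:

```python
# Build a Cayenne-style data topic, rejecting non-integer channels so that
# historical data is retained (the issue described above).
def cayenne_topic(username, client_id, channel):
    if not isinstance(channel, int):
        raise ValueError("channel must be an integer, got %r" % (channel,))
    return "v1/%s/things/%s/data/%d" % (username, client_id, channel)

print(cayenne_topic("username", "clientID", 2))  # v1/username/things/clientID/data/2
# e.g. client.publish(cayenne_topic(...), "temp,c=12", retain=True)
```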
I was actually under the impression that only integers were allowed for MQTT channels. @rsiegel was there a change recently that would change the behavior here?
Hi, sorry to jump on this, but I’m new to cayenne, and to mqtt, and to hacking about with IoT stuff in general.
I’m experiencing a similar (the same?) problem - using mqtt for python from an RPi, and naming my widgets, I have no persistence, which for my project is pretty much the whole point…but I don’t see
any change when renaming to integers…
Should I be raising a new topic, or can we aim for two birds with one stone here?
Thank you,
IF with the latest version only integers are allowed then this should be stated in the documentation. Or did I miss it?
It certainly did work before, in fact I have screenshots of it working back in january on my blog: [ESP8266 & Cayenne – Bäschteler of Science Blog]
In the code there I am sending a string as the channel and in the screenshots you can see how it works as a graph, i.e. data is retained.
I’ll say that I’ve never considered using anything other than integers here (to be clear, for the ‘Channel’ field – you can name the widgets whatever you want in the ‘Widget Name’ field in the
Cayenne UI of course)
That said, I don’t know if its a technical limitation or just something I hadn’t considered trying. I know that MQTT as a protocol is OK with strings, but maybe there is something in our
implementation that limits us to integers here. I’m tagging @eptak and @jburhenn to see if they know more.
1 Like
Just to clarify, when you mention retained data are you referring to the historical data that is shown in the charts? If so, that should actually be unrelated to the retain flag, which is just an
MQTT feature to retain the last MQTT message sent for a topic, so you can receive it the next time you subscribe to the topic.
If you are referring to the historical data not being saved then that could be a backend issue, or potentially as-designed since we’ve only really used integer channels before. Perhaps it only
happened to work with string channels before by luck. I’m not too familiar with the backend code so I’m not sure the answer. Adding @asanchezdelc1, in case he would know.
1 Like
Might be worth updating the docs for the Bring Your Own Thing API. It doesn't really specify that you should be using an integer, but all the examples use integers, which is probably why I assumed that's all you could use.
For the time being, it has to be an integer. In the near future, channels will be alphanumeric.
1 Like
Thanks for looking into this.
By retain I actually mean the storing of the data and the graph display. When I tested that using MQTT.fx I thought it only worked with the ‘retain’ flag set in the MQTT message, but I may be wrong.
Using alphanumeric channel names it does partly work, i.e. the widget is created and displayed, just none of the values are stored. When the widget page is reloaded, all values are blank until the next update arrives.
|
{"url":"https://community.mydevices.com/t/mqtt-retain-not-working-properly/6169","timestamp":"2024-11-14T12:20:25Z","content_type":"text/html","content_length":"38323","record_id":"<urn:uuid:bebd43af-3c01-44bc-b0cd-7f810aba8b24>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00704.warc.gz"}
|
On Minimum Sum of Radii and Diameters Clustering
Journal Title
Title of Journal: Algorithmica
Abbreviation: Algorithmica
Springer US
Authors: Babak Behsaz, Mohammad R. Salavatipour
Publish Date: 2014/07/04
Volume: 73, Issue: 1, Pages: 143-165
Given a metric (V, d) and an integer k, we consider the problem of partitioning the points of V into at most k clusters so as to minimize the sum of radii or the sum of diameters of these clusters. The former problem is called the minimum sum of radii (MSR) problem and the latter is the minimum sum of diameters (MSD) problem. The current best polynomial-time algorithms for these problems have approximation ratios 3.504 and 7.008, respectively. We call a cluster containing a single point a singleton cluster. For the MSR problem, when singleton clusters are not allowed, we give an exact algorithm for metrics induced by unweighted graphs. In addition, we show that in this case a solution consisting of the best single cluster for each connected component of the graph is a 3/2-approximation algorithm. For the MSD problem on the plane with Euclidean distances, we present a polynomial-time approximation scheme. In addition, we settle the open problem of the complexity of the MSD problem with constant k by giving a polynomial-time exact algorithm in this case. The previously best known approximation algorithms for MSD on the plane or for MSD with constant k both have ratio 2. We would like to thank two anonymous referees for their great comments and suggestions, especially for bringing to our attention the connection of Lemmas 8 and 7. Babak Behsaz was supported in part by an Alberta Innovates Graduate Student Scholarship; Mohammad R. Salavatipour was supported by NSERC and an Alberta Ingenuity New Faculty Award.
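To make the two objectives concrete, here is a small brute-force illustration of the MSR and MSD definitions on one-dimensional points. This is purely illustrative — it is not the paper's algorithm — and it relies on the fact that for 1-D points an optimal clustering can be taken contiguous:

```python
from itertools import combinations

def best_k_clustering(points, k, objective):
    """Exhaustively partition sorted 1-D points into at most k contiguous
    clusters; return the minimum sum of radii ("MSR") or diameters ("MSD")."""
    pts = sorted(points)
    n = len(pts)
    # Radius of a 1-D cluster is half its extent; diameter is the extent.
    measure = (lambda c: (c[-1] - c[0]) / 2) if objective == "MSR" \
        else (lambda c: c[-1] - c[0])
    best = float("inf")
    for m in range(1, min(k, n) + 1):            # number of clusters used
        for cuts in combinations(range(1, n), m - 1):
            bounds = [0, *cuts, n]
            clusters = [pts[bounds[i]:bounds[i + 1]] for i in range(m)]
            best = min(best, sum(measure(c) for c in clusters))
    return best

pts = [0, 1, 2, 10, 11, 20]
print(best_k_clustering(pts, 3, "MSR"))  # 1.5  -> [0,1,2], [10,11], [20]
print(best_k_clustering(pts, 3, "MSD"))  # 3.0
```

Note how the singleton cluster [20] has radius and diameter zero, which is exactly why the paper treats the no-singletons variant separately.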
|
{"url":"https://pdf-paper.com/2014/185/116","timestamp":"2024-11-05T01:42:00Z","content_type":"application/xhtml+xml","content_length":"24660","record_id":"<urn:uuid:8c6e454d-21f3-434c-8500-a55a83f0a886>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00358.warc.gz"}
|
Kinetic energy of runner race question
• Thread starter OVB
In summary, the father's kinetic energy is half that of his son, but when the father increases his speed by one m/s, the kinetic energies are equal.
Say a father whose mass is two times that of his son is racing against him, and his kinetic energy is half of his son's. When the father increases his speed by one m/s, the kinetic energies
are equal.
I do this:
M = mass of father
V = velocity of father
0.5MV^2 = 0.5(0.5mv^2)
2MV^2 = mv^2
(4m)V^2 = mv^2
4V^2 = v^2
2V = v
0.5M(V+1)^2 = 0.5m(2V)^2
(V+1)^2 = 4V^2
V^2 + 2V + 1 = 4V^2
-3V^2 +2V + 1 = 0
(3V +1) (-V + 1)
V = 1, -1/3
so V = 1 m/s
However, my book says the speeds are 2.4 m/s and 4.8 m/s for father and son, respectively. What am I doing wrong?
M & V =mass and velocity of father right?
m & v =mass and velocity of son?
since the father's initial KE is half his son's, why did you multiply the 0.5 onto the KE of the son instead of the father?
BTW, my answer for the velocity of the son is 3.414 m/s.
Last edited:
No, that is how it should be. KE of F = 0.5(KE of son)
Does anyone know why the answers are 2.4 and 4.8?
Science Advisor
Homework Helper
OVB said:
Say a father who has a mass that is two times that of his son is racing against him, and his kinetic energy is half of his son. When the father increases his speed by one m/s, the kinetic
energies are equal.
I do this:
M = mass of father
V = velocity of father
0.5MV^2 = 0.5(0.5mv^2)
2MV^2 = mv^2
(4m)V^2 = mv^2
4V^2 = v^2
2V = v
0.5M(V+1)^2 = 0.5m(2V)^2
2m(V+1)^2 = m(2V)^2 <== added
(V+1)^2 = 2V^2
V^2 + 2V + 1 = 2V^2
-1V^2 +2V + 1 = 0
V^2 - 2V - 1 = 0
V = 1 ± sqrt(2)
so V = 1 + sqrt(2) ≈ 2.414 m/s
However, my book says the speeds are 2.4 m/s and 4.8 m/s for father and son, respectively. What am I doing wrong?
See the corrected lines marked above
The first thing we do is relate the father's kinetic energy to the son's according to the question. I will keep the father on the LHS and son on the RHS. I will used lowercase v for the father's
velocity and uppercase V for the son's velocity.
1) (0.5)(m)(v*v) = (0.5)(0.5)(0.5m)(V*V)
// Now multiply by 8 to remove fraction...
4m(v*v) = m(V*V)
// Now divide by m to simplify...
4(v*v) = (V*V)
// Now take square root of both sides.
2v = V
// This gives us the son's velocity V in terms of the
// father's velocity v. ie: V = 2v.
Now in order to have the father's K equal the son's K we do two things.
- Add 1 to the father's velocity on the LHS.
- Multiply the RHS by 2 since we are not relating the father's K to
half the son's K anymore. ie: Instead of K = 0.5K we now have
K = K, since that's what happens when we add 1 to the father's velocity.
2) (0.5)(m)(v+1)(v+1) = (0.5)(0.5m)(2v)(2v)
// Remember V = 2v
// Multiply by 2 and divide by m to simplify...
(v+1)(v+1) = (2)(v*v)
// Take the square root of both sides...
v+1 = sqrt(2)*v
v = 1 / (sqrt(2) - 1)
v = 2.41 m/s.
Now all we have to do is substitute into V = 2v to get the son's original velocity:
V = 2 * 2.41 = 4.82 m/s.
you went wrong on this part:
0.5M(V+1)^2 = 0.5m(2V)^2
(V+1)^2 = 4V^2
V^2 + 2V + 1 = 4V^2
-3V^2 +2V + 1 = 0
(3V +1) (-V + 1)
V = 1, -1/3
so V = 1 m/s
0.5M(V+1)^2 = 0.5m(2V)^2 ... substitute M=2m here. This will give you:
2(V+1)^2 = 4V^2
V^2 + 2V + 1 = 2V^2
-V^2 +2V + 1 = 0
or V^2 - 2V - 1 = 0
solving I get V= 2.4142 m/sec.
Use this to calculate v = 2V = 2*2.4142 = 4.8284 meters per second.
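The algebra above can be checked numerically (a quick sketch; the son's mass is arbitrary, since it cancels out of every equation):

```python
import math

m = 70.0               # son's mass (kg), arbitrary
M = 2 * m              # father is twice as heavy
V = 1 + math.sqrt(2)   # father's speed, from V^2 - 2V - 1 = 0
v = 2 * V              # son's speed

ke = lambda mass, speed: 0.5 * mass * speed**2

print(round(V, 4), round(v, 4))                # 2.4142 4.8284
print(math.isclose(ke(M, V), 0.5 * ke(m, v)))  # True: father's KE is half the son's
print(math.isclose(ke(M, V + 1), ke(m, v)))    # True: equal after the +1 m/s
```

Both conditions hold exactly because (V+1)^2 = 2V^2 when V = 1 + √2.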
FAQ: Kinetic energy of runner race question
1. What is kinetic energy?
Kinetic energy is the energy an object possesses due to its motion. It is a type of energy that is associated with an object's mass and velocity.
2. How is kinetic energy calculated?
Kinetic energy is calculated using the formula KE = 1/2 * m * v^2, where m is the mass of the object and v is its velocity.
3. How does the kinetic energy of a runner change during a race?
The kinetic energy of a runner will change during a race depending on their speed. As the runner accelerates, their kinetic energy increases and as they slow down, their kinetic energy decreases.
4. Is kinetic energy the only factor that affects a runner's speed?
No, there are other factors that can affect a runner's speed such as their body composition, muscle strength, and the surface they are running on.
5. How does kinetic energy affect a runner's performance?
Kinetic energy plays a crucial role in a runner's performance as it determines their speed and ability to overcome resistance. The higher the kinetic energy, the faster the runner can move and the
greater their performance will be.
|
{"url":"https://www.physicsforums.com/threads/kinetic-energy-of-runner-race-question.139495/","timestamp":"2024-11-08T22:01:37Z","content_type":"text/html","content_length":"96093","record_id":"<urn:uuid:7d11d2a5-acd4-4cb3-b564-14abd332c4ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00842.warc.gz"}
|
In geometry, a normal is an object such as a line or vector that is perpendicular to a given object. For example, in the two-dimensional case, the normal line to a curve at a given point is the line perpendicular to the tangent line to the curve at the point.
The above text is a snippet from Wikipedia: Normal (geometry)
and as such is available under the Creative Commons Attribution/Share-Alike License.
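As a quick numerical illustration of the two-dimensional case above: the normal line's slope is the negative reciprocal of the tangent's slope. A small sketch (the curve y = x² and the helper name are just examples, not from the snippet):

```python
def normal_slope(tangent_slope):
    """Slope of the normal line given the tangent slope at a point
    (undefined when the tangent is horizontal, i.e. the normal is vertical)."""
    if tangent_slope == 0:
        raise ValueError("normal is vertical; slope undefined")
    return -1.0 / tangent_slope

# Curve y = x^2 at x = 1: tangent slope dy/dx = 2x = 2, so normal slope -1/2.
tangent = 2 * 1
print(normal_slope(tangent))            # -0.5
# Perpendicularity check: product of the two slopes is -1.
print(tangent * normal_slope(tangent))  # -1.0
```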
1. A line or vector that is perpendicular to another line, surface, or plane.
2. A person who is normal, who fits into mainstream society, as opposed to those who live alternative lifestyles.
1. According to norms or rules.
2. Usual; ordinary.
3. Healthy; not sick or ill.
4. Pertaining to a school to teach teachers how to teach.
5. Of, relating to, or being a solution containing one equivalent weight of solute per litre of solution.
6. Describing a straight chain isomer of an aliphatic hydrocarbon, or an aliphatic compound in which a substituent is in the 1- position of such a hydrocarbon.
7. (Of a mode in an oscillating system) In which all parts of an object vibrate at the same frequency.
8. Perpendicular to a tangent line or derivative of a surface in Euclidean space.
9. (Of a subgroup) whose cosets form a group.
10. (Of a field extension of a field K) which is the splitting field of a family of polynomials in K.
11. (Of a distribution) which has a very specific bell curve shape.
12. (Of a family of continuous functions) which is pre-compact.
13. (Of a function from the ordinals to the ordinals) which is strictly monotonically increasing and continuous with respect to the order topology.
14. (Of a matrix) which commutes with its conjugate transpose.
15. (Of a Hilbert space operator) which commutes with its adjoint.
16. (Of an epimorphism) which is the cokernel of some morphism.
17. (Of a monomorphism) which is the kernel of some morphism.
18. (Of a morphism) which is a normal epimorphism or a normal monomorphism.
19. (Of a category) in which every monomorphism is normal.
20. (Of a real number) whose digits, in any base representation, enjoy a uniform distribution.
21. (Of a topology) in which disjoint closed sets can be separated by disjoint neighborhoods.
22. in the default position, set for the most frequently used route.
The above text is a snippet from Wiktionary: normal
and as such is available under the Creative Commons Attribution/Share-Alike License.
|
{"url":"https://www.crosswordnexus.com/word/NORMAL","timestamp":"2024-11-04T02:13:41Z","content_type":"application/xhtml+xml","content_length":"12692","record_id":"<urn:uuid:74e6eeb7-7a6d-47cb-a19c-8d426f6ae72e>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00460.warc.gz"}
|
Kevin - MATLAB Central
28 Questions
26 Answers
8,662 of 20,184
1 File
0 Problems
26 Solutions
Which values occur exactly three times?
Return a list of all values (sorted smallest to largest) that appear exactly three times in the input vector x. So if x = [1 2...
11 years ago
Write a function that accepts a cell array of strings and returns another cell array of strings *with only the duplicates* retai...
11 years ago
Return the 3n+1 sequence for n
A Collatz sequence is the sequence where, for a given number n, the next number in the sequence is either n/2 if the number is e...
11 years ago
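The 3n+1 logic from this problem can be sketched outside MATLAB as well; here is a minimal Python version of the sequence (Cody itself expects a MATLAB submission):

```python
def collatz(n):
    """Return the Collatz (3n+1) sequence starting at n and ending at 1:
    the next term is n/2 when n is even, 3n+1 when n is odd."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```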
Summing digits
Given n, find the sum of the digits that make up 2^n. Example: Input n = 7 Output b = 11 since 2^7 = 128, and 1 + ...
11 years ago
Finding Perfect Squares
Given a vector of numbers, return true if one of the numbers is a square of one of the other numbers. Otherwise return false. E...
11 years ago
Create times-tables
At one time or another, we all had to memorize boring times tables. 5 times 5 is 25. 5 times 6 is 30. 12 times 12 is way more th...
11 years ago
Make a checkerboard matrix
Given an integer n, make an n-by-n matrix made up of alternating ones and zeros as shown below. The a(1,1) should be 1. Example...
11 years ago
Fibonacci sequence
Calculate the nth Fibonacci number. Given n, return f where f = fib(n) and f(1) = 1, f(2) = 1, f(3) = 2, ... Examples: Inpu...
11 years ago
Most nonzero elements in row
Given the matrix a, return the index r of the row with the most nonzero elements. Assume there will always be exactly one row th...
11 years ago
Remove any row in which a NaN appears
Given the matrix A, return B in which all the rows that have one or more <http://www.mathworks.com/help/techdoc/ref/nan.html NaN...
11 years ago
Given a circular pizza with radius _z_ and thickness _a_, return the pizza's volume. [ _z_ is first input argument.] Non-scor...
11 years ago
Sum all integers from 1 to 2^n
Given the number x, y must be the summation of all integers from 1 to 2^x. For instance if x=2 then y must be 1+2+3+4=10.
11 years ago
Triangle Numbers
Triangle numbers are the sums of successive integers. So 6 is a triangle number because 6 = 1 + 2 + 3 which can be displa...
13 years ago
Add two numbers
Given a and b, return the sum a+b in c.
13 years ago
Swap the first and last columns
Flip the outermost columns of matrix A, so that the first column becomes the last and the last column becomes the first. All oth...
13 years ago
Column Removal
Remove the nth column from input matrix A and return the resulting matrix in output B. So if A = [1 2 3; 4 5 6]; and ...
13 years ago
Select every other element of a vector
Write a function which returns every other element of the vector passed in. That is, it returns the all odd-numbered elements, s...
13 years ago
Find the sum of all the numbers of the input vector
Find the sum of all the numbers of the input vector x. Examples: Input x = [1 2 3 5] Output y is 11 Input x ...
13 years ago
Make the vector [1 2 3 4 5 6 7 8 9 10]
In MATLAB, you create a vector by enclosing the elements in square brackets like so: x = [1 2 3 4] Commas are optional, s...
13 years ago
Times 2 - START HERE
Try out this test problem first. Given the variable x as your input, multiply it by two and put the result in y. Examples:...
13 years ago
|
{"url":"https://ch.mathworks.com/matlabcentral/profile/authors/66284?detail=cody","timestamp":"2024-11-09T22:49:00Z","content_type":"text/html","content_length":"113051","record_id":"<urn:uuid:e7da530a-f1d3-40d0-b005-ccab2e65d62c>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00701.warc.gz"}
|
S2.4 online textbook
Does anyone know why there is a -1/4 in the middle equation in picture 1?
From section 2.3, we have the equation on the second picture.
So I think it is a typo. If it is not, please let me know! Thanks in advance.
PS: Does anyone know how to use LaTeX in the post?
|
{"url":"https://forum.math.toronto.edu/index.php?PHPSESSID=6410n9vs37tk534pfjg8rloan1&topic=2295.0;prev_next=prev","timestamp":"2024-11-03T16:23:57Z","content_type":"application/xhtml+xml","content_length":"23768","record_id":"<urn:uuid:74b15177-b6e2-40f5-b5c8-c0a3baeadb19>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00018.warc.gz"}
|
Moonkin Stats and My New Model
Found an error in my model. Crit is worth a little more than I previously stated. It has been updated.
Ahhhh. Remember the good old days in BC when the moonkin rotation consisted of Moonfire and casting Starfire 4 or 5 times? That was when theorycrafting was easy. If you figured out how each stat affected each spell, you could come up with a good estimate of how much each stat was worth relatively quickly.
That is all ancient history with the introduction of Eclipse, new set bonuses, and glyphs. If you look around the various forums you will see a lot of people saying how much each stat is worth, and all the different versions are not as close as they used to be. As a result, I have built a new model to try to determine the value of each of the DPS stats for a Lunar rotation. Well, the results are in.
I'm going to do this post a little bit backwards from how I normally do. I'm going to talk about each of the stats and give my results first. Then I'm going to talk about how I got the results in the second half of the post.
The Stats:
First of all, these results are for a perfect Lunar rotation with no lag and no movement, on a single target. I have also excluded both Force of Nature and Starfall from the analysis, and the results are based purely on DPS. Also, these stats are based upon a pre-3.1 BiS gear set with the 4T7 set bonus. I expect these numbers to change as our gear improves, since our SP will go way up while our Crit and Haste will stay relatively the same. I will redo this analysis after we know more about what a BiS 3.1 moonkin looks like.
The results aren't all that shocking. The values I came up with still fall into the standard Hit > Spell Power > Haste > Crit explanation.
Hit Rating: Relative Value
- Hit Rating has the highest per point value of all the DPS stats assuming you are below the hit cap. This valuation just shows you how important it is to be hit capped. In my opinion, all raiding
Moonkin should have the talents of Balance of Power and Improved Faerie Fire. This means we need to pickup an additional 10% hit chance from gear and other buffs to be hit capped.
So, what is the hit cap? 263 for Tauren and Night Elves with out a Draenei in their party. 236 for Night Elves with a Draenei in their party.
Spell Power: Relative Value
- Spell Power is the standard by which all other stats are measured. After you are hit capped, Spell Power is the best DPS Stat on a point for point basis. This is why, it is always recommended to
use Spell Power gems and food when possible.
Haste Rating: Relative Value
- This is where the debate usually begins. Haste Rating is a great DPS stat that makes your spells cast faster and lowers the global cooldown. The downside to Haste Rating is that it also makes you consume mana faster, and if you have mana issues the Haste will make them worse. Also, spells with short cast times like Wrath and the DoTs can reach the minimum global cooldown fairly easily with reasonable amounts of Haste Rating. You can find my post on the haste cap here.
Crit Rating: Relative Value
- Crit Rating is the least valuable of the "pure" DPS stats, but that doesn't mean it isn't a good stat. Crit Rating also has the side benefit of helping you generate mana via mana-on-crit effects.
There are two important things to remember about crit. The first is that even though you don't necessarily want to stack Crit Rating, you don't want to completely avoid it either. The second is that you get a lot of crit chance from other sources like talents and raid buffs. This is the main reason why Crit Rating is ranked so low.
Spirit: Relative Value
- In patch 3.1.2, Improved Moonkin Form is being buffed so that your Spell Power will be increased by 30% of your Spirit (up from 15%). Spirit will also help you regen mana if you have the talent for it.
As a general rule Spirit is a stat to be avoided, because Haste Rating and Crit Rating are so much better in terms of DPS. However, Spirit is becoming more and more unavoidable. Most caster items now have Spirit, and the majority of the ones that don't have Hit Rating instead. It is fine to have Spirit on your gear, but never intentionally stack it.
Intellect: Relative Value
- Intellect is not a good stat in terms of DPS, but it is almost completely unavoidable. Virtually all caster items have some amount of Intellect on them. Intellect has a lot of side benefits for mana regen, though, since it impacts all of our major mana regen talents. You will pick up enough Intellect just by making common-sense gear choices; therefore you should never intentionally stack Int.
(I updated the Spirit and Int numbers to take into account stat multipliers like Blessing of Kings. Thanks to Antonetz for pointing this out.)
Other Stats: Relative Value
- All other stats have very little direct impact on your DPS.
is the only other stat that you need in any significant quantity, but you should pick up plenty making normal gearing choices.
Mp5 can also be helpful if you have mana issues. However, most moonkin don't have big mana issues that can't be solved with talents. Avoid Mp5 if possible.
Spell Penetration is useless in a PvE environment and should be avoided.
Hit = 1.54, SP = 1.00, Haste = 0.80, Crit = 0.62, Spirit = 0.34, Int = 0.35
The Model:
If you want to look at the model and check my work, you can find it here. However, I want you to keep a few things in mind. 1. It is a very large file; download at your own risk. 2. It does have a couple of macros on it, but you don't have to enable them for the spreadsheet to work. 3. This is not a tool to help you model your DPS; if you want something like that, go check out one of the dedicated DPS tools. 4. In conjunction with #3, it is not user-friendly. It even confuses me, and I built the damn thing.
How it Works:
I built the model to choose which spell to cast based on the current situation. It uses a random number generator to determine crits, misses, and procs. The model assumes a standard Talent/Glyph build and the following cast priority:
1. If FF is not up, cast FF.
2. If IS is not up, cast IS.
3. If MF is not up, cast MF.
4. If Eclipse is not on cooldown, cast Wrath.
5. If Eclipse is on cooldown, cast SF.
This means that FF and the DoTs will be refreshed during Eclipse. I won't go into detail about it now, but I now think this will give you better DPS than waiting for Eclipse to run out.
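The five-step priority above can be sketched as a simple selector function (the state keys and spell strings here are made up for illustration, not taken from the author's spreadsheet):

```python
def next_spell(state):
    """Pick the next cast for the Lunar rotation described above.
    `state` tracks whether each debuff is up and whether Eclipse
    is on cooldown."""
    if not state["faerie_fire_up"]:
        return "Faerie Fire"
    if not state["insect_swarm_up"]:
        return "Insect Swarm"
    if not state["moonfire_up"]:
        return "Moonfire"
    if not state["eclipse_on_cooldown"]:
        return "Wrath"      # fish for the Eclipse proc
    return "Starfire"       # filler while Eclipse is unavailable

state = {"faerie_fire_up": True, "insect_swarm_up": True,
         "moonfire_up": False, "eclipse_on_cooldown": True}
print(next_spell(state))  # Moonfire
```

Because the debuff checks come first, FF and the DoTs get refreshed even during Eclipse, exactly as described.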
The model uses a 60,000 cast sequence. This is equivalent to about 26 hours of continuous casting. This may seem like overkill but I wanted to smooth out as much of the randomness as I could.
Using base stats of 2900 spell power, 41% crit chance, 16.5% haste from gear, and 100% hit chance, I ran the model 1,000 times to find the average total damage, average total cast time, and average
DPS for a control group. I then adjusted each of the stats by significant amounts and reran the model another 1,000 times for each adjustment to see how the DPS changed.
The adjustments were made in significant amounts and then averaged to a per point value. For example, to find the value of Spell Power, I increased the spell power used by 50 to 2950. This resulted
in an average increase of 61.81 DPS or 1.24 DPS per point average. To evaluate Crit and Haste I increased both by 1%. To evaluate Hit Rating I decreased hit chance by 1%.
The stats were then weighted relative to each other, with Spell Power being the standard. These new values will be applied to my gear rankings, and I hope to have them updated soon.
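The perturb-and-average procedure just described can be sketched as follows. Note that `toy_dps` here is a made-up linear stand-in purely to show the mechanics of normalizing per-point DPS gains to Spell Power; the blog's actual model simulates a 60,000-cast sequence:

```python
def stat_weights(dps, base, deltas):
    """Finite-difference stat weights: bump each stat by deltas[stat],
    measure the DPS change per point, then normalize to Spell Power."""
    per_point = {}
    for stat, d in deltas.items():
        bumped = dict(base, **{stat: base[stat] + d})
        per_point[stat] = (dps(**bumped) - dps(**base)) / d
    sp = per_point["spell_power"]
    return {s: round(v / sp, 2) for s, v in per_point.items()}

# Toy DPS model, NOT the blog's simulator -- linear, for demonstration only.
toy_dps = lambda spell_power, crit_rating, haste_rating: (
    1.24 * spell_power + 0.75 * crit_rating + 1.0 * haste_rating)

base = {"spell_power": 2900, "crit_rating": 400, "haste_rating": 500}
deltas = {"spell_power": 50, "crit_rating": 46, "haste_rating": 33}
print(stat_weights(toy_dps, base, deltas))
```

With a stochastic simulator, each `dps(**bumped)` call would itself be an average over many runs (the blog uses 1,000) to smooth out the randomness before taking differences.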
What's Next:
I am kind of excited about what I can do with this model. I plan to create additional models to evaluate a Solar rotation. I would also like to add lag into the model to see its impact, and to build a model that takes movement into account if possible. Finally, I will be using the model to evaluate new set bonuses, glyphs, and talents.
26 comments:
Excellent post!
After a 5 month hiatus from WoW, it's good to be back to blowing stuff up again. : )
I'd love to see this for the Solar rotation, because I've really had enough of casting SF all the time. I had to do so in Sunwell for too long.
Spamming Wrath with a cast time of just 1 second and making 10k critical hits is much more my thing.
By the way... will you rewrite your gear list to adjust for the 30% Spirit buff, and maybe also with a focus on the Solar rotation?
Nightedahs from Khaz'Goroth (EU)
Heya Graylo ~
This looks great, I'm downloading the excel spreadsheet now. I'm looking forward to messing with it to model a solar set, although I'm sure I will have to take some time and learn what you did
here. I'm sorta a newb when it comes to these things.
@ Nightedahs
The spirit buff does very little to impact the gear list. When I added the spirit buff into the calculations for my gear list as well as a couple gear lists using both Murmurs' and Graylo's
scales, nothing but the relative values changed. The increase is minimal on a per item basis.
You based your models assuming:
"2900 spell power, 41% crit chance, 16.5% haste from gear, and 100% hit chance".
However, would it be the same for, say, lower crit/spellpower values?
With my current gear, self-buffed, I am sitting at ~2100 spell power, 404 haste rating, and am hit capped, but have a measly 21.13% crit chance.
On top of this, due to being "out of the loop" in terms of raiding pre-3.1, I am also lacking the T7-4pc bonus.
In this scenario, would crit scale to be more important than haste? Or would haste still trump crit? (I have a couple of Ulduar items that I can swap out which would increase my haste further,
but at the cost of some crit.)
- Boize
Thanks for the new stats, Graylo
Do you find that the value of haste remains constant (or nearly so) even after you hit the haste cap?
Well, I agree that the change from 15% to 30% of Spirit as spell power won't change much in the calcs that rated Spirit at 15% before.
But as I read it, in the gear list Spirit isn't rated at all, and some items would gain 20 or even more spell power out of this now.
And that might change the rating of the item itself.
I will update the Lunar Gear list soon. I do plan to build a Solar Model and provide a solar gear list at some point.
If you increased all of my stat values by 5% or 10% and then redid the analysis, the conclusions would probably be very similar.
Now if you kept the spell power constant but reduced the Crit and Haste values, then they would go up. Since you do not have the 4T7, Crit will be much more valuable for you.
If the GCD didn't have a minimum value then my haste value would be higher. However, if you're using a Lunar rotation you can forget about the haste cap. Haste is still very valuable for a Lunar rotation even when haste has little impact on Wrath.
I don't know if you're referring to my gear list or not, but in my list the DPS value of Spirit is used to rate the items, while the mana benefits are ignored.
Just to clear something up, are you suggesting using SF over Wrath after the Lunar Eclipse is over? The longer cast time and the crit chance of both would seem to favor Wrath spam after that rotation. I don't know; I don't crunch numbers, I just blow stuff up and see what works.
Is this basically the output of Monte Carlo modeling then?
Do these models take into consideration full raid buffs and IMotW? If so spirit and intellect should see some gains from stat multipliers.
Sorry if I missed something; I just browsed this and I'm tired :P
-A drowsy Antonetz
Graylo A few things,
Why are your haste and crit values so different from Murmurs' (in terms of the amounts)?
And considering we will at least be getting the Tier 8 2-set, does that increase crit's value somewhat, since we lose 5%?
I use SF as my filler spell. Wrath technically has more DPS in theory, but it has other problems with lag and spell queuing. Plus I have the SF idol equipped for Eclipse. Therefore I think SF makes more sense for a Lunar rotation.
Though I didn't know the technical name for it, I think you are correct. It has been over 10 years since I took any kind of statistics, so I don't know the margin of error and all that fun stuff, but I assumed 1,000 trials would be enough. If someone with more stats experience than I would like to run a regression analysis on it and give me the results, that would be great.
Good Point. I will update my results soon.
I have several issues with Murmurs numbers and posted a big post on TMR.
I think there are some big issues with his crit number. First of all using a starting value of 30% is pretty low given 15% haste and 2700 SP. A moonkin with that quality of gear is likely to have
4T7. Therefore he would only be getting 10% crit from the base, Int and Crit rating. I think his starting number should be at least 35% if not higher.
The second big issue with his crit number is that he doesn't take Eclipse into account. Eclipse actually decreases the value of crit because it provides such a large amount of it, and 2T8 will make that even worse.
I also didn't like the way he talked about haste. I thought it was misleading, and there were some issues with how he calculated it, in my opinion.
None of this is to say my numbers are 100% correct. I expect these numbers to increase by the end of Ulduar. Our SP numbers will go up quite a bit, and our Crit and Haste numbers will actually go down a little bit. This means that Crit and Haste will improve dramatically in value over this patch. I will update this analysis when more is known about what a 3.1 BiS-geared moonkin looks like.
Another great post Gray!!
Already downloaded both links you have here. Plan to play with them tonight. One question for you and the gallery: given that the "best" gear while upgrading is always in flux, what site, mod, and/or program do you recommend to calculate and recalculate (after an upgrade) what loot is best for each slot? (i.e., like the weight scale on Wowhead.)
Thanks again for all the info, love this site :).
Any intuition for why Int and Spirit have the same value? It is the one part of the numbers that I can't rationalize quickly: how does 30% of Spirit to damage come out to the same ranking as a little bonus crit and bonus damage from Int? I understand if that's what the model spits out (and if it does, it means Blizzard has done a remarkable job of balancing the DPS values of the two stats), but it is still very surprising.
I don't really like any of them. I do use Wowhead's comparison sometimes, but I use my own scale. This will sound self-serving, but I would check back here.
The Spirit number is easy to calculate. It's basically 0.30 * (1 + 0.1 + 0.02) = 0.34 rounded. The 0.1 and 0.02 are Kings and Imp MotW.
For Int you have to remember that 45.9 Crit Rating equals 1% crit, and 166.667 Int equals 1% crit. So you can take 45.9/166.667 = 0.28. Since Crit Rating is worth 0.57, in terms of crit chance Int is worth 0.16 (0.57 * 0.28 = 0.16). Then add the 12% spell power you get from Int, and Int is worth 0.28. Now take into account stat multipliers and you get 0.28 * (1 + 0.1 + 0.1 + 0.02) = 0.34.
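The two calculations above can be checked directly (numbers exactly as given in the comment, including the double 10% multiplier it uses for Int):

```python
kings, imp_motw = 0.10, 0.02  # stat multipliers

# Spirit: 30% of Spirit becomes Spell Power (3.1.2 Improved Moonkin Form),
# scaled by the stat multipliers.
spirit_value = 0.30 * (1 + kings + imp_motw)
print(round(spirit_value, 2))  # 0.34

# Intellect: crit contribution (45.9 crit rating per 1% crit vs
# 166.667 Int per 1% crit, with crit rating valued at 0.57) plus 12%
# of Int as Spell Power, scaled by the multipliers as written in the
# comment (which counts the 10% twice).
int_value = (0.57 * (45.9 / 166.667) + 0.12) * (1 + kings + kings + imp_motw)
print(round(int_value, 2))  # 0.34
```

Both round to 0.34, matching the summary line in the post (which lists 0.34 and 0.35 after the crit update).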
I agree, Gray... so when can we expect to see your new list based on these numbers?
Thanks for explaining that, Graylo.
A final question: does your model predict a rating or % of crit we should not drop under?
Hail to the king, baby!
Thanks a lot for these numbers, Graylo. I'm curious what these numbers will do to the gear list.
I'm still not convinced it's worth losing my 4pc T7 using a lunar rotation (or solar for that matter), looking at the Haste and Crit we lose.
I am always surprised by the values you announce, especially on SP.
I often raid in 10-man and I never reach that high SP - and yet I am fully naxx25 equipped (4T7.5) with lots of best-in-slot items (not all, anyway).
My SP is more like 2050 unbuffed (so with flasks that means +- 2200). I am curious to know how you can reach 2900.
(Got also a lot of e219 stuff anyway)
Have you changed the stat weight of spirit to accommodate the 3.1.2 +15% SP-from-spirit buff, or is the version currently online just from the 15%?
I'm actually finding that with 2T8 a solar rotation actually starts pulling ahead of a lunar one really fast. It would be nice to have a model of a solar rotation given 2T8, though I imagine it would just switch Crit and Haste.
Thanks for the excellent post and blog Graylo - keep up the good work!
My Moonkin alt is currently 77, and I am hoping to raid Boomkin with her at 80 so your site is invaluable.
I have a question. I love my AoE - Hurricane, Starfall and Typhoon. I wanted to be able to retain that in my raid build, both for the trash and for soloing and grinding. But would I be losing too much DPS if I stole 3 talent points to put into Gale Winds and Typhoon, and where would be the least harmful place to steal them from, in your opinion? Or can you convince me that this would be a very bad idea?
Btw my guild is a 10-man raiding guild, so unless I am pugging, my moonkin is probably going to be doing heroics and lower 10-mans, with my hunter main brought in for the higher-level content.
You can find me over at my hunter blog - Steady Shot (http://wowwhimsy.wordpress.com/)
I don't think there is a minimum level of crit we must have, but as our Crit chance decreases the value of Crit will go up. I don't know how much Crit will go up, but in the end it will balance out.
I'm convinced that T8 > T7. We do lose a lot of Crit, but we gain a lot of Spell Power, and the new Set Bonuses are nice also.
My numbers are fully raid buffed in a 25man raid. They are based upon the numbers I've seen myself reach in game. Unfortunately it is impossible to model for every possible combination.
Since your Moonkin is not your main, you can probably spec however you want without issue. That said, Typhoon and Gale Winds are of marginal utility in raids. In a majority of raids (especially in T7) the opportunities to AoE aren't that important and you're just draining your mana. Other classes do AoE better.
That said, if you really want those talents I would try and pull the points out of mana regen if you can. If not, pull them out of Imp IS.
Do you find that you swap gear for moments when you have a Draenei in your group due to the hit buff? I'm just wondering if it's prudent to have additional gear available for any eventuality.
For me, optimization of items based on raid composition is quite dangerous.
But indeed changing a trinket/item if you have some spare seems pretty smart and feasible.
Well, I have a question regarding talents. I've noticed that raid damage is a significant part of Ulduar. That being said, I am currently spec'd into Owlkin Frenzy, at the cost of Moonglow and Gale Winds. I never go oom anymore with the buff to Innervate. I normally see it proc 8-12 times a fight, probably an average of 10 times. I was wondering if we could find some numbers for this.
Granted, it is in random parts of our rotation, but say we can pull 4.4k DPS without it: that's 4.4k x 0.10 x 100 seconds = 44k extra damage in a fight. I was just wondering if anyone else has been taking this into consideration lately, and maybe running some numbers to see what kind of DPS increase this would be.
Grade 6 Combined Operations Online Lessons
• Grade 6 online lessons: Angles on a straight line and angles in a triangle
Online Lessons on angles on a straight line and angles in a triangle
2 Video Lessons
• Grade 6 Online Lessons on Areas of Triangles and Combined Shapes
Grade 6 Online Lessons on Areas of Triangles and Combined Shapes
7 Video Lessons
• Grade 6 Online Lessons on Numbers: Whole Numbers
Whole Numbers 1. Place value 2. Total value 3. Reading numbers in symbols 4. Reading and writing numbers in words 5. Ordering numbers 6. Rounding off numbers 7. Squares 8. Square roots
45 Video Lessons
• Grade 6 online video lessons on time
Areas covered 1. Identifying time in am and pm 2. Writing time in am and pm 3. Converting time from 12-hour clock system to 24-hour clock system 4. Converting time from 24-hour clock system to
12-hour clock system 5. Travel timetables
16 Video Lessons
• Grade 6 Video Lessons on Capacity
Grade 6 Video Lessons on Capacity - Relationship among cubic centimeters, millilitres and litres.
28 Video Lessons
• Grade 6 Online Lessons on Algebra
Grade 6 Online Lessons on Algebra. Areas Covered: 1. Forming simple inequalities 2. Simplifying simple inequalities
12 Video Lessons
• Grade 6 online lessons on mass
Grade 6 online lessons on mass -Tonne as a unit of measuring mass -Relationship between the kilogram and the tonne
29 Video Lessons
• Grade 6 Online Lessons on Money
Grade 6 Online Lessons on Money. Areas Covered: 1. Price list 2. Budget 3. Profit and loss 4. Types of taxes
22 Video Lessons
TLDR: Learn about leap years, Earth's orbit, and calendar synchronization
📍 Article Source
AULÃO OBA - ANOS BISSEXTOS: https://www.youtube.com/watch?v=hZCpF7BPfqg
Explaining Leap Years
The concept of leap years in the context of the Brazilian Astronomy and Astronautics Olympiad test is explained. The explanation covers the fundamental aspects of leap years applicable to all levels
of the test. The video outlines the standard 365-day year and distinguishes leap years, which consist of 366 days with an extra day added in February. The astronomical phenomenon behind leap years is
detailed, including the Earth's translation movement around the sun and its elliptical orbit. The video describes the Earth's complete turn around the sun, which takes approximately 365.24 days. This
leads to the inclusion of an extra day in leap years to compensate for the 0.24 days (approximately 6 hours) missing in a normal year. The purpose of leap years is emphasized as a means to maintain
synchronization with the Earth's orbit and prevent divergence in the calendar of seasons. Additionally, the video delves into specific questions related to leap years, such as determining the number
of days in a leap year, identifying the next and previous leap years, and pinpointing the month in which the extra day is added. The questions from the Brazilian Astronomy and Astronautics Olympiad
test (level 1) are referenced to illustrate practical application of the leap year concept.
Leap Years Example
The video discusses the concept of leap years, focusing on the year 2016 as an example. It explains that a leap year has 366 days and occurs every four years. The next leap year after 2016 is 2020,
and the previous one before 2016 was in 2012. It also explains that in a leap year, February has 29 days instead of 28 days. The video then delves into the Sidereal year, the Tropical year, and the
concept of Tropic of Cancer and Tropic of Capricorn. It describes the Equinoxes and the Solstices in detail, explaining their significance and the sun's position during these celestial events.
Tropical Year and Equinoxes
The video discusses the concept of the Tropic of Cancer and the Tropic of Capricorn as imaginary lines that divide certain parts of the planet Earth. It explains that the sun oscillates between these
two tropics, and the duration of this tropical year is approximately 365.25 days. The video also covers the equinoxes and solstices, emphasizing that the equinoxes are the first day of autumn or spring
when the sun is right on the line of the Equator, and the solstices are the first days of either winter or summer.
Determining Leap Years
Additionally, the video explains the concept of a leap year, discussing the calculation of the time left in a year and how leap years are determined by divisibility. It also delves into the precision
of the Earth's orbit around the sun, highlighting that the time it takes for the Earth to orbit the sun is approximately 365.24 days. Lastly, it discusses the long-term implications of adding a day
to the calendar every four years and how it can lead to a deviation in the Earth's position in relation to its orbit. The video concludes by posing a question about whether the year 2100 will be a
leap year.
Correcting Calendar Discrepancies
The video provides a detailed explanation of leap years, the Earth's orbit, and the correction of discrepancies in the calendar. It explains that the tropical year is not exactly 365.25 but 365.24 days, so the calendar gradually falls about 0.97 days behind over each four-year cycle. Every four years, a leap year is introduced to correct this discrepancy, except for years that are multiples of 100, which are not leap years unless
they are multiples of 400. The video also explains how to identify leap years using a simple rule, and how this correction prevents the calendar from getting off track over the years.
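As a quick check on the correction described above, the century and 400-year exceptions work out to 97 leap days per 400-year cycle, giving an average calendar year very close to the tropical year (a minimal sketch of that count):

```python
# Count leap days in one full 400-year Gregorian cycle:
# every 4th year, minus the century years, plus the multiples of 400.
leap_days = sum(
    1 for y in range(1, 401)
    if y % 400 == 0 or (y % 4 == 0 and y % 100 != 0)
)
average_year = 365 + leap_days / 400
print(leap_days, average_year)  # 97 365.2425
```

The resulting average of 365.2425 days is why the Gregorian calendar stays aligned with the ~365.24-day tropical year over the long run.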
Timestamped Summary
Leap years have an extra day in February every 4 years
• Leap years have 366 days, while normal years have 365 days
• This extra day is necessary to account for the time it takes for the Earth to revolve around the sun
Leap year adds an extra day to make up for the delay in Earth's orbit
• Earth's orbit takes a little more than 365 days, causing a delay of about 6 hours every year
• Without leap years, the calendar would drift, as seasons would occur at different times
• A leap year adds an extra day to bring Earth back to its initial position
• The leap year has 366 days, with the extra day added to February
• This extra day makes up for the delay and keeps the calendar aligned with the seasons
2016 is a leap year with an extra day, and the next leap year is in 2020.
• Leap years occur every four years.
• February gains an extra day in a leap year.
• A leap year has 366 days, while a non-leap year has 365 days.
Leap years occur every four years to account for the quarter day difference between the calendar year and the tropical year.
• The tropical year is the time it takes for the earth to orbit around the sun and lasts approximately 365.25 days.
• Our calendar only has 365 days, so every four years a leap year is added with an extra day in February.
• The Tropic of Cancer and Tropic of Capricorn are imaginary lines that divide certain parts of the earth, and it is between these regions where the sun can be directly overhead.
The sun's oscillation and the equinoxes and solstices explained.
• The equinoxes mark the first day of autumn or spring when the sun is directly overhead the Equator line.
• The solstices mark the first days of either winter or summer when the sun is directly overhead the Tropic of Cancer or Capricorn.
• The number of hours left per year is 0.25 x 24 = 6 hours.
• A leap year occurs every 4 years, adding a day to February and making the year 366 days long. 2012, 2008, and 2016 were leap years.
A leap year is when a year is divisible by 4 with remainder 0.
• If the remainder is not 0, it is not a leap year.
• The time it takes for the earth to go around the sun is approximately 365.24 days.
• Leap years occur every 4 years with an extra day added due to the Earth's orbit.
• However, years that are multiples of 100 are not leap years, except for years that are multiples of 400.
• This rule means that 2000 was a leap year, but 2100 will not be.
Leap years add one day to the missing time in years that have only 365 days.
• Leap years occur every four years.
• Years that are multiples of 100 are not leap years, except for years that are multiples of 400.
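The rule in the bullets above translates directly into a short function (a sketch of the Gregorian rule as stated in the summary):

```python
def is_leap(year: int) -> bool:
    """Gregorian rule: divisible by 4, except century years,
    unless the century is also divisible by 400."""
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

# Examples from the summary:
print(is_leap(2016), is_leap(2020))  # True True
print(is_leap(2000), is_leap(2100))  # True False
```

This matches the cases discussed in the video: 2012, 2016 and 2020 are leap years, 2000 was (multiple of 400), and 2100 will not be (multiple of 100 but not of 400).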
Related Questions
What is a leap year and how is it related to the OBA test?
A leap year is a year that has one extra day, specifically the 29th of February, making it a 366-day year instead of the usual 365 days. This concept is essential for students preparing for the OBA
test as it is included in the Brazilian Astronomy and Astronautics Olympiad test for all levels (Level 1, Level 2, Level 3, and Level 4). Students from primary grades to high school need to
understand the significance of leap years in celestial and astronomical calculations, making it a fundamental topic for the OBA test.
What astronomical phenomenon explains the occurrence of leap years?
The occurrence of leap years is explained by the Earth's movements around the sun, specifically the translational movement. As the Earth orbits the sun, it takes approximately 365.24 days to complete
a full revolution. The additional quarter of a day (approximately 6 hours) results in the need for a leap year every four years to compensate for the cumulative extra time and align the calendar with
the Earth's actual movements. This phenomenon is fundamental to understanding leap years in the context of the OBA test.
Why is February the month with 29 days in a leap year?
February is the month with 29 days in a leap year to account for the extra time accumulated due to the Earth's translational movement around the sun. By adding an extra day to February, the calendar
compensates for the approximately 0.25 days of additional time that occur each year. This adjustment ensures that the calendar remains aligned with the Earth's celestial movements, preventing
divergence in the seasons and maintaining consistency in astronomical calculations, which is critical for the OBA test.
When is the next leap year after 2020 and how often do leap years occur?
The next leap year after 2020 will be in 2024, as leap years occur every four years. This recurring pattern of leap years, occurring every four years, is essential to understanding the cyclical
nature of leap years and their impact on the calendar. The regular occurrence of leap years is a key concept for students preparing for the OBA test, as it forms the basis of astronomical
calculations and calendar adjustments.
What is the significance of the Tropic of Cancer and the Tropic of Capricorn in relation to the Earth's movements and leap years?
The Tropic of Cancer and the Tropic of Capricorn are imaginary lines that mark the regions where the sun can appear directly overhead at midday. These tropics play a significant role in the Earth's
movements and the determination of leap years. The duration of the tropic year, approximately 365.25 days, is based on the sun's apparent annual movement between the Tropic of Capricorn and the
Tropic of Cancer. This understanding of the Earth's axial tilt and its impact on the sun's position is fundamental to comprehending the concept of leap years and their relation to celestial events,
making it a relevant topic for the OBA test.
What is the significance of the Tropic of Cancer and the Tropic of Capricorn in relation to solstices and seasons?
How many days are there in a year and how are leap years calculated?
How is a leap year determined and what are the rules for leap years?
[Flattened listing of banana (Musa) trait-ontology variables: trait codes (e.g. BnchWt_M_kg, BlkLfStrkSvrt_Est_pct), measurement scales, and contributor attributions from Bioversity International, IITA, KU Leuven, CIRAD, NARO Uganda, and Stellenbosch University]
International FrtCvShp_Est_1to5 Tendo Ssali AvgWvlAdltHdCps_M_µM IITA BoiPlantnMois_Meas_0to10 CIRAD / UNA, Côte d’Ivoire Boiled plantain Moistness scale 0-10 Hermann Antonin KOUASSI 3 large
blotches leafAreaIncrease_Comp_cm2 KU Leuven Sebastien Carpentier leafAreaIncrease 0 lumpy PeelThck_M_mm Bioversity International Rhiannon Crichton temp23 Bioversity International Rhiannon Crichton
Bioversity International Rhiannon Crichton HndFrtNum_Ct_Frt Stage 2 appears as a stripe, generally brown in colour and visible on the underside of the leaf, later the symptom also appears on the
upper part of the limb as a stripe, the yellow colour of which resembles the stripe at stage 1 of Yellow Sigatoka. The colour of this stripe will change progressively from brown and later to black on
the upper side of the limb, but will retain the brown colour on the underside. 2 2 First streak stage: specks elongate, becoming slightly wider to form narrow reddish-brown streaks. straight in the
distal part 3 high intensity 10 Bioversity International Rhiannon Crichton LfEmRate_Comp_LfMth BlkLfStrkSvrt_Est_Gauhl IITA-Bioversity International BlkLfStrkLfFrStg6_Date_dmy Bioversity
International Rhiannon Crichton acronym Bioversity International PltDnsty_Comp_PlntHa Rhiannon Crichton base of the style prominent 3 waterLossSyst_Meas_ml KU Leuven Sebastien Carpentier
waterLossSystem transpSyst CigLfPigm_Est_1to3 Rachel Chase Bioversity International Bioversity International Rhiannon Crichton FrtWt_M_gFruit absent 0 Tendo Ssali IITA WvlLvBdWght_M_mg Allan Brown/
Guillaume Bauchet (temp.) IITA-BTI FngSmshdPlpArm_Est_1to5 7 nematodes IITA Danny Coyne NmRpFctr_Ct_Nema medium green 6 Bioversity International Rhiannon Crichton FlwT_Date_dmy Bioversity
International Rhiannon Crichton FrtExtLng_M_mm PMargWing_Est_1or2 Bioversity International Rachel Chase 2 truncate ( Noel Madalla Selection criteria for genotype that was ranked second lowest by
female respondents during preference scoring exercise Bioversity Selection criteria for genotype that was ranked lowest by female respondents during preference scoring exercise Bioversity Noel
Madalla Bioversity International Rhiannon Crichton BlkLfStrkLfFrStg6_Date_ymd Bioversity International Rhiannon Crichton PStmHt_M2_cm 1 corm completely clean, no vascular discolouration grey 4
Bioversity International Rhiannon Crichton NtHvstDst_Comp_PlntHa IITA Danny Coyne HoploNumSlWt_Comp_Nema BlkLfStrkSvrtLfNb_Ct_grd1 Bioversity International Rhiannon Crichton Bioversity International
Rhiannon Crichton AvgPeelThck_M_mm 1 margins spreading Bioversity International CrmIntDiscFW_Est_1to5 Inge Van den Bergh Rhiannon Crichton temp31 Bioversity International Noel Madalla Bioversity
Selection criteria for genotype that was ranked second highest by female respondents during preference scoring exercise 3 black-purple KU Leuven numFuncLeaves_Count_unitless Sebastien Carpentier
totalNumFunctionalLeaves numFuncLeaf FrtWtAvg_M_g Bioversity International Rhiannon Crichton Boiled plantain Chewiness by number of chews CIRAD / UNA, Côte d’Ivoire BoiPlantnChew_Meas_number of
chews Hermann Antonin KOUASSI temp12 Bioversity International Rhiannon Crichton Tendo Ssali IITA AvgWvlLvBdLgt_M_µM Stage 6 is when the centre of the spot dries out, turns clear gray and is
surrounded by a well-defined black ring, which is, in turn, surrounded by a bright yellow halo. These spots remain visible after the leaf has dried out because the ring persists. 6 margins erect 2 3
watery green Bioversity PA_EstMale_Txt Noel Madalla Shoot_Date_ymd Guillaume Bauchet IITA-BTI 13-16 2 no intensity 0 6 purple brown 1 no internal symptoms IITA GdFrtFll_Est_1to2 Allan Brown
DET_Comp_d Bioversity International Rhiannon Crichton PS_CompMale_PrefScale Bioversity Noel Madalla 2 wide with erect margins Bioversity International BrctClExtFc_Int_1to16 Rachel Chase heterogeneous
0 one yellow finger 2 NARO, Uganda MatPump_Meas_2pt Elizabeth Kakhasa Matooke pumpkin aroma measurement 2pt scale Bioversity International PMargWing_Est_1or5 Rachel Chase bright yellow 5
FuncRtNum2_Ct_Rt Inge Van den Bergh/Danny Coyne Bioversity International/IITA without any floral relicts 1 4 31-100 egg masses Bioversity International Rhiannon Crichton DateDeath_Est_dmy few flower
relicts (< 20% of the fruits with relicts) 2 Tendo Ssali DmgIdxInn_M_% IITA Bioversity International FrtNbMdHndBnch_Ct_ft Rachel Chase Sebastien Carpentier leaf_dry massDryLeaf KU Leuven
leafDryMass_Meas_g 2 fusarium wilt 3 like fairly 3 low shouldered (x/y more than or equal to 0.30) 21-25 cm 3 obtuse 3 26-30 cm 4 1 straight (or slightly curved) Tendo Ssali WvlAdlt_Ct_weevil IITA
Bioversity International Rhiannon Crichton BlkLfStrkSvrtLfNb_Ct_grd4 Rachel Chase CmpdTplLbCl_Est_1to5 Bioversity International 1 short (y less than or equal to 20 cm) Banana ontology Plantain
ontology > 100 egg masses 5 1 present Bioversity International Rhiannon Crichton BlkLfStrkSvrtLfNb_Ct_grd2 Inge Van den Bergh CrmDiscDistTlstSckFW_M_cm Bioversity International Bioversity
International YellowPres_Est_1or2 Inge Van den Bergh Stage 4 appears on the underside as a brown spot and on the upper side as a black spot. 4 MeloiNumSlWt_Comp_Nema IITA Danny Coyne
FstYngLfsptd_Est_ymd Bioversity International Rhiannon Crichton soft 0 horizontal or supra-horizontal 4 Bioversity International Rhiannon Crichton BnchFrtNum_Ct_Frt IITA Danny Coyne
PratyNumRtWt2_Comp_Nema lengthPerLeaf lengthSingleLeaf singleLeafLength_Meas_cm KU Leuven Sebastien Carpentier cream 1 medium green 6 IITA-BTI DateColl_Date_ymd Guillaume Bauchet 5 plant dead, with
brown leaves hanging down the pseudostem Sebastien Carpentier leavesFormedPerPseudostemGrowth_Comp_percm KU Leuven leaves_per_heightPS 5 medium intensity PStmSplitPres_Est_1or2 Bioversity
International Inge Van den Bergh KU Leuven plant_rwc Sebastien Carpentier rwcPlant rwcPlant_Comp_unitless PulpDiam_M_cm IITA-BTI Guillaume Bauchet equal to or more than 31 cm 5 Inge Van den Bergh
Bioversity International CormDiscol_Est_1to6 5 other 51-100% of lamina with symptoms 6 massFreshPlant plantFreshMass_Meas_g KU Leuven plant_fresh Sebastien Carpentier Second-streak stage: streaks
change colour from reddish brown to dark brown or black, sometimes with a purplish tinge, clearly visible at the upper surface of the leaf. 3 Bioversity International Rachel Chase
FstBrctApxShp_Est_1to4 3 banana Xanthomonas wilt (BXW) BnchHndNum_Ct_Hnd Bioversity International Rhiannon Crichton Bioversity International Rachel Chase MlBdShp_Comp_1to5 neutral/male flowers and
presence of withered bracts on the entire stalk 4 Bioversity International Rhiannon Crichton PtlClpFW_Comp_pct Bioversity International Rachel Chase MgnBhvPtCnlThrdLf_Est_1to5 red-purple 9 1 white
temp22 Bioversity International Rhiannon Crichton Elizabeth Kakhasa MatAro_Meas_0to10 NARO, Uganda Matooke aroma measurement scale 0-10 0 low intensity MltHandWt_M_kgPlnt Bioversity International
Rhiannon Crichton AvgFrtDiam_M_mm Bioversity International Rhiannon Crichton transpPlant KU Leuven waterLossPlant_Meas_ml Sebastien Carpentier waterLossPlant 3-10 egg masses 2 5 erect
TrpNum_Ct_weeviltrap IITA Tendo Ssali PMargColL_Est_1or2 Bioversity International Rachel Chase MlBdShd_Comp_1to3 Rachel Chase Bioversity International no visual leaf symptoms 1 yellow 4
PMargCol_Esr_1to16 Rachel Chase Bioversity International Bioversity International Rhiannon Crichton NtHvstPrp_Comp_PlntNb 0 non sticky Rachel Chase Bioversity International FrtApxFlwRlt_Est_1to4
Bioversity International Rhiannon Crichton PltPltd_Ct_Plnt leaf_rwc KU Leuven Sebastien Carpentier rwcLeaf rwcLeaf_Comp_unitless not winged and clasping the pseudostem 4 Bioversity International
Rhiannon Crichton temp32 7 orange red 2 good strongly bottle-necked (wider under tip than number 2) 4 banana streak virus (BSV) 5 IITA Tendo Ssali temp42 Bioversity International Rhiannon Crichton
FlwHvstTime_Comp_d Bioversity International Rhiannon Crichton PlntShtTime_Comp_d Bioversity International Rhiannon Crichton WSuckN_Ct_WSuck 5 light green brown 1 CIRAD / UNA, Côte d’Ivoire Boiled
plantain Stickiness scale 0-10 Hermann Antonin KOUASSI BoiPlantnStic_Meas_0to10 Rachel Chase Bioversity International FrtApxPt_Est_1to5 2 present 10 high intensity Allan Brown PdltFrt_Est_1to2 IITA
Bioversity International Rachel Chase HndNbWhlBnch_Ct_hands green 7 FngSmshdPlpCl_Est_1to5 Allan Brown/Guillaume Bauchet (temp.) IITA-BTI DthPrp_Comp_DdPltRt Bioversity International Rhiannon
Crichton red-purple 4 less than or equal to 20 cm 1 Rhiannon Crichton PtYd_Comp_TnHaY Bioversity International Inge Van den Bergh/Danny Coyne Bioversity International/IITA temp50 8 dark green 34-66 %
of older leaves turning yellow, with some hanging down the pseudostem 3 First-spot stage: the streak broadens and becomes more or less fusiform and elliptical in outline and similar to the first-spot
stage of Sigatoka (Leach 1946). The transition from streak to spot is further characterised by the development of a light brown, water-soaked border around the spot. This water-soaked effect is
especially clear in the early morning, when dew is still present on the leaf, or after rain. 4 leafFreshMass_Meas_g massFreshLeaf leaf_fresh KU Leuven Sebastien Carpentier Selection criteria for
genotype that was ranked third lowest by female respondents during preference scoring exercise Noel Madalla Bioversity 2 16-20 cm FrtExtLng_M_cm Bioversity International Rhiannon Crichton margins
curved inward 4 PStmCol_Est_1to16 Rachel Chase Bioversity International 1 straight Bioversity International Rhiannon Crichton temp11 Bioversity International Rhiannon Crichton MltFrtWt_M_gFruit
Bioversity International Rhiannon Crichton temp17 persistent style 2 1 dwarf 3 intermediate pink/pink-purple 10 FrtDiam_M_mm Bioversity International Rhiannon Crichton extensive pigmentation 4
Bioversity International/IITA temp46 Inge Van den Bergh/Danny Coyne weevils 6 5 firm DeadRtNum2_Ct_Rt Inge Van den Bergh/Danny Coyne Bioversity International/IITA only base of the style persists 4
FrtCvShp_Est_1to6 Bioversity International Rachel Chase 6 other SuckNum_Ct_Suck Bioversity International Rhiannon Crichton Bioversity International BnchPst_Estm_1to5 Rachel Chase Tendo Ssali IITA
WvlAdltPrTrp_Ct_weevil 1 convolute 2 present Rachel Chase BrctBhvBfFll_Est_1to2 Bioversity International curved in slight S shape (double curvature) 5 at an angle 2 HandWt_M_kgPlnt Bioversity
International Rhiannon Crichton 16-33% of lamina with symptoms 4 red 8 4 extensive pigmentation (>50%) equal to or more than 31 cm 3 IITA Allan Brown/Guillaume Bauchet (temp.) VitCCnt_M_ juvenile 3
medium intensity 5 Bioversity PS_CompResp_PrefScale Noel Madalla Sebastien Carpentier pseudostemHeightIncreasePerFormedLeaf_Comp_cm growthPS_per_leafFormed KU Leuven Selection criteria for genotype
that was ranked lowest by male respondents during preference scoring exercise Noel Madalla Bioversity 1 yellow medium intensity 5 Bioversity International Rachel Chase CmpdTplMnCl_Est_1to6 1 white 5
medium intensity orange red 10 9 whitish 7 blue green 1 3 brown-black other 4 Stage 5 is when the elliptical spot is totally black and has spread to the underside of the limb. It is surrounded by a
yellow halo with the centre beginning to flatten out. 5 firm 10 discolouration of up to one-third of vascular tissue 3 BlotchA_Est_1to4 Rachel Chase Bioversity International obtuse 4 3 winged and
clasping the pseudostem medium (0.45 < w/y < 0.55) 2 10 dont know present 1 1 Stage 1 appears as a small depigmentation spot whose whitish or yellow colour resembles stage 1 of Yellow Sigatoka
disease. These symptoms are not visible in transmitted light and can be observed only on the underside of the leaf. skinny (w/y less than or equal to 0.45) 1 not winged and not clasping the
pseudostem 5 10 bright yellow erect 5 2 slightly pointed not dwarf 2 2.1-2.9 m 2 Bioversity International Rhiannon Crichton RtnCrpCyc_Comp_d absent 1 Rhiannon Crichton HndMltpFrtNum_Ct_Frt Bioversity
International Rachel Chase CmpdTplLbCl_Est_1to16 Bioversity International purple 6 9 whitish Bioversity Noel Madalla PA_EstFemale_Txt 2 neutral flowers on one to few hands only near the bunch (rest
of stalk is bare) clasping 1 6 Small bunch from neutral/hermaphrodite flowers just above the male bud 5 other high intensity 10 no intensity 0 black (anthers aborted) 7 5 rounded 1 winged and
undulating purple-brown 5 4 margins overlapping 6 entire inner rhizome discoloured temp20 Bioversity International Rhiannon Crichton 1 add** fat (w/y equal to or more than 0.55) 3 1 no visible sign
of fusion Danny Coyne RotyNumSlWt_CompNema IITA Bioversity International Rachel Chase FrtftPdclLgth_M_mm 6 total discolouration of vascular tissue Rachel Chase Bioversity International
PdcFsn_Est_1to2 5 grainy discolouration of between one-third and two-thirds of vascular tissue 4 Matooke grassy aroma measurement 2pt scale Elizabeth Kakhasa MatGrass_Meas_2pt NARO, Uganda 5 rounded
Other 5 less than or equal to 2 m 1 other 4 3 equal to or more than 3 m 1 adult male pseudostemHeightIncrease pseudostemHeightIncrease_Comp_cm2 KU Leuven Sebastien Carpentier moderately imbricate 2 5
like extremely Bioversity International Rhiannon Crichton temp15 1 no contrast between margin and petiole (without a colour line along) High shouldered (x/y less than or equal to 0.28) 1 ** 2 1 less
than or equal to 15 cm ivory 3 NARO, Uganda Matooke hardness in hand measurement scale 0-10 MatHardT_Meas_0to10 Elizabeth Kakhasa not parthenocarpic 2 high intensity 10 orange-brown (mahogany, like
in Pisang Mas) 1 2 not clasping 1 open with margins spreading no symptoms 0 2 0-33 % of older banana leaves turning yellow 1 trace infections with a few small galls 8 other 2 small blotches 2
slightly angled dislike very much 1 4 egg 4 ovoid brown/rusty brown 5 3 highly imbricate absent 0 lanceolate 2 3 moderate blotching (20-50%) totally fused (more than 50 % of the length of the
pedicel) 3 IITA Sckr_Est_1to2 Allan Brown 3 other (specify on answer sheet) pink-purple 7 Initial speck stage: symptoms are first visible to the naked eye as faint, minute (less than 0.25 mm diam),
reddish brown specks on the lower surface of the leaf. Specks are often most abundant near the margin of the left side of the leaf, particularly towards the tip. Where leaf spot is severe, specks
have been observed on the second leaf of plants that have not yet produced a bunch. Elsewhere they usually appear on the third, fourth, or older leaves. 1 margins overlapping 5 2 dark brown 7 Other 2
winged and not clasping the pseudostem curved in S shape (double curvature) 4 2 not revolute (not rolling) 0 Dry Bioversity International PsdHght_E Rachel Chase IITA Danny Coyne
PratyCofNumRtWt2_Comp_Nema dislike 2 First streak stage: the initial speck elongates, becoming slightly wider, to form a characteristic narrow, reddish brown streak up to 20 mm long and 2.0 mm wide,
with the long axis parallel to the leaf venation. At this stage, streaks are more clearly visible on the lower surface of the leaf than on the upper one. The distribution of streaks is variable.
Sometimes they are most numerous near the edge of the left side of the leaf. At other times they are equally numerous on both sides of the leaf, and more or less evenly distributed. Frequently they
are densely aggregated in a band several cm wide on one side or both sides of the mid-rib, becoming less numerous towards the edge of the leaf. Streaks may be so numerous that every cm2 of leaf
surface bears one or more. There is considerable variation in the length of individual streaks on a given leaf, and they frequently overlap to form larger, compound streaks. 2 1 cylindrical 4 1/3-2/3
discoloured > 1/3 discoloured 5 2 isolated points of discolouration in vascular tissue 1 bad 2 brown other 10 yellow 2 no intensity 0 Initial speck stage: faint, minute, reddish-brown specks on the
lower surface of the leaf. 1 green 1 6 orange 4 50-75 % of roots galled Rhiannon Crichton SptPsdStmPercFW_Comp_pct Bioversity International Guillaume Bauchet IITA-BTI AvgFrtIntLng_M_cm Third or
mature spot stage: the centre of the spot dries out, becoming light grey or buff-coloured, and further depressed. The spot is surrounded by a narrow, well-defined, dark brown or black border. between
the latter and the normal green colour of the leaf, there is often a bright yellow transitional zone. After the leaf has collapsed and withered, spots remain clearly visible because of the
light-coloured centre and dark border. 6 5 other 25-50 % of roots galled 3 winged 1 pointed 1 2 straight in the distal part 5 without pigmentation partially fused (up to 50 % of the length of the
pedicel) 2 medium (20 cm < y < 30 cm) 2 2 yellow 7 green 3 persistent flower relicts (> 20% of the fruits with relicts) 10 smooth 2 green with a curve 3 First-spot stage: the streaks broaden and
become more or less fusiform or elliptical in outline and a water soaked border appears around each lesion. 4 other 8 1 pointed 0 soft 4 black-purple 0 absent 1 like a top 10 sticky BD colour chart A
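The graded severity scores above feed simple downstream computations in the trait dictionary: per-grade leaf tallies and a disease evolution time (the lapse from first grade-1 to first grade-6 symptoms). A minimal sketch of both, in Python; the function names and sample data are illustrative, not taken from the ontology:

```python
from collections import Counter
from datetime import date

def leaves_per_grade(grades):
    """Tally the leaves on a plant by black leaf streak severity grade (1-6)."""
    return Counter(grades)

def disease_evolution_time_days(first_grade1, first_grade6):
    """Days elapsed from first grade-1 symptom to first grade-6 symptom."""
    return (first_grade6 - first_grade1).days

# One grade reading per leaf on a hypothetical plant:
counts = leaves_per_grade([1, 1, 2, 4, 6, 2, 1])
# counts[1] == 3 leaves at grade 1, counts[2] == 2 leaves at grade 2, etc.
det = disease_evolution_time_days(date(2014, 10, 24), date(2014, 12, 24))
# det == 61 days
```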
- 2016 numsuckers Number of all types of suckers Number of suckers The number of all types of suckers in the mat. Damage to upper outer corm weevils - measurement Cut a transverse cross section of
the corm at the collar (upper cross-section). Score weevil damage (galleries) as percentage damage on the side of the cross-section that was facing the ground, downwards. Place the sample in the
mouth, chew it at the rate of one chewing per second and assess the number of chews before swallowing. Chewiness Sensory Texture method Count how many of the leaves are standing leaves (without a
bent or broken petiole) Number of standing leaves - counting weevil Combined number of plants that are dead The number of plants that are dead. Number of plants dead numdead Single leaf width Width
singleLeafWidth The width of a leaf Male rachis type - estimation Visually observe the part of the rachis between the last hand and the male bud. Truncated means there is no bract scar below the last
hand. Present means there is a degenerated or persistent male bud Preference score male respondents Researchers calculate the preference score by male respondents The following flower descriptors
refer to the flowers at the axil of the first external unlifted bract. Fresh material must be used (make the observation as soon as you detach the bract/flower from the rachis). Visually observe the
colour of the lobe at the tip of the tepal. Use colour chart B and observe out of direct sunlight. Lobe colour of tip of compound tepal - estimation Calculate as: the Number of functional roots
divided by the Number of all roots, multiplied by 100, e.g. 12 / 20 * 100 = 60 % functional roots. Percentage of functional roots - computation Calculate as: the sum of values with a Black leaf
streak disease severity grade of 4. Number of leaves with black leaf streak disease severity grade 4 - computation 80?C, for 14 days, or longer untill dry Dry weight - measurement The number of
leaves on a plant with a black leaf streak disease severity grade of 2. Number of leaves with black leaf streak disease severity grade 2 Number of leaves with a black leaf streak disease severity
grade of 2 wing presence/absence and clasping presence/absence scale 1 to 5 The weight of the bunch, including the rachis. Bunch weight bunchweight Weight Count how many fingers are in the
second-most distal hand. Number of fingers in 2nd-most distal hand - counting The longest diagonal of a fitted ellips Leaf length - measurement Water content - computation (Root Fresh Weight - Root
Dry Weight) / Root Fresh Weight Main colour Blotches colour at the petiole base Colour of the blotches on the upper leaf sheath of the petiole base. Average number of banana weevil larvae per trap
Average number of banana weevil larvae The average of how many banana weevil larvae caught per trap. Calculate as: the sum of the Fruit pulp lateral diameter measurements, divided by the number of
those measurements, e.g. (3.1 + 4.8 + 3.7 + 4.6 + 5.6 + 6.4) / 6 = 4.7. Average fruit pulp lateral diameter - computation Calculate as: the sum of the number of plants with any or multiple of the
external disease symptoms caused by Fusarium wilt - yellowing leaves, splitting of pseudostem base, changes in new leaves, petiole collapse - divided by the Number of plants planted * 100, e.g. 112 /
300 * 100 = 37%. Percentage of plants with external disease symptoms caused by Fusarium wilt - computation Number of fingers in hand - counting Count how many fingers are in a hand. Associate the
data with the Hand rank. The recommendation is to collect this data from the second and second-most distal hands. The nematode species found in the banana plant root sample. Species present Nematode
species color homogeneity scale 0-10 Moistness scale 0-10 Put a part of the product and by retro-olfaction evaluate the presence and the intensity of pumpkin-like aroma Pumpkin-like Aroma method
Selection traits (text) Degree of fusion of the pedicels before they join the rachis Fusion Fusion of pedicels Stickiness Sensory Texture method Press a piece of banana between the molars and
appreciate the adhesion of the product Calculate as: the sum of values with a Black leaf streak disease severity grade of 6. Number of leaves with black leaf streak disease severity grade 6 -
computation The extent to which vascular discolouration extends up the pseudostem of the tallest sucker should be determined by making cross-section cuts, from the base of the pseudostem upwards, and
examining the internal tissues following each cut. The point at which discolouration is no longer visible should be noted and the distance from this point to the pseudostem base recorded. Extent of
internal discolouration in corm of tallest sucker caused by Fusarium wilt - estimation µM Pigmentation of outer surface of cigar leaf on sucker Pigmentation of outer surface Colour of pigmentation
on outer surface of cigar leaf on a developed sucker Count from 100 ml of soil sample Number of nematodes (Meloidogyne spp.) per unit fresh root weight - method Number of Pratylenchus goodeyi per
soil sample Number of Pratylenchus goodeyi in soil Population density Petiole margin colour - estimation Observations on the margins and petiole wings should be made where the petiole and pseudostem
meet at shooting. Use colour chart A and observe out of direct sunlight. Record the colour of the margin (general colour is below the rim). [x Observe at flowering time.] Number of hands on whole
bunch - counting On a bunch with mostly hands of > 10 fingers, a possible ultimate hand with 1-5 (rather smaller) fingers should not be counted. singleLeafArea Leaf area The area of a leaf Area
Number of fruits on mid-hand of the bunch Number of fruits on the mid-hand of the bunch **subsume Number of fingers Multiple finger weight - measurement Weigh multiple fingers together, using scales.
Associate the data with the Hank rank and the Number of fingers measured. The recommendation is to collect this data from six fingers in total - three fingers in the middle of the outer whorl from
the second hand and from the second-most distal hand. yyyy/mm/dd Pseudostem height difference between timepoints (refer to experimental metadata) Growth - computation cm avenumfingshand The average
number of fingers in a hand. Average number of fingers Average number of fingers in hand Number of hands in bunch - counting Count how many hands are in the bunch. Extent of internal discolouration
in corm of plant caused by Fusarium wilt - estimation The extent to which vascular discolouration extends up the pseudostem should be determined by making cross-section cuts, from the base of the
pseudostem upwards, and examining the internal tissues following each cut. The point at which discolouration is no longer visible should be noted and the distance from this point to the pseudostem
base recorded. Weighing Fresh weight - measurement Faures stages of development of BLS disease scale (1987) numnotharv Number of plants not harvested The number of plants that are alive but not
harvested (e.g. due to slow or poor development). Combined number of plants not harvested Ratio of root dry mass versus plant dry mass Ratio - computation The length of a leaf Single leaf length
Length singleLeafLength Time from flowering to harvest - computation Calculate as: the time elapsed between the Date of flowering to the Date of harvest, e.g. 24/12/2014 - 24/10/2014 = 61 days. The
shortest diagonal of a fitted ellips Leaf width - measurement Firmness scale 0-10 Visually observe the shape of the mature bunch on a fully developed plant that is not experiencing environmental
stress and choose the option that is most similar, from the 4 photos of descriptor 6.4.7. in the reference material. Bunch shape - estimation Fruit pendulance - estimation Visual estimation of fruit
pendulance on a bunch, for Early Evaluation Trial stage only. The average actual annual yield, taking into account the proportion of plants harvested, from the number of plants planted actualannyield
Average actual annual yield Average actual annual yield Collect all roots from a standard-size excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant. Take only roots from the
selected plant. Divide the collected roots into two categories: dead roots, functional roots. Dead roots are completely rotten or shrivelled whereas functional roots show at least some healthy
tissue. Count the number of dead roots. Number of dead roots - counting Individual respondents give the plant one of three possible scoring cards: Like, Dont know, Dont like. Men and women use
different colors of scoring cards. Preference scoring scale individual respondents Visual estimation of dwarfism, for Early Evaluation Trial stage only. Dwarfism - estimation Researchers calculate
the preference score by female respondents Preference score female respondents % Rating of internal symptoms of Fusarium wilt in the greenhouse Discolouration caused by Fusarium wilt rating A rating
of the extent of rhizome (corm) discolouration, caused by Fusarium wilt. Date of tagged leaf presenting black leaf streak disease symptoms at Foures stage 1 - estimation Record the date of event. The
average weight of the fingers. Average finger weight Average weight avefingerweight functional roots Focus group discussion with male respondents, to discuss selection traits of genotype that was
ranked second highest in preference ranking exercise Focus group discussion for second ranked traits with male respondents Ratio of leaf dry mass versus plant dry mass Ratio - computation Measurement
under microscope Weevil larvae body length - measurement pseudostemFreshMass Fresh weight Pseudostem fresh weight A pseudostem weight trait which is the fresh weight of a pseudostem Extent of
discolouration The extent of discolouration in the corm of the plant, caused by Fusarium wilt. Extent of internal discolouration in corm of plant caused by Fusarium wilt The circumference of a
finger. fingercircumf Circumference Finger circumference Weekly leaf emission rate - computation Calculate as: the Rank of previously marked leaf at one point in time, minus 1 (equivalent to the Rank
of marked youngest leaf), divided by the time elapsed between the two Date of data collection events when the marked leaf was recorded as 1) the Rank of marked youngest leaf and 2) the Rank of
previously marked leaf, e.g. (2 - 1) / 1 week = LER of 1 leaves/week during a particular time period. Firmness in mouth method Put a part of the sample in your mouth, evaluate during the first bite
(between molars) how hard is the sample. datedeath The date of death of the plant. Date of death Date of death Refers to the first external unlifted bract. Visually observe the apex shape of
flattened bracts to determine shape and choose the option that is most similar, from the 4 photos of descriptor 6.5.2. in the reference material. Bract apex shape - estimation bunchmaturity Bunch
maturity stage at harvest Stage of maturity at harvest The stage of maturity of the bunch at harvest. The date a tagged leaf presents black leaf streak disease symptoms at Foures black leaf streak
disease severity grade 6. Date a tagged leaf presents black leaf streak disease symptoms at Foures black leaf streak disease severity grade 6 Date of tagged leaf presenting black leaf streak disease
symptoms at Foures stage 6 Average percentage damage to the corm caused by weevils Damage percent X-average The average percentage of damage in the whole corm (the inner corm and the outer corm)
caused by weevils. Ratio of projected leaf area to the actual leaf area, measure for the compactness of the plant crown Ratio leafAreaRatio Leaf area to plant area ratio Number of leaves with black
leaf streak disease severity grade 1 Number of leaves with a black leaf streak disease severity grade of 1 The number of leaves on a plant with a black leaf streak disease severity grade of 1. g / mL
leaf Count how many maiden suckers are in the mat. Number of maiden suckers - counting Growth - computation Fresh weight difference between timepoints (refer to experimental metadata) Put a piece of
banana in your mouth, chew it and swirl it around your tongue to detect the sweet flavor Taste method Sweetness Observations on the margins and petiole wings should be made where the petiole and
pseudostem meet at shooting. Use colour chart A and observe out of direct sunlight. Record the colour of the margin (general colour is below the rim). [x Observe at flowering time.] Petiole margin
colour - estimation Nematode stage - estimation Identify the nematode stage, using the options. Number of standing leaves The number of leaves attached to the plant that have an erect petiole (not
bent or broken). Number of standing leaves Calculate as: the time elapsed between the Date of planting or the Date of harvest of the bunch from the plant of the previous crop cycle and the Date of
first external disease symptoms caused by Fusarium wilt. Time from start of crop cycle to first external disease symptoms caused by Fusarium wilt - computation DET The time elapsed between the
development of black leaf streak disease symptoms at Foures black leaf streak disease severity grade 1, to 6. Black leaf streak disease evolution time Time elapsed from Foures black leaf streak
disease severity scale 1 to 6 sucker Presence of yellowing leaves caused by Fusarium wilt - estimation Visual assessment. Compound tepal main colour - estimation The following flower descriptors
refer to the flowers at the axil of the first external unlifted bract. Fresh material must be used (make the observation as soon as you detach the bract/flower from the rachis). Visually observe the
colour of the backside middle of tepal. Use colour chart B and observe out of direct sunlight. Number of banana weevil larvae measured How many banana weevil larvae used to obtain a measurement.
Number of banana weevil larvae measured The lifecycle stage(s) of the nematode species found in the banana plant root sample. Lifecycle stages present Nematode lifecycle stage Measure the
circumference of the pseudostem of the plant at 75 cm from the collar, or from the pseudostem base at the ground if the collar is not visible, using a tape measure. Plant circumference 75 cm -
measurement Dry weight - measurement 80 °C, for 14 days, or longer until dry rachis angle scale (1 to 2) The time elapsed from the start of the crop cycle to the appearance on the plant of any of the
external disease symptoms (yellowing leaves, splitting pseudostem base, changes in new leaves, petiole collapse) caused by Fusarium wilt. Time from start of crop cycle to first external disease
symptoms caused by Fusarium wilt Time from start of crop cycle to appearance of external disease symptoms caused by Fusarium wilt Number of plants not harvested - computation Calculate as: the Number
of plants planted, minus the Number of plants dead, minus the Number of plants harvested, e.g. 24 - 4 - 18 = 2. firmness scale 0-10 Population density Number of nematodes in soil Nematode population
density by species in soil Number of fingers The number of fingers in the second-most distal hand Number of fingers in 2nd-most distal hand numfingers2nddistalhand The texture of the smashed pulp of
a mature fruit Texture Fruit smashed pulp texture Combined proportion of plants not harvested The proportion of plants that are alive but not harvested (e.g. due to slow or poor development), from
the number of plants planted. nonharvprop Non-harvest proportion Extent of internal discolouration in corm of tallest sucker caused by Fusarium wilt Extent of discolouration The extent of
discolouration in the corm of the tallest sucker, caused by Fusarium wilt. Leaf chlorosis - computation Ratio of non green to green plant pixels numhands Number of hands Number of hands in bunch The
number of hands in the bunch. Time from planting to shooting - computation Calculate as: the time elapsed between the Date of planting and the Date of shooting. E.g. 24/09/2014 - 24/01/2014 = 243
days. Focus group discussion with male respondents, to discuss selection traits of genotype that was ranked third highest in preference ranking exercise Focus group discussion for third ranked traits
with male respondents shoot2harvest The time elapsed from shooting (when the inflorescence emerges from the pseudostem and is still in an erect position) to the harvest of the bunch. Time from
shooting to harvest Time from shooting to harvest Total finger weight - computation Calculate as: the Bunch weight, minus the Rachis weight, e.g. 50 - 10 = 40. Boiled plantain Sweetness BoiPlantnSwt
Boiled plantain Elemental flavor caused by dilute aqueous solutions of various substances such as sucrose or aspartame Sweetness Anther colour - estimation The following flower descriptors refer to
the flowers at the axil of the first external unlifted bract. Fresh material must be used (make the observation as soon as you detach the bract/flower from the rachis). Visually observe the anther
colour on the face opposite to the dehiscence split of the anther. Use colour chart B and observe out of direct sunlight. Gauhl's modification of Stover's severity scoring system (YYYY) Smoothness in
mouth method Put a part of the sample in mouth, chew it and after 5 chews, evaluate between tongue and palate the number and the size of the particles. Fruit length - exact value - measurement
Measure the length of the internal arc of a fruit, without pedicel. Record on the inner fruit in the middle of the mid-hand of the bunch. If there is an even number of hands, there will be two middle
hands so use the upper hand that developed first. Record the exact value. Pseudostem relative water content The relative water content in a pseudostem Content rwcPseudostem hand Fruit parthenocarpy -
estimation Visual estimation of fruit fill (or parthenocarpy), for Early Evaluation Trial stage only. Leaf area - measurement Top view image of a single leaf Petiole margins winged Behaviour of
the petiole margins - winged or not winged. Observe where the petiole and pseudostem meet at shooting. Margin is the part of the petiole that can be bent outwards/inwards. Wing presence sourness scale 0-10 Growth -
computation Pseudostem height difference between timepoints (refer to experimental metadata) per number of new leaves formed between timepoints (refer to experimental metadata) mm Rachis weight
Weight rachisweight The weight of the rachis. Presence/absence of collapse The presence/absence of petiole collapse, caused by Fusarium wilt. Presence of petiole collapse caused by Fusarium wilt PC
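Presence/absence scores like the petiole collapse trait above feed the percentage computations defined elsewhere in this dictionary (sum of values scored as Present, divided by the Number of plants planted, multiplied by 100). A minimal sketch, assuming a list of "Present"/"Absent" scores; the function name is illustrative, not from the dictionary:

```python
def percentage_present(scores, plants_planted):
    """Percentage of plants scored 'Present' for a presence/absence trait."""
    present = sum(1 for s in scores if s == "Present")
    # Multiply first so the worked example divides exactly: 100 * 84 / 300 = 28.0
    return 100 * present / plants_planted

# Worked example from the dictionary: 84 / 300 * 100 = 28%
scores = ["Present"] * 84 + ["Absent"] * 216
print(percentage_present(scores, 300))  # 28.0
```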
BoiPlantnFirm Boiled plantain Firmness Force required to obtain deformation, penetration or rupture of the banana Boiled plantain Firmness No. per 10g dd/mm/yyyy Water content - computation (Leaf
Fresh Weight - Leaf Dry Weight) / Leaf Fresh Weight Calculate as: the sum of values with a Black leaf streak disease severity grade of 1. Number of leaves with black leaf streak disease severity
grade 1 - computation Calculate as: the sum of the number of values for Date of planting. Number of plants planted - computation Fusarium wilt external symptoms scale Death proportion The proportion
of plants that are dead, from the number of plants planted. deathprop Combined proportion of plants that are dead Focus group discussion for second lowest ranked traits with female respondents Focus
group discussion with female respondents, to discuss selection traits of genotype that was ranked second lowest in preference ranking exercise The distance of discolouration in the pseudostem of the
plant, caused by Fusarium wilt. Distance of internal discolouration in pseudostem of plant caused by Fusarium wilt Distance of discolouration Describe the pest and/or disease assessment and/or sample
collection taking place at the time of data collection. Pest and/or disease assessment and/or sample collection description - estimation cm² bud length scale (1 to 3) The third leaf (Leaf III) is
counted from the last leaf produced before [x bunch emergence] [shooting]. Cut the petiole halfway between the pseudostem and the leaf blade and examine the cross section. [x Observe at flowering
time]. Margin behaviour on petiole canal of third leaf - estimation t/unit of area/y bract imbrication scale (1 to 3) Bunch position - estimation Visually observe the angle of the bunch and choose
the option that is most similar, from the 5 schematic drawings of descriptor 6.4.6 in the reference material. Number of leaves with a black leaf streak disease severity grade of 6 Number of leaves
with black leaf streak disease severity grade 6 The number of leaves on a plant with a black leaf streak disease severity grade of 6. The length of a pseudostem lengthPseudostem Pseudostem length
Length The weight of multiple fingers. multfingerweight Weight Multiple finger weight Focus group discussion for third ranked traits with female respondents Focus group discussion with female
respondents, to discuss selection traits of genotype that was ranked third highest in preference ranking exercise hand Tallest sucker height - measurement On the tallest sucker, measure the distance
from the pseudostem base at the ground to the intersection of the petioles of the two youngest leaves (leaf ranks 1 and 2), using a measuring pole or sliding ruler. Percentage damage to upper
cross-section inner corm caused by weevils Damage percent X-upper-inner The amount of damage to the inner corm (central cylinder) upper cross-section, assessed by the percentage of tunnels caused by
weevils. Youngest leaf spotted - counting Record the rank (order) of the youngest leaf spotted (the first fully unfurled leaf with at least 10 discrete, mature, necrotic lesions or one large necrotic
area with 10 light-coloured dry centres), counting the rank by starting with the youngest completely unrolled leaf as 1 and moving downwards, associating the data with the Date of data collection.
bud shape scale (1 to 3) Measurement of system water loss between two timepoints (refer to experimental metadata) Water loss rate - computation Number Number of all roots The total number of all
roots (both dead and functional) in a standard-size excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant. Average head capsule width The average width of a banana weevil larva's
head capsule. Banana weevil larvae average head capsule width harvdens The number of plants that are harvested, in a defined area. Combined density of plants harvested Harvest density aroma scale
0-10 Calculate as: the time elapsed between the Date of tagged leaf presenting black leaf streak disease symptoms at Foures stage 1 and the Date of tagged cigar leaf unrolling at stage B, e.g. 24/04/
2014 - 24/03/2014 = 31 days. Black leaf streak disease incubation time - computation g Harvest the bunch by cutting the peduncle 10 cm above the first ridge above the most proximal hand, and
immediately below the most distal hand and weigh the bunch (including the rachis), using scales. Bunch weight - measurement Researchers calculate the preference score by respondents Preference score
individual respondents Time elapsed from Bruns stage B of unrolling to Foures black leaf streak disease severity scale 6 DDT Black leaf streak disease development time The time elapsed between a
tagged cigar leaf at Bruns stage B of unrolling and the development of black leaf streak disease symptoms at Foures black leaf streak disease severity grade 6. Chewiness Matooke sweetness MatSweet
Matooke Sweetness basic taste produced by dilute aqueous solutions of natural or artificial substances such as sucrose Focus group discussion for lowest ranked traits with male respondents Focus
group discussion with male respondents, to discuss selection traits of genotype that was ranked lowest in preference ranking exercise handweight Hand weight Weight The weight of a hand. The height of
the pseudostem of the plant, from the collar to the intersection of the petioles of the two youngest leaves. plantheightpetioles Height Plant height Average finger lateral diameter - computation
Calculate as: the sum of the Finger lateral diameter measurements, divided by the number of those measurements, e.g. (3.5 + 5.2 + 4.1 + 5 + 6 + 6.8) / 6 = 5.1. Transpiration efficiency - computation
Gram accumulated biomass per volume water transpired Rank Rank of previously marked leaf rankprevleaf The current rank (position) of the leaf that was previously marked as the youngest leaf at the
time of the last data collection event. mg Matooke hardness mechanical textural attribute relating to the force required to achieve a given deformation, penetration, or breakage of a product.
Hardness in hand Matooke MatHardT Time elapsed from Bruns stage B of unrolling to Foures black leaf streak disease severity scale 1 Black leaf streak disease incubation time The time elapsed between
a tagged cigar leaf at Bruns stage B of unrolling and the development of black leaf streak disease symptoms at Foures black leaf streak disease severity grade 1. DIT Description of damage
describedamage Damage description A description of the damage to the plant. Description of traits considered important in selection of second lowest ranked genotype by respondents Genotype selection traits
Selection criteria for genotype ranked second lowest by respondents Combined number of plants harvested numharv Number of plants harvested The number of plants that are harvested. Weight The weight
of multiple hands. multhandweight Multiple hand weight Plant circumference at collar plantcircumfcollar The circumference of the pseudostem of the plant at the collar. Circumference at collar Root
necrosis caused by nematodes - estimation Select at random five functional primary roots, at least 10 cm long. Reduce the length of the five selected roots to 10 cm and slice through the roots
length-wise. Evaluate one half of each of the five roots for the percentage of root cortex showing necrosis. The maximum root necrosis per root half can be 20 %, giving a maximum root necrosis of 100
% for the five halves together. Mentally determine the necrosis of the individual roots (1 to 5) and sum these together to get the total root necrosis of the sample. Mealiness scale 0-10
avepeelthickness The average thickness of the peel of a finger. Average finger peel thickness Average thickness of the peel Record the date of event. Date of flowering - estimation Population density
Number of Hoplolaimus spp. per soil sample Number of Hoplolaimus spp. in soil Calculate as: the sum of the Number of fingers in hand measurements from all the hands in the bunch (number equivalent to
Number of hands in bunch), e.g. 16 + 17 + 19 + 18 + 19 + 18 + 17 + 17 + 16 + 14 + 14 + 13 = 198. Number of fingers in bunch - computation Finger lateral diameter - measurement Measure the lateral
diameter of a finger from the left to the right side (not from the ventral to the dorsal side), at the widest point, using calipers. Associate the data with the Hand rank. The recommendation is to
collect this data from six fingers in total - three fingers in the middle of the outer whorl from the second hand and from the second-most distal hand. Cause of damage - estimation Record the
apparent cause of the damage to the plant, e.g. strong winds, weevil damage, etc. Shape of the fully developed bunch on a plant Bunch shape Shape Stickiness in hand method Put a part of the sample
between thumb and index finger and, using tapping motions, evaluate the amount of product adhering to them. Visual counting method of Weevil traps (the trap should be a minimum of 30 cm long). Number of
adult weevil trap - counting Growth Increase of leaf area leafAreaIncrease Leaf area increase over time Preference score PS Calculation of preference score by respondents Overall plant appearance
Weighing Fresh weight - measurement Off-type description Description of how off-type A description of what made the plant recognisable as an off-type. describeofftype Distance of internal
discolouration in pseudostem of tallest sucker caused by Fusarium wilt Distance of internal discolouration in pseudostem of tallest sucker, caused by Fusarium wilt For the tallest sucker, the
complete corm should be removed from the soil, the roots cut off and excess soil removed. Cross-sections of the corm should be cut (using a guillotine or other suitable device) to obtain five slices
of equal thickness. The upper surface of each cut section should be examined and visually evaluate the extent of the vascular discolouration. avehandweight Average hand weight The average weight of
the hands. Average weight topLeafArea The projected leaf area of a plant (photosynthetic surface) Area Projected top area Number of Pratylenchus goodeyi per fresh root weight Number of Pratylenchus
goodeyi in roots Population density MatSmoothM Matooke smoothness geometrical textural attribute relating to lack of presence of particles in a product Matooke Smoothness in mouth open answer The
number of leaves on a plant with a black leaf streak disease severity grade of 4. Number of leaves with black leaf streak disease severity grade 4 Number of leaves with a black leaf streak disease
severity grade of 4 Combined number of plants planted numplanted Number of plants planted The number of plants that are planted. plants dead/plants planted unitless Number of Pratylenchus coffeae per
fresh root weight Population density Number of Pratylenchus coffeae in roots The percentage of all roots in a standard-size excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant
that are functional. Percentage of functional roots Percentage of functional roots peelthickness The thickness of the peel of a finger. Finger peel thickness Thickness of the peel The rank (order) of
the first fully unfurled leaf with at least 10 discrete, mature, necrotic lesions or one large necrotic area with 10 leaf spot light-coloured dry centres, caused by black leaf streak disease. Youngest
leaf spotted YLS Rank of youngest leaf spotted Nematode species - estimation Identify the nematode species present in the banana plant root sample (Radopholus similis, Pratylenchus coffeae,
Pratylenchus goodeyi, Helicotylenchus multicinctus, Meloidogyne incognita, ?) Dry weight Plant dry weight The dry weight of the whole plant. plantDryMass Fruit shape of curve - estimation Visually
observe the shape of the curve of the fruit and choose the option that is most similar, from the 5 photos of descriptor 6.7.4. in the reference material. Observe the inner fruit in the middle of the
mid-hand of the bunch. In case of an asymmetric bunch that has straight and curved fruits, please indicate it in the note section and score only the fruit on the upper side of the bunch. fruit
relicts scale (1 to 3) Multiple hand weight - measurement Cut multiple hands from the rachis and weigh them together, using scales. Associate the data with the Hand rank and the Number of hands
measured. Banana weevil adult head capsule width Head capsule width The width of a banana weevil adult's head. The date a tagged leaf presents black leaf streak disease symptoms at Foures black
leaf streak disease severity grade 1. Date of tagged leaf presenting black leaf streak disease symptoms at Foures stage 1 Date a tagged leaf presents black leaf streak disease symptoms at Foures
black leaf streak disease severity grade 1 Date of weevil damage to corm assessment The date the assessment of weevil damage to the corm of the plant is conducted. Date of weevil damage assessment
Population density Number of Hoplolaimus spp. per fresh root weight Number of Hoplolaimus spp. in roots Smoothness-Mouth feel - estimation Mouth feel assessment of smashed and unsmashed fruit pulp
Rachis weight - computation Calculate as: Bunch weight minus the sum of the Hand weight measurements coming from the count of number of hands (equivalent to Number of hands in bunch), e.g. 50 - (2.4
+ 3.6 + 3.2 + 4.1 + 4.0 + 3.9 + 3.6 + 3.3 + 3.3 + 2.9 + 2.9 + 2.8) = 10.0. Date of flowering dateflower Date of flowering The date the last bracts fall from the most distal hand (closest to the male
bud) to display the female flowers. Calculate as: the sum of the number of values for Presence of yellowing leaves caused by Fusarium wilt scored as Present, divided by the Number of plants planted,
multiplied by 100, e.g. 78 / 300 * 100 = 26%. Percentage of plants with yellowing leaves caused by Fusarium wilt - computation Efficiency transpirationEfficiency Transpiration efficiency The growth
(weight increase) per volume transpired water Presence absence 0 Tallest sucker number of functional leaves - counting Count how many functional leaves (leaves that have 50% or more of their surface
as green, healthy, photosynthetic tissue) are on the tallest sucker. Matooke Firmness in mouth MatFirmM Mechanical textural attribute relating to the force required to achieve a given deformation,
penetration, or breakage of a product. Matooke firmness Sweetness scale 0-10 Alignment of bracts at the apex of the male bud Bract imbrication at apex of male bud Imbrication Number of plants
harvested - computation Calculate as: the sum of the number of values for Date of harvest. Sucker quality - estimation Visual estimation of suckering, for Early Evaluation Trial stage only. Male
rachis appearance - estimation Visually observe the male rachis appearance and choose the option that is most similar. Boiled plantain Moistness Perception of the amount of water absorbed or released
by the product BoiPlantnMois Boiled plantain Moistness Average number of banana weevil adults Average number of banana weevil adults per trap The average number of adult banana weevils caught per
trap. waterLossDay The average daily water loss Rate Daily rate of water loss The presence of root knot nematodes, observed externally as galls on the roots. Presence of root knot nematodes Nematode
root knot galling presence Population density Number of Pratylenchus coffeae per soil sample Number of Pratylenchus coffeae in soil fingerlatdiam Finger lateral diameter The lateral (side to side)
diameter of a finger. Lateral diameter plantheightneck Plant height to neck of peduncle Height The height of the pseudostem of the plant from the collar to the curved neck of the peduncle. kg/plant/y
How many banana weevil larvae caught per trap. Number of banana weevil larvae per trap Number of banana weevil larvae Time elapsed from planting to first youngest leaf spotted Time elapsed between
planting and the appearance of the first youngest leaf spotted. Time from planting to first youngest leaf spotted Rating of external symptoms of Fusarium wilt in the field - estimation Visual
assessment. Root knot nematode galling Plant fresh weight plantFreshMass Fresh weight A plant weight trait which is the fresh weight of a plant Number of adult weevils measured Number of banana
weevil adults measured How many banana weevil adults used to obtain a measurement. Fruit smashed pulp aroma/smell The pulp aroma of the smashed pulp of a mature fruit Aromas Calculate as: the Number
of dead roots plus the Number of functional roots, e.g. 8 + 12 = 20 roots. Number of all roots - computation dateplant Date of planting The date of planting the plant. Date of planting blotch
pigmentation scale (1 to 5) Colour of the external face Colour of bract external face Main colour of the first external unlifted bract external face Calculate as: the Bunch weight divided by 1,000,
multiplied by the Annual crop cycle proportion, multiplied by the Harvest density, e.g. (50 / 1,000) * 1.09 * 1,250 = 68.13. Actual annual yield - computation height score (1 to 3) Water loss -
measurement Weighing of water loss through the system normalized by leaf area Count from 100 ml of soil sample Number of nematodes (Pratylenchus spp.) per unit fresh root weight - method Boiled plantain
Stickiness Force required to peel off the fraction of product adhering to the interior of the oral cavity Boiled plantain Stickiness BoiPlantnStic Sucker quality Overall suckering quality suckering
The overall estimation of sucker quality on a mat Body weight Banana weevil adult body weight The weight of a banana weevil adult's body. Colour of bract internal face - estimation Visually
observe the colour of the internal face of the first unlifted bract. Use colour chart A and observe out of direct sunlight. Cause of damage Cause of damage The apparent cause of damage to the plant.
causedamage Percentage of plants with internal disease symptoms caused by Fusarium wilt - computation Calculate as: the sum of the number of plants with any or multiple of the internal disease
symptoms caused by Fusarium wilt - discolouration in the corm, and/or discolouration in the pseudostem - divided by the Number of plants planted * 100, e.g. 108 / 300 * 100 = 36%. Calculate as: the
Rank of previously marked leaf at one point in time, minus 1 (equivalent to the Rank of marked youngest leaf), divided by the time elapsed between the two Date of data collection events when the
marked leaf was recorded as 1) the Rank of marked youngest leaf and 2) the Rank of previously marked leaf, e.g. (4 - 1) / 1 month = LER of 3 leaves/month during a particular time period. Monthly leaf
emission rate - computation Date of tagged cigar leaf unrolling at Bruns stage B - estimation Record the date of event. Population density Number of Pratylenchus spp. in roots Number of Pratylenchus
spp. per fresh root weight The number of fingers in a hand. Number of fingers in hand numfingershand Number of fingers Measure the length of a finger, along the internal (ventral) arc, excluding the
pedicel and the fruit tip, using a tape measure. Associate the data with the Hand rank and the Number of fingers measured. The recommendation is to collect this data from six fingers in total - three
fingers in the middle of the outer whorl from the second hand and from the second-most distal hand. Finger internal length - measurement Record the date of event. Date of harvest - estimation
pseudostemHeightIncrease Growth Height increase of pseudostem Pseudostem height increase over time numwatersuckers Number of water suckers The number of water suckers (suckers with broad leaves, a
small rhizome, and a weak connection to the plant) in the mat. Number of water suckers finger The length of a finger measured along the internal (ventral) arc, excluding the pedicel and the fruit
tip. fingerintlength Internal length Finger internal length Measure for the biomass allocation: dry mass of leaves versus whole plant dry mass ratio Ratio Leaf to whole plant mass ratio
leafToPlantMassRatio_dry Matooke moisture perception of moisture content of a food by the tactile receptors in the mouth and also in relation to the lubricating properties of the product Moisture in
mouth Matooke MatMoistM plantspacing Area allocated to one plant The area allocated to one plant. Plant spacing dd/mm/yyyy hh:mm:ss Number of leaves with black leaf streak disease severity grade 0 -
computation Calculate as: the sum of values with a Black leaf streak disease severity grade of 0. Number of peeper suckers - counting Count how many peeper suckers are in the mat. Height Pseudostem
height Distance from the base of the pseudostem to the emerging point of the peduncle. Number used to obtain a measurement The number of fingers used to obtain a measurement. Number of fingers
measured numfingersmeas Total finger weight - hands - computation Calculate as: the sum of the Hand weight measurements from all the hands in the bunch (number equivalent to Number of hands in
bunch), e.g. 2.4 + 3.6 + 3.2 + 4.1 + 4.0 + 3.9 + 3.6 + 3.3 + 3.3 + 2.9 + 2.9 + 2.8 = 40.0. Banana weevil larvae head capsule width The width of a weevil larva's head capsule. Head capsule width Number
of nematodes (Radopholus similis) per unit fresh root weight - method Count from 10 g of chopped roots from the tested banana plant sample End date and time of data collection - estimation Record the
date and time of event. The estimated number of fingers in the bunch based on a calculation using the number of fingers in the third and second-most distal hands and the number of hands in the bunch.
Calculated number of fingers in bunch Number of fingers estnumfingers Count how many fingers are used to obtain a measurement. Number of fingers measured - counting Calculate as: the sum of Damage to
inner corm upper cross-section caused by weevils, Damage to outer corm upper cross-section caused by weevils, Damage to inner corm lower cross-section caused by weevils, and Damage to outer corm
lower cross-section caused by weevils, divided by the number of traits included in the sum, e.g. (25 + 36 + 18 + 20) / 4 = 24.8. Average damage to the corm weevils - computation Number of adult
larvae - counting Visual counting method Visually observe and record the amount of the surface area of a standing leaf that is affected by black leaf streak disease. Black leaf streak disease
severity - estimation plant BD colour chart B - 1996 Total finger weight The weight of all the hands of the bunch, excluding the rachis. Combined weight totalfingerweight Population density Number of
Meloidogyne spp. in soil Number of Meloidogyne spp. per soil sample Growth growthDry Dry mass accumulation over time Dry plant growth Average body length The average length of a banana weevil adult
body, from the head to the tip of the abdomen. Banana weevil adult average body length Nematode multiplication from single root inoculation of 50 nematodes per 8 cm length after 12 weeks Nematode
reproduction factor (microplot screening) Population density Colour of bract external face - estimation Visually observe the colour of the external face of the first unlifted bract. Use colour chart
A and observe out of direct sunlight. Male rachis appearance Appearance of flowers/bracts on the male rachis Appearance in relation to the presence or absence of neutral, male and/or hermaphrodite
flowers fingerextlength The length of a finger measured along the external (dorsal) arc, excluding the pedicel and the fruit tip. External length Finger external length Percentage of plants with
yellowing leaves, caused by Fusarium wilt. Percentage of plants with yellowing leaves caused by Fusarium wilt Combined percentage with yellowing leaves Date of death - estimation Record the date of
event. Record the rank (order) of the second-most distal hand in the bunch, starting with the hand at the proximal end (closest to the pseudostem) of the bunch as 1 and continuing to the hand at the
most distal end (closest to the male bud). Hand rank - counting BD colour chart A - 2016 Weighing Fresh weight - measurement Number of dead roots Number The number of dead roots in a standard-size
excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant. Count how many of the standing leaves are spotted (with at least 10 discrete, mature, necrotic lesions or one large
necrotic area with 10 light-coloured dry centres), considering all leaves in between and inclusive of, the youngest leaf and the oldest standing leaf. Number of spotted leaves - counting Record the
relative surface area coverage by blotches. [x Look at several plants if possible to get an overall idea. Observe at flowering time]. Blotches area - estimation Fruit apex point Description of fruit
apex Apex point For the plant, the complete corm should be removed from the soil, the roots cut off and excess soil removed. Cross-sections of the corm should be cut (using a guillotine or other
suitable device) to obtain five slices of equal thickness. The upper surface of each cut section should be examined and visually evaluate the extent of the vascular discolouration. Distance of
internal discolouration in pseudostem of plant caused by Fusarium wilt Percentage of plants with petiole collapse caused by Fusarium wilt - computation Calculate as: the sum of the number of values
for Presence of petiole collapse caused by Fusarium wilt scored as Present, divided by the Number of plants planted, multiplied by 100, e.g. 84 / 300 * 100 = 28%. bunch position scale (1 to 5) plants
harvested/plants planted Percentage of flower relicts at fruit apex Remains of flower relicts at apex Remains of flower relicts at fruit apex fruit curve shape (scale 1 to 5) Surface yellowness
intensity MatSurfYellV Matooke surface yellowness intensity Matooke Color of the surface of the sample from light yellow to bright yellow Calculate as: the Number of plants not harvested, divided by
the Number of plants planted, e.g. 2 / 24 = 0.08. Non-harvest proportion - computation avefingerintlength Average finger internal length Average internal length The average length of the fingers
measured along the internal (ventral) arc, excluding the pedicel and the fruit tip. numLeaves Number The number of all leaves (including dried/dead/wilted/yellow) in a shoot system Leaf number
Calculate as: the Number of plants harvested, divided by the Number of plants planted, e.g. 18 / 24 = 0.75. Harvest proportion - computation Dry weight difference between timepoints (refer to
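The harvest and non-harvest proportion computations, and the matching density computations elsewhere in the dictionary (plant density multiplied by the proportion), can be sketched as follows; function names are illustrative:

```python
def proportion(numerator: int, num_planted: int) -> float:
    """Harvest (or non-harvest) proportion: plants harvested (or not
    harvested) divided by plants planted."""
    return numerator / num_planted

def density(plant_density: float, prop: float) -> float:
    """Harvest or non-harvest density: plant density times proportion."""
    return plant_density * prop

# Worked examples from the entries:
assert proportion(18, 24) == 0.75              # harvest proportion
assert round(proportion(2, 24), 2) == 0.08     # non-harvest proportion
assert round(density(1667, 0.75)) == 1250      # harvest density
assert round(density(1667, 0.08)) == 133       # non-harvest density
```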
experimental metadata) Growth - computation Preference scale Root relative water content The relative water content in a root Content rwcRoot Vitamin C content The Vitamin C content of freeze-dried
smashed pulp of a mature fruit Vitamin C content pseudostemHeightIncreasePerFormedLeaf Growth Stunting expressed as height growth per new formed leaf Pseudostem height increase per leaf formed Count
from 10 g of chopped roots from the tested banana plant sample Number of nematodes (Hoplolaimus spp.) per unit fresh root weight - method Average external length Average finger external length The
average length of the fingers measured along the external (dorsal) arc, excluding the pedicel and the fruit tip. avefingerextlength Calculate as: the sum of values with a Black leaf streak disease
severity grade of 5. Number of leaves with black leaf streak disease severity grade 5 - computation Number of fingers Multiple number of fingers in hand multnumfingshand The number of fingers in
multiple hands. Non-harvest density - computation Calculate as: the Plant density multiplied by the Non-harvest proportion, e.g. 1,667 * 0.08 = 133. Extent of root knot nematodes The extent of root
knot nematodes, observed externally as galls on the roots. Nematode root knot galling extent Ratio - computation Ratio of belowground (root) to aboveground (shoot) dry mass Number of nematodes
(Rotylenchulus spp.) per unit fresh root weight - method Count from 100 ml of soil sample Observations on the margins and petiole wings should be made where the petiole and pseudostem meet at
shooting. Observation should be made at shooting on the neck, where the petiole and pseudostem meet. Margin is the part of the petiole that can be bent outwards/inwards. [x Observe at flowering
time.] Petiole margin clasping - estimation Parthenocarpy goodfruitfill Parthenocarpy The overall estimation of fruit parthenocarpy level on a bunch Number of banana weevil traps used How many How
many banana weevil traps used. Damage to lower outer corm weevils - measurement Cut a transverse cross section of the corm at 10 cm below the collar (lower cross-section). Score weevil damage
(galleries) as percentage damage on the side of the cross-section that was facing the ground, downwards. Calculate as: the Bunch weight divided by 1,000, multiplied by the Annual crop cycle
proportion, multiplied by the Plant density, e.g. (50 / 1,000) * 1.09 * 1,667 = 90.85. Potential annual yield - computation Calculate as: ((((Number of leaves with black leaf streak disease severity
grade 0, multiplied by 0), plus (Number of leaves with black leaf streak disease severity grade 1, multiplied by 1), plus (Number of leaves with black leaf streak disease severity grade 2, multiplied
by 2), plus (Number of leaves with black leaf streak disease severity grade 3, multiplied by 3), plus (Number of leaves with black leaf streak disease severity grade 4, multiplied by 4), plus (Number
of leaves with black leaf streak disease severity grade 5, multiplied by 5), plus (Number of leaves with black leaf streak disease severity grade 6, multiplied by 6)), divided by the number of grades
in the scale minus 1 (i.e. 7-1)), divided by the Number of standing leaves), multiplied by 100. For example, ((((3 * 0) + (1 * 1) + (1 * 2) + (3 * 3) + (5 * 4) + (0 * 5) + (0 * 6)) / 6) / 13) * 100 =
((32 / 6) / 13) * 100 = 41. Black leaf streak disease severity index - computation Time from planting to death The time elapsed from planting to the death of the plant. plant2death Time from planting
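The severity index computation above is a weighted sum over the seven severity grades (0 to 6), normalised by the maximum grade and the number of standing leaves. A sketch, with grade counts indexed 0-6 (the function name is illustrative):

```python
def blsd_severity_index(grade_counts: list[int],
                        num_standing_leaves: int) -> float:
    """Black leaf streak disease severity index: sum of (grade * number
    of leaves with that grade), divided by the number of grades minus 1,
    divided by the number of standing leaves, multiplied by 100."""
    weighted = sum(g * n for g, n in enumerate(grade_counts))
    return weighted / (len(grade_counts) - 1) / num_standing_leaves * 100

# Worked example from the text: ((32 / 6) / 13) * 100, which rounds to 41.
assert round(blsd_severity_index([3, 1, 1, 3, 5, 0, 0], 13)) == 41
```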
to death Put a part of the product in the mouth and by retro-olfaction evaluate the presence and the intensity of grassy-like aroma Grassy-like Aroma method The presence/absence of yellowing leaves, caused by
Fusarium wilt. Presence/absence of yellowing Presence of yellowing leaves caused by Fusarium wilt YL Date of first youngest leaf spotted - estimation Record the date of event. Male bud shoulder -
computation Calculate the ratio w/y, i.e. the broadest width of the male bud to the total length of the male bud. Do not measure the dimensions along the bud but rather on a projection/outline of the
bud (e.g. trace the outline of the bud on paper) and use a ruler or measuring tape. Stunting expressed as number of leaves per height growth Formation Leaves formed per pseudostem height increase
leavesFormedPerPseudostemGrowth Number of hands Number of hands on whole bunch Number of hands on bunch **subsume petiole-margin contrast Start date and time of data collection - estimation Record
the date and time of event. None Index of black leaf streak disease severity Main underlying colour Main underlying colour of pseudostem Main colour of the pseudostem under the outermost sheath.
blotch color scale (1 to 5) Surface color homogeneity Appearance method When you receive the sample, observe the surface and evaluate the homogeneity Combined density of plants planted The number of
plants that are planted, in a defined area. Plant density plantdens Date of harvest dateharvest The date the bunch is harvested from the plant. Date of harvest Measure the circumference of the
pseudostem of the plant at the collar, or from the pseudostem base at the ground if the collar is not visible. Plant circumference at collar - measurement Surface yellowness intensity Appearance
method When you receive the sample, observe the surface and evaluate the intensity of the color BD colour chart A - 1996 Calculate as: the sum of values with a Black leaf streak disease severity
grade of 3. Number of leaves with black leaf streak disease severity grade 3 - computation Growth - computation Leaf area difference between timepoints (refer to experimental metadata) Count from 10
g of chopped roots from the tested banana plant sample Number of nematodes (Pratylenchus coffeae) per unit fresh root weight - method Number of peeper suckers The number of peeper suckers (suckers
less than 15 cm tall) in the mat. Number of peeper suckers numpeepersuckers Presence of splitting pseudostem base caused by Fusarium wilt - estimation Visual assessment. Length and maximum diameter of
male bud at harvest. Male bud size - measurement Moldability in hand MatMolT Matooke moldability mechanical textural attribute relating to the degree to which a substance can be deformed before it
breaks Matooke Rank The rank (order) of a hand in the bunch. Hand rank hand rank handid fruit curve shape (scale 1 to 6) Number of leaves with a black leaf streak disease severity grade of 0 Number
of leaves with black leaf streak disease severity grade 0 The number of leaves on a plant with a black leaf streak disease severity grade of 0. Rating of external symptoms of Fusarium wilt in the
field A rating of the extent of leaf yellowing and wilting, caused by Fusarium wilt. Leaf yellowing and wilting caused by Fusarium wilt rating Top view image of loose leaves Leaf area - measurement
Average finger internal length - computation Calculate as: the sum of the Finger internal length measurements, divided by the number of those measurements, e.g. (27 + 31 + 29 + 30 + 30 + 28) / 6 =
29.2. Rank The rank (order) of the second-most distal hand in the bunch. Hand rank of 2nd-most distal hand handid2ndmostdistal Fruit pulp lateral diameter The lateral (side to side) diameter of the
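The various "Average ... - computation" entries (finger internal length, finger external length, hand weight, peel thickness, and so on) all reduce to the same arithmetic mean of the recorded measurements; a sketch:

```python
def average_measurement(values: list[float]) -> float:
    """Arithmetic mean used by the 'Average ... - computation' entries:
    the sum of the measurements divided by their count."""
    return sum(values) / len(values)

# Worked example from the finger internal length entry:
assert round(average_measurement([27, 31, 29, 30, 30, 28]), 1) == 29.2
```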
pulp of a fruit. pulpdiam Lateral diameter of the pulp Number of nematodes (Pratylenchus spp.) per unit fresh root weight - method Count from 10 g of chopped roots from the tested banana plant sample
Selection criteria for genotype ranked highest by respondents Description of traits considered important in selection of best genotype by respondents Genotype selection traits Nematode egg-laying
females extent - estimation Collect all roots from a standard size excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant. Take only roots from the selected plant. Carefully wash
the soil from the roots with tap water. Cut the roots into 1 cm pieces. Take a subsample of 5 g, add 100 ml distilled water and store the roots at 4 degrees Celsius. Stain the egg masses by immersing
the roots in 0.15 g/L phloxine B for 15 mins. Count the number of egg-laying females under a stereo microscope. Matooke aroma grassy Matooke MatAroGrassA Aroma of fresh grass Grassy-like aroma The
date of data collection. Date of data collection datedatacoll Date of data collection multiplication factor Shape of male bud Shape Male bud shape numhandsmeas Number of hands measured The number of
hands used to obtain a measurement. Number used to obtain a measurement Blotches relative area at petiole base Percentage coverage of petiole base by blotches. Blotch surface area Root dry weight The
dry weight of the root system Dry weight rootDryMass Percentage of dead roots - computation Calculate as: the Number of dead roots divided by the Number of all roots, multiplied by 100, e.g. 8 / 20 *
100 = 40 % dead roots. Focus group discussion with male respondents, to discuss selection traits of genotype that was ranked highest in preference ranking exercise Focus group discussion for top
traits with male respondents Angle between the axis of the bunch and the vertical Angle to the vertical Bunch position Presence of changes in new leaves caused by Fusarium wilt - estimation Visual
assessment. Measure the length of the internal arc of a fruit, without pedicel. Record on the inner fruit in the middle of the mid-hand of the bunch. If there is an even number of hands, there will
be two middle hands so use the upper hand that developed first. Record the range. Fruit length - range - measurement Focus group discussion with female respondents, to discuss selection traits of
genotype that was ranked third lowest in preference ranking exercise Focus group discussion for third lowest ranked traits with female respondents Number The number of functional roots in a
standard-size excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant. Number of functional roots Put a part of the sample in the mouth and evaluate the intensity of taste of
sweetness Sweetness tasting method Percentage of plants with internal disease symptoms caused by Fusarium wilt Combined percentage with internal symptoms of Fusarium wilt The percentage of plants
with any or multiple of the internal disease symptoms - discolouration in the corm, and/or discolouration in the pseudostem, caused by Fusarium wilt. Average fruit pulp lateral diameter The average
lateral (side to side) diameter of the pulp of a fruit. Average lateral diameter of the pulp avepulpdiam Adult weevil body - measurement Electronic weighing method Time from planting to death -
computation Calculate as: the time elapsed between the Date of planting and the Date of death, e.g. 24/05/2014 - 24/01/2014 = 120 days. mm Measurement under microscope Weevil larvae head capsule -
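The time-elapsed computations in this dictionary (planting to death, planting to shooting, plant crop cycle, ratoon crop cycle) are plain date differences; a sketch using the standard library:

```python
from datetime import date

def days_between(start: date, end: date) -> int:
    """Elapsed time in days between two recorded event dates."""
    return (end - start).days

# Worked examples from the entries:
assert days_between(date(2014, 1, 24), date(2014, 5, 24)) == 120    # to death
assert days_between(date(2014, 1, 24), date(2014, 12, 24)) == 334   # crop cycle
```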
measurement Average finger external length - computation Calculate as: the sum of the Finger external length measurements, divided by the number of those measurements, e.g. (30 + 34 + 32 + 33 + 33 +
31) / 6 = 32.2. Genotype selection traits Selection criteria for genotype ranked lowest by respondents Description of traits considered important in selection of the least-preferred genotype by respondents The
number of leaves on the plant that have 50 % or more of the leaf surface area as green, healthy, photosynthetic tissue. functleaves Number of functional leaves Number of functional leaves leaf rank
The number of leaves with at least 10 discrete, mature, necrotic lesions or one large necrotic area with 10 light-coloured dry centres, caused by black leaf streak disease. Number of spotted leaves
Number of spotted leaves Focus group discussion with female respondents, to discuss selection traits of genotype that was ranked lowest in preference ranking exercise Focus group discussion for
lowest ranked traits with female respondents Try to make a ball (agglomerate) of the sample and evaluate how easy it is to deform or break the sample Moldability in hand method bunch shape scale (1
to 5) m2 dateofftype The date the plant is first recognised as an off-type. Date of recognising off-type Date of recognising off-type Number of Radopholus similis per soil sample Population density
Number of Radopholus similis in soil Leaf chlorosis Leaf trait correlated to the health stage of the leaf leafChlorosis Chlorosis Number of sword suckers numswordsuckers The number of sword suckers
(suckers with narrow leaves and a large rhizome) in the mat. Number of sword suckers leaves/month fruit apex scale (1 to 5) pulp colour rootToPlantMassRatio_dry Ratio Root to whole plant mass ratio
Measure for the biomass allocation: dry mass of roots versus whole plant dry mass ratio maiden sucker roots Observations on the margins and petiole wings should be made where the petiole and
pseudostem meet at shooting. Record on the last developed leaf at [x flowering stage] [shooting]. Colour line along edge of petiole margin - estimation Calculate as: the sum of values with a Black
leaf streak disease severity grade of 2. Number of leaves with black leaf streak disease severity grade 2 - computation Calculate as: the Total finger weight, divided by the Number of hands in bunch,
e.g. 46.8 / 16 = 2.9. Average hand weight - all - computation Rank of previously marked leaf - counting Record the rank of the previously marked leaf (the leaf that was marked in the Rank of marked
youngest leaf during the last data collection), counting the leaf rank by starting with the newest completely unrolled leaf as 1 and moving downwards. Associate the data with the Date of data
collection. Combined percentage with splitting pseudostem base Percentage of plants with a splitting pseudostem base caused by Fusarium wilt Percentage of plants with a splitting pseudostem base,
caused by Fusarium wilt. finger Count from 100 ml of soil sample Number of nematodes (Radopholus similis ) per unit fresh root weight - method BD colour chart B - 2016 kg Ratoon crop cycle
harvest2harvest Ratoon crop cycle The time elapsed from the harvest of the bunch of the plant from the preceding crop cycle to the harvest of the bunch of the plant from the current crop cycle. Fresh
plant growth Growth Fresh mass accumulation over time growthFresh Date of tagged cigar leaf unrolling at Bruns stage B Date of tagging at Bruns stage B of unrolling The date a tagged cigar leaf is at
Bruns stage B of unrolling. Presence of splitting pseudostem base caused by Fusarium wilt - estimation Visual assessment. Population density Number of Radopholus similis in roots Number of Radopholus
similis per fresh root weight Visual counting method Number of weevil larvae - counting Focus group discussion with male respondents, to discuss selection traits of genotype that was ranked third
lowest in preference ranking exercise Focus group discussion for third lowest ranked traits with male respondents parthenocarpy Percentage of dead roots Percentage of dead roots The percentage of all
roots in a standard-size excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant that are dead. Nematode egg-laying females extent The extent of egg-laying female root knot
nematodes, observed internally as pit-like structures on the roots. Extent of root knot nematodes The overall estimation of the bunch quality on a mat Overall bunch quality Quality goodbunch
Description of traits considered important in selection of second best genotype by respondents Selection criteria for genotype ranked second highest by respondents Genotype selection traits Boiled
plantain Chewiness Boiled plantain Chewiness BoiPlantnChew Energy or number of chews necessary to chew the banana to make it ready to be swallowed Record the date of event. Date of data collection -
estimation Pseudostem height - estimation Recorded from the base of the pseudostem to the emerging point of the peduncle. Record a description of what made the plant recogniseable as an off-type.
Off-type description - estimation Weekly leaf emission rate A rate that expresses the number of leaves that have emerged from the pseudostem in a week. LER Weekly leaf emission rate Main colour of
the tip of the compound tepal (lobe) Lobe colour of tip of compound tepal Lobe colour at tip of the tepal Measure the distance from the collar, or from the pseudostem base at the ground if the collar
is not visible, to under the neck of the curved peduncle, using a measuring pole or sliding ruler. Plant height - neck of peduncle - measurement Number of nematodes (Pratylenchus goodeyi) per unit
fresh root weight - method Count from 10 g of chopped roots from the tested banana plant sample dwarfism open answer leaves/week moisture scale 0-10 margin behaviour scale (1 to 4) Count how many
fingers are in the bunch. Number of fingers in bunch - counting Combined percentage with changes in new leaves Percentage of plants with changes in new leaves caused by Fusarium wilt Percentage of
plants with changes in new leaves, caused by Fusarium wilt. Meredith and Lawrence (1969) - condensed Number of adult weevils - counting Visual counting method of Weevils per trap (the trap should be
minimum 30 cm long) Date and time of start of data collection The date and time of the start of data collection on a plant. datestarttimedatacoll Start date and time of data collection Count only
fully developed fruit. If there is an even number of hands, there will be two middle hands. Count the middle hand that developed first. Number of fruits on mid-hand of the bunch - counting
Smoothness-mouth feel Fruit smashed pulp smoothness-mouth feel The smoothness-mouth feel of the smashed pulp of a mature fruit pestdiseasesampledescription Pest and/or disease assessment and/or
sample collection description Pest/disease assessment/sample collection description A description of the pest and/or disease assessment and/or sample collection being done at the time of data
collection. Collect all roots from a standard-size excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant. Take only roots from the selected plant. Divide the collected roots
into two categories: dead roots, functional roots. Dead roots are completely rotten or shrivelled whereas functional roots show at least some healthy tissue. From the functional roots, randomly
select five primary roots that are at least 10 cm, and reduce their length to 10 cm. Determine the presence/absence of root knot nematodes, observed externally as galls on the roots. Nematode root
knot galling presence - estimation Time from planting to shooting plant2shoot Time from planting to shooting The time elapsed from planting to shooting (when the inflorescence emerges from the
pseudostem and is still in an erect position). Matooke Uniformity of color of the surface of the sample Matooke homogeneity of surface colour Surface color homogeneity MatSurfHomColV Focus group
discussion for second ranked traits with female respondents Focus group discussion with female respondents, to discuss selection traits of genotype that was ranked second highest in preference
ranking exercise 1 to 5 acceptability score Banana weevil larvae body weight The weight of a banana weevil larvae body. Body weight Plant water loss waterLossPlant The amount of water loss through
the plant over time Volume Presence of petiole collapse caused by Fusarium wilt - estimation Visual assessment. Date of first youngest leaf spotted The date a youngest leaf spotted is first observed
on the plant, caused by black leaf streak disease. Date of first youngest leaf spotted Measure the length of a finger that is at the ready to eat stage, along the external (dorsal) arc, excluding the
pedicel and the fruit tip, using a tape measure. Associate the data with the Hand rank and the Number of fingers measured. The recommendation is to collect this data from six fingers in total - three
fingers in the middle of the outer whorl from the second hand and from the second-most distal hand. Finger external length - measurement Number of Helicotylenchus multicinctus per soil sample
Population density Number of Helicotylenchus multicinctus in soil Percentage of plants with a splitting pseudostem base caused by Fusarium wilt - computation Calculate as: the sum of the number of
values for Presence of splitting pseudostem base caused by Fusarium wilt scored as Present, divided by the Number of plants planted, multiplied by 100, e.g. 24 / 300 * 100 = 8%. Percentage damage to
lower cross-section outer corm caused by weevils The amount of damage to the outer corm (cortex) lower cross-section, assessed by percentage of tunnels caused by weevils. Damage percent X-lower-outer
Male rachis type Behaviour of male rachis - truncated or present. Truncated means there is no bract scar below the last hand. Present means there is a degenerated or persistent male bud Presence
Number of leaves with black leaf streak disease severity grade 3 The number of leaves on a plant with a black leaf streak disease severity grade of 3. Number of leaves with a black leaf streak
disease severity grade of 3 Pseudostem colour - estimation Detach the outermost sheath from the pseudostem (the sheath should not be too dry). Record the overall impression of colour of the exposed
surface of the underlying pseudostem. Note that this main colour should cover more than 75% of the underlying pseudostem surface. Use colour chart A and observe out of direct sunlight. Take a part of
the sample between fingers and evaluate how hard the sample is Hardness in hand method Average lateral diameter Average finger lateral diameter The average lateral (side to side) diameter of a
finger. avefingerlatdiam Collect all roots from a standard-size excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant. Take only roots from the selected plant. Divide the
collected roots into two categories: dead roots, functional roots. Dead roots are completely rotten or shrivelled whereas functional roots show at least some healthy tissue. Count the number of
functional roots. Number of functional roots - counting The height of the pseudostem of the tallest sucker within the same mat as the plant (the sucker that will become the next plant with a bunch,
once the bunch from the current plant is harvested) suckerheight Tallest sucker height Height Blotches colour - estimation Observe visually the colour of the blotches on the upper leaf sheath.
Average body weight The average weight of a banana weevil larvae body. Banana weevil larvae average body weight bract behaviour scale (1 to 2) Harvest density - computation Calculate as: the Plant
density multiplied by the Harvest proportion, e.g. 1,667 * 0.75 = 1,250. Measurement under microscope Weevil larvae body weight - measurement Ratio - computation Ratio of pseudostem dry mass versus
plant dry mass The average length of a weevil larvae body, from the head to the tip of the abdomen. Banana weevil larvae average body length Average body length margin behaviour scale (1 to 5)
Behaviour of the petiole margins - winged or not winged. Observed where the petiole and pseudostem meet at shooting. Margin is the part of the petiole that can be bent outwards/inwards. Margins clasping or not
clasping Petiole margins clasping Aroma of the local matooke. Matooke aroma matooke Matooke-like aroma MatAroMatA Matooke A leaf weight trait which is the fresh weight of a leaf Fresh weight Leaf
fresh weight leafFreshMass Main colour of the first external unlifted bract internal face Colour of the internal face Colour of bract internal face Rank of marked youngest leaf - counting Mark the
youngest completely unrolled leaf and record its rank, which is always 1. Associate the data with the Date of data collection. Number of nematodes (Helicotylenchus multicinctus) per unit fresh root
weight - method Count from 100 ml of soil sample Length of the internal arc of a finger ** subsume Fruit length Length [x at maturity] Presence absence 0 The texture of the unsmashed pulp of a mature
fruit Fruit unsmashed pulp texture Texture Index of non-spotted leaves Index of non-spotted leaves INSL An index to express the proportion of standing leaves without the typical late-stage symptoms
of black leaf streak disease, i.e. a black spot with a necrotic centre. This index provides an estimation of available photosynthetic leaf area prior to fruit filling and is a measure of resistance.
It also corrects for the difference in the number of leaves produced by different types of bananas. Pulp colour fruitpulpcolour The colour of the fruit pulp. Fruit pulp colour Open question Selection
criteria for genotype ranked third lowest by respondents Genotype selection traits Description of traits considered important in selection of the third least-preferred genotype by respondents Date of planting -
estimation Record the date of event. Presence/absence of changes CNL Presence of changes in new leaves caused by Fusarium wilt The presence/absence of changes in new leaves - irregular pale margins,
narrowing of lamina, burning plus ripping of lamina, lamina becoming more erect, caused by Fusarium wilt. No. per 100ml Main colour of the backside of the compound tepal Main colour Compound tepal
main colour finger The number of plants that are dead, in a defined area. deathdens Death density Combined density of plants that are dead Percentage damage to upper cross-section outer corm caused
by weevils The amount of damage to the outer corm (cortex) upper cross-section, assessed by percentage of tunnels caused by weevils. Damage percent X-upper-outer Number of Rotylenchulus spp.in soil
Number of Rotylenchulus spp.per soil sample Population density Put a part of the sample in the mouth, chew and evaluate the quantity of water within the sample. Moisture in mouth method Number of
leaves with black leaf streak disease severity grade 5 The number of leaves on a plant with a black leaf streak disease severity grade of 5. Number of leaves with a black leaf streak disease severity
grade of 5 Length - measurement Manual height measurement Damage to lower inner corm weevils - measurement Cut a transverse cross section of the corm at 10 cm below the collar (lower cross-section).
Score weevil damage (galleries) as percentage damage on the side of the cross-section that was facing the ground, downwards. Record the rank (order) of the oldest standing leaf (with an erect
petiole), counting the rank by starting with the youngest completely unrolled leaf as 1 and moving downwards, associating the data with the Date of data collection. Rank of oldest standing leaf -
counting Index of black leaf streak disease severity Leaf area - measurement Top view image Winged petiole margin - estimation Observations on the margins and petiole wings should be made where the
petiole and pseudostem meet at shooting. Margin is the part of the petiole that can be bent outwards/inwards. [x Observe at flowering time.] peeper sucker Visual assessment. Presence of petiole
collapse caused by Fusarium wilt - estimation Plant height - petiole of two youngest leaves - measurement Measure the distance from the collar, or from the pseudostem base at the ground if the collar
is not visible, to the intersection of the petioles of the two youngest leaves (leaf ranks 1 and 2), using a measuring pole or sliding ruler. Calculate as: the time elapsed between the Date of
planting and the Date of harvest, e.g. 24/12/2014 - 24/01/2014 = 334 days. Plant crop cycle - computation Visual assessment. Presence of changes in new leaves caused by Fusarium wilt - estimation
Number of Pratylenchus spp. per soil sample Number of Pratylenchus spp. in soil Population density Date of shooting - estimation Record the date of event. Visually observe the bract imbrication of
the apex of the male bud and choose the option that is most similar, from the 3 photos of descriptor 6.5.3 in the reference material. Bract imbrication at apex of male bud - estimation Stickiness in
hand Matooke Matooke stickiness MatStickT mechanical textural attribute relating to the force required to remove material that sticks to the mouth Percentage of plants with changes in new leaves
caused by Fusarium wilt - computation Calculate as: the sum of the number of values for Presence of changes in new leaves caused by Fusarium wilt scored as Present, divided by the Number of plants
planted, multiplied by 100, e.g. 67 / 300 * 100 = 22%. plantcircumf75cm The circumference of the pseudostem of the plant at 75 cm from the collar. Circumference 75 cm above the ground Plant
circumference 75 cm from the collar Body length The length of a banana weevil adult body, from the head to the tip of the abdomen. Banana weevil adult body length Calculate the ratio w/y, i.e. the
broadest width of the male bud to the total length of the male bud. Do not measure the dimensions along the bud but rather on a projection/outline of the bud (e.g. trace the outline of the bud on
paper) and use ruler or measuring tape. Male bud shape - computation A description of the state of the plant. statedescription State description Plant state description Annual crop cycle proportion
Proportion of crop cycle that takes place in one year anncropcycprop The proportion of the crop cycle that takes place in one year. Matooke astringency Astringency Matooke complex sensation,
accompanied by shrinking, drawing or puckering of the skin or mucosal surface in the mouth, produced by substances such as kaki tannins or sloe tannins MatAstr Average finger peel thickness -
computation Calculate as: the sum of the Finger peel thickness measurements, divided by the number of those measurements, e.g. (2 + 2 + 2 + 3 + 2 + 3) / 6 = 2.3. The overall estimation of fruit
pendulance on a bunch pendulantfruit Overall fruit pendulance quality Finger pendulance quality Bract apex shape Apex shape Shape of the first external unlifted bract of the male bud Count from 100
ml of soil sample Number of nematodes (Pratylenchus goodeyi) per unit fresh root weight - method Behaviour Margin behaviour on petiole canal of third leaf Margin behaviour of the petiole canal of the
third leaf before bunch. BD colour chart A - 2016 Colour Anther colour Main colour of the face of anther opposite to dehiscence Banana weevil adult average head capsule width Average head capsule
width The average width of a banana weevil adult head. Visually observe the male rachis position (the part between the last hand and the male bud) and choose the option that is most similar, from the
5 schematic drawings of descriptor 6.4.12 in the reference material. Male rachis position - estimation Circumference 20 cm above the ground plantcircumf20cm The circumference of the pseudostem of the
plant at 20 cm from the collar. Plant circumference 20 cm from the collar Colour line along edge of petiole margin Presence Presence or absence of colour line along edge of petiole margin hardness
scale 0-10 rwcPlant The relative water content in the whole plant Content Plant relative water content The percentage of plants with any or multiple of the external disease symptoms - yellowing
leaves, splitting of pseudostem base, changes in new leaves, petiole collapse, caused by Fusarium wilt. Percentage of plants with external disease symptoms caused by Fusarium wilt Combined percentage
with external symptoms of Fusarium wilt Cut the hands off the rachis and weigh the rachis, using scales. Rachis weight - measurement Number of maiden suckers Number of maiden suckers nummaidensuckers
The number of maiden suckers (fully grown suckers with foliage leaves) in the mat. Index of non-spotted leaves Calculate as: 100 multiplied by (Youngest leaf spotted, minus 1), divided by Number of
standing leaves, e.g. 100 * (6 - 1) / 12 = 42. Index of non-spotted leaves - computation The circumference of the pseudostem of the plant at 100 cm from the collar. Plant circumference 100 cm from
the collar Circumference 1 m above the ground plantcircumf100cm The number of fingers in the second hand. Number of fingers numfingers2ndhand Number of fingers in 2nd hand % Collect all roots from a
standard-size excavation of 20 x 20 x 20 cm extending outwards from the corm of the plant. Take only roots from the selected plant, do not include roots from adjacent plants. Carefully wash the soil
from the roots with tap water. Cut the roots into 10 cm long pieces and dry with paper tissue. Take a subsample of 15 g, add 100 ml distilled water and store the roots at 4 degrees Celsius. Put the
roots in 100 ml of distilled water in a kitchen blender. Macerate the roots 3 times for 10 secs (separated by 5 sec intervals). Pour the macerated suspension through 250, 106 and 40 micro-metre
sieves and rinse the sieves with tap water. Using distilled water, collect in a beaker the nematodes from the 40 micro-metre sieve. Using distilled water, dilute to 200 ml the nematode suspension in
a graduated cylinder. Blow air through the nematode suspension with a pipette (to homogenise the suspension). Take a subsample of 6 ml (counting dish) or 2 ml (counting slide). Count the nematodes in
the counting dish (stereo microscope) or in the counting slide (light microscope). Calculate the final nematode population per unit of fresh root weight. Root nematode density - computation leaf rank
Growth - computation Leaf area increase between timepoints (refer to experimental metadata) per number of new leaves formed between timepoints (refer to experimental metadata) Record a description of
the damage to the plant, e.g. leaves ripped, pseudostem snapped, plant toppled, etc. Damage description - estimation BLS Black leaf streak disease severity An indication of the extent of damage on an
individual leaf caused by black leaf streak disease. Black leaf streak disease severity System water loss Volume The amount of water loss in the system over time waterLossSyst The date of the
appearance on the plant of any of the external disease symptoms - yellowing leaves, splitting pseudostem base, changes in new leaves, petiole collapse, caused by Fusarium wilt. Date of first external
disease symptoms caused by Fusarium wilt Date of appearance of external symptoms caused by Fusarium wilt Presence of yellowing leaves caused by Fusarium wilt - estimation Visual assessment. Ratoon
crop cycle - computation Calculate as: the time elapsed between the Date of harvest of the bunch of the plant from the preceding crop cycle and the Date of harvest of the bunch of the plant from the
current crop cycle, e.g. 24/11/2015 - 24/12/2014 = 335 days. Calculate as: the sum of the Number of fingers in 3rd hand and the Number of fingers in 2nd-most distal hand, divided by 2, and multiplied
by the Number of hands in bunch, e.g. (16 + 12) / 2 * 8 = 112. Number of fingers in bunch - estimation Average finger weight - computation Calculate as: the sum of the Finger weight measurements,
divided by the number of those measurements, e.g. (150 + 146 + 147 + 152 + 140 + 146) / 6 = 147. Date of damage datedamage The date damage to the plant is first observed. Date of damage Number of
hands measured - counting Count how many hands are used to obtain a measurement. 2 pt scale Calculate as: number of square metres in the area unit of choice, divided by the Plant spacing, e.g. 10,000
square metres in 1 hectare / 6 = 1,667 plants/ha. Plant density - computation Weighing Fresh weight - measurement Calculation from count Nematode multiplication from single root inoculation -
computation Finger feel - estimation Finger feel assessment of smashed and unsmashed fruit pulp Count how many fingers are in the outer whorl of the second-most distal hand. Number of fingers in
outer whorl of 2nd-most distal hand - counting Selection criteria for genotype ranked third highest by respondents Genotype selection traits Description of traits considered important in selection of
third best genotype by respondents Body length The length of a weevil larvae body, from the weevil larvae head to the tip of the abdomen. Banana weevil larvae body length leafDryMass Leaf dry weight
The dry weight of a leaf Dry weight Dwarfism dwarfism The presence or absence of dwarfism on a mat Dwarfism of plants Weigh a finger, using scales. Associate the data with the Hand rank. The
recommendation is to collect this data from six fingers in total - three fingers in the middle of the outer whorl from the second hand and from the second-most distal hand. Finger weight -
measurement The color of the unsmashed pulp of a mature fruit Fruit unsmashed pulp colour Colour Visual estimation of bunch quality, for Early Evaluation Trial stage only. Bunch quality - estimation
Number of nematodes (unclassified spp.) per unit fresh root weight - method Count from 100 ml of soil sample MatSour Matooke gustatory complex sensation, generally due to presence of organic acids
Matooke sourness Sourness fruit pedicel scale (1 to 3) The color of the smashed pulp of a mature fruit Colour Fruit smashed pulp colour Firmness Sensory Texture method Put in the mouth a piece of
banana and Evaluate the force necessary to obtain the deformation of the product between the teeth during the first compression sword sucker Measure from the scar on the rachis until the beginning of
the fruit. Record on the inner fruit in the middle of the mid-hand of the bunch. Tip: use string to measure or trace outline of fruit on paper. Record the range. Fruit pedicel length - range -
measurement Rank of oldest standing leaf Rank of oldest standing leaf The rank (order) of the oldest (most-distal) standing leaf (with an erect petiole). Number of functional leaves - counting Count
how many functional leaves (leaves that have 50% or more of their surface as green, healthy, photosynthetic tissue) are on the plant. Mechanical property linked to cohesion and the presence of fine
particles in the product during chewing Boiled plantain Mealiness BoiPlantnMeal Boiled plantain Mealiness Egg-laying female root knot nematode blotch color scale (1 to 4) fusion of pedicel scale (1
to 2) Calculate as: the sum of the number of values for Presence of petiole collapse caused by Fusarium wilt scored as Present, divided by the Number of plants planted, multiplied by 100, e.g. 84 /
300 * 100 = 28%. Vitamin C - measurement Count how many of all types of suckers are in the mat. Number of suckers - counting Total finger weight - measurement Cut the hands off from the rachis and
weigh all the hands together, using scales. Average potential annual yield The average potential annual yield, assuming that all plants are harvested, from the number of plants planted. potannyield
Average potential annual yield Calculate as: the Number of plants dead, divided by the Number of plants planted, e.g. 4 / 24 = 0.17. Death proportion - computation Leaf number - counting Count the
number of leaves Presence absence 1 smoothness scale 0-10 Number of Meloidogyne spp. in roots Population density Number of Meloidogyne spp. per fresh root weight wing presence/absence Individual
female respondents give the plant one of three possible scoring cards: Like, Don't know, Don't like. Men and women use different colors of scoring cards. Preference scoring scale female respondents
Annual crop cycle proportion - computation Calculate as: the length of a year (e.g. 365 days) divided by the Plant crop cycle or the Ratoon crop cycle, e.g. 365 / 334 = 1.09. The average weight of a
banana weevil adult body. Average body weight Banana weevil adult average body weight Number of fingers in 2nd hand - counting Count how many fingers are in the third hand. totLeafLength Length The
length of a leaf Total length of all leaves fruit relicts scale (1 to 4) Water content - computation (Plant Fresh Weight - Plant Dry Weight) / Plant Fresh Weight Main colour of the petiole margin.
Where the petiole and pseudostem meet at shooting. Margin is the part of the petiole that can be bent outwards/inwards Colour Petiole margin colour Banana pest disease Time from harvest to death -
computation Calculate as: the time elapsed between the Date of harvest of the bunch of the plant from the preceding crop cycle and the Date of death of the current plant, e.g. 24/03/2015 - 24/12/
2014 = 90 days. Behaviour of the last lifted bract - revolute or not revolute Behaviour before falling, whether revolute or not revolute Bract behaviour before falling Fruit unsmashed pulp taste The
taste of the unsmashed pulp of a mature fruit Taste The weight of a finger. fingerweight Weight Finger weight Black leaf streak disease development time - computation Calculate as: the time elapsed
between the Date of tagged leaf presenting black leaf streak disease symptoms at Foures stage 6 and the Date of tagged cigar leaf unrolling at stage B, e.g. 24/06/2014 - 24/03/2014 = 92 days. The
bunch weight produced by the plant over the course of the crop cycle, averaged across one year. plantannyield Annual yield Plant annual yield Finger circumference - measurement Measure the
circumference of a finger at its widest point, using a tape measure. Associate the data with the Hand rank. The recommendation is to collect this data from six fingers in total - three fingers in the
middle of the outer whorl from the second hand and from the second-most distal hand. Calculate as: multiply the distance of the spacing between the plants, both latitudinally and longitudinally, e.g.
2 m x 3 m = 6 m2. Plant spacing - computation Visually observe the fruit pulp colour and select the option in the colour chart that is most representative. The recommendation is to collect this data
from six fingers in total - three fingers in the middle of the outer whorl from the second hand and from the second-most distal hand. Fruit pulp colour - estimation astringency scale 0-10 Record a
description of the state of the plant. Plant state description - estimation Put a part of the sample in the mouth and evaluate the intensity of the sourness Sourness method Matooke Pumpkin-like aroma
Aroma of pumpkin. If possible, specify whether the taste is like boiled or fried pumpkin. Matooke aroma pumpkin MatAroPumpA Black leaf streak disease evolution time - computation Calculate as: the time
elapsed between the Date of tagged leaf presenting black leaf streak disease symptoms at Foures stage 6 and the Date of tagged leaf presenting black leaf streak disease symptoms at Foures stage 1,
e.g. 24/06/2014 - 24/04/2014 = 61 days. Time from harvest to death harvest2death The time elapsed from the harvest of the bunch of the plant from the preceding crop cycle to the death of the plant
from the current crop cycle. Time from harvest to death The smoothness-mouth feel of the unsmashed pulp of a mature fruit Fruit unsmashed pulp smoothness-mouth feel Smoothness-mouth feel Dry weight -
measurement 80°C, for 14 days, or longer until dry Average circumference avefingercircumf Average finger circumference The average circumference of the fingers. crop cycle duration/year Visual
observation Peripheral damage - measurement Subjective visual assessment of smashed and unsmashed fruit pulp Visual appearance - estimation Remains of flower relicts at fruit apex - estimation
Visually observe the remains, or not, of flower relicts at the finger apex, and choose the option that is most similar, from the 4 photos of descriptor 6.7.7 in the reference materials. The presence/
absence of a splitting pseudostem base, caused by Fusarium wilt. Presence of splitting pseudostem base caused by Fusarium wilt SP Presence/absence of splitting plants not harvested/plants planted
Count how many fingers are in the third hand. Number of fingers in 3rd hand - counting Count how many sword suckers are in the mat. Number of sword suckers - counting Count how many water suckers are
in the mat. Number of water suckers - counting Rank of marked youngest leaf rankyoungleaf The rank (position) of the youngest completely unrolled leaf. Rank Put a part of the sample in the mouth and
evaluate the intensity of astringency impression due to the sample Astringency method Visual assessment. Rating of internal symptoms of Fusarium wilt in the greenhouse - estimation plants/unit of
area Preference number scale Damage percent X-lower-inner The amount of damage to the inner corm (central cylinder) lower cross-section, assessed by percentage of tunnels caused by weevils.
Percentage damage to lower cross-section inner corm caused by weevils Date of recognising off-type - estimation Record the date of event. Matooke-like Aroma method Put a part of the product in the mouth and by retro-olfaction evaluate the presence and the intensity of matooke-like aroma Calculate as: the sum of the Finger circumference measurements, divided by the number of those measurements, e.g. (7.0 +
8.0 + 9.0 + 10.0 + 7.0 + 9.5) / 6 = 8.4. Average finger circumference - computation Record the date of event. Date of tagged leaf presenting black leaf streak disease symptoms at Foures stage 6 -
estimation 80°C, for 14 days, or longer until dry Dry weight - measurement water suckers The longest diagonal of a fitted ellipse Leaf length - measurement bud size scale (1 to 3) The ratio of dry
mass of belowground (root) to aboveground (shoot) biomass rootShootDry Ratio root to shoot Ratio numfingersouterwhorl2nddistalhand The number of fingers in the outer whorl of the second-most distal
hand Number of fingers in outer whorl Number of fingers in outer whorl of 2nd-most distal hand nematode lifecycle stages Calculate as: the time elapsed between the Date of planting and the Date of
first youngest leaf spotted, e.g. 24/06/2014 - 24/01/2014 = 151 days. Time from planting to first youngest spotted leaf - computation Time from flowering to harvest The time elapsed from flowering
(when the last bracts fall from the most distal hand to display the female flowers) to the harvest of the bunch. flower2harvest Time from flowering to harvest Root nematode density The number of
nematode individuals per unit of fresh root weight. Number of nematodes per unit of fresh weight hands The time elapsed from planting to the harvest of the bunch. Time from planting to harvest Plant
crop cycle plant2harvest Water content - computation (Pseudostem Fresh Weight - Pseudostem Dry Weight) / Pseudostem Fresh Weight Count from 100 ml of soil sample Number of nematodes (Hoplolaimus
spp.) per unit fresh root weight - method Ratio Male bud shoulder Male bud shoulder description Date of first external disease symptoms caused by Fusarium wilt - estimation Record the date of event.
Put a piece of banana in your mouth and assess the presence of mealiness particles during chewing Mealiness Sensory Texture method Measure from the scar on the rachis until the beginning of the
fruit. Record on the inner fruit in the middle of the mid-hand of the bunch. Tip: use string to measure or trace outline of fruit on paper. Record the exact value. Fruit pedicel length - exact value
- measurement day Number of banana weevil adults per trap Number of banana weevil adults How many banana weevil adults caught per trap. Leaf area ratio - computation Computation of top leaf area with
total (loose) leaf area kg The percentage of the root cortex of five functional roots approximately 10 cm long that is necrotic due to nematode damage. Root necrosis caused by nematodes Percentage
necrotic Record the date of event. Date of damage - estimation Fresh weight A root weight trait which is the fresh weight of a root rootFreshMass Root fresh weight The taste of the smashed pulp of a
mature fruit Fruit smashed pulp taste Taste Water loss - measurement Weighing of water loss through plant between timepoints (refer to experimental metadata) Nasal smell - estimation Nasal smell
assessment of smashed and unsmashed fruit pulp aromas Subjective taste assessment of smashed and unsmashed fruit pulp Taste - estimation numfingers3rdhand Number of fingers The number of fingers in
the third hand. Number of fingers in 3rd hand 5 pt corm discolouration scale
Excel Formula: MOD Function for Finding Remainder
In this article, we will explore how to use the MOD function in Excel to find the remainder when one number is divided by another. The MOD function returns the remainder of a division operation, so you can determine it without performing the division manually, which is particularly helpful when working with large datasets or complex formulas. The MOD function takes two arguments: the number to be divided and the divisor. To use it, enter the formula =MOD(number, divisor) into the desired cell. For example, to find the remainder when dividing the value in cell Q442 by 5, you would use the formula =MOD(Q442, 5), which returns the remainder as the result. Let's dive into the details of the MOD function and explore some examples to better understand its functionality.
An Excel formula
Formula Explanation
The formula uses the MOD function to find the remainder when dividing the value in cell Q442 by 5.
Step-by-step explanation
1. The MOD function takes two arguments: the number to be divided (Q442) and the divisor (5).
2. The MOD function divides the number by the divisor and returns the remainder.
3. In this case, the formula =MOD(Q442, 5) will return the remainder when the value in cell Q442 is divided by 5.
For example, if the value in cell Q442 is 94, the formula =MOD(Q442, 5) would return the remainder of 4, because 94 divided by 5 equals 18 with a remainder of 4.
Similarly, if the value in cell Q442 is 100, the formula =MOD(Q442, 5) would return the remainder of 0, because 100 divided by 5 equals 20 with no remainder.
The MOD function is useful in situations where you need to perform calculations based on the remainder of a division operation.
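To sanity-check these examples outside of Excel, here is a small sketch in Python; `excel_mod` is our own helper name, mirroring the definition MOD(n, d) = n - d*INT(n/d) from Excel's documentation, under which the result takes the sign of the divisor (the same convention as Python's % operator):

```python
# Excel defines MOD(n, d) = n - d * INT(n / d), where INT rounds toward
# negative infinity, so the result takes the sign of the divisor --
# the same convention as Python's % operator.
def excel_mod(number, divisor):
    return number - divisor * (number // divisor)

print(excel_mod(94, 5))    # 4, as in the =MOD(Q442, 5) example when Q442 is 94
print(excel_mod(100, 5))   # 0, as in the second example
print(excel_mod(-3, 5))    # 2, matching Excel's =MOD(-3, 5)
```

Note that this sign convention differs from the remainder operator in languages like C or Java, where the result takes the sign of the dividend instead.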
Dotnet Tutorial
Comb sort program in C#
Comb sort is a comparison-based sorting algorithm that improves upon the bubble sort algorithm. It works by repeatedly comparing and swapping adjacent elements with a fixed gap size and gradually
reducing the gap until it reaches 1. Comb sort has a worst-case time complexity of O(n^2), but with the right gap sequence, it can have better average-case performance than bubble sort.
Here's an explanation of the comb sort algorithm step by step:
1. Choose a shrink factor and gap: The shrink factor determines the amount by which the gap is reduced on each iteration. Common shrink factors include 1.3 and 1.4. The initial gap size is typically
set to the length of the array.
2. Compare and swap elements with a fixed gap: Starting from the first element, compare it with the element at a fixed gap distance. If they are in the wrong order, swap them. Repeat this process
for all elements in the array, using the same fixed gap.
3. Reduce the gap: After comparing and swapping elements with a fixed gap, reduce the gap size by dividing it by the shrink factor. If the gap falls below 1, set it to 1.
4. Repeat until sorted: Repeat steps 2 and 3 until the gap reaches 1 and no more swaps occur. The passes with a gap of 1 are essentially a regular bubble sort, which ensures that any remaining small inversions are sorted.
Here's an example implementation of comb sort in C#:
using System;

public class CombSort
{
    public static void Main()
    {
        int[] arr = { 64, 25, 12, 22, 11 };
        Console.WriteLine("Original array:");
        PrintArray(arr);
        Sort(arr);
        Console.WriteLine("Sorted array:");
        PrintArray(arr);
    }

    public static void Sort(int[] arr)
    {
        int n = arr.Length;
        int gap = n;
        double shrinkFactor = 1.3;
        bool swapped = true;

        while (gap > 1 || swapped)
        {
            // Update the gap size
            gap = (int)(gap / shrinkFactor);
            if (gap < 1)
                gap = 1;

            swapped = false;

            // Compare and swap elements with a fixed gap
            for (int i = 0; i + gap < n; i++)
            {
                if (arr[i] > arr[i + gap])
                {
                    Swap(arr, i, i + gap);
                    swapped = true;
                }
            }
        }
    }

    public static void Swap(int[] arr, int i, int j)
    {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    public static void PrintArray(int[] arr)
    {
        foreach (int num in arr)
        {
            Console.Write(num + " ");
        }
        Console.WriteLine();
    }
}
In this example, the Sort method implements the comb sort algorithm. The Swap method is used to swap elements in the array. The Main method initializes an array, calls the Sort method to sort the
array, and then prints both the original and sorted arrays using the PrintArray method.
When you run the program, it will output:
Original array:
64 25 12 22 11
Sorted array:
11 12 22 25 64
The program demonstrates the step-by-step execution of the comb sort algorithm on the given array and outputs the sorted array.
Comb sort improves upon bubble sort by allowing elements to move more quickly towards their correct positions. The choice of shrink factor affects the algorithm's performance, and a smaller shrink
factor typically results in better average-case performance. Comb sort is not as widely used as other sorting algorithms like quicksort or mergesort but can be a good alternative to bubble sort in
certain scenarios.
Using Convolutional Neural Network Filters to Measure Left-Right Mirror Symmetry in Images
Experimental Aesthetics Group, Institute of Anatomy, University of Jena School of Medicine, Jena University Hospital, 07743 Jena, Germany
Author to whom correspondence should be addressed.
Submission received: 3 August 2016 / Revised: 1 November 2016 / Accepted: 28 November 2016 / Published: 1 December 2016
We propose a method for measuring symmetry in images by using filter responses from Convolutional Neural Networks (CNNs). The aim of the method is to model human perception of left/right symmetry as
closely as possible. Using the Convolutional Neural Network (CNN) approach has two main advantages: First, CNN filter responses closely match the responses of neurons in the human visual system; they
take information on color, edges and texture into account simultaneously. Second, we can measure higher-order symmetry, which relies not only on color, edges and texture, but also on the shapes and
objects that are depicted in images. We validated our algorithm on a dataset of 300 music album covers, which were rated according to their symmetry by 20 human observers, and compared results with
those from a previously proposed method. With our method, human perception of symmetry can be predicted with high accuracy. Moreover, we demonstrate that the inclusion of features from higher CNN
layers, which encode more abstract image content, increases the performance further. In conclusion, we introduce a model of left/right symmetry that closely models human perception of symmetry in CD
album covers.
1. Introduction
Symmetry is ubiquitous. It can be found in formations that have emerged in evolution, such as bird wings and bugs, and in physical structures like crystals, as well as in man-made objects like cars,
buildings, or art. Symmetry as a concept refers to any manner, in which part of a pattern can be mapped onto another part of itself [
]. While this can be done by translation (translational symmetry) or rotation (rotational symmetry), reflectional symmetry, in which a part of a pattern is mirrored along an axis, is special because
it is highly salient for human observers [
]. Reflectional symmetry has been linked to attractiveness in faces [
] and it is thought to serve as an indicator of normal development, general health or the ability to withstand stress [
]. Symmetry was linked to beauty not only in natural stimuli, but also in abstract patterns [
]. Together, these findings suggest that the perception of symmetry is a general mechanism that plays an important role in our aesthetic judgment.
In mathematics, symmetry is a clean, formal concept of group theory. In contrast, symmetry detection in computer vision is faced with real world data, which can be noisy, ambiguous and even
distorted. Nevertheless, several algorithms to detect symmetry in real world data have been proposed [
]. Much work has been done regarding the detection of axes of symmetry in an image. Continuous symmetry, as described by [
], measures the degree to which symmetry is present in a given shape (defined by a set of points). In the present article, we introduce a novel measure of continuous symmetry, which approximates the
perception of natural images by human observers. The aim of this measure is to indicate to which degree reflectional symmetry is present in an arbitrary image. While this task is easily accomplished
by humans, it is much harder for computers. One possibility to assign a measure of continuous symmetry to images is to compare luminance values along an axis of the image, as proposed in [
]. However, this approach differs substantially from how humans perceive real-world scenes. Instead of comparing pixels, humans detect edges and group them into shapes, textures and, finally, into
objects. Such grouping contributes to symmetry perception by humans. Shaker and Monadjemi [
] proposed a symmetry measure that uses edge information in gray-scale images. Although this approach goes beyond the restricted usage of luminance intensity information, it does not take into
account color and shapes. In the present article, we propose a novel algorithm that detects symmetry in a manner that is closer to how humans perceive symmetry. To this aim, we use filter responses
from Convolutional Neural Networks (CNNs), which have gained huge popularity among computer vision researchers in recent years. The novelty of our measure is twofold: First, by using CNN filter
responses, we take color and spatial frequency information into account, as done by the human visual system, namely by encoding color-opponent edges as well as color blobs and spatial frequency
information [
]. Second, we show that features from higher CNN layers, where more abstract image content is represented [
], can improve the prediction of symmetry judgements of human observers even further.
Although CNNs were first proposed more than two decades ago [
], they have become state-of-the-art technology for many computer vision tasks only recently, due to progress in computing technology, such as the introduction of graphic cards for calculations, and
the availability of huge amounts of data for training. Currently, CNNs are being applied to object recognition tasks [
], image description [
], and texture synthesis [
], and they have conquered other areas like speech recognition [
]. CNNs learn a hierarchy of different filters that are applied to an input image, enabling them to extract useful information. The training algorithm works in a supervised manner, which means that,
given an input image, the output is compared to the target output so that an error gradient can be computed. Using backpropagation, parameters of the model are changed so that the error is minimized
and the network gets better at solving the task at hand.
In our study, we use filter responses from CNNs that were trained on millions of images of objects [
] for measuring continuous symmetry in images. To validate our results, we collected a dataset of 300 different CD album covers and asked human observers to rate them according to their left/right
symmetry. CD album covers are especially suited for this task because they offer a wide variety and different degrees of symmetry. For each of the 300 images, the subjective symmetry ratings were
compared to the symmetry measure obtained by our algorithm.
2. Materials and Methods
2.1. Measuring Continuous Symmetry Using Filter Responses from Convolutional Neural Networks
As mentioned above, CNNs learn a hierarchy of different filters that are applied to an input image. Filters reside on layers, where higher layers tend to extract more and more abstract features from
an image, compared to the previous ones. Different layer types have been proposed and are investigated in ongoing research. The model we use in our experiments, referred to as
, was proposed by [
] and is provided as a part of the Caffe Library [
]. It consists of a total of 8 building blocks, each consisting of one or more different layer types: convolutional layers, in which the output of a previous layer is convolved with a set of different filters; pooling layers, in which a subsampling of the previous layer is performed by taking the maximum over equally sized subregions; and normalization layers, which perform a local brightness normalization. Several fully-connected layers that are stacked on top of a network learn to map extracted features onto class labels. Interestingly, the filters on the first layer tend to develop oriented Gabor-like edge detectors, akin to those found in human vision, when trained on huge datasets containing millions of natural images [
] (see
Figure 1
). In our experiments, we drop the fully connected layers on top, which allows us to resize the input to have a dimension of
$512 × 512$
pixels. Using this modification of the
model [
], we propose an algorithm that measures continuous left/right mirror symmetry of natural images, as follows.
First, every image is fed to the network, which yields 96 filter response maps after the first convolutional layer, 256 after the second, 384 after the third and fourth, and 256 after the last
convolutional layer. Using these filter responses, we then build a histogram of the maximum responses of equally sized, non-overlapping subregions of these maps, i.e., we perform a max-pooling
operation over a grid of equally sized areas (patches). In the remainder of this article, a patch level of n refers to a tiling of the filter response maps into $n × n$ subregions.
Using convolutional layer $l \in \{1, 2, \ldots, 5\}$, this procedure provides us with a max-pooling map $I_l$, which has three dimensions: two positional parameters of the subimage, on which the max-pooling was performed, and one dimension that holds the maximum responses in that subimage for each filter. We then flip the image along the middle vertical axis and repeat the same procedure for the flipped version, which provides us with another five max-pooling maps $F_l$.
In order to measure the reflectional symmetry of an image, we measure the asymmetry of the max-pooling maps $I_l$ and $F_l$ by calculating how different the right and the left side of any given image are, using the following equation:

$A(I_l, F_l) = \frac{\sum_{x,y,f} \left| I_l(x,y,f) - F_l(x,y,f) \right|}{\sum_{x,y,f} \max\left( I_l(x,y,f), F_l(x,y,f) \right)}$

where $x$ and $y$ iterate over all subimages and $f$ iterates over all filters on layer $l$. Subtracting $A(I_l, F_l)$ from one yields our final measure of symmetry:

$S(I_l, F_l) = 1 - A(I_l, F_l)$
This measure is bounded between zero and one (for asymmetric images and for highly symmetric ones, respectively).
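The asymmetry and symmetry measures reduce to a few lines of array arithmetic. The sketch below is our own illustration (NumPy assumed), not the authors' implementation; `I` and `F` stand for the max-pooling maps of the original and the flipped image for one layer, and the filter responses are assumed non-negative (as after a ReLU), so the denominator is positive.

```python
import numpy as np

def asymmetry(I, F):
    """A(I_l, F_l): sum of absolute differences between the max-pooling map
    of the original image (I) and that of its horizontally flipped copy (F),
    normalized by the sum of element-wise maxima."""
    return float(np.abs(I - F).sum() / np.maximum(I, F).sum())

def symmetry(I, F):
    """S(I_l, F_l) = 1 - A(I_l, F_l): 1 for a perfectly mirror-symmetric
    image, values near 0 for a highly asymmetric one."""
    return 1.0 - asymmetry(I, F)
```

For a perfectly left/right symmetric image the two maps coincide, so `symmetry(I, I)` evaluates to 1.0.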
2.2. Image Dataset
In order to evaluate the algorithm proposed in
Section 2.1
, we used a dataset of 300 CD album covers that were collected from the internet in 2015 (kindly provided by Ms. Maria Grebenkina, University of Jena School of Medicine). Images were equally
distributed between three different music genres (classic music, pop music and metal music). All cover images had a resolution of
$500 × 500$
pixels, or were down-sampled to this size by bicubic interpolation if the original image had a higher resolution. A complete list of the 300 CD covers used in this study is provided in Supplementary
Table S1. The dataset is made available on request for scientific purposes (please contact author C.R.).
2.3. Rating Experiment
Images of the 300 CD album covers (see above) were rated for their symmetry. Twenty participants (21–62 years old; Mean = 36 years; 6 male), mostly medical students or employees of the basic medical
science department, participated in the experiment. All participants reported to have corrected-to-normal vision. Images were presented on a calibrated screen (EIZO ColorEdge CG241W, $1920 × 1200$
pixels resolution) on a black background. A chin rest ensured a constant viewing distance of 70 cm. The images extended $500 × 500$ pixels on the screen ($135 mm × 135 mm$, corresponding to $11 × 11$
degrees of visual angle).
The study design was conducted in line with the ethical guidelines of the Declaration of Helsinki on human participants in experiments. The Ethics Committee of Jena University Hospital approved the
procedure. Prior to participating in the study, all participants provided informed written consent on the procedure of the study. The participants were tested individually in front of the screen in a
shaded room with the windows covered by blinds. First, the experimenter gave the instructions for the experiment. The participant was asked to rate the presented image according to its left/right
symmetry. The participants started the experiment with a mouse click. Then, for the first trial, a fixation cross appeared on the screen for between 300 and 800 ms followed by the first image. The
question displayed on the screen below the image was “How symmetric is this image?” The participant rated the presented images on a continuous scale that was visualized as a white scoring bar on the
bottom of the screen. The extremes of the scale were labeled as “not symmetric” and “very symmetric”, respectively. Immediately after the response, the second trial with the next image was initiated,
and so on. Images were presented in random order. After each of 100 trials, participants were allowed to take a rest for as long as they wished. The final symmetry rating value for each cover was
defined as the median of the ratings of all 20 participants. To test for normality of the resulting data, D’Agostino and Pearson’s normality test [
] was used.
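D'Agostino and Pearson's omnibus test combines a skewness statistic and a kurtosis statistic. As a hedged illustration of its ingredients only (the full test additionally transforms both statistics into approximate z-scores and combines them into a chi-squared statistic), sample skewness and excess kurtosis can be computed as follows; this sketch is our own, not from the paper:

```python
def skewness(xs):
    """Sample skewness g1 = m3 / m2**1.5 (biased moment estimator)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def excess_kurtosis(xs):
    """Sample excess kurtosis g2 = m4 / m2**2 - 3 (biased moment estimator).

    Normal data give values near 0; flat distributions give negative values.
    """
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0
```

In practice one would call a library routine such as `scipy.stats.normaltest`, which implements the complete D'Agostino–Pearson procedure.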
3. Results
In order to validate the computer algorithm proposed in the present work, 20 participants rated the left/right symmetry of 300 covers from CD albums featuring pop music, metal music or classic music.
For 97 covers, the distribution of ratings was not normally distributed (
$p < 0.05$
). As a representative value for a cover, we thus decided to use the median of all ratings for this cover.
Figure 3
shows box-plot diagrams of the resulting subjective ratings for the three music genres. Median symmetry is highest for covers of metal music, intermediate for pop music, and lowest for covers of the
classic genre. The symmetry ratings for all three music genres span a wide range of values, including extreme values. Consequently, the cover images seem sufficiently diverse with respect to their
symmetry to serve as the ground truth in the validation of our model of left/right symmetry perception by human observers.
In order to maximize the correlation between the subjective ratings and the symmetry values calculated with our algorithm (see
Section 2.1
), we modified the following two parameters: First, we are free to choose which of the five convolutional layers of the CaffeNet model serves as a basis for the calculations. Second, we can decide how
many subregions to use in the max-pooling operation. We therefore tested all five layers of the model. In addition, for layers conv1 and conv2, we obtained results for patch levels 2–32 and, for the
upper layers, for patch levels 2–31 (note that the number of patches is restricted by the size of the response maps, which is
$31 × 31$
pixels for layers above conv2). We calculated our measure of symmetry for each of these parameter configurations (see
Figure 2
for examples) and compared results with the human ratings. Because the distribution of ratings for many covers was not normally distributed (
$p < 0.05$
, D’Agostino and Pearson’s normality test), Spearman’s rank coefficients were calculated.
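Because the ratings are not normally distributed, a rank (non-parametric) correlation is used. A minimal, self-contained sketch of Spearman's coefficient follows: it is the Pearson correlation of the ranks, with tied values assigned the average of their ranks. This is our own illustration; in practice one would call a library routine such as `scipy.stats.spearmanr`.

```python
def rank_with_ties(values):
    """1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    return pearson(rank_with_ties(x), rank_with_ties(y))
```

Any monotonically increasing relation, linear or not, yields a coefficient of 1, which is why this measure suits data whose relation to the ratings may be nonlinear.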
Figure 4
a plots the coefficients for the different model configurations. The correlation coefficients obtained for our model ranged from 0.64 (for convolutional layer 1 at patch level 2, i.e., 2 patches
squared) to 0.90 (for convolutional layer 5 with 6 patches squared). Additionally, we provide the RMSE of a linear fit (
Figure 4
b), a quadratic fit (
Figure 4
c) and a cubic fit of the distributions (
Figure 4
d) to better understand the relation between our measure and the subjective ratings. Resulting trends are similar to those of the correlation analysis (
Figure 4
a); the quadratic and cubic models have a lower RMSE than the linear model, which indicates they provide a better fit of the relation between ratings and our measure.
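The fit comparison can be reproduced with a few lines of NumPy. This sketch is our own (the paper does not give its fitting code), and the quadratic test data below are placeholders, not the study's ratings:

```python
import numpy as np

def fit_rmse(x, y, degree):
    """RMSE of a least-squares polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)        # least-squares coefficients
    residuals = y - np.polyval(coeffs, x)    # observed minus fitted values
    return float(np.sqrt(np.mean(residuals ** 2)))
```

A lower RMSE for degree 2 or 3 than for degree 1 indicates, as reported in the paper, that the relation between the two measures is not strictly linear.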
For comparison, we implemented the symmetry measure recently proposed by [
] and measured a correlation of 0.34 for their model. Thus, all configurations tested in our model outperformed the previously proposed method. We also tried to compare our results with the approach
described in [
]. However, due to missing details regarding the filtering process used in [
], we were not able to reproduce their results.
In our model, convolutional layer 1 performed worst with 2 patches squared, but results for this layer improved when more patches were used, i.e., when we used more, but smaller, max-pooling regions.
A plateau is reached at around 10 patches squared, with a maximum correlation of 0.80 peaking at 17 patches (Figure 4a).
Interestingly, the correlations between human ratings and our measure increase when higher layers of the network are used. For the second convolutional layer, the correlation peaks at 0.85 with 11
patches and then drops steadily as more patches are used, performing even worse than convolutional layer 1 at around 24 patches and above. The same can be observed for layers above layer 2 where the
correlations peak at smaller number of patches and then drop rapidly. Specifically, the peak is reached at around 8 patches squared for layer 3 and 4, and at 6 patches squared for the highest (fifth)
layer (see
Figure 4a). In
Figure 5
, median symmetry ratings for each image are plotted as a function of the values calculated for two model configurations with high correlations. For both configurations, the rated and calculated
values seem to correspond better at the extremes of the spectrum, i.e., for symmetry values closer to 0 or 1. In the mid-part of the spectrum, the two values correlated less well for the individual
images. To visualize whether similar differences can also be observed at the level of individual observers,
Figure 6
plots the standard deviation of all ratings for each cover over its median rating. The resulting plot shows an inverted U-shape, which means that the standard deviation of ratings is lower for
highly symmetric covers and highly asymmetric covers, i.e., at the extremes, compared to symmetry values at the mid-part of the rating spectrum. In conclusion, while people tended to agree whether an
image is highly symmetric or highly asymmetric, judgments show a higher deviation for images not falling into one of the extremes.
4. Discussion
In the present study, we introduce a novel computational measure of left/right symmetry that closely matches human perception, as exemplified for symmetry ratings of 300 CD album covers by human
participants. Using CNN filter responses in our measure has two main advantages. First, CNN filters of the first layer are thought to resemble edge detectors akin to those found in the human visual
system (Gabor-like filters [
]), as well as color detectors in form of color blobs and opponent color edges [
] (see
Figure 1
for an illustration of filters used on conv1, the first layer). Similar to human vision, we can take luminance, color and spatial frequencies in an image into account simultaneously by using features
from lower layers of the CNN. Second, because features are becoming increasingly abstract when using higher layers for the max-pooling maps, our symmetry measure is more likely to reflect human
perception of symmetry because it takes into account the grouping of visual elements in the images as well as more abstract image features.
The first advantage may explain why our method outperforms the symmetry measure based on intensity values [
], which ignores color completely and does not deal with structural features like oriented edges. Furthermore, we group image regions over subimages (called patches
in the present work), which makes our measure more robust with regard to image features that are somewhat symmetric but not exactly mirrored. Comparing intensity values of pixels alone does not
address this issue. However, although we demonstrated that our approach works well for music cover art, it remains to be investigated whether it can also be applied to other types of images.
The emergence of increasingly abstract features at higher layers of the CNN can explain some of the trends that we observed in our experiment at different layers and for different patch sizes. On the
one hand, when the patches become too big, our measure does not perform well because there is not much local detail taken into account. On the other hand, when regions become exceedingly small,
grouping of luminance values is no longer possible over larger areas and, consequently, our measure does not correlate well with the ground truth data (i.e., the subjective symmetry ratings).
Higher-layer features seem to resemble human symmetry perception more closely, if patch size is optimal at higher layers (
Figure 4
). We speculate that this resemblance can be explained by the fact that, when specific features or objects are prevalent in an image, larger patches are more tolerant regarding the exact position of
these features or objects. For example, when people observe two faces in an image, one on the left side and one on the right side, the exact pixel positions of the faces are not critical for symmetry
perception in our approach, as long as the faces have roughly corresponding positions with respect to the left/right symmetry axis.
Although the higher layers seem to resemble human symmetry perception of the CD album covers best, we cannot unconditionally recommend using higher-layer features to get the best results with
other types of stimuli. Higher-layer features are not well understood yet, despite intense research [
]. Yosinski et al. [
] investigated how transferable learned features are between different tasks and found that the features from the first two layers only can be considered truly generic. Higher-layer features tend to
be specific to the set of images that were used during training; this specificity may potentially limit their usefulness when novel images are encoded by the CNNs. Although we did not observe such a
negative effect in our study, this potential problem should be kept in mind.
In order to evaluate the correlation between subjective ratings and calculated symmetry values, we measured Spearman’s rank (non-parametric) correlation. Thus, we did not exactly replicate the
behavioral measures, but predicted their relative strength. We observed that quadratic and cubic models fit the relation between our measure and the subjective ratings better than a linear model,
which indicates that the relation between the two measures is not strictly linear. In other words, our algorithm does not match the exact subjective ratings that one would get from human observers.
Rather, the algorithm predicts the relative subjective impression of symmetry in sets of images. This correspondence is strongest when symmetry is either very prominent or almost absent. With
intermediate degrees of symmetry, human observers tend to agree less on how symmetric an image is (
Figure 6
). At the same time, the correspondence with the calculated values is less precise (
Figure 5).
5. Conclusions
We propose a novel computational method that was developed to predict human left/right symmetry ratings for complex, non-geometrical images, as exemplified by CD album covers. The aim of the model is
to closely match subjective symmetry as judged by human observers. For this purpose, we used filters learned by CNNs because they are akin to receptive fields in the early human visual system and get
more and more abstract at higher layers of the CNNs. In order to evaluate our method, we compared the results from the computational model with subjective ratings by 20 participants who assessed left
/right symmetry in a dataset of 300 different album covers. We evaluated different model configurations by calculating the correlation between the computationally obtained results and the ratings by the participants.
Results demonstrate that our algorithm outperforms a recently proposed method for measuring continuous symmetry in an image by comparing pixel intensities [
]. Moreover, the correlation increased from 0.80 to 0.90 when we used filters from higher layers that focus on more abstract features. However, it remains to be established whether our approach also
works for images other than album covers. For arbitrary images, we recommend using second-layer features because they are known to be more universal than higher-layer features and lead to better
results than first-layer features in our study.
In future research, we will use the proposed symmetry measure to study the role of symmetry in aesthetic perception, for example, by applying the measure to images of visual artworks and photographs.
The authors are grateful to Maria Grebenkina for providing the digital collection of CD album covers, and to members of the group for suggestions and discussions. This work was supported by funds
from the Institute of Anatomy I, University of Jena School of Medicine.
Author Contributions
Anselm Brachmann and Christoph Redies conceived and designed the experiments; Anselm Brachmann performed the experiments and analyzed the data; Anselm Brachmann and Christoph Redies wrote the paper.
Conflicts of Interest
The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and
in the decision to publish the results.
1. Bronshtein, I.; Semendyayev, K.; Musiol, G.; Mühlig, H. Handbook of Mathematics; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
2. Wagemans, J. Detection of visual symmetries. Spat. Vis. 1995, 9, 9–32. [Google Scholar] [CrossRef] [PubMed]
3. Grammer, K.; Thornhill, R. Human (Homo sapiens) facial attractiveness and sexual selection: The role of symmetry and averageness. J. Comp. Psychol. 1994, 108, 233. [Google Scholar] [CrossRef]
4. Møller, A.P.; Swaddle, J.P. Asymmetry, Developmental Stability and Evolution; Oxford University Press: Oxford, UK, 1997. [Google Scholar]
5. Tinio, P.; Smith, J. The Cambridge Handbook of the Psychology of Aesthetics and the Arts; Cambridge Handbooks in Psychology; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
6. Zaidel, D.W.; Aarde, S.M.; Baig, K. Appearance of symmetry, beauty, and health in human faces. Brain Cogn. 2005, 57, 261–263. [Google Scholar] [CrossRef] [PubMed]
7. Jacobsen, T.; Höfel, L. Aesthetic judgments of novel graphic patterns: Analyses of individual judgments. Percept. Motor Skills 2002, 95, 755–766. [Google Scholar] [CrossRef] [PubMed]
8. Liu, Y.; Hel-Or, H.; Kaplan, C.S.; Kaplan, C.S.; Van Gool, L. Computational symmetry in computer vision and computer graphics. Found. Trends^® Comput. Grap. Vis. 2010, 5, 1–195. [Google Scholar]
9. Chen, P.-C.; Hays, J.; Lee, S.; Park, M.; Liu, Y. A quantitative evaluation of symmetry detection algorithms. In Technical Report CMU-RI-TR-07-36; Carnegie Mellon University: Pittsburgh, PA, USA,
2007. [Google Scholar]
10. Liu, J.; Slota, G.; Zheng, G.; Wu, Z.; Park, M.; Lee, S.; Rauschert, I.; Liu, Y. Symmetry detection from realworld images competition 2013: Summary and results. In Proceedings of the 2013 IEEE
Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 25–27 June 2013; pp. 200–205.
11. Zabrodsky, H.; Peleg, S.; Avnir, D. Symmetry as a continuous feature. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 1154–1166. [Google Scholar] [CrossRef]
12. Den Heijer, E. Evolving symmetric and balanced art. In Computational Intelligence; Springer: Berlin, Germany, 2015; pp. 33–47. [Google Scholar]
13. Shaker, F.; Monadjemi, A. A new symmetry measure based on gabor filters. In Proceedings of the 2015 23rd Iranian Conference on Electrical Engineering, Tehran, Iran, 10–14 May 2015; pp. 705–710.
14. Wurtz, R.; Kandel, E. Central visual pathway. In Principles of Neural Science, 4th ed.; Kandel, E., Schwartz, J., Jessell, T., Eds.; McGraw-Hill: New York, NY, USA, 2000; pp. 523–547. [Google Scholar]
15. Lennie, P. Color vision. In Principles of Neural Science, 4th ed.; ER, K., Schwartz, J., Jessell, T., Eds.; McGraw-Hill: New York, NY, USA, 2000; pp. 572–589. [Google Scholar]
16. Yosinski, J.; Clune, J.; Nguyen, A.; Fuchs, T.; Lipson, H. Understanding neural networks through deep visualization. In Proceedings of the Deep Learning Workshop, International Conference on
Machine Learning (ICML), Lille, France, 10–11 July 2015.
17. LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. In The Handbook of Brain Theory and Neural Networks; Massachusetts Institute of Technology (MIT) Press:
Cambridge, MA, USA, 1995; pp. 255–258. [Google Scholar]
18. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
19. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; Massachusetts Institute of
Technology (MIT) Press: Cambridge, MA, USA, 2012; pp. 1097–1105. [Google Scholar]
20. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. Comput. Sci. arXiv 2014. [Google Scholar]
21. Karpathy, A.; Fei-Fei, L. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA,
7–12 June 2015; pp. 3128–3137.
22. Gatys, L.A.; Ecker, A.S.; Bethge, M. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. Comput. Sci. arXiv 2015. [Google Scholar]
23. Abdel-Hamid, O.; Mohamed, A.R.; Jiang, H.; Deng, L.; Penn, G.; Yu, D. Convolutional neural networks for speech recognition. IEEE/ACM Trans. Audio Speech Lang. Process. 2014, 22, 1533–1545. [
Google Scholar] [CrossRef]
24. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: convolutional architecture for fast feature embedding. In Proceedings of the 22nd
Association of Computing Machinery (ACM) international conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 675–678.
25. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? Comput. Sci. arXiv 2014. [Google Scholar]
26. D’Agostino, R.B. An omnibus test of normality for moderate and large size samples. Biometrika 1971, 58, 341–348. [Google Scholar] [CrossRef]
27. Marĉelja, S. Mathematical description of the responses of simple cortical cells. J. Opt. Soc. Am. 1980, 70, 1297–1300. [Google Scholar] [CrossRef] [PubMed]
28. Zeiler, M.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 818–833.
29. Mahendran, A.; Vedaldi, A. Understanding deep image representations by inverting them. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA,
USA, 7–12 June 2015; pp. 5188–5196.
Figure 1.
Filters of the first convolutional layer (conv1) of the Convolutional Neural Network (CNN) architecture used in our experiment [
]. The filters detect oriented luminance edges and different spatial frequencies. Color is detected in the form of oriented color-opponent edges and color blobs.
Figure 2. Representative covers and their respective calculated left/right symmetry values, which were obtained with first-layer filters at patch level 17. The images are of high symmetry (a);
intermediate symmetry (b); and low symmetry (c); respectively. Due to copyright issues, we cannot reproduce covers used in our study here. Copyright: (a) author A.B.; (b) Graham JamesWorthington, CC
BY-SA 4.0; and (c) Musiclive55, CC BY-SA 4.0.
Figure 4. (a) Spearman’s rank coefficients for the correlation between the subjective ratings and calculated values of left/right symmetry. Subjective ratings are plotted as a function of the number
of subimages in the model for different layers of the CaffeNet model. The model parameters were systematically varied. The patch level squared corresponds to the number of subimages. The RMSE values
of (b) a linear fit; (c) a quadratic fit and (d) a cubic fit show similar trends for all configurations. With quadratic and cubic polynomials, lower errors were obtained compared to the linear fit,
which indicates that the relation between our measure and the subjective ratings is not linear.
Figure 5.
Scatter plot of rated symmetry values versus calculated symmetry values for two different configurations of the model (
a, layer 1 with 17 patches squared, correlation of 0.80;
b, layer 2 with 11 patches squared, correlation of 0.85). Each dot represents one cover image. Metal music covers are shown in black, pop music covers in cyan and classic music covers in magenta. The
blue curve represents the best quadratic fit, as determined from the plots shown in
Figure 4.
Figure 6. Standard deviation of the ratings of 20 participants for 300 CD album cover images, plotted as a function of the median rating for the covers. Each dot represents one cover image. Metal
music covers are shown in black, pop music covers in cyan and classic music covers in magenta.
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http:/
Share and Cite
MDPI and ACS Style
Brachmann, A.; Redies, C. Using Convolutional Neural Network Filters to Measure Left-Right Mirror Symmetry in Images. Symmetry 2016, 8, 144. https://doi.org/10.3390/sym8120144
AMA Style
Brachmann A, Redies C. Using Convolutional Neural Network Filters to Measure Left-Right Mirror Symmetry in Images. Symmetry. 2016; 8(12):144. https://doi.org/10.3390/sym8120144
Chicago/Turabian Style
Brachmann, Anselm, and Christoph Redies. 2016. "Using Convolutional Neural Network Filters to Measure Left-Right Mirror Symmetry in Images" Symmetry 8, no. 12: 144. https://doi.org/10.3390/sym8120144
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics
Critical Stability of Quantum Few-Body Systems
The first week was organized as a school for mature students, postdocs, and young researchers. There were four lecturers, each giving three lectures of one hour, and a follow up of twice two hours of
discussion and exercise sessions. This resulted in four full days of teaching activities. The topics and teachers were chosen to present a broad pedagogical introduction to the subjects expected at
the workshop, namely momentum and coordinate space few-body techniques, the concept of universality, and the transition from few- to many-body degrees of freedom. The lecture notes by the teachers
were made available before the presentations.
The last day was reserved for the contributions of the participants. Each of them had twenty minutes to present a project he had himself chosen and respond to questions. The organizers were present
at all lectures and talks on the last day of the school. It was a good surprise that the talks by the participants were of very high quality both by their scientific content and pedagogical aspects.
The school participants were asked to evaluate the lectures and the exercises, through an e-mail questionnaire. The responses were in general very positive, with evaluations ranging from excellent
to above the average. Without exception, the responses indicated that the school was a very fine preparation for the specialized workshop talks. Also the individual project presentations and the
subsequent discussions on the last day were unanimously very positively received. The lectures were deemed very well presented, while the exercises in general were less popular and less helpful,
probably because they were too often aimed at generalizations and further applications of the concepts introduced in the lectures, and less often devoted to immediate and direct applications. The
amount of content in the four series of lectures probably added up to be too large for four days. The week was intense but overall the participants were satisfied.
The second week was organized as a workshop with 39 contributions of 40 minutes by all participants split as 30 minutes for the talk and 10 minutes for the discussion. In addition, the Colloquium by
the recipient of the Gutzwiller award was included in the program of the workshop. The nine groups of topics were: 1. Universality; 2. Finite-range corrections; 3. Few- and many-body degrees of freedom;
4. One and two dimensions; 5. Dimensional crossover; 6. Multicomponent systems; 7. Dynamics; 8. Reactions with weakly bound systems; 9. Mathematical few-body problems. All topics received attention
through several talks.
Most workshop contributions are planned to appear in a special issue of Few-Body Systems entitled “Special issue on Critical Stability of Quantum Systems” where also the lectures at the school will
be included.
There were 63 participants at the workshop, coming from 17 different countries, including almost all 21 school participants who stayed for the workshop. A number of subfields of physics with focus on
few-body quantum problems were represented, such as quantum chemistry, mathematical, atomic, molecular, condensed matter, hadron and nuclear physics. Apart from the school participants (very young)
and the organizers (rather senior) the average age of the workshop participant was about 43 years. This corresponds to a generation ready to take over from the very well established physicists. They
all presented convincing and mature talks about the diverse topics they had been working on, expressing vitality and new avenues to be explored on the boundaries and within the different subfields of physics.
The topics of universality in various disguises were probably the most prominent issue discussed at the workshop. It is not easy to select especially interesting contributions, but if pressed,
consensus probably would be: (i) the experiments by Reinhard Dörner, where the probability distribution of the excited atomic helium trimer is mapped out; (ii) the zoo of Efimov towers of excited
states by Yusuke Nishida; and (iii) the topological classification of symmetries of few-body structures in different spatial dimensions by Nathan Harsmann.
The overall goals of the workshop were achieved, that is exchange of ideas and techniques across the barriers of subfields, updating and distributing research results, and initialization of new
collaborations perhaps based on the ideas exchanged at the meeting. The school served both as basic education but also as a preparation for the more specialized workshop talks.
The organizers are plainly satisfied by the success of the school and workshop, which came across different subfields of physics and in broad sense each with its set of concepts applied to finite and
many-body quantum systems in different dimensions. At the workshop the underlying relevance of the long range quantum correlations brought by some selected degrees of freedom to the complex finite or
infinite quantum systems were tackled in the presentations. This common universal basic concept, was raised in different forms during the discussions through the fruitful questions and answers, where
the participants were prompted to make an effort to go over distinctions and find the subtle links between the conceptual complexity coming across the boundaries of different subfields of physics.
All this was made possible by the generosity of the Max Planck Institute for Complex Systems in Dresden. This is of course part of the purpose of the Institute but the organizers are nevertheless
very grateful for the support which includes not only financial but also the quiet and stimulating environment, the well-functioning infrastructure and the efficient secretarial assistance.
When the factor levels or treatments chosen for study are the specific ones about which we wish to draw conclusions, the model obtained from analysis of such data is called a fixed effects model. The
results from the analysis will apply only to the treatments considered in the study and cannot be generalized to similar treatments which were not explicitly included in the experiment.
A pizza bakery is conducting a study on whether the five ovens at the bakery give consistent results when using the same temperature and time settings. These particular five ovens constitute all the
factor levels of interest, so the analysis based on these levels will apply only to these ovens. The model is therefore called a fixed effects model.
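To make the oven example concrete, a one-way fixed-effects analysis compares the variation between ovens to the variation within each oven. The sketch below is our own illustration with made-up structure (not from the source) and computes the classic F statistic:

```python
def anova_f(groups):
    """One-way fixed-effects ANOVA F statistic.

    groups: list of lists, one list of measurements per factor level
            (e.g., one list of bake results per oven).
    Returns (F, df_between, df_within).
    """
    k = len(groups)                          # number of factor levels
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    # Between-group sum of squares: spread of the group means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread inside each group.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    f = (ss_between / (k - 1)) / (ss_within / (n - k))
    return f, k - 1, n - k
```

A large F relative to the F distribution with (k − 1, n − k) degrees of freedom would indicate that the ovens do not give consistent results; identical group means give F = 0.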
See Also
Random Effects Model
Quantum Field Theory on Curved Spacetimes
Christian Bär, Klaus Fredenhagen (Eds.)
Springer, Berlin Heidelberg 2009
ISBN 978-3-642-02779-6
Webpage at Springer
Webpage at amazon
Book description:
After some decades of work a satisfactory theory of quantum gravity is still not available; moreover, there are indications that the original field theoretical approach may be better suited than
originally expected. There, to first approximation, one is left with the problem of quantum field theory on Lorentzian manifolds. Surprisingly, this seemingly modest approach leads to far-reaching
conceptual and mathematical problems and to spectacular predictions, the most famous one being the Hawking radiation of black holes. Ingredients of this approach are the formulation of quantum
physics in terms of C*-algebras, the geometry of Lorentzian manifolds, in particular their causal structure, and linear hyperbolic differential equations where the well-posedness of the Cauchy
problem plays a distinguished role, as well as more recently the insights from suitable concepts such as microlocal analysis. This primer is an outgrowth of a compact course given by the editors and
contributing authors to an audience of advanced graduate students and young researchers in the field, and assumes working knowledge of differential geometry and functional analysis on the part of the reader.
|
{"url":"https://cbaer.eu/joomla/index.php/en/mathematics/books/32-quantum-field-theory-on-curved-spacetimes","timestamp":"2024-11-03T10:39:30Z","content_type":"text/html","content_length":"16307","record_id":"<urn:uuid:09f146d4-e11f-4ecc-88fc-d1cc65971896>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00877.warc.gz"}
|
Slide 1
Graphs and Problems
Slide 2
a. A car is moving eastward along Lake Avenue and increasing its speed from 25 mph to 45 mph.
b. A northbound car skids to a stop to avoid a reckless driver.
c. An Olympic diver slows down after splashing into the water.
d. A southward-bound free kick delivered by the opposing team is slowed down and stopped by the goalie.
e. A downward falling parachutist pulls the chord and rapidly slows down.
f. A rightward-moving Hot Wheels car slows to a stop.
g. A falling bungee-jumper slows down as she nears the concrete sidewalk below.
For each sentence, determine the direction of acceleration.
As a general rule:
if an object is increasing speed, its acceleration is in the same direction as its motion;
if an object is slowing down, its acceleration is in the opposite direction.
Slide 3
Representing acceleration graphically
Describe the motion indicated by the graphs.
Slide 4
Examples - calculating Acceleration
The velocity of an aircraft is reduced from 100 m/s [S] to 40 m/s [S] in 8 s. Find its average acceleration.
Slide 5
example 2
A truck is moving east at a speed of 20 m/s. The driver presses on the gas pedal and the truck accelerates at a rate of 1.5 m/s² [E] for 7 seconds. What is the final velocity of the truck?
Slide 6
example 3
A truck is moving east at a speed of 30 m/s. The driver presses on the brake pedal and the truck accelerates at a rate of 4.5 m/s² [W] for 5 seconds. What is the final velocity of the truck?
Slide 7
Kinematic Equations - found in Motion text section 2.5
v_avg = Δd/Δt , always true
a_avg = Δv/Δt , always true
v_f = v_i + aΔt , constant acceleration only
Δd = v_iΔt + ½aΔt² , constant acceleration only
Δd = ½(v_i + v_f)Δt , constant acceleration only
v_f² = v_i² + 2aΔd , constant acceleration only
Slide 8
example 4- finding average velocity
A motorcycle accelerates uniformly from +40 ft/s to +120 ft/s in 7 seconds. Find the average velocity.
Use v_avg = ½(v_i + v_f). This only works if acceleration is constant and uniform.
Slide 9
example 5
A car is moving at +25 m/s. It then accelerates at a rate of 1.5 m/s² for 10 s. Find its final velocity.
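The worked examples above can be checked with a short script built on the relations used throughout: a_avg = Δv/Δt (always true) and v_f = v_i + aΔt and v_avg = ½(v_i + v_f) (constant acceleration only). Signs encode direction, with [E] and [S] taken as positive where the problem does.

```python
def final_velocity(v_i, a, t):
    """v_f = v_i + a*t  (constant acceleration only)."""
    return v_i + a * t

def average_acceleration(v_i, v_f, t):
    """a_avg = (v_f - v_i) / t  (always true)."""
    return (v_f - v_i) / t

def average_velocity(v_i, v_f):
    """v_avg = (v_i + v_f) / 2  (constant acceleration only)."""
    return (v_i + v_f) / 2

# Example 1: aircraft slows from 100 m/s [S] to 40 m/s [S] in 8 s.
# The negative sign means the acceleration points north: 7.5 m/s² [N].
print(average_acceleration(100, 40, 8))    # -7.5

# Example 2: truck at 20 m/s [E], a = 1.5 m/s² [E] for 7 s.
print(final_velocity(20, 1.5, 7))          # 30.5 m/s [E]

# Example 3: truck at 30 m/s [E], a = 4.5 m/s² [W] for 5 s.
print(final_velocity(30, -4.5, 5))         # 7.5 m/s [E]

# Example 4: motorcycle, +40 ft/s to +120 ft/s, uniform acceleration.
print(average_velocity(40, 120))           # +80 ft/s

# Example 5: car at +25 m/s, a = 1.5 m/s² for 10 s.
print(final_velocity(25, 1.5, 10))         # +40.0 m/s
```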
|
{"url":"https://www.sliderbase.com/spitem-700-1.html","timestamp":"2024-11-06T12:28:44Z","content_type":"text/html","content_length":"13223","record_id":"<urn:uuid:5d4c5619-6ff2-40e1-bf5b-3f12efd41370>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00440.warc.gz"}
|
Mathematics - Chelsea School
Algebra I
Algebra I provides a formal development of the algebraic skills and concepts necessary for students to succeed in advanced courses. In particular, the instructional program in this course provides
for the use of algebraic skills in a wide range of problem-solving situations. The concept of function is emphasized throughout the course.
Geometry
Geometry follows the Common Core State Standards and formalizes and extends students’ geometric experiences from the middle grades. Students explore more complex geometric situations and deepen their
explanations of geometric relationships, moving towards formal mathematical arguments. Six critical areas comprise the Geometry course: Congruence, Proof, and Constructions; Similarity, Proof, and Trigonometry; Extending to Three Dimensions; Circles With and Without Coordinates; and Applications of Probability. The Mathematical Practice Standards apply throughout each course and, together with
the content standards, prescribe that students experience mathematics as a coherent, useful, and logical subject that makes use of their ability to make sense of problem situations.
Algebra II
Building on their work with linear, quadratic, and exponential functions, students in Algebra II extend their repertoire of functions to include polynomial, rational, radical, and trigonometric
functions. In this course rational functions are limited to those whose numerators are of degree at most one and denominators of degree at most 2; radical functions are limited to square roots or
cube roots of at most quadratic polynomials.
Students work closely with the expressions that define the functions, and continue to expand and hone their abilities to model situations and to solve equations, including solving quadratic equations
over the set of complex numbers and solving exponential equations using the properties of logarithms.
Pre-Calculus
Pre-Calculus is designed to improve the student’s knowledge of linear, exponential, logarithmic, power, polynomial, trigonometric, and rational functions. Students will also study sequences and
series, quadratic relations and be presented with an introduction to Calculus. The students will be able to discuss the effects of combining functions and modeling real world phenomena with a variety
of functions. This course follows the Common Core State Standards.
Probability and Statistics
Probability and Statistics introduces students to basic statistical testing. Students learn to organize, display, and analyze data and to explore the elements of probability. The course is
enriched through the use of real world problems.
Financial Literacy
This course presents a variety of units to assist students in acquiring personal finance principles. The implementation of the ideas, concepts, knowledge, and skills contained in this course will
enable students to apply decision-making skills and to become wise and knowledgeable consumers, savers, investors, users of credit, money managers, citizens, and members of a global workforce and
society. Topics of study include financial responsibility and decision making, planning and money management, credit and debt, risk management and insurance, saving and investing, as well as income
and careers.
|
{"url":"https://www.chelseaschool.edu/programs/upper-division-curriculum/mathematics/","timestamp":"2024-11-07T12:58:25Z","content_type":"text/html","content_length":"47479","record_id":"<urn:uuid:4c45dd77-0ef1-4843-b039-1e3938bc3e4d>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00741.warc.gz"}
|
Substitution Theorem
The substitution theorem is a principle in electrical circuit analysis, particularly in the study of linear circuits. It states that any element in a linear electric network can be substituted by a
combination of independent voltage or current sources and their associated resistances, without changing the behavior of the rest of the network with respect to any pair of terminals.
• In simple terms, it means that you can replace any part of a circuit with a simpler equivalent circuit, as long as the behavior of the original circuit remains the same. This theorem is
particularly useful in simplifying complex circuits for analysis purposes.
• For example, if you have a complex network with multiple resistors, capacitors, and voltage sources, you can replace a specific resistor with a voltage source and another resistor in series, or
with a current source and another resistor in parallel, while maintaining the same behavior of the original circuit.
Application of Substitution Theorem
The Substitution theorem finds numerous applications in electrical engineering, particularly in the areas of circuit analysis and design. Here are some of the applications:
Simplifying Circuit Analysis
- One of the primary applications of the Substitution theorem is to simplify complex circuits. By replacing a complicated network of resistors, capacitors, inductors, or other components with an
equivalent component that has the same electrical characteristics (voltage and current), engineers can make the analysis of the circuit more manageable.
Component Replacement
- The theorem allows for the replacement of a component in a circuit with another that has the same voltage and current characteristics. This can be useful for testing, maintenance, or upgrades.
Modeling Complex Impedances
- In AC circuits, components such as inductors and capacitors can be replaced with their equivalent impedances. This is particularly useful in analyzing AC circuits where phase relationships between
voltage and current are important.
Thevenin and Norton Equivalent Circuits
- The Substitution theorem is instrumental in deriving Thevenin's and Norton's equivalent circuits, which simplify the analysis of power systems and other electrical networks.
Design and Optimization
- Engineers can use the Substitution theorem to optimize circuit designs by testing different equivalent components and choosing the ones that provide the best performance.
Fault Analysis and Troubleshooting
- The theorem can aid in diagnosing faults in a circuit by substituting suspected faulty components with known good equivalents and observing changes in circuit behavior.
Simulation and Prototyping
- During the simulation of circuits, equivalent components can be used to simplify models, making simulations run faster and allowing for quicker prototyping and testing.
The Substitution theorem is a versatile tool in electrical engineering, aiding in the simplification of circuit analysis, component replacement, design optimization, fault diagnosis, and simulation.
By ensuring that substituted components produce the same voltage and current characteristics, the theorem helps maintain the integrity and performance of electrical circuits.
Practical Example
Consider a simple circuit where you have a complex network of resistors between two points, \(A\) and \(B\). According to the substitution theorem, if you can determine the equivalent resistance \(R_{eq}\) between \(A\) and \(B\), you can replace the entire network of resistors with a single resistor \(R_{eq}\) without affecting the rest of the circuit. This greatly simplifies the analysis, as you now only need to consider a single resistor instead of a complex network.
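The practical example can be sketched numerically. The resistor values and source voltage below are invented for illustration; the point is that substituting a single equivalent resistor for the network leaves the terminal current unchanged.

```python
def series(*rs):
    """Equivalent resistance of resistors in series."""
    return sum(rs)

def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

# Hypothetical network between A and B: R1 in series with (R2 || R3).
R1, R2, R3 = 100.0, 220.0, 330.0
r_eq = series(R1, parallel(R2, R3))   # single substitute resistor, ~232 ohms

# Substituting r_eq for the whole network leaves the terminal
# behavior with respect to A and B unchanged:
V = 12.0
i_original = V / (R1 + (R2 * R3) / (R2 + R3))   # full network
i_substituted = V / r_eq                         # single equivalent resistor
print(r_eq, i_original - i_substituted)
```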
The substitution theorem is a tool in circuit analysis that allows for the simplification of electrical networks by substituting parts of the circuit with equivalent components. This theorem
helps in reducing complex circuits into simpler forms, facilitating easier analysis and design.
Tags: Electrical Electrical Theorems
|
{"url":"https://www.piping-designer.com/index.php/disciplines/electrical/3429-substitution-theorem","timestamp":"2024-11-04T04:32:34Z","content_type":"text/html","content_length":"31655","record_id":"<urn:uuid:5835aff0-7a55-49ad-b91a-21485f77cba3>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00727.warc.gz"}
|
Internet Encyclopedia of Philosophy
The word “model” is highly ambiguous, and there is no uniform terminology used by either scientists or philosophers. Here, a model is considered to be a representation of some object, behavior, or
system that one wants to understand. This article presents the most common type of models found in science as well as the different relations—traditionally called “analogies”—between models and
between a given model and its subject. Although once considered merely heuristic devices, they are now seen as indispensable to modern science. There are many different types of models used across
the scientific disciplines, although there is no uniform terminology to classify them. The most familiar are physical models such as scale replicas of bridges or airplanes. These, like all models,
are used because of their “analogies” to the subjects of the models. A scale model airplane has a structural similarity or “material analogy” to the full scale version. This correspondence allows
engineers to infer dynamic properties of the airplane based on wind tunnel experiments on the replica. Physical models also include abstract representations which often include idealizations such as
frictionless planes and point masses. Another, but completely different type of model, is constituted by sets of equations. These mathematical models were not always deemed legitimate models by
philosophers. Model-to-subject and model-to-model relations are described using several different types of analogies: positive, negative, neutral, material, and formal.
Like unobservable entities, models have been the subject of debate between scientific realists and antirealists. One’s position often depends on what one considers the truth-bearers in science to be.
Those who take fundamental laws and/or theories to be true believe that models are true in inverse proportion to the degree of idealization used. Highly idealized models would therefore be (in some
sense) less true. Others take models to be true only insofar as they describe the behavior of empirically observable systems. This empiricism leads some to believe that models built from the
bottom-up are realistic, while those derived in a top-down manner from abstract laws are not.
Models also play a key role in the semantic view of theories. What counts as a model on this approach, however, is more closely related to the sense of models in mathematical logic than in science.
Table of Contents
1. Models in Science
The word “model” is highly ambiguous, and there is no uniform terminology used by either scientists or philosophers. This article presents the most common type of models found in science as well as
the different relations—traditionally called “analogies”—between models and between a given model and its subject. For most of the 20th century, the use of models in science was a neglected topic in
philosophy. Far more attention was given to the nature of scientific theories and laws. Except for a few philosophers in the 1960’s, Mary Hesse in particular, most did not think the topic was
particularly important. The philosophically interesting parts of science were thought to lie elsewhere. As a result, few articles on models were published in twenty-five years following Hesse’s
(1966). [These include (Redhead, 1980) and (Wimsatt, 1987), and parts of (Bunge, 1973) and (Cartwright, 1983).] The situation is now quite different. As philosophers of science have come to pay greater attention to actual scientific practice, the use of models has become an important area of philosophical analysis.
2. Physical Models
One familiar type of model is the physical model: a material, pictorial, or analogical representation of (at least some part of) an actual system. “Physical” here is not meant to convey an
ontological claim. As we shall see, some physical models are material objects; others are not. Hesse classifies many of these as either replicas or analogue models. Examples of the former are scale
models used in wind tunnel experiments. There is what she calls a “material analogy” between the model and its subject, that is, a pretheoretic similarity in how their observable properties are
related. Replicas are often used when the laws governing the subject of the model are either unknown or too computationally complex to derive predictions. When a material analogy is present, one
assumes that a “formal analogy” also exists between the subject and the model. In a formal analogy, the same laws govern the relevant parts of both the subject and model.
Analogue models, in contrast, have a formal analogy with the subject of the model but no material analogy. In other words, the same laws govern both the subject and the model, although the two are
physically quite different. For example, ping-pong balls blowing around in a box (like those used in some state lotteries) constitute an analogue model for an ideal gas. Some analogue models were
important before the age of digital computers when simple electric circuits were used as analogues of mechanical systems. Consider a mass M on a frictionless plane that is subject to a time varying
force f(t) (Figure 1). This system can be simulated by a circuit with a capacitor C and a time varying voltage source v(t). The voltage across C at time t corresponds to the velocity of M.
Figure 1: Analogue Machine
Today engineers and physicists are more familiar with simplifying models. These are constructed by abstracting away properties and relations that exist in the subject. Here we find the usual zoo of
physical idealizations: frictionless planes, perfectly elastic bodies, point masses, and so forth. Consider a textbook mass-spring system with only one degree of freedom (that is, the spring
oscillates perfectly along one dimension) shown in Figure 2. This particular system is physically possible, but nonactual. Real springs always wobble just a bit. If by chance a spring did oscillate
in one dimension for some time, the event would be unlikely but would not violate any physical laws. Frictionless planes, on the other hand, are nonphysical rather than merely nonactual.
Figure 2: Physical Water Drop Model
Simplifying models provide a context for Hesse’s other relations known as positive, negative, and neutral analogies. Positive analogies are the ways in which the subject and model are alike—the
properties and relations they share. Negative analogies occur when there is a mismatch between the two. The idealizations mentioned in the previous paragraph are negatively analogous to their
real-world subjects. In a scale-model airplane (a replica), the length of the wing relative to the length of the tail is a positively analogous since the ratio is the same in the subject and the
model. The wood used to make the model is negatively analogous since the real airplane would use different materials. Neutral analogies are relations that are in fact either positive or negative, but
it is not yet known which. The number of neutral analogies is inversely related to our knowledge of the model and its subject. One uses a physical model with strong, positive analogies in order to
probe its neutral analogies for more information. Ideally, all neutral analogies will be sorted into either positive or negative. The early success of the Bohr model of the atom showed that it had
positive analogies to real hydrogen atoms. In Hesse’s terms, the neutral analogies proved to be negative when the model was applied to atoms with more than one electron.
The use of “analogy” in this regard has declined somewhat in recent years. “Idealization” has replaced “negative analogy” when these simplifications are built into physical models from the start. The
degree to which a model has positive analogies is more typically described by how “realistic” the model is. One might also use the notion of “approximate truth”—a term long recognized as more
suggestive than precise. The rough idea is that more realistic models—those with stronger positive analogies—contain more truth than others. “Negative analogy” contains an ambiguity. Some are used at
the beginning of the model-building process. The modeler recognizes the false properties for what they are and uses them for a specific purpose—usually to simplify the mathematics. Other negative
analogies, known as “artifacts,” are unintended consequences of idealizations, data collection, research methods, and limitations of the medium used to construct the model. Some artifacts are benign
and obvious. Consider the wooden models of molecules used in high school chemistry classes. Three balls held together by sticks can represent a water molecule, but the color of the balls is an
artifact. (As the early moderns were fond of pointing out, atoms are colorless.) Other artifacts are produced by measuring devices. It is impossible, for example, to fully shield an oscilloscope from
the periodic signal produced by its AC current source. This produces a periodic component in the output signal not present in the source itself.
The heavy emphasis here on models in the physical sciences has more to do with the interests of philosophers than scientific practice. Physical models are used throughout the sciences, from
immunoglobulin models of allergic reactions to macroeconomic models of the business cycle.
3. Mathematical Models
Philosophers have generally taken physical models as paradigm cases of scientific models. In many branches of science, however, mathematical models play a far more important role. There are many
examples, especially in dynamics. Equation (1) below is an ordinary differential equation representing the motion of a frictionless pendulum:

\(\ddot{\theta} + \frac{g}{l}\sin\theta = 0\)   (1)

[θ is the angle of the string from vertical, l is the length of the string, and g is the acceleration due to gravity. The two dots in the first term stand for the second derivative with respect to time.] Even when sets of equations have clearly been
used “to model” some behavior of a system, philosophers were often unwilling to take these as legitimate models. The difference is driven in part by greater familiarity with models in mathematical
logic. In the logician’s realm, a model satisfies a set of axioms; the axioms themselves are not models. To philosophers, equations look like axioms. Referring to a set of equations as “a model” then
sounds like a category mistake.
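To illustrate how a mathematical model like the frictionless-pendulum equation is put to work, here is a minimal numerical integration sketch using classical fourth-order Runge–Kutta. The initial angle, string length, and step count are illustrative assumptions, not from the article. For a small initial angle the state should return near its starting point after one small-angle period \(2\pi\sqrt{l/g}\).

```python
import math

g, l = 9.81, 1.0   # gravity (m/s²) and string length (m); illustrative values

def deriv(state):
    """Right-hand side of the pendulum model: θ' = ω, ω' = -(g/l) sin θ."""
    theta, omega = state
    return (omega, -(g / l) * math.sin(theta))

def rk4_step(state, dt):
    """One classical fourth-order Runge–Kutta step."""
    def add(s, k, h):
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Integrate over one small-angle period T = 2π√(l/g); since the initial
# angle is small, the trajectory should close up almost exactly.
theta0 = 0.05
state = (theta0, 0.0)
T = 2 * math.pi * math.sqrt(l / g)
dt = T / 1000
for _ in range(1000):
    state = rk4_step(state, dt)
print(state)   # close to the starting state (theta0, 0.0)
```

Each integration step traces out a point on one of the closed trajectories in the pendulum's state space (compare Figure 3).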
This attitude was eroded in part by the central role mathematical models played in the development of chaos theory. The 1980s saw a deluge of scientific articles with equations governing nonlinear
systems as well as the state spaces that represented their evolution over time (see section 4). Physical models, on the other hand, were often bypassed altogether. This made it far more difficult to
dismiss “mathematical model” as a scientist’s misnomer. It soon became apparent that all of the issues regarding idealizations, confirmation, and construction of physical models had mathematical counterparts.
Consider the physical model of the electric circuit in Figure 1. A common idealization is to stipulate that the circuit has no resistance. When we look to the associated differential equations—a
mathematical model—there is a corresponding simplification, in this case the elimination of an algebraic term that represented the resistance of the wire. Unlike this example, simplification is often
more than a mere convenience. The governing equations for many types of phenomena are intractable as they stand. Simplifications are needed to bridge the computational gap between the laws and
phenomena they describe. In the old (pre-1926) quantum theory, for example, it was common to run across a Hamiltonian (an important type of function in physics that expresses the total energy of the
system) that blocked the usual mathematical techniques—for example, separation of variables. Instead, a perturbation parameter λ was used to convert the problematic Hamiltonian into a form that could be solved approximately, order by order in λ.
4. State Spaces
State spaces have received scant attention in the philosophical literature until recently. They are often used in tandem with a mathematical model as a means for representing the possible states of a
system and its evolution. The “system” is often a physical model, but might also be a real-world phenomenon essentially free of idealizations. Figure 3 is the state space associated with equation (1),
the mathematical model for an ideal (frictionless) pendulum. Since θ represents the angle of the string, a,b correspond to the two highest points of deflection. c,d are the points at which the
pendulum is moving the fastest.
Figure 3: State Space for Ideal Pendulum
State spaces take a variety of forms. Quantum mechanics uses a Hilbert space to represent the state governed by Schrödinger’s equation. The space itself might have an infinite number of dimensions
with a vector representing an individual state. The ordinary differential equations used in dynamics require many-dimensional phase spaces. Points represent the system states in these (usually
Euclidean) spaces. As the state evolves over time, it carves a trajectory through the space. Every point belongs to some possible trajectory that represents the system’s actual or possible evolution.
A phase space together with a set of trajectories forms a phase portrait (Figure 4). Since the full phase portrait cannot be captured in a diagram, only a handful of possible trajectories are shown
in textbook illustrations. If the system allows for dissipation (for example friction), attractors can develop in the associated phase portrait. As the name implies, an attractor is a set of points
toward which neighboring trajectories flow, though the points themselves possess no actual attractive force. The center of Figure 4a, known as a point attractor, might represent a marble coming to
rest at the bottom of a bowl. Simple periodic motion, like a clock pendulum, produces limit cycles, attracting sets forming closed curves in phase space (Figure 4b).
Figure 4: Sample Phase Portraits
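A point attractor like the one in Figure 4a can be produced from the pendulum model by adding a damping term. The damping coefficient b and the semi-implicit Euler integrator below are illustrative choices, not from the article; whatever the starting state, the trajectory spirals through phase space toward the rest point at the origin.

```python
import math

g, l, b = 9.81, 1.0, 0.5   # b is an assumed damping coefficient

def step(theta, omega, dt):
    """Semi-implicit Euler step for the damped pendulum:
       ω' = -(g/l) sin θ - b ω,  θ' = ω."""
    alpha = -(g / l) * math.sin(theta) - b * omega
    omega += alpha * dt
    theta += omega * dt
    return theta, omega

# Start well away from rest; dissipation pulls the trajectory toward
# the point attractor at (θ, ω) = (0, 0).
theta, omega = 1.0, 0.0
for _ in range(200_000):            # 200 s of simulated time
    theta, omega = step(theta, omega, 1e-3)
print(theta, omega)                  # both near zero
```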
Let us consider a very simple system—a leaky faucet—that illustrates the use of each type of model mentioned. Researchers at the University of California, Santa Cruz, believed that the time between
drops does not change randomly over time, but instead has an underlying dynamical structure (Martien 1985). In other words, one drip interval causally influences the next. In order to explore this
hypothesis, a simplified physical model for a drop of water was developed (the one shown above in Figure 2). They believed that a water drop is roughly like a one-dimensional, oscillating mass on a
spring. Part of the mass detaches when the spring extends to a critical point. The amount of mass that detaches depends on the velocity of the block when it reaches this point.
The mathematical model (3) for this system is relatively simple. y is the vertical position of the drop, v is its velocity, m is its mass prior to detachment, and Δm is the amount of mass that
detaches (k, b, and c are constants). When this model is simulated on a computer, the resulting phase portrait is very similar to the one that was reconstructed from the data in the lab. Although
this qualitative agreement is too weak to completely vindicate these models of the dripping faucet, it does provide a small degree of confirmation.
Going back to the physical model, there are two clear idealizations/negative analogies. First, of course, is that water drops are not shaped like rigid blocks. Second, the mass-spring model only
oscillates along one axis. Real liquids are not constrained in this way. However, these idealizations allow for a far simpler mathematical model to be used than one would need for a realistic fluid.
(Without these idealizations, (3) would have to be replaced by a difficult partial differential equation.) In addition, Peter Smith has argued that this mathematical tractability came with a steep
price, namely, an unrecognized artifact (1998). The problem is that the state space for this particular system contains a “strange attractor” with a fractal structure, a geometrical structure far
more complex than the attractors in Figure 4. Smith argues that the infinitely intricate structure of this attractor is an artifact of the mathematics used to describe the evolution of the system. If
more realistic physical and mathematical models were used, this negative analogy would likewise disappear.
5. Models and Realism
One of the perennial debates in the philosophy of science has to do with realism. What aspects of science—if any—truly represent the real world? Which devices, on the other hand, are merely
heuristic? Antirealists hold that some parts of the scientific enterprise—laws, unobservable entities, and so forth—do not correspond to anything in reality. (Some, like van Fraassen (1980), would
say that if by chance the abstract terms used by scientists did denote something real, we have no way of knowing it.) Scientific realists argue that the successful use of these devices shows that
they are, at least in part, truly describing the real world. Let’s now consider what role models have played in this debate.
Whether models should be taken realistically depends on what one takes the truth-bearers in science to be. Some hold that foundational, scientific truths are contained either in mature theories or
their fundamental laws. If so, then idealized models are simply false. The argument for this is straightforward (Achinstein 1965). Let’s say that theory T describes a system S in terms of properties
p1, p2, and p3. As we have seen, simplified models either modify or ignore some of the properties found in more fundamental theories. Say that a physical model M describes S in terms of p1 and p4. If
so, then T describes S in one way; M describes S in a logically incompatible way. The simplifying assumptions needed to build a useful model contradict the claims of the governing theory. Hence, if T
is true, M is false.
In contrast, Nancy Cartwright has long argued that abstract laws, no matter how “fundamental” to our understanding of nature, are not literally true. In her earlier work (1983), she argued that it is
not models that are highly idealized, but rather the laws themselves. Abstract laws are useful for organizing scientific knowledge, but are not literally true when applied to concrete systems. They
are “true,” she argues, only insofar as they correctly describe simplified physical models (or “simulacra”). Fundamental laws are true-of-the-model, not true simpliciter. The idea is something like
being true-in-a-novel. The claim “The beast that terrorized the island of Amity in 1975 was a squid” is false-in-the-novel Jaws. Similarly, Newton’s second law of motion plus universal gravitation
are only true-in-Newtonian-particle-models.
For most scientific realists, whether physical models are “true” or “real” is not a simple yes-or-no question. Most would point out that even idealizations like the frictionless plane are not simply
false. For two blocks of iron sliding past each other, neglecting friction is a poor approximation. For skis sliding over an icy slope, it is much better. In other words, negative analogies come in
degrees. If the idealizations are negligible, we may properly say that a physical model is realistic.
Scientific realists have not always held similar views about mathematical models. Textbook model building in the physical sciences often follows a “top-down” approach: start with general laws and
first principles and then work toward the specifics of the phenomenon of interest. Dynamics texts are filled with models that can serve as the foundation for a more detailed mathematical treatment
(for example, an ideal damped pendulum or a point particle moving in a central field). Philosophers have paid much less attention to models constructed from the bottom-up, that is, models that begin
with the data rather than theory. What little attention bottom-up modeling did receive in the older modeling literature was almost entirely negative. Conventional wisdom seemed to be that
phenomenological laws and curve-fitting methods were devices researchers sometimes had to stoop to in order to get a project off the ground. They were not considered models, but rather “mathematical
hypotheses designed to fit experimental data” (Hesse 1967, 38). According to Ernan McMullin, sometimes physicists—and other scientists presumably—simply want a function that summarizes their
observations (1967, 390-391). Curve-fitting and phenomenological laws do just that. The question of realism is avoided by denying the legitimacy of bottom-up mathematical models.
In her broad attack on “theory-driven” philosophy of science, Cartwright has recently defended a nearly opposite view (1999). She argues that top-down mathematical models are not realistic, but
bottom-up models are. Once again, this verdict follows from a more general thesis about the truth-bearers in science. Cartwright is an antirealist about fundamental laws and abstract theories which,
she claims, serve only to systematize scientific knowledge. Since top-down mathematical models use these laws as first principles from which to begin, they cannot possibly represent real systems.
Bottom-up models, on the other hand, are not derived from covering laws. They are instead tied to experimental knowledge of particular systems. Unlike fundamental theories and their associated
top-down models, bottom-up models are designed to represent actual objects and their behavior. It is this grounding in empirical knowledge that allows these kinds of mathematical models to be the
primary device in science for representing real-world systems.
6. Models and the Semantic View of Theories
This typology of models and their properties has been developed with an eye toward scientific practice. Within the philosophy of science itself, models have also played a central role in
understanding the nature of scientific theories. For most of the 20th century, philosophers considered theories to be special sets of sentences. Theories on this so-called “syntactic view” are
linguistic entities. The meaning of the theory is contained in the sentences that constitute it, roughly the same way the meaning of this article is contained in these sentences. The semantic view,
in contrast, uses the model-theoretic language of mathematical logic. In broad terms, a theory just is a family of models. The theory/model distinction collapses. Using the terminology we have
already defined, a model in this sense might be an idealized physical model, an existing system in nature, or even a state space. The semantic content of a theory, on this view, is found in a family
of models rather than in the sentences that describe them. If a given theory were axiomatized—a rare occurrence—one could think of these models as those entities for which the axioms are true. To
take a toy example, say T1 is a theory whose sole axiom is “for any two lines, at most one point lies on both.” Figure 5 is one model that constitutes T1:
Figure 5: A Model of Theory T1
A model for ideal gases would be a physical model of dilute, perfectly elastic atoms in a closed container with an ordered set of parameters <P, V, m, M, T> that satisfies the equation PV = (m/M)RT.
(Respectively: pressure, volume, mass of the gas, molecular weight of the molecules, and temperature. R is a constant.) In fact two different sets of parameters <P1, V1, m1, M1, T1> and <P2, V2, m1,
M1, T2> constitute two separate models in the same family.
Some advocates of the semantic view claim that the use of the term “model” is similar in science and in logic (van Fraassen, 1980). This similarity has been one of the motivating forces behind this
particular understanding of scientific theories. Given the distinctions made in previous sections of this article, this similarity seems to be questionable.
First, many things that would count as a model on the semantic view, for example the geometric diagram in Figure 5, are not physical models, mathematical models, or state spaces. In what sense, one
wonders, are they scientific models? Moreover, a model on the semantic view might be an existing physical system. For example, Jupiter and its moons would constitute another model of Newton’s laws of
motion plus universal gravitation. This blurs the distinction between the model and its subject. One may use a physical and/or mathematical model to study celestial bodies, but such entities are not
themselves models. The scientist’s use of the term is not this broad.
Second, as we have already seen, sets of equations often constitute mathematical models. In contrast, laws and equations on the semantic approach are said to describe and classify models, but are
never themselves taken to be models. Their relation is satisfaction, not identity.
Some time before the semantic view became popular, Hesse issued what still seems to be the correct verdict: “[M]ost uses of ‘model’ in science do carry over from logic the idea of interpretation of a
deductive system,” however, “most writers on models in the sciences agree that there is little else in common between the scientist’s and the logician’s use of the term, either in the nature of the
entities referred to or in the purpose for which they are used” (1967, 354).
7. References and Further Reading
• Achinstein, P. “Theoretical Models.” The British Journal for the Philosophy of Science 16 (1965): 102-120.
• Bunge, M. Method, Model and Matter. Dordrecht: Reidel, 1973.
• Cartwright, N. How the Laws of Physics Lie. New York: Clarendon Press, 1983.
• Cartwright, N. The Dappled World. Cambridge: Cambridge University Press, 1999.
• Hesse, M. Models and Analogies in Science. Notre Dame: University of Notre Dame Press, 1966.
• Hesse, M. “Models and Analogy in Science.” The Encyclopedia of Philosophy. New York: Macmillan Publishing, 1967.
• McMullin, E. “What do Physical Models Tell Us?” Logic, Methodology, and Philosophy of Science III. Eds. B. van Rootselaar and J. F. Staal. Amsterdam: North-Holland Publishing, 1967: 385-396.
• Morrison, M. and M. Morgan, eds. Models as Mediators. Cambridge: Cambridge University Press, 1999.
• Morton, A. “Mathematical Models: Questions of Trustworthiness.” The British Journal for the Philosophy of Science 44 (1993): 659-674.
• Morton, A. and M. Suàrez. “Kinds of Models.” Model Validation in Hydrological Science. Eds. P. Bates and M. Anderson. New York: John Wiley Press, 2001.
• Redhead, M. “Models in Physics.” The British Journal for the Philosophy of Science 31 (1980): 154-163.
• Smith, P. Explaining Chaos. Cambridge: Cambridge University Press, 1998.
• Van Fraassen, B. The Scientific Image. New York: Clarendon Press, 1980.
• Wimsatt, W. “False Models as Means to Truer Theories.” Neutral Models in Biology. Eds. M. Nitecki and A. Hoffmann. New York: Oxford University Press, 1987.
Author Information
Jeffrey Koperski
Email: koperski@svsu.edu
Saginaw Valley State University
U. S. A.
Lisp RISC-V assembler RP2350 extensions
21st October 2024
This article describes functions to add support to the Lisp RISC-V assembler for additional RISC-V instructions provided in the RP2350.
The RP2350 Hazard3 RISC-V core designed by Luke Wren extends the base 32-bit RISC-V instruction set with a number of RISC-V extensions. To my mind the most interesting of these to uLisp users are the
Zbb, Zbs, and Zbkb extensions which provide bit manipulations and single bit instructions which could be particularly useful in embedded and electronics applications. I've defined an additional
RISC-V extensions file to allow you to add support for these to the RISC-V assembler.
Loading the RISC-V extensions
To add the extensions load the standard assembler file first, followed by the extensions file, because some of the extensions add compressed versions of the instructions in the main file.
Get the standard assembler file here: RISC-V assembler in uLisp.
Get the extensions here: RISC-V RP2350 extensions.
Note that these extensions won't work on the Kendryte K210 RISC-V processor used on the Sipeed MAiX boards, which is also supported by the RISC-V assembler in uLisp.
It's not obvious what some of these extensions might be useful for, so the following examples demonstrate some possible applications:
Reverse bits – brev8 and rev8
This is a function to efficiently reverse the order of bits in a 32-bit number. The reverse-bits operation could be useful when transforming bitmap images, or when interfacing between protocols that
work MSB first and LSB first. It takes advantage of the brev8 instruction that reverses the bits within each byte, and the rev8 instruction that reverses the order of the bytes:
(defcode reverse-bits (n)
; Reverse bits within each byte
($brev8 'a0 'a0)
; Reverse all bytes
($rev8 'a0 'a0)
($ret))
For example:
> (format t "~b" (reverse-bits #b10110011100011110000111110000011))
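The same two-step trick is easy to model in software. Here is a small Python sketch (the function names are mine, standing in for the $brev8 and $rev8 instructions, not the uLisp code itself):

```python
def brev8(x):
    """Reverse the bits within each byte of a 32-bit word (like $brev8)."""
    out = 0
    for byte in range(4):
        b = (x >> (8 * byte)) & 0xFF
        out |= int(f"{b:08b}"[::-1], 2) << (8 * byte)  # reverse the 8 bits
    return out

def rev8(x):
    """Reverse the byte order of a 32-bit word (like $rev8)."""
    return int.from_bytes((x & 0xFFFFFFFF).to_bytes(4, "little"), "big")

def reverse_bits(x):
    """Full 32-bit reversal: reverse bits within each byte, then the bytes."""
    return rev8(brev8(x))

print(f"{reverse_bits(0b10110011100011110000111110000011):032b}")
# -> 11000001111100001111000111001101
```

Because bit reversal is an involution, applying the function twice returns the original value, which makes a convenient self-test.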
Maximum number in a list - max
The following example demonstrates the use of the max instruction that returns the maximum of two signed integers. It finds the largest integer in a list of arbitrary length:
(defcode maximum (x)
($lui 'a2 #x80000)
repeat
($beqz 'a0 finished)
($lw 'a1 0 '(a0))
($lw 'a1 4 '(a1))
($max 'a2 'a1 'a2)
($lw 'a0 4 '(a0))
($j repeat)
finished
($mv 'a0 'a2)
($ret))
For example:
> (maximum '(23 -91 47 -73 11))
47
It iterates through the list keeping track of the largest value found so far. Obviously you can also use min to find the smallest value.
Integer square root - clz
The new clz instruction counts the number of leading zeros in a register. It provides an easy way of getting upper and lower bounds for the integer part of the square root of a number. These are
useful for applications such as finding prime numbers, where the upper bound gives the largest factor you need to test. If a more accurate result is needed, these bounds can be used as the starting
point for Newton's method, or a binary search.
The algorithm takes advantage of the fact that the length of the binary representation of a number's integer square root is approximately half that of the original number.
Here is the upper bound routine, upper-sqrt:
(defcode upper-sqrt (x)
($li 'a1 33)
($li 'a2 1)
($clz 'a0 'a0)
($sub 'a0 'a1 'a0)
($srli 'a0 'a0 1)
($sll 'a0 'a2 'a0)
($addi 'a0 'a0 -1)
($ret))
It's equivalent to this Lisp function (assuming you defined clz):
(defun upper-sqrt (x) (1- (ash 1 (truncate (- 33 (clz x)) 2))))
For example:
> (upper-sqrt 9)
3
> (upper-sqrt 1000000)
1023
> (upper-sqrt 1600000000)
65535
To get the lower bound of the integer square root you could use the following Lisp function, lower-sqrt:
(defun lower-sqrt (x) (1- (truncate (+ (upper-sqrt x) 3) 2)))
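The bound logic is easy to check in plain Python (clz is written here with int.bit_length as a software stand-in for the hardware instruction; the names are mine):

```python
import math

def clz(x):
    """Count leading zeros in a 32-bit word (stand-in for $clz)."""
    return 32 - x.bit_length()

def upper_sqrt(x):
    """Upper bound on the integer square root, mirroring the assembler routine."""
    return (1 << ((33 - clz(x)) >> 1)) - 1

def lower_sqrt(x):
    """Lower bound on the integer square root."""
    return (upper_sqrt(x) + 3) // 2 - 1

# The true integer square root always lies between the two bounds
for x in (9, 1000000, 1600000000):
    assert lower_sqrt(x) <= math.isqrt(x) <= upper_sqrt(x)
print(upper_sqrt(1000000))   # 1023, comfortably above isqrt(1000000) = 1000
```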
A compact representation for unsigned integers - clz and ror
A 32-bit unsigned integer has a range of 0 to 2^32-1 and precision of 1 in 2^32. Is it possible to devise a more compact 16-bit floating-point format that will represent the same range, but with
reduced precision? This might be useful, for example, to log the values from an analogue-to-digital converter with limited storage.
The solution is to normalize the 32-bit unsigned integer, by shifting it left until the most significant bit is a '1'. Then store the number in a 16-bit halfword with the top five bits (E, the
exponent) giving the amount of the shift, and the bottom 11 bits (F, the fractional part) giving the top 11 bits of the normalized number [1].
A number N is then recovered as: N = F × 2^(21-E).
The range is still 0 to 2^32-1 but the precision is 1 in 2^11. I've called this format ufloat16.
Here's the routine to encode a 32-bit unsigned integer, which is another application of the clz (count leading zeros) instruction:
(defcode to-ufloat16 (n)
; Normalize
($clz 'a1 'a0)
($andi 'a1 'a1 #x1f)
($sll 'a0 'a0 'a1)
; Shift back down to bottom 11 bits
($srli 'a0 'a0 21)
; Shift result of clz to top 5 bits
($slli 'a1 'a1 11)
; Pack into 16 bits
($or 'a0 'a1 'a0)
($ret))
Here's the routine to unpack an integer in ufloat16 notation, which uses the new ror (rotate right) instruction:
(defcode from-ufloat16 (n)
; Get the exponent from top 5 bits
($srli 'a1 'a0 11)
; Get the fraction from bottom 11 bits
($li 'a2 #x7ff)
($and 'a0 'a0 'a2)
; Shift up/down by the exponent
($addi 'a1 'a1 11)
($ror 'a0 'a0 'a1)
($ret))
Here are some examples (using a Lisp format statement to print the results in hexadecimal where appropriate).
Numbers up to 2048 are encoded without loss of precision:
> (format t "#x~4,'0x" (to-ufloat16 1))
> (from-ufloat16 #xfc00)
> (format t "#x~4,'0x" (to-ufloat16 2048))
> (from-ufloat16 #xa400)
Numbers over 2048 have 11-bits precision:
> (format t "#x~4,'0x" (to-ufloat16 4661))
> (from-ufloat16 #x9c8d)
up to the maximum unsigned 32-bit number #xffffffff:
> (format t "#x~4,'0x" (to-ufloat16 #xffffffff))
> (format t "#x~8,'0x" (from-ufloat16 #x07ff))
You could do something similar to represent signed 32-bit integers in 16 bits by using one of the bits as a sign bit.
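The packing and unpacking are easy to model in Python. In this sketch the names and the shift-based decode are mine (the hardware does the final step with a single ror); clz of zero wraps to 0 via the #x1f mask, just as in the assembler version:

```python
def to_ufloat16(n):
    """Pack a 32-bit unsigned integer into the 16-bit ufloat16 format."""
    e = (32 - n.bit_length()) & 0x1F      # leading-zero count; clz(0) wraps to 0
    f = ((n << e) & 0xFFFFFFFF) >> 21     # top 11 bits of the normalized value
    return (e << 11) | f

def from_ufloat16(u):
    """Unpack ufloat16 back to an (approximate) 32-bit unsigned integer."""
    e, f = u >> 11, u & 0x7FF
    # the hardware does this with a single rotate right by e + 11
    return (f << (21 - e)) & 0xFFFFFFFF if e <= 21 else f >> (e - 21)

assert to_ufloat16(2048) == 0xA400 and from_ufloat16(0xA400) == 2048
```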
Interleaving two integers - zip and unzip
The following example is a way to encode two small integers, such as a pair of coordinates, as a single compact integer. The encoding technique involves expressing the two numbers in binary, and then
interleaving the bitstrings, right-aligned, so their bits alternate.
The new zip and unzip instructions are ideal for this application. The zip instruction interleaves the upper and lower half of a register into the odd and even bits of the result, and unzip does the
reverse operation.
The function encode takes two integers of 16 bits or less, and interleaves them into a single integer:
(defcode encode (x y)
($pack 'a2 'a0 'a1)
($zip 'a0 'a2)
($ret))
For example:
> (encode 137 73)
24771
The function decode takes a single integer and decodes it into a list of the original two numbers. It uses a machine-code function unzip:
(defcode unzip (x)
($unzip 'a0 'a0)
($ret))
(defun decode (x)
(let ((u (unzip x)))
(list (logand u #xffff) (logand (ash u -16) #xffff))))
For example:
> (decode 24771)
(137 73)
The zip instruction is also useful for making double-width characters from bitmap fonts, by doubling each column of pixels.
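A software model of the interleaving (a hypothetical Python sketch of what pack + zip and unzip compute, done here bit by bit):

```python
def encode(x, y):
    """Interleave two 16-bit integers: x into the even bits, y into the odd bits."""
    r = 0
    for i in range(16):
        r |= ((x >> i) & 1) << (2 * i)
        r |= ((y >> i) & 1) << (2 * i + 1)
    return r

def decode(z):
    """Deinterleave a 32-bit integer back into the original pair (x, y)."""
    x = y = 0
    for i in range(16):
        x |= ((z >> (2 * i)) & 1) << i
        y |= ((z >> (2 * i + 1)) & 1) << i
    return x, y

assert encode(137, 73) == 24771
assert decode(24771) == (137, 73)
```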
Binomial random number generator - cpop
The next example shows how to generate random numbers with a binomial distribution. It uses the new cpop instruction (standing for population count) which counts the number of '1' bits in a register.
For example, suppose you tossed 20 coins and counted the number of heads. If you repeated this 2^20 times you would expect to get:
• No heads C(20,0) times, or once.
• 1 head C(20,1) or 20 times.
• 10 heads C(20,10) or 184756 times.
• 20 heads C(20,20) times, or once.
This is a binomial distribution.
To get a random number from 0 to 20 with a binomial distribution you can simulate the coin tossing by generating a 20-bit random number, and then counting the number of '1' bits. The cpop instruction
will do this:
(defcode popcount (n)
($cpop 'a0 'a0)
($ret))
For example:
> (popcount #b10101010101010101010)
10
The final binomial random number generator is then:
(defun binomial-random ()
(popcount (random #xfffff)))
Trying it out:
> (dotimes (x 20) (format t "~a " (binomial-random)))
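A Python model of the generator (a sketch: popcount is done with bin(...).count, and the RNG is seeded so the run is reproducible):

```python
import random

def popcount(n):
    """Count the 1 bits (the software equivalent of $cpop)."""
    return bin(n).count("1")

def binomial_random(rng):
    """A random number 0..20 with a binomial distribution: 20 coin flips."""
    return popcount(rng.randrange(1 << 20))

rng = random.Random(42)
samples = [binomial_random(rng) for _ in range(100000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))   # should be close to 20 * 0.5 = 10
```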
Summary of the extensions
Here's a summary of the extensions defined in the RISC-V RP2350 extensions file:
Operation                     Example                 Action                                Notes

Basic bit:
AND inverted operand          ($andn 'a0 'a1 'a2)     a0 = a1 & ~a2
Count leading zeros           ($clz 'a0 'a1)          a0 = number of leading 0s in a1
Count set bits                ($cpop 'a0 'a1)         a0 = number of 1s in a1               Popcount
Count trailing zeros          ($ctz 'a0 'a1)          a0 = number of trailing 0s in a1
Maximum                       ($max 'a0 'a1 'a2)      a0 = max(a1, a2)                      Signed integers
Unsigned maximum              ($maxu 'a0 'a1 'a2)     a0 = max(a1, a2)                      Unsigned integers
Minimum                       ($min 'a0 'a1 'a2)      a0 = min(a1, a2)                      Signed integers
Unsigned minimum              ($minu 'a0 'a1 'a2)     a0 = min(a1, a2)                      Unsigned integers
Bitwise OR-combine            ($orc.b 'a0 'a1)        byte of a0 = #xff if that byte of a1 has any bit set, else #x00
OR inverted operand           ($orn 'a0 'a1 'a2)      a0 = a1 | ~a2
Byte-reverse register         ($rev8 'a0 'a1)         a0 = a1 with bytes reversed
Rotate left                   ($rol 'a0 'a1 'a2)      a0 = a1 rotated left by a2            Only lower 5 bits of a2
Rotate right                  ($ror 'a0 'a1 'a2)      a0 = a1 rotated right by a2           Only lower 5 bits of a2
Rotate right immed.           ($rori 'a0 'a1 11)      a0 = a1 rotated right by imm          Only lower 5 bits of imm
Sign-extend byte              ($sext.b 'a0 'a1)       a0 = a1[7..0] sign extended
Sign-extend halfword          ($sext.h 'a0 'a1)       a0 = a1[15..0] sign extended
Exclusive NOR                 ($xnor 'a0 'a1 'a2)     a0 = ~(a1 ^ a2)
Zero-extend byte              ($zext.b 'a0 'a1)       a0 = a1[7..0] zero extended
Zero-extend halfword          ($zext.h 'a0 'a1)       a0 = a1[15..0] zero extended

Single bit:
Single-bit clear              ($bclr 'a0 'a1 'a2)     a0 = a1 & ~(1<<a2)                    Only lower 5 bits of a2
Single-bit clear immed.       ($bclri 'a0 'a1 8)      a0 = a1 & ~(1<<imm)                   Only lower 5 bits of imm
Single-bit extract            ($bext 'a0 'a1 'a2)     a0 = (a1>>a2) & 1                     Only lower 5 bits of a2
Single-bit extract immed.     ($bexti 'a0 'a1 8)      a0 = (a1>>imm) & 1                    Only lower 5 bits of imm
Single-bit invert             ($binv 'a0 'a1 'a2)     a0 = a1 ^ (1<<a2)                     Only lower 5 bits of a2
Single-bit invert immed.      ($binvi 'a0 'a1 8)      a0 = a1 ^ (1<<imm)                    Only lower 5 bits of imm
Single-bit set                ($bset 'a0 'a1 'a2)     a0 = a1 | (1<<a2)                     Only lower 5 bits of a2
Single-bit set immed.         ($bseti 'a0 'a1 8)      a0 = a1 | (1<<imm)                    Only lower 5 bits of imm

Cryptography:
Bit-reverse each byte         ($brev8 'a0 'a1)        a0 = a1 with bits reversed within each byte
Pack 2 halfwords              ($pack 'a0 'a1 'a2)     a0 = (a2<<16) | a1[15..0]             Lower 16 bits of a1, a2
Pack 2 bytes into halfword    ($packh 'a0 'a1 'a2)    a0 = (a2<<8) | a1[7..0]               Lower 8 bits of a1, a2
Deinterleave odd/even bits    ($unzip 'a0 'a1)        even bits of a1 to a0[15..0], odd bits to a0[31..16]
Interleave upper/lower half   ($zip 'a0 'a1)          a1[15..0] to even bits of a0, a1[31..16] to odd bits
1. ^ Since the top bit of F will always be a '1' (except in the case of zero) it can be omitted, to increase the precision to 12 bits. However, one value then has to be used to represent zero, which
makes the routines more complicated.
Block Diagram Algebra in control system
Hello friends, in this blog article, we will learn Block diagram algebra in the control system. It will include block diagram reduction rules, some block diagram reduction examples and solutions.
We know that the input-output behavior of a linear system is given by its transfer function: G(s)=C(s)/R(s)
where R(s) is the Laplace transform of the input variable and C(s) is the Laplace transform of the output variable.
A convenient graphical representation of such a linear system (transfer function) is called a block diagram, and the rules for manipulating these diagrams are known as block diagram algebra.
A complex system is described by the interconnection of the different blocks for the individual components. Analyzing such a complicated system requires simplifying the block diagram using
block diagram algebra. The table below shows some of the rules for block diagram reduction.
Block Diagram Reduction Rules
Block diagram reduction rules help you minimize a block diagram and thus solve its equations quickly. The table below presents the block diagram reduction rules used in control systems.
Using the above rules, follow these simple steps to solve a block diagram:
1. Combine all cascade blocks
2. Combine all parallel blocks
3. Eliminate all minor (interior) feedback loops
4. Shift summing points to left
5. Shift takeoff points to the right
6. Repeat steps 1 to 5 until the canonical form is obtained
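The three basic reductions can be sanity-checked numerically by treating each block as a function of s. A minimal Python sketch (not a symbolic solver; the function names are mine):

```python
def series(G1, G2):
    """Cascade: the overall transfer function is the product G1(s) * G2(s)."""
    return lambda s: G1(s) * G2(s)

def parallel(G1, G2):
    """Parallel branches into a summing point: G1(s) + G2(s)."""
    return lambda s: G1(s) + G2(s)

def feedback(G, H):
    """Negative feedback loop: G(s) / (1 + G(s) * H(s))."""
    return lambda s: G(s) / (1 + G(s) * H(s))

# Example: an integrator 1/s with unity negative feedback gives 1/(s + 1)
G = lambda s: 1 / s
closed = feedback(G, lambda s: 1)
print(closed(2.0))   # 1/(2 + 1) = 0.3333...
```

Evaluating both sides of a reduction at a few values of s is a quick way to check that a hand-derived simplification is correct.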
Block diagram reduction examples
Now we will see some block diagram reduction examples. We will start with some simple examples and then will solve a few complex ones.
Example 1:
In the below example, all the three blocks are in series (cascade). We just need to multiply them as G1(s)×G2(s)×G3(s).
Example 2:
In this example, two blocks are in parallel but there is one summing point as well.
Example 3:
Solve the below block diagram
Example 4:
Simplify the block diagram shown in Figure below.
Step 1: Moved H2 before G2
Step 2: H1 and G2 are in parallel, thus added them as below
Step 3: (H1+G2) and G3 are in series, thus multiplied them
Step 4: Moved takeoff point 2 after G3(G2+H1)
Step 5: Minimizing parallel block with a feedback loop
Step 6: Finally, we will get the minimized equation as below
I hope you liked this article. Please share this with your friends. Like our facebook page and subscribe to our newsletter to get daily updates. Please let us know about your queries in the comment
section below. Have a nice time :)
An Algorithm For Fuller's World Map
The general overview of the algorithm I use is:
1. Understand layout of Icosahedron on sphere.
2. Understand layout of map in plane.
3. Define a "standard" spherical triangle into which all sphere points will get mapped to.
4. Define a standard plane equilateral triangle into which all the standard spherical triangle points will get mapped to.
Then, for each point to be mapped:
1. Select a point P1 = (longitude, latitude) to be mapped to (x, y).
2. Determine which of the 20 Icosahedron spherical triangles the point P1 falls within.
3. Change the point's P1 coordinates to a corresponding point P2 in the "standard" spherical triangle.
4. Determine the arc *distances* d1, d2, d3 along the three sides of the "standard" spherical triangle which uniquely identifies the location of the point P2.
5. Use these same *distances* d1, d2, d3, along the flat edges of the standard *plane* equilateral triangle to define (at most) a small plane triangle. This small plane triangle will have (at most)
3 vertices V1, V2, V3.
6. Calculate the average of (at most) V1, V2, V3 to get the point P3.
7. Translate and rotate the point P3 onto the plane map. This depends on which Icosahedron spherical triangle the original point P1 was in.
We want to map any (longitude, latitude) coordinate of the World sphere to an (x, y) coordinate on the plane map.
We must first understand that the sphere is divided into 20 spherical triangles. This is "simply" the Icosahedron projected onto the world sphere. Fuller positioned the Icosahedron in a special way.
(See this web page for a table of coordinates for the Icosahedron's 12 vertices.)
Because each of the 20 spherical triangles is the same shape and size, we only need to understand how to transform an arbitrary point within a single triangle from (longitude, latitude) to (x, y).
So, the first step of the algorithm is to determine and remember which of the 20 spherical triangles the point is within.
This is accomplished by using the table of the 12 vertex coordinates and calculating the distance from the selected point P=(longitude, latitude) to each vertex.
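The distance computation in this step is an ordinary great-circle arc on the unit sphere. A minimal Python sketch (the function names are mine, not from the original program):

```python
import math

def to_xyz(lon, lat):
    """Convert (longitude, latitude) in degrees to a unit vector."""
    lon, lat = math.radians(lon), math.radians(lat)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def arc_distance(p, q):
    """Central angle (arc length on the unit sphere) between two points."""
    a, b = to_xyz(*p), to_xyz(*q)
    dot = sum(x * y for x, y in zip(a, b))
    return math.acos(max(-1.0, min(1.0, dot)))   # clamp against rounding

def nearest_vertices(p, vertices, k=3):
    """The k vertices closest to p; the containing face has the nearest three."""
    return sorted(vertices, key=lambda v: arc_distance(p, v))[:k]

print(arc_distance((0, 0), (90, 0)))   # a quarter turn: ~1.5708 (pi/2)
```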
Usage Note: My work is copyrighted. You may use my work but you may not include my work, or parts of it, in any for-profit project without my consent.
The ‘Balls Into Bins’ Process and Its Poisson Approximation
The ‘Balls Into Bins’ Process and Its Poisson Approximation
A simple, yet flexible process to model numerous problems
In this article, I want to introduce you to a neat and simple stochastic process that goes as follows:
m balls are thrown randomly into n bins. The target bin for each of the m balls is determined uniformly and independently of the other throws.
Sounds easy enough, right? However, some famous mathematical, as well as computer science problems can be described and analyzed using this process. Among them:
1. Birthday paradox: If there are m people in a room, what is the probability of two of them having the same birthday? We assume that the birthdays are uniformly distributed over n=365 days.
Translated into balls and bins: What is the probability that at least one of the bins contains at least two of the balls?
2. Coupon collector’s problem: A collector wants to collect all of n distinct stickers. Whenever he buys a package, he gets one sticker in a uniformly random and independent manner. What is the
expected number of packages he has to buy to collect all n stickers?
Translated into balls and bins: How many balls do we have to throw until all bins contain at least one ball in expectation?
3. Dynamic resource allocation: Important websites are not stored on merely a single but on n servers. That is because if one server crashes the website should still be available. One reason for a
crash might be that too many of m people use the same server trying to access the website. But how to prevent that? It’s impossible to oversee millions of people acting independently and properly
route them to servers with a low load in real-time, especially since these people don’t communicate with each other. One very easy way to do this is to uniformly distribute the users onto the n
servers. But is it good? What is the maximum load of any of these servers?
Translated into balls and bins: What is the (expected) maximum load across all bins after all balls were thrown?
Now, we will first analyze this problem and check out some interesting properties!
Quick Wins
Let us deal with the easy things at first. If you know a bit about probabilities, the following results should not surprise you.
The Number of Balls in a particular Bin
We have m balls. The probability of one ball landing in a particular bin is 1/n. Therefore, the number of balls Nᵢ bin i is binomially distributed with parameters m and 1/n for each i.
In particular,
• we expect m/n balls in this bin and
• the probability of the bin staying empty is (1–1/n)ᵐ.
The Number of Throws until a Ball lands in a particular Bin
Since each ball lands in bin i with probability 1/n, the number of throws until bin i is no longer empty is geometrically distributed with parameter 1/n.
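Both facts are easy to confirm empirically. A quick Monte Carlo sketch (names and parameters are chosen here; the RNG is seeded for reproducibility):

```python
import random

def throw_balls(m, n, rng):
    """Throw m balls into n bins uniformly; return the list of bin loads."""
    loads = [0] * n
    for _ in range(m):
        loads[rng.randrange(n)] += 1
    return loads

rng = random.Random(0)
m, n, trials = 50, 10, 20000
counts = [throw_balls(m, n, rng)[0] for _ in range(trials)]
mean = sum(counts) / trials                      # expect m/n = 5
p_empty = sum(c == 0 for c in counts) / trials   # expect (1 - 1/n)**m ~ 0.0052
print(round(mean, 2), round(p_empty, 4))
```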
We see that things are uncomplicated if you want to answer questions about single bins. However, most interesting questions, including the three aforementioned ones, involve statements about all bins
at once.
Any bin with two balls? Does every bin have at least one ball? What is the maximum load of any bin?
The problem is that the numbers of balls in the bins are stochastically dependent on each other.
To demonstrate this, imagine that all of the m balls have landed in bin 1. The number of balls in bins 2 to n is determined now: it is zero. This dependence makes tackling the three problems from the
introduction more difficult, but not impossible.
Harder Problems
Let’s take a look at the three problems from the introduction.
What is the probability that one bin contains two balls?
Probably you have seen this one in the birthday paradox setting with n=365 and something like m=23. We denote the event “all bins contain less than two balls” as E, the counter-event of what we
actually ask for. Then you can argue like this:
• After the first throw, the probability for E is 1=1–0/n.
• The second throw is not allowed to land in the bin of the first ball, thus the probability for E after two throws is (1–0/n)*(1–1/n).
• The third throw is not allowed to land in the bins of the first two balls, thus the probability for E after three throws is (1–0/n)*(1–1/n)*(1–2/n).
• …
In total, we get
P(E) = (1 – 0/n) · (1 – 1/n) · … · (1 – (m–1)/n).
A small sanity check: For m>n the probability should be zero according to the pigeonhole principle. The formula above reflects this since the factor for m=n+1 is zero, thus the whole product is.
So the answer to our initial question is 1-P(E).
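The exact probability is a one-line product in Python (a small sketch; the classic n = 365, m = 23 case lands just above one half):

```python
from math import prod

def p_collision(m, n):
    """Probability that at least one of n bins receives two or more of m balls."""
    return 1 - prod(1 - i / n for i in range(m))

print(round(p_collision(23, 365), 4))   # the classic birthday paradox: 0.5073
```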
How many balls to throw until all bins contain a ball in expectation?
A simple argument is as follows:
• The number X₁ of throws until one bin is filled is Geo(1) distributed, i.e. after a single throw, one bin will be filled, obviously.
• From there, the number X₂ of throws until some second bin contains a ball is Geo(1–1/n) distributed since with probability 1/n a ball lands in the first filled bin again. But with probability 1–1
/n, the ball lands in some other bin, the second filled bin.
• From there, the number X₃ of throws until some third bin contains a ball is Geo(1–2/n) distributed since with probability 2/n a ball lands in the first or second filled bin again. But with
probability 1–2/n, the ball lands in some other bin, the third filled bin.
• …
Since a Geo(p) distributed random variable has a mean of 1/p, the expected number of throws until all n bins are filled is
E[X₁ + X₂ + … + Xₙ] = n/n + n/(n–1) + … + n/1 = n · (1/1 + 1/2 + … + 1/n) = n · Hₙ,
where the sum Hₙ is called the harmonic sum. For large n, it can be approximated by
Hₙ ≈ ln(n) + γ,
where γ ≈ 0.5772 is the Euler–Mascheroni constant. So, in total, the expected number of throws is about n·(ln(n) + 0.5772).
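A quick check of the harmonic-sum formula against its approximation (a small Python sketch):

```python
from math import log

def expected_throws(n):
    """Exact expectation: n times the n-th harmonic number."""
    return n * sum(1 / i for i in range(1, n + 1))

def approx_throws(n):
    """Large-n approximation n * (ln(n) + gamma)."""
    return n * (log(n) + 0.5772)

print(round(expected_throws(365)))   # ~2365 packages for n = 365 stickers
```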
If you studied maths, you have probably encountered both problems and knew their solution already. At least it was like this for me. The third problem and its solution, however, surprised me.
What is the expected maximum load across all bins?
I will give you the solution to this problem without proof. Let Mₘ,ₙ be the expectation of the maximum in any bin. From the paper “Balls into Bins” — A Simple and Tight Analysis by Martin Raab and
Angelika Steger, it follows for large n:
Mₘ,ₙ ≈ m/n + √(2·(m/n)·ln(n)) when m is much larger than n·ln(n), and Mₙ,ₙ ≈ ln(n)/ln(ln(n)) when m = n.
Especially the case for m=n got me. What I expected was a constant, maybe. If I throw a million balls into a million bins, I expect a few bins to stay empty, so of course, the maximum load should be
larger than one. But here we see that it’s not just a constant, but a slowly growing function in n.
Now, what I want to tell you is the following: Solving these problems was definitely harder than the problems that deal with a single bin only. Especially for computing the expected maximum load, a
lot of technical arguments are needed.
What I want to show you now is a method to deal with the balls into bins problem in a general way. As with everything in life, this comes with a price, unfortunately — looser bounds for the results
we derive with this method. Furthermore, we cannot answer every question using this method. Let us see what this means.
The Poisson Approximation
The Poisson approximation method lets us upper-bound the probability of events in the balls into bins setting in an easy way.
The Method
The awesome book “Probability and Computing” by Michael Mitzenmacher and Eli Upfal recommends the following steps:
1. Pretend that the number of balls in each bin is independently Poisson distributed with parameter m/n.
2. Calculate the probability q of some event of your interest in this easier setting.
3. For the probability p of this event in the real balls into bins setting we have
p ≤ e·√m · q,
where we call the factor e·√m (“e times the square root of m”) the penalty factor.
Great, right? This implies for example, that rare events in the Poisson setting are also rare in the original setting.
This procedure is great because we don’t have to care about dependencies among the number of balls in the bins anymore. Everything is independent and following a simple Poisson distribution.
Of course, this inequality is not always meaningful, e.g. if q is a fixed number like 0.5. But if you can apply it, you can easily develop upper bounds for rare events without putting too much effort
into the analysis.
You might ask: Why Poisson? And why m/n? A hand-waving argument is the following: We have said that the number of balls in a bin is Bin(m, 1/n) distributed. Such a distribution can be approximated
(under certain conditions) using a Poisson distribution with the same mean, which is m/n.
Application to a New Problem
Let us consider the extended birthday problem as an easy example to solve with this method.
There are 20 people in a room. What is — at most — the probability that 3 of them share the same birthday?
This problem is hard to solve analytically. Try it yourself. It is therefore a good testing ground for the Poisson approximation. Framed in the balls and bins setting:
20 balls are thrown into 365 bins. What is — at most — the probability of having 3 balls in one bin?
So, let's do it the Poisson way. In the independent Poisson setting, with m = 20 and n = 365, the probability of some bin holding at least 3 balls comes out to q ≈ 0.96%.
Multiplying this by the penalty factor e√20 ≈ 12.2, we get an upper bound of roughly 12%.
Nice! This bound, however, is extremely loose. According to this site, the real probability is about 1%. This is due to the square root factor, which blows up the probability a lot.
But luckily, there is another upper bound without a square root factor! Also from the book “Probability and Computing”:
If the considered probability is increasing (or decreasing) in m, the penalty factor is 2.
If you think about it, you will see that we are in the increasing case. The more persons we have, the likelier it is to have 3 with the same birthday. Using the lower penalty factor, the Poisson
approximation yields an upper bound of only 1.9%, which is close to the real probability of around 1%. Perfect!
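The whole calculation is easy to reproduce. Here is a small Python sketch of the Poisson-approximation computation above (standard library only; the rounded figures in the comments are mine):

```python
import math

m, n = 20, 365          # balls (people) and bins (days of the year)
lam = m / n             # Poisson parameter for a single bin

# P(a single Poisson(lam)-distributed bin holds at least 3 balls)
p_bin = 1 - math.exp(-lam) * (1 + lam + lam**2 / 2)

# Bins are independent in the Poisson setting, so the probability
# that some bin holds at least 3 balls is:
q = 1 - (1 - p_bin) ** n

bound_generic = math.e * math.sqrt(m) * q   # penalty factor e * sqrt(m)
bound_monotone = 2 * q                      # penalty factor 2 (monotone event)

print(f"q = {q:.4f}")                            # q ≈ 0.0096
print(f"generic bound  = {bound_generic:.3f}")   # ≈ 0.116
print(f"monotone bound = {bound_monotone:.4f}")  # ≈ 0.0191
```

The factor-2 bound of about 1.9% agrees with the figure quoted above, while the generic e√m penalty inflates the estimate to roughly 12%.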
We have seen an introduction to the balls into bins problem, which arises in several settings that do not seem related at first sight. It connects the birthday paradox with the coupon collector’s
problem, for example.
Computing probabilities in this process can be cumbersome. Therefore, we have taken a look at the Poisson approximation for the balls into bins process. This approximation allows for easy
computations of upper bounds of probabilities. However, these bounds might be quite bad since we get a rather large penalty factor. But there are also problems that require a smaller penalty factor
of 2, as we have seen in the last example.
You can now frame some of your challenges as balls into bins problems and bound probabilities in this process using the Poisson approximation!
I hope that you learned something new, interesting, and useful today. Thanks for reading!
If you have any questions, write me on LinkedIn!
KAILASH and the SYSTEM of PYRAMIDS.
Kailash is a sacred Tibetan mountain shrouded in mystery and legend. It is perhaps the only peak that no one has ever climbed. Approaching Kailash is not only prohibited but dangerous: in the immediate vicinity of the mountain, time is said to flow much faster, and people who have gone to the mountain have often not returned.
Kailash has a pyramidal shape close to regular, and there is little doubt that it is a pyramid. The idea that the Kailash pyramid can be linked with other pyramids on our planet was first voiced in the 1980s. The indefatigable researcher Ernest Muldashev proposed his own system of pyramids and described it in his book "In Search of the City of the Gods". But, probably because there was no Google Earth in those days, he had to build his lines using a globe and a thread, and his "system" was, to put it mildly, approximate. In the main, however, he was right. Mount Kailash really is an important point of the System of Ancient Monumental Structures (SAMS); it is directly connected with the main pyramids of the Earth and with other key objects.
It would probably be more correct to say the opposite: these ancient structures and pyramids are associated with Mount Kailash, because the mountain existed long before the pyramids. There is no doubt that, via certain points, the System is connected to Mount Kailash.
Look at the location of Kailash relative to other objects. The first thing we see is that Kailash lies on the meridian of Teotihuacan: on the opposite side of the Earth, the meridian of the Mexican pyramids passes through the sacred mountain with an error of less than 14 km.
Therefore (see the article "Geodesy of Meridians"), the Kailash - Teotihuacan meridian is separated from the meridian of the Great Pyramid (GP) by 50 degrees east along the equator. In addition, the point of Kailash divides the segment on the meridian of Teotihuacan in the ratio 5:13. This means that from Mount Kailash to the antipode of Teotihuacan there are 50 (50.5) degrees along the arc of the meridian, and to the Teotihuacan complex itself 130 (129.3) degrees.
Moreover, the equator divides the distance between the antipode of Teotihuacan and Mount Kailash in the ratio 2:3. And the point of Kailash lies at a latitude of 31 degrees, which is 1 degree above the latitude of the Great Pyramid.
Next, we consider how the point of Kailash interacts with the Giza triangles. The Perm point of this system is equidistant from Kailash and Mohenjo-Daro and is the apex of an isosceles triangle formed by these points. In this triangle the side Perm - Kailash is 3640 km and serves as a base distance for the Giza triangles.
The base of the triangle, Kailash - Mohenjo-Daro, with a length of 1,346 km, is exactly 12 degrees along the arc of the globe. Thus, the distance from Mohenjo-Daro to Kailash is exactly 1/30 of the circumference of the Earth. The line Kailash - Mohenjo-Daro itself also has very interesting properties, which we will discuss later.
Now pay attention to the side Perm - Kailash. This line passes exactly through Lake Turgoyak with its island of Vera, where the famous megaliths stand. The island of Vera is also equidistant from Kailash and from Mohenjo-Daro, forming another isosceles triangle, with sides of 3160 and 3157 km respectively.
This combination of triangles once again confirms the validity of the construction of the Perm point in the Giza triangle, and does so in a completely independent manner, through the point of Kailash.
But let us go further. Now the side of the triangle island of Vera - Kailash becomes the base for another isosceles triangle, with its apex at another mysterious place in Russia, the fortress of Por-Bajin, which is equidistant from the island of Vera and from Kailash; the distances are 2530 and 2546 km respectively.
The combination of Kailash triangles is completed by a unique isosceles triangle, whose presence completely rules out a random arrangement of all these objects.
It turns out that Por-Bajin and the White Pyramid (the Qianling Tomb), the biggest pyramid in China, are in turn equidistant from Kailash. Together they form an isosceles triangle with its apex at Kailash. The middle line of the triangle, which is its height, points in the direction of Great Zimbabwe in Africa, a key point of SAMS.
Moreover, the base of the triangle, that is, the line Por-Bajin - White Pyramid, points in the direction of yet another reference point of the system, Uluru.
The image below shows how the whole system of Kailash triangles looks.
Now back to the line Mohenjo-Daro - Kailash. In the picture above it appears that the side of the triangle Kailash - White Pyramid is a continuation of this line, but this is not the case.
It is known that there are hundreds of pyramids in China, all compactly located south of the city of Xi'an. The White Pyramid (the Qianling Tomb), which forms the triangle with Kailash and Por-Bajin, lies somewhat apart from this cluster, a little farther north.
It turns out that the line (orthodrome) Mohenjo-Daro - Kailash is tangent to the latitude of Baalbek (that is, it reaches its maximum latitude at the latitude of Baalbek), and the point of tangency is right next to the main cluster of Chinese pyramids.
The line crosses the latitude of Baalbek simultaneously with the meridians of the Nazca lines and Tiwanaku. All these nuances are best seen on the interactive map at the end of this page.
In addition, the line crosses the tropics on the meridian of Nan Madol, the latitude of Teotihuacan on the meridian GP+135* (90*+45*), the meridian of Lalibela at the latitude of Angkor, and the latitude of the Nazca geoglyphs (15*) on the perpendicular to the meridian of Uluru.
It is easy to see that it is largely the same objects that interact with each other in this construction. If we remember that the Avenue of the Dead at Teotihuacan is precisely oriented towards Mohenjo-Daro, this becomes further evidence of a direct link between these ancient structures.
The line Kailash - Great Zimbabwe, which is the height of the isosceles triangle, also has very interesting properties and is clearly related to the 45-degree parameters of the Earth.
This image is centered on the meridian offset 45 degrees to the east of the Great Pyramid. The line Kailash - Great Zimbabwe crosses this meridian at the "anti" latitude of Easter Island. The meridian GP+35 degrees crosses the line at the "anti" latitude of Samaipata, and the meridian GP+55* at the latitude of Baalbek.
Of course, on the far side of the planet the line crosses the latitudes of the real objects: it intersects latitude 45 degrees on the meridian of Tiwanaku, and the meridian of Baalbek at the latitude of the Nazca lines.
In addition, the line Kailash - Great Zimbabwe is crossed by the "anti" latitude of Great Zimbabwe and by the perpendicular to the meridian of Nan Madol, on which is situated the city of …
You might also notice that the middle line of the triangle island of Vera - Por-Bajin - Kailash intersects the meridian GP+45* at the 45th latitude, and that at the meridian GP+35* the line Perm - Mohenjo-Daro intersects the latitude of Baalbek.
In this article we have looked at one more subsystem of isosceles triangles that key sites form together with Kailash, and at its interaction with the main system. It turns out that Mount Kailash, thanks to the island of Vera, Por-Bajin and the Chinese pyramids, connects the Giza triangles with the western triangles, combining them into a single System of Ancient Monumental Structures (SAMS).
To assume that such a mutual arrangement of objects is random would be contrary to common sense. But the fact of a lawful system in the location of ancient structures looks even more fantastic. If you accept it, the very basis of our history will have to be reconsidered. And yet, there it is. In a word: "Believe your eyes".
PIIKA 2 input guide
This page describes the various input files and parameters associated with using PIIKA 2.
Step #1: Input files
Main input file (required)--- The main input file must be a file in tab-delimited text format. The format of the input file must follow these rules.
• The first row must contain column headings.
□ The first two column headings must be "Peptide" and "Accession", respectively.
□ Subsequent column headings correspond to arrays in your experiment, and should be labeled in groups of two, with the same name for each, to correspond with the foreground and background
intensity readings for a single array.
☆ For example, in the sample input file, the first array is called "A-1", so the third and fourth column headings are each labeled "A-1".
• If your arrays contain n technical replicates for each peptide, then each subsequent set of n lines (after the header line) must contain the data for those replicates, one replicate per line.
□ For example, the arrays used to produce the data in the sample input file contained 9 technical replicates, so lines 2-10 of the file contain the data for the first peptide, lines 11-19
contain the data for the second peptide, and so on.
• Within a line (other than the first line):
□ The first two columns contain the peptide name and accession number of the protein corresponding to that peptide, respectively (as suggested by the column headings).
□ The next two columns contain the foreground and background intensity values, respectively, for the first array; the next two columns contain these values for the second array, and so on.
Important note: If all of the arrays in your experiment correspond to different treatments/controls, then the order of the columns (except for the first two) is unimportant. However, if you have
arrays from, say, multiple animals that all received the same treatment (biological replicates), then the data for the arrays corresponding to the same treatment must be in adjacent columns. Although
the 4 arrays corresponding to each subject in our sample data were not grouped together for analysis purposes, suppose that we did want to group them together (for, say, clustering). Then all of the
arrays corresponding to subject A must appear together, and the same for the other subjects, as shown in the sample file.
The following figure illustrates the use of the above rules using a portion of the sample input file.
What is the best way to make a file like this? That depends on the format of the files produced by your image analysis software. If you are familiar with programming and/or UNIX utilities, you can
write a program that takes those files as input and outputs a file in the format specified above. If you do not have this expertise, the best method would probably be to construct the file in Excel
(or the spreadsheet program of your choice). To use Excel, follow these general guidelines:
1. Open a new Excel file.
2. Save the file in tab-delimited text (.txt) format (call the file main_input_file.txt).
3. Enter "Peptide" in cell A1 and "Accession" in cell B1.
4. Import the first file from your image analysis software (corresponding to the first array in your experiment) into Excel as a separate spreadsheet.
5. Copy and paste the names of the peptides and the accession numbers from this file into columns A and B, respectively, of main_input_file.txt starting at row 2.
6. Copy and paste the foreground and background measurements in this file into columns C and D, respectively, of main_input_file.txt starting at row 2
7. Put appropriate column headings in cells C1 and D1.
8. Repeat steps 4, 6, and 7 for each array, putting the new data into successive columns, making sure that the order of the peptides are identical for each array.
If you have difficulty creating a file in the appropriate format, please e-mail the author for assistance.
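For those who prefer scripting to Excel, the main input file can also be assembled programmatically. The sketch below writes a minimal two-array, two-peptide file in the required format; the peptide names, accession numbers, and intensity values are invented purely for illustration:

```python
import csv

# Toy measurements for two arrays over the same two peptides, in the form
# (peptide, accession, foreground, background). In practice these values
# would come from your image-analysis exports.
arrays = {
    "A-1": [("pep1", "P001", 1500, 210), ("pep2", "P002", 980, 190)],
    "A-2": [("pep1", "P001", 1320, 205), ("pep2", "P002", 1015, 180)],
}

with open("main_input_file.txt", "w", newline="") as out:
    w = csv.writer(out, delimiter="\t")
    # Each array name appears twice: once for the foreground column,
    # once for the background column.
    header = ["Peptide", "Accession"]
    for name in arrays:
        header += [name, name]
    w.writerow(header)
    # One row per peptide; the peptide order must match across arrays.
    for i in range(len(arrays["A-1"])):
        peptide, accession = arrays["A-1"][i][:2]
        row = [peptide, accession]
        for rows in arrays.values():
            row += [rows[i][2], rows[i][3]]  # foreground, background
        w.writerow(row)
```

With intra-array technical replicates you would emit n consecutive rows per peptide instead of one, as described above.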
Treatment-control combinations file (optional)---This file specifies the treatment-control combinations. T-tests will be performed for each peptide for each treatment-control combination specified in
this file, and these combinations will also be used for biological subtractions. This file must be a tab-delimited text file, with one treatment-control combination specified per line. Each line
should contain the number corresponding to the treatment, followed by a tab, followed by the number corresponding to the control. This "number" refers to the order of the treatments as they appear in
the main input file. For example, if you were using the sample dataset and set "Number of inter-array replicates" to be 1, then the line "1<tab>2" in this file would mean a comparison between A-1 and
A-2. If, however, "Number of inter-array replicates" is 4, then the line "1<tab>2" would mean a comparison between subject A (all four samples combined) and subject B (all four samples combined).
The following figure illustrates the features of the example treatment-control combinations file.
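Since the format is just one tab-separated pair per line, the file is easy to generate from a list of comparisons; the pairs below are hypothetical:

```python
# Each tuple is (treatment number, control number), where the numbers refer
# to the order in which the treatments appear in the main input file.
combinations = [(1, 2), (3, 2), (4, 2)]

with open("combinations.txt", "w") as f:
    for treatment, control in combinations:
        f.write(f"{treatment}\t{control}\n")
```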
Treatment-control combinations for P-value visualizations file (optional)--- This file specifies the treatment-control combinations for constructing P-value visualization files. It must be a
tab-delimited text file similar to the one described above, except two treatment-control combinations must be specified per line. The first two columns correspond to the first treatment-control
combination, and are used for the left semicircle in each circle in the visualization file, while the third and fourth columns correspond to the second treatment-control combination and are used for
the right semicircle. Any treatment-control combination listed in this file must also appear in the "Treatment-control combinations file" described above.
The following figure illustrates the features of the example P-value visualizations file.
Step #2: Required parameters
Number of technical replicates per unique peptide on the same array---the number of technical replicates corresponding to the same peptide sequence on a single array. The value of this parameter
would be 9 for the sample data.
Number of treatments---The number of unique biological treatments in your experiment. If you do not have any inter-array replicates (either technical or biological), then this will be equal to the
number of arrays. If you do have inter-array technical replicates, then this will be equal to the number of arrays divided by the number of inter-array replicates per treatment. The value of this
parameter would be 24 for the sample data.
Number of unique peptides on the array---The number of unique peptide sequences on the array. This is equal to the total number of spots on the array divided by the number of intra-array technical
replicates per peptide. The value of this parameter would be 297 for the sample data.
Number of inter-array replicates---The number of inter-array replicates (either biological or technical) per treatment. The value of this parameter would be 1 for the sample data.
Depending on the nature of your data, by specifying different values for the above parameters, your data will be analyzed in a different way. For example, suppose that for the sample data you choose the following values for the above parameters rather than the ones specified above:
• Number of technical replicates per unique peptide on the same array: 9
• Number of treatments: 6
• Number of unique peptides on the array: 297
• Number of inter-array replicates: 4 (and choose "biological" in the drop-down box).
In this case, for each subject the normalized intensity values for the 4 time points corresponding to that subject will be averaged together. As an example of a consequence of this, the heatmap will
have only 6 columns (corresponding to the 6 treatments/subjects) rather than 24.
Step #3: Optional parameters
Distance metric for hierarchical clustering---The distance metric to use when performing hierarchical clustering. Choices are (1 - Pearson correlation) (default) and Euclidean distance.
Linkage method for hierarchical clustering---The linkage method to use when performing hierarchical clustering. Choices are McQuitty linkage (default), average linkage, and complete linkage.
Perform chi-square test?---If yes, then the chi-square test will be performed to identify peptides with inconsistent phosphorylation patterns among the technical replicates on each array. As a
result, PIIKA 2 will output extra t-test files containing only peptides that are consistently phosphorylated in both the treatment and the control, and also omits from the heatmaps and PCA analyses
any peptides that are not consistently phosphorylated on any of the arrays.
Perform F test?---If yes, then the F test will be performed to identify peptides with inconsistent phosphorylation patterns among the biological replicates. The implications of this option are
analogous to that of the "Perform chi-square test?" option.
Perform biological subtraction before performing F test?---If yes, then biological subtraction will be performed on each treatment-control combination before performing the F test.
Perform random tree analysis?---If yes, then the analysis described under the heading "Statistical significance of the clustering of a priori groups" in the PIIKA 2 paper will be performed. In order to perform this analysis, your samples must be named such that PIIKA 2 can tell which samples are in the same group. To do this, your sample names (column names in the main input file) must have a hyphen in them, where everything before the hyphen defines the name of the group, and everything after the hyphen is a number (letters are not allowed) that is unique to that sample. For example, in the sample data, the samples corresponding to subject A are labeled "A-1", "A-2", "A-3", and "A-4", and similarly for the other subjects.
Perform peptide subset analysis?---If yes, then the analysis described under the heading "Identifying sets of peptides that support the clustering of a priori groups" will be performed. For this
analysis to work correctly, the same sample naming format as described above must be used.
Value of alpha (false positive rate) for statistical significance testing---The P-value threshold for describing a peptide as differentially phosphorylated between a treatment and a control.
Estimated background probability that a peptide will be differentially phosphorylated---This value is used for calculating positive and negative predictive values; see the "Positive and negative
predictive values" section of the manuscript describing PIIKA 2 for details.
Step #4: E-mail address
Enter your e-mail address so we can send you an e-mail when your job has finished running. This e-mail will contain a link enabling you to download the results.
Subset of Null Set
SUBSET OF NULL SET
The null set has exactly one subset: { } itself.
More clearly, the null set is the only subset of itself. But it is not a proper subset, because { } = { }.
Conversely, a set which has only one subset must be the null set.
Apart from the stuff "Subset of null set", let us know some other important stuff about subsets of a set.
Subset of a Set
A set X is a subset of set Y if every element of X is also an element of Y.
In symbol we write
X ⊆ Y
Reading Notation :
Read ⊆ as "X is a subset of Y" or "X is contained in Y".
Read ⊈ as "X is a not subset of Y" or "X is not contained in Y".
Proper Subset
A set X is said to be a proper subset of set Y if X ⊆ Y and X ≠ Y.
In symbol, we write X ⊂ Y.
Reading Notation :
Read X ⊂ Y as "X is proper subset of Y".
The figure given below illustrates this.
Power Set
The set of all subsets of A is said to be the power set of the set A.
Reading Notation :
The power set of A is denoted by P(A).
Super Set
If a set X is a subset of a set Y, that is, if X ⊆ Y, then Y is said to be a superset of X.
In symbol, we write Y ⊇ X.
Formula to Find Number of Subsets
If A is the given set and it contains n number of elements, we can use the following formula to find the number of subsets.
Number of subsets = 2^n
Formula to find the number of proper subsets :
Number of proper subsets = 2^n - 1
Cardinality of Power Set
We already know that the set of all subsets of A is said to be the power set of the set A and it is denoted by P(A).
If A contains "n" number of elements, then the formula for cardinality of power set of A is
n[P(A)] = 2^n
Note :
Cardinality of power set of A and the number of subsets of A are same.
Null Set is a Subset or Proper Subset
Null set is a proper subset for any set which contains at least one element.
For example, let us consider the set A = {1}.
It has two subsets. They are { } and {1}.
Here null set is proper subset of A. Because null set is not equal to A.
Solved Problems
Problem 1 :
Let A = {1, 2, 3, 4, 5} and B = {5, 3, 4, 2, 1}. Determine whether B is a proper subset of A.
Solution :
If B is a proper subset of A, every element of B must also be an element of A, and B must not be equal to A.
In the given sets A and B, every element of B is also an element of A. But B is equal to A.
Hence, B is a subset of A, but not a proper subset.
Problem 2 :
Let A = {1, 2, 3, 4, 5} and B = {1, 2, 5}. Determine whether B is a proper subset of A.
Solution :
If B is a proper subset of A, every element of B must also be an element of A, and B must not be equal to A.
In the given sets A and B, every element of B is also an element of A.
And also, B is not equal to A.
Hence, B is a proper subset of A.
Problem 3 :
Let A = {1, 2, 3, 4, 5} find the number of proper subsets of A.
Solution :
Let the given set contains "n" number of elements.
Then, the formula to find number of proper subsets is
= 2^n - 1
The value of n for the given set A is 5.
Because the set A = {1, 2, 3, 4, 5} contains five elements.
Number of proper subsets = 2^5 - 1
= 32 - 1
= 31
Hence, the number of proper subsets of A is 31.
Problem 4 :
Let A = {1, 2, 3} find the power set of A.
Solution :
We know that the power set is the set of all subsets.
Here, the given set A contains 3 elements.
Then, the number of subsets = 2^3 = 8.
P(A) = {{1}, {2}, {3}, {1, 2}, {2, 3}, {1, 3}, {1, 2, 3}, { }}
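The power set and the counting formulas above are easy to verify with a short Python sketch:

```python
from itertools import combinations

def power_set(s):
    """Return the set of all subsets of s, each subset as a frozenset."""
    items = list(s)
    return {frozenset(c)
            for r in range(len(items) + 1)
            for c in combinations(items, r)}

A = {1, 2, 3}
P_A = power_set(A)

print(len(P_A))            # 2^3 = 8 subsets in total
print(len(P_A) - 1)        # 2^3 - 1 = 7 proper subsets
print(frozenset() in P_A)  # True: the null set is a subset of every set
```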
Problem 5 :
Let A = {a, b, c, d, e} find the cardinality of power set of A.
Solution :
The formula for cardinality of power set of A is given below.
n[P(A)] = 2^n
Here n stands for the number of elements contained by the given set A.
The given set A contains five elements. So n = 5.
Then, we have
n[P(A)] = 2^5
n[P(A)] = 32
Hence, the cardinality of the power set of A is 32.
Anyone else struggle with the lighting today, my eyes are still going haywire now!
From nigh on the back row of the Itchen North, I made the mistake of looking directly at the lights over the Kingsland - the brightness was retina burning..
Very annoying, i can still see them!
Spent most of the match with spots before my eyes. Every time I looked up the glare was painful. I recommend sunglasses for an evening match!
Ah, but it's so much better for viewing on the tellybox don't you know. Which after all is the most important thing!
First football stadium seen from ISS?
It was fine. Why were you gimps staring at the lights?
It's ok guys, I used to work in lighting. After a few hours of use they will dim, people always used to ring up when new LED lighting was first installed anywhere. It won't always be as bright as
tonight ;-)
Irritatingly bright but not very illuminative, if you get what I mean. They have a very narrow angle of illumination and I found them intrusive but I was sitting in row CC in the north east corner
instead of my usual row O on the half way line.
It's ok guys, I used to work in lighting. After a few hours of use they will dim, people always used to ring up when new LED lighting was first installed anywhere. It won't always be as bright as
tonight ;-)
In which case shouldn't they have left them on for hours in the build up to today's game to reduce the impact on people's eyes?
In which case shouldn't they have left them on for hours in the build up to today's game to reduce the impact on people's eyes?
yes they should have put them on for a few hours over the last couple of evenings for sure.
It was fine. Why were you gimps staring at the lights?
I was more concerned by the fact that people had turned up in flip flops
It's ok guys, I used to work in lighting. After a few hours of use they will dim, people always used to ring up when new LED lighting was first installed anywhere. It won't always be as bright as
tonight ;-)
LMFAO..... I'm guessing that's why you USED to work in the industry.
They won't dim over time.
In which case shouldn't they have left them on for hours in the build up to today's game to reduce the impact on people's eyes?
Maybe they did.
LMFAO..... I'm guessing that's why you USED to work in the industry.
They won't dim over time.
I use tens of millions of LEDs a year and mine are not the over-driven lighting type. They'll go dim.
I use tens of millions of LEDs a year and mine are not the over-driven lighting type. They'll go dim.
Whitey - I could be (very likely) being really thick here but that's over my head
LMFAO..... I'm guessing that's why you USED to work in the industry.
They won't dim over time.
you'll see
you'll see
But they've already had 14 hours use .....are you still expecting them to get dimmer
But they've already had 14 hours use .....are you still expecting them to get dimmer
Lets put it like this, a street light, on for about 10-12 hours a night would take at least a week to dim, these will need a lot more than 14 hours.
Lets put it like this, a street light, on for about 10-12 hours a night would take at least a week to dim, these will need a lot more than 14 hours.
It's ok guys, I used to work in lighting. After a few hours of use they will dim, people always used to ring up when new LED lighting was first installed anywhere. It won't always be as bright as
tonight ;-)
Ok Gemmel, have it your way, they will stay that bright, if not get brighter and thousands of people will be blinded by these new lights at St Mary's
Better ?
Can we play the lights 12 hours of Talksport? Guaranteed they will be dimmer as a result.
Can we play the lights 12 hours of Talksport? Guaranteed they will be dimmer as a result.
LOL - About the only thing that would
We had a trip to row MM in the Kingsland and the chit chat in the second half was how we wouldn't want to be sat this high every week as the lights intruded on your vision.
Row V Itchen North, didnt notice them at all.
Whitey - I could be (very likely) being really thick here but that's over my head
Sorry, of course you're not. In order to get the maximum brightness out of these LEDs they run them at the maximum current possible which means they tend to run hot and need special device packaging
to get rid of all the heat. The hotter any electronic device gets the more unreliable they generally become but LEDs in particular suffer from various progressive degradations and lose brightness
over time. That's if they don't fail altogether. They're getting better though
The ones we use are for electronic signage and are driven at more sustainable currents which is what I meant when I said that these were over driven.
I think if you stare at any floodlight (LED or not) you are going to get dazzled. Nothing wrong with the lighting today IMHO. Mountain out of a molehill thread.
I just wish they would do something with the PA system, in the Northam stand it is terrible unless you are in the loo you cant hear a bloody thing.
It was fine. Why were you gimps staring at the lights?
Well that's an unnecessary response.
I don't think anyone was staring at the lights. Sometimes in football the ball goes quite high, and I found it dazzled me when the ball crossed the light. That's all.
Glad to hear that they will dim over time.
What would be the point of a floodlight that "dims over time" ?
What would be the point of a floodlight that "dims over time" ?
the dazzle and the glare dims, not the wattage/power of the light.
Apart from the brightness (there seemed to be far fewer lights overall so I would imagine that's why the ones there were were brighter) I thought they were positioned quite strangely, they didn't
seem equidistant or in any obvious pattern. We won't get the true effect until they're used in "proper" dark though, the players only had one shadow on Saturday evening, and that was from the big
bright round thing in the sky (and I don't mean the Nike Ordem Premier League match ball).
**** me. Some people will moan about anything.
faster than light?
Re: faster than light?
so achieving the speed of light is an illusion ! How ever much energy we pump in we will still be at a standstill.
In this case.. At what point, when travelling away from the Earth, does it stop being a point of reference and what can we expect of the perceived distance to the next star at this time ?
Re: faster than light?
Quote (thenumbersix):
So achieving the speed of light is an illusion! However much energy we pump in we will still be at a standstill.
No, you will be moving at some velocity relative to your original reference frame (where you were before you began to move.) You cannot achieve lightspeed as measured from any reference frame
though. If you are moving at 99% of C, then that's what someone in your original reference frame will measure your speed to be. If, at this time, you shine a light at the wall toward
your direction of travel, you will see that the light moves at the speed of light because you are your own reference frame. Some guy not moving with your speed that is positioned outside your ship
looking in (say a guy with a very good telescope back on Earth) will also see your beam moving at the speed of light C. Not C+99% C like you might think.
Quote (thenumbersix):
In this case... At what point, when travelling away from the Earth, does it stop being a point of reference, and what can we expect of the perceived distance to the next star at this time?
If you left from the earth, then the earth is your original reference frame. When you begin to move relative to the Earth, then you have created a new (your own) reference frame. Nothing will change
the distance (perceived or real) to the next star. Contraction of length only occurs in your reference frame, and then only when observed from outside your reference frame (the guy with the telescope, say).
Einstein used a good example to illustrate how we make our own reference frame. Apparently he lived at some time near a canal that was connected to the ocean. You could sit on the bank of the canal
and watch waves from the ocean move up the canal from right to left. The waves moved at a slow pace that could be matched by jogging next to the canal. He noticed that, as he jogged in the direction
that the waves moved, if he looked only at the water surface, it appeared to him that the canal water was experiencing a series of equidistant standing waves. If he stopped, the waves appeared to
move from the right to the left. Which perception is more real? They are equal.
Re: faster than light?
There is a theoretical method for travelling faster than light involving spacetime.
- Position a craft in a certain area of space.
- Distort the spacetime at the fore of the craft to contract.
- Distort the spacetime at the aft of the craft to expand.
This will create an arrowhead-shaped field of spacetime around the craft. According to super relativity theory, the field should 'slip' through the rest of spacetime. Because the accepted physical
constants do not necessarily apply in quantum, the field is likely to be capable of travelling faster than the speed of light. The problem is, the exact speed it will go at is completely unknown.
Re: faster than light?
No, you will be moving at some velocity relative to your original reference frame (where you were before you began to move.) You cannot achieve lightspeed as measured from any reference frame
though. If you are moving at 99% of C, then that's what someone in your original reference frame will measure your speed to be. If, at this time, you shine a light at the wall toward your
direction of travel, you will see that the light moves at the speed of light because you are your own reference frame. Some guy not moving with your speed that is positioned outside your ship
looking in (say a guy with a very good telescope back on Earth) will also see your beam moving at the speed of light C. Not C+99% C like you might think.
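The reason the naive "C + 99% of C" sum fails is the relativistic velocity-addition rule. As an aside (standard special relativity; the worked numbers are my own addition, not from the original post):

    w = (u + v) / (1 + u*v/c^2)

With u = 0.99c (the ship) and v = c (the beam), w = (0.99c + c) / (1 + 0.99) = 1.99c / 1.99 = c, so both the traveller and the observer on Earth measure the beam at exactly c.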
This is just a means of visualisation though, it does not necessarily stand up in the real world.
The point of reference use is simply to illustrate a calculation. Everything in the universe has to be our real point of reference. If we move toward another star then we move away from the Earth and
toward the other star, they are both points of reference. As is everything else in existence as we must be moving in one direction or another relative to them.
c being constant is more to do with energy and mass conversions. Wasn't he saying that when converting mass to energy or vice versa that mass and energy can change but light can't? You couldn't
contain the energy from a nuclear detonation and change the speed of light locally (or could you)...
Tenshi, if your craft reaches infinite mass, doesn't this include your fuel tank, which will also gain infinite mass! There's your infinite energy source.
I read some time ago that this would be the best way to traverse large distances. If you achieve infinite mass at the speed of light (or near to) then you are everywhere in the Universe at once; all
you need do is slow yourself down but aim for the target point the other side of the universe. With infinite mass you are already there. Sounds a bit like the improbability drive....
Re: faster than light?
Quote (thenumbersix):
This is just a means of visualisation though, it does not necessarily stand up in the real world.
The point of reference use is simply to illustrate a calculation. Everything in the universe has to be our real point of reference. If we move toward another star then we move away from the Earth
and toward the other star, they are both points of reference. As is everything else in existence as we must be moving in one direction or another relative to them.
It absolutely does stand up in the real world. It has been confirmed in the real world. This is the very basis for time dilation, which has been measured precisely multiple times in the past and has
always matched Einstein's predictions exactly. In fact, time dilation is
used on almost a daily basis in particle physics. It enables scientists to observe properties of particles moving at high speeds that they would not have been able to observe if time dilation did not
extend the life of these particles.
When you say point of reference, you are talking about a different thing than I am. I spoke of the reference frame. That is everything that is traveling with you, not the place you are coming from or
going to.
Quote (thenumbersix):
c being constant is more to do with energy and mass conversions. Wasn't he saying that when converting mass to energy or vice versa that mass and energy can change but light can't ? You couldn't
contain the energy from a nuclear detonation and change the speed of light locally (or could you)...
C is absolutely constant, a fact proven in the Michelson-Morley experiment, which also showed that there exists no "ether". Special Relativity does not predict C to be a constant, it assumes
it. It is a postulate in Special Relativity that C is constant. Every prediction of Special Relativity is a result of this fact. Indeed, the entire theory can be deduced from C being constant. C will
always be measured to be the same no matter which reference frame you are in, whether you are moving with the flashlight or whether you are standing still and the flashlight is moving past you.
Quote (thenumbersix):
Tenshi, if your craft reaches infinite mass, doesn't this include your fuel tank which will also gain infinite mass! There's your infinite energy source.
Again, infinite fuel does not equal infinite thrust. As your mass approaches infinity, your engine's thrust (which is not subject to relativistic effects) still has to accelerate you. Your thrust is
finite, your mass infinite, which one prevails? Can an ant make a moving car speed up by pushing on it?
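To put the ant-and-car point in symbols (a heuristic in the relativistic-mass picture the thread itself is using, added here for illustration):

    a = F / m  →  0   as  m → ∞  (F finite)

Constant thrust F gives acceleration a = F/m, so if the mass grows without bound while the thrust stays finite, the acceleration shrinks toward zero and no amount of burn time reaches c.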
Re: faster than light?
There is a problem with thinking that mass increases when the velocity increases. Even teachers tell you that mass increases when the velocity increases. You can even read it in books. The problem
is... It doesn't work that way.
Quote (Gary Oas):
There is one concept that has been ingrained into the collective mindset of not only lay-people but also many working physicists. This is the notion of relativistic mass; a moving object's mass
increases with velocity with respect to an observer considered to be at rest [...]
This is from an article recently published at arXiv.org. Here is the abstract:
Quote (Gary Oas):
The concept of velocity dependent mass, relativistic mass, is examined and is found to be inconsistent with the geometrical formulation of special relativity. This is not a novel result; however,
many continue to use this concept and some have even attempted to establish it as the basis for special relativity. It is argued that the oft-held view that formulations of relativity with and
without relativistic mass are equivalent is incorrect. Left as a heuristic device a preliminary study of first time learners suggest that misconceptions can develop when the concept is introduced
without basis. In order to gauge the extent and nature of the use of relativistic mass a survey of the literature on relativity has been undertaken. The varied and at times self-contradicting use
of this concept points to the lack of clear consensus on the formulation of relativity. As geometry lies at the heart of all modern representations of relativity, it is urged, once again, that
the use of the concept at all levels be abandoned.
On the Abuse and Use of Relativistic Mass
by Gary Oas.
The author also investigates a large number of publications in which this is told. Maybe boring, but he published a large list in the paper below.
Quote (Gary Oas):
A lengthy bibliography of books referring to special and/or general relativity is provided to give a background for discussions on the historical use of the concept of relativistic mass.
On the Use of Relativistic Mass in Various Published Works
by Gary Oas.
In the 70s people tried to get rid of that misconception. For a while they succeeded. For some reason the error is back. In the year of Einstein's 100th anniversary the author thought it would be a
good idea to highlight it once again. I think these are must-reads for anyone involved in physics who posts on a time travel forum like this.
Re: faster than light?
On those shows they show on Discovery about traveling in space, they talk about a theory of space travel with nuclear propulsion. Basically the ship has a huge parachute sail in front of it and you
launch a mini nuke into it and the blast is used as the acceleration. I forget how fast they think they could go, but I think they said in theory you could go from Earth to Mars in days rather than
months using this.
But that's still a way off even by early estimates (plus we can just imagine how fun it would be when NASA asks congress for a few billion to test out that baby if it's ever made).
Re: faster than light?
Quote (Harte):
When you say point of reference, you are talking about a different thing than I am. I spoke of the reference frame. That is everything that is traveling with you, not the place you are coming
from or going to.
Uh ? So...
Quote (Harte):
No, you will be moving at some velocity relative to your original reference frame (where you were before you began to move.)
Earth is our reference frame, that we just left, but is travelling with us?? I'm confused.
I still want to see what happens when we leave the presence of a gravity well. I think there will be no space or time there, so traversing the expanses will be instantaneous, but then I don't have a
space ship to prove it either.
Re: faster than light?
Quote (nate):
I was thinking, if you send a space shuttle into space, get it to go 20,000mph and stop the thrust from the engines, wouldn't you continue to go 20,000mph? If so then why can't you thrust and
continue to use the engines to get past the speed of light if you had an unlimited amount of fuel?
I am tired and didn't read the whole thread, so if someone else posted this then oops.
Well, because the faster you move the slower time moves for you and the more mass you have, you would in fact need an infinite amount of fuel in order to push yourself faster than the speed of light.
The bummer part is if you ever did that, your mass would reach a value equal to infinity, and the universe would collapse around you. And you would be made fun of forever because you're the jackass
that filled his gas tank up to an infinite amount and destroyed the universe.
Re: faster than light?
Quote (thenumbersix):
Earth is our reference frame, that we just left, but is travelling with us?? I'm confused.
I certainly don't mean to confuse you. I think the pertinent point you are missing is here in what I previously said:
Quote (Harte):
If you left from the earth, then the earth is your original reference frame. When you begin to move relative to the Earth, then you have created a new (your own) reference frame.
You create a new reference frame when you begin to move with respect to your original reference frame (the Earth). You have no motion with respect to your current reference frame. It travels with
you. If you look out a window of your spaceship at a passing planet, it is true to say "I am moving past that planet." It is equally true to say "That planet is moving past me." The second statement
illustrates what I mean by no motion with respect to your current reference frame.
I included this example in a previous post to try and cut through any confusion:
Quote (Harte):
Einstein used a good example to illustrate how we make our own reference frame. Apparently he lived at some time near a canal that was connected to the ocean. You could sit on the bank of the
canal and watch waves from the ocean move up the canal from right to left. The waves moved at a slow pace that could be matched by jogging next to the canal. He noticed that, as he jogged in the
direction that the waves moved, if he looked only at the water surface, it appeared to him that the canal water was experiencing a series of equidistant standing waves. If he stopped, the waves
appeared to move from the right to the left. Which perception is more real? They are equal.
Sitting on the bank of the canal, Einstein shared the reference frame of anything else around him that was motionless. Once he began to jog, he created a new reference frame for himself. The fact
that these two reference frames are not the same is illustrated by what he sees the waves in the canal doing when he is in one reference frame and then in the other.
Think of traveling on a train at 60 mph. If you get up and walk down the aisle at 5 mph, it looks to everyone sitting on the train that you are moving at 5 mph. Someone on the ground outside the
train looking in would see you walking at 65 mph. To you, it appears that you are still and the interior of the train is moving past you at 5 mph.
Quote (thenumbersix):
I still want to see what happens when we leave the presence of a gravity well. I think there will be no space or time there, so traversing the expanses will be instantaneous, but then I don't have
a space ship to prove it either.
Good luck finding a place outside any gravity well.
C program to toggle or invert nth bit of a number - Codeforwin
C program to toggle or invert nth bit of a number
Write a C program to input any number from user and toggle, invert, or flip the n^th bit of the given number using bitwise operators. How to toggle the n^th bit of a given number using bitwise
operators in C programming. The program sets the n^th bit of the given number if it is unset, and unsets it if it is set.
Input number: 22
Input nth bit to toggle: 1
After toggling nth bit: 20 (in decimal)
Required knowledge
Bitwise operators, Data types, Variables and Expressions, Basic input/output
Logic to toggle nth bit of a number
Toggling bit means setting a bit in its complement state. Means if bit is currently set then change it to unset and vice versa.
To toggle a bit we will use bitwise XOR ^ operator. Bitwise XOR operator evaluates to 1 if corresponding bit of both operands are different otherwise evaluates to 0. We will use this ability of
bitwise XOR operator to toggle a bit. For example – if Least Significant Bit of num is 1, then num ^ 1 will make LSB of num to 0. And if LSB of num is 0, then num ^ 1 will toggle LSB to 1.
Step by step descriptive logic to toggle nth bit of a number.
1. Input number and nth bit position to toggle from user. Store it in some variable say num and n.
2. Left shift 1 to n times, i.e. 1 << n.
3. Perform bitwise XOR with num and result evaluated above i.e. num ^ (1 << n);.
Program to toggle or invert nth bit
/**
 * C program to toggle nth bit of a number
 */
#include <stdio.h>

int main()
{
    int num, n, newNum;

    /* Input number from user */
    printf("Enter any number: ");
    scanf("%d", &num);

    /* Input bit position you want to toggle */
    printf("Enter nth bit to toggle (0-31): ");
    scanf("%d", &n);

    /*
     * Left shift 1, n times,
     * then perform bitwise XOR with num
     */
    newNum = num ^ (1 << n);

    printf("Bit toggled successfully.\n\n");
    printf("Number before toggling %d bit: %d (in decimal)\n", n, num);
    printf("Number after toggling %d bit: %d (in decimal)\n", n, newNum);

    return 0;
}
Enter any number: 22
Enter nth bit to toggle (0-31): 1
Bit toggled successfully.
Number before toggling 1 bit: 22 (in decimal)
Number after toggling 1 bit: 20 (in decimal)
Happy coding 😉
Questioning an assumption in calculus of variations
• Thread starter hideelo
• Start date
In summary, when deriving stationary points of a function defined by a 1-D integral, there is an assumption that a function exists without proof, but once the derivation is completed, it is clear
that a function satisfying the Euler-Lagrange equation will be a stationary function. The existence of this function can be proven using functional analysis, but it is a complex problem that depends
on the assumptions of the space of functions and the properties of the functional being minimized.
When deriving stationary points of a function defined by a 1-D integral (think Lagrangian mechanics, Fermat's principle, geodesics, etc.) and arriving at the Euler-Lagrange equation, there seems to me
to be an unjustified assumption in the derivation. The derivations I have seen start with something along the following lines: assume some function x(t) is the function we are looking for, let x'(t)
= x(t) + η(t) be a nearby path... The derivation will then go on to show the conditions for the original function x(t), namely that the function satisfy the Euler-Lagrange equation.
It seems a little odd that we assume, without proof, that this function exists and then sort out its properties. How do we know such a function exists? Does it always exist? Are there conditions on
this? Isn't it a little shady to be discussing properties of something if we haven't proved yet that it exists?
On the other hand, once we complete the derivation, it seems clear to me that a function which satisfies the Euler Lagrange equation will be a stationary function. I think.
I'm still left feeling uncomfortable however about this. Is there some outside proof which shows that this function must exist?
I should give the caveat that I have only seen this derivation in physics books; I don't own any math books on calculus of variations.
Functions that represent reasonable things in physics are real and have reasonable mathematical properties (continuous, derivatives exist, etc.)
Dr. Courtney said:
Functions that represent reasonable things in physics are real and have reasonable mathematical properties (continuous, derivatives exist, etc.)
I understand that, but we are asking for something more here, the existence of some extreme values. In calculus on R we can say that on compact subsets of R, the extreme values exist. I don't know
what the analogy would be here when I am not looking at R, but some subset of all continuous functions.
I think I am looking for some topology on the space of functions and hope to see some compact set or something. Maybe there is an easier way, I don't know.
hideelo said:
It seems a little odd that we assume, without proof, that this function exists and then sort out its properties. How do we know such a function exists? Does it always exist? Are there conditions
on this? Isn't it a little shady to be discussing properties of something if we haven't proved yet that it exists?
The minimal function certainly does not always exist mathematically. I haven't done this type of analysis for a long time but couldn't you just take say [itex] C^1([0,1]) [/itex] as your space of
functions with an action functional given by [itex] S(f)=\int_{0}^1 f(x) dx [/itex]? Surely this can't have a local min/max because you could always just remove a tiny portion of the original
function and glue in a Gaussian of the appropriate size in a continuous way to make the integral a tiny bit bigger/smaller than any proposed min/max function.
What it is saying is simply that if the minimizer does exist, then it must satisfy these equations. So we can replace the problem of finding a minimizer with the problem of solving a differential
equation, which is usually more tractable. Of course not every differential equation has a solution, so the nonexistence of a minimizer will manifest itself in the nonexistence of a solution to the
differential equation. In the example I gave above, the Euler-Lagrange equations simply become 1=0, so no solution to the Euler-Lagrange equation exists, as expected.
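To spell that example out (my own working, in the thread's notation): for [itex] S(f)=\int_{0}^1 f(x)\,dx [/itex] the integrand is [itex] L(x,f,f')=f [/itex], so the Euler-Lagrange equation [itex] \frac{\partial L}{\partial f}-\frac{d}{dx}\frac{\partial L}{\partial f'}=0 [/itex] reduces to [itex] 1-\frac{d}{dx}(0)=0 [/itex], i.e. [itex] 1=0 [/itex], which has no solution, consistent with no extremal function existing.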
hideelo said:
I think I am looking for some topology on the space of functions and hope to see some compact set or something. Maybe there an easier way, I don't know.
If you want to know the conditions about when the existence of a minimizer is guaranteed, generally you will need to use some functional analysis (although in the one dimensional case things may be
much simpler, I don't really know) and it is much more complicated than a simple compactness argument. For example, I remember a theorem from a PDE course that stated if you take a reflexive Banach
space B (which is some space of functions for calculus of variations applications) with a subset [itex] A \subseteq B [/itex] which is weakly closed in [itex] B [/itex] and if [itex] S:A\to \mathbb
{R} [/itex] is a coercive, weakly lower semicontinuous functional, then it is bounded below and achieves its minimum in [itex] A [/itex]. I'm not sure why I remember this theorem since I don't even
remember the precise definitions of the conditions anymore, but in any case, the existence of the minimum is a hard problem to solve and the answer depends quite a bit on the assumptions on your
space of functions and on the properties of the functional you are trying to minimize.
Last edited:
What Terandol said. The thing is that the Euler-Lagrange equations are a necessary condition. If you look closely, what the theorem actually says, is that if the minimizer exists, then it has to
satisfy the E-L equations.
FAQ: Questioning an assumption in calculus of variations
1. What does it mean to question an assumption in calculus of variations?
Questioning an assumption in calculus of variations means to critically examine the assumptions made in a particular problem or scenario and determine whether they are valid and/or necessary for
finding a solution.
2. Why is it important to question assumptions in calculus of variations?
Questioning assumptions in calculus of variations is important because it ensures that the solutions obtained are accurate and applicable in real-world situations. It also helps in identifying any
potential errors or limitations in the assumptions made.
3. How do you identify assumptions in calculus of variations?
Assumptions in calculus of variations are usually stated explicitly in the problem or scenario. They may also be implied or hidden within the mathematical equations used to solve the problem. It is
important to carefully read and analyze the problem to identify all relevant assumptions.
4. Can assumptions in calculus of variations be changed or eliminated?
Yes, assumptions in calculus of variations can be changed or eliminated if they are found to be unnecessary or invalid. This can lead to a different solution or approach to the problem, but it is
important to carefully consider the implications of changing or eliminating an assumption.
5. Are there any risks associated with questioning assumptions in calculus of variations?
Yes, there are some risks associated with questioning assumptions in calculus of variations. Changing or eliminating an assumption may lead to a different solution that is not applicable in
real-world situations. It is important to carefully evaluate the potential consequences before making any changes to assumptions.
Last time we looked at spread-spectrum techniques using the output bit sequence of an LFSR as a pseudorandom bit sequence (PRBS). The main benefit we explored was increasing signal-to-noise ratio
(SNR) relative to other disturbance signals in a communication system.
This time we’re going to use a PRBS from LFSR output to do something completely different: system identification. We’ll show two different methods of active system identification, one using sine
waves and the other...
Last time we looked at the use of LFSRs for pseudorandom number generation, or PRNG, and saw two things:
• the use of LFSR state for PRNG has undesirable serial correlation and frequency-domain properties
• the use of single bits of LFSR output has good frequency-domain properties, and its autocorrelation values are so close to zero that they are actually better than a statistically random bit sequence
The unusually-good correlation properties...
Last time we looked at the use of LFSRs in counters and position encoders.
This time we’re going to look at pseudorandom number generation, and why you may — or may not — want to use LFSRs for this purpose.
But first — an aside:
Science Fair 1983
When I was in fourth grade, my father bought a Timex/Sinclair 1000. This was one of several personal computers introduced in 1982, along with the Commodore 64. The...
Jason Sachs ●
December 9, 2017
Last time we looked at LFSR output decimation and the computation of trace parity.
Today we are starting to look in detail at some applications of LFSRs, namely counters and encoders.
I mentioned counters briefly in the article on easy discrete logarithms. The idea here is that the propagation delay in an LFSR is smaller than in a counter, since the logic to compute the next LFSR
state is simpler than in an ordinary counter. All you need to construct an LFSR is a shift register and a few XOR gates.
Jason Sachs ●
December 3, 2017
Last time we looked at matrix methods and how they can be used to analyze two important aspects of LFSRs:
• time shifts
• state recovery from LFSR output
In both cases we were able to use a finite field or bitwise approach to arrive at the same result as a matrix-based approach. The matrix approach is more expensive in terms of execution time and
memory storage, but in some cases is conceptually simpler.
This article will be covering some concepts that are useful for studying the...
Jason Sachs ●
November 21, 2017
●4 comments
Last time we looked at a dsPIC implementation of LFSR updates. Now we’re going to go back to basics and look at some matrix methods, which is the third approach to represent LFSRs that I mentioned in
Part I. And we’re going to explore the problem of converting from LFSR output to LFSR state.
Matrices: Beloved Historical Dregs
Elwyn Berlekamp’s 1966 paper Non-Binary BCH Encoding covers some work on
Jason Sachs ●
November 13, 2017
●1 comment
The last four articles were on algorithms used to compute with finite fields and shift registers:
Today we’re going to come back down to earth and show how to implement LFSR updates on a microcontroller. We’ll also talk a little bit about something called “idiomatic C” and a neat online tool for
experimenting with the C compiler.
The last two articles were on discrete logarithms in finite fields — in practical terms, how to take the state \( S \) of an LFSR and its characteristic polynomial \( p(x) \) and figure out how many
shift steps are required to go from the state 000...001 to \( S \). If we consider \( S \) as a polynomial bit vector such that \( S = x^k \bmod p(x) \), then this is equivalent to the task of
figuring out \( k \) from \( S \) and \( p(x) \).
This time we’re tackling something...
Jason Sachs ●
October 1, 2017
Last time we talked about discrete logarithms which are easy when the group in question has an order which is a smooth number, namely the product of small prime factors. Just as a reminder, the goal
here is to find \( k \) if you are given some finite multiplicative group (or a finite field, since it has a multiplicative group) with elements \( y \) and \( g \), and you know you can express \( y
= g^k \) for some unknown integer \( k \). The value \( k \) is the discrete logarithm of \( y \)...
Last time we talked about the multiplicative inverse in finite fields, which is rather boring and mundane, and has an easy solution with Blankinship’s algorithm.
Discrete logarithms, on the other hand, are much more interesting, and this article covers only the tip of the iceberg.
What is a Discrete Logarithm, Anyway?
Regular logarithms are something that you’re probably familiar with: let’s say you have some number \( y = b^x \) and you know \( y \) and \( b \) but...
Elliptic curve mathematics over finite fields helps solve the problem of exchanging secret keys for encrypted messages as well as proving a specific person signed a particular document. This article
goes over simple algorithms for key exchange and digital signature using elliptic curve mathematics. These methods are the essence of elliptic curve cryptography (ECC) used in applications such as
SSH, TLS and HTTPS.
Last time, we continued a discussion about error detection and correction by covering Reed-Solomon encoding. I was going to move on to another topic, but then there was this post on Reddit asking how
to determine unknown CRC parameters:
I am seeking to reverse engineer an 8-bit CRC. I don’t know the generator code that’s used, but can lay my hands on any number of output sequences given an input sequence.
This is something I call the “unknown oracle”...
Jason Sachs ●
June 12, 2018
Last time, we talked about Gold codes, a specially-constructed set of pseudorandom bit sequences (PRBS) with low mutual cross-correlation, which are used in many spread-spectrum communications
systems, including the Global Positioning System.
This time we are wading into the field of error detection and correction, in particular CRCs and Hamming codes.
Ernie, You Have a Banana in Your Ear
I have had a really really tough time writing this article. I like the...
Last time we looked at the use of LFSRs in counters and position encoders.
This time we’re going to look at pseudorandom number generation, and why you may — or may not — want to use LFSRs for this purpose.
But first — an aside:
Science Fair 1983
When I was in fourth grade, my father bought a Timex/Sinclair 1000. This was one of several personal computers introduced in 1982, along with the Commodore 64. The...
Jason Sachs ●
November 13, 2017
●1 comment
The last four articles were on algorithms used to compute with finite fields and shift registers:
Today we’re going to come back down to earth and show how to implement LFSR updates on a microcontroller. We’ll also talk a little bit about something called “idiomatic C” and a neat online tool for
experimenting with the C compiler.
Mike ●
October 22, 2015
●6 comments
Everything in the digital world is encoded. ASCII and Unicode are combinations of bits which have specific meanings to us. If we try to interpret a compiled program as Unicode, the result is a lot of garbage (and beeps!). To reduce errors in transmissions over radio links we use Error Correction Codes so that even when bits are lost we can recover the ASCII or Unicode original. To prevent
anyone from understanding a transmission we can encrypt the raw data...
Mike ●
August 30, 2023
●4 comments
New book on Elliptic Curve Cryptography now online. Deep discount for early purchase. Will really appreciate comments on how to improve the book because physical printing won't happen for a few more
months. Check it out here: http://mng.bz/D9NA
Last time we looked at the use of LFSRs for pseudorandom number generation, or PRNG, and saw two things:
• the use of LFSR state for PRNG has undesirable serial correlation and frequency-domain properties
• the use of single bits of LFSR output has good frequency-domain properties, and its autocorrelation values are so close to zero that they are actually better than a statistically random bit
The unusually-good correlation properties...
Jason Sachs ●
November 21, 2017
●4 comments
Last time we looked at a dsPIC implementation of LFSR updates. Now we’re going to go back to basics and look at some matrix methods, which is the third approach to represent LFSRs that I mentioned in
Part I. And we’re going to explore the problem of converting from LFSR output to LFSR state.
Matrices: Beloved Historical Dregs
Elwyn Berlekamp’s 1966 paper Non-Binary BCH Encoding covers some work on
Last time we looked at spread-spectrum techniques using the output bit sequence of an LFSR as a pseudorandom bit sequence (PRBS). The main benefit we explored was increasing signal-to-noise ratio
(SNR) relative to other disturbance signals in a communication system.
This time we’re going to use a PRBS from LFSR output to do something completely different: system identification. We’ll show two different methods of active system identification, one using sine
waves and the other...
Jason Sachs ●
October 1, 2017
What a boring title. I wish I could come up with something snazzier. One word I learned today is studentization, which is just the normalization of errors in a curve-fitting exercise by the sample
standard deviation (e.g. point \( x_i \) is \( 0.3\hat{\sigma} \) from the best-fit linear curve, so \( \frac{x_i - \hat{x}_i}{\hat{\sigma}} = 0.3 \)) — Studentize me! would have been nice, but I
couldn’t work it into the topic for today. Oh well.
I needed a little break from...
Jason Sachs ●
September 9, 2017
Last time we talked about basic arithmetic operations in the finite field \( GF(2)[x]/p(x) \) — addition, multiplication, raising to a power, shift-left and shift-right — as well as how to determine
whether a polynomial \( p(x) \) is primitive. If a polynomial \( p(x) \) is primitive, it can be used to define an LFSR with coefficients that correspond to the 1 terms in \( p(x) \), that has
maximal length of \( 2^N-1 \), covering all bit patterns except the all-zero...
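As a sketch of that maximal-length property (illustrative Python, not code from the article): a 4-bit Fibonacci LFSR whose feedback taps correspond to a primitive degree-4 polynomial such as \( x^4 + x + 1 \) cycles through all \( 2^4 - 1 = 15 \) nonzero states before repeating.

```python
def lfsr_period(taps, nbits, seed=1):
    """Count the steps for a left-shifting Fibonacci LFSR to return to
    its seed state. `taps` are the bit positions XORed into the feedback.
    For a primitive polynomial the period is 2**nbits - 1."""
    state = seed
    period = 0
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        period += 1
        if state == seed:
            return period

# Feedback from bits 3 and 0 of a 4-bit register:
print(lfsr_period([3, 0], 4))  # prints 15
```

A non-primitive polynomial would split the nonzero states into several shorter cycles instead.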
|
{"url":"https://embeddedrelated.com/blogs-3/mp/all/Applied_Math.php","timestamp":"2024-11-10T18:08:53Z","content_type":"text/html","content_length":"77751","record_id":"<urn:uuid:f3ec2109-277a-4838-910d-1052a453d448>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00895.warc.gz"}
|
Best Maths Tuition in Noida
Transform your math struggles into success with Hitesh Sir Classes.
Book 5 demo classes for the student
The Common University Entrance Test (CUET) is being organized by the National Testing Agency (NTA) every year. The exam will be held for both undergraduate and postgraduate courses. While the
entrance exam is mandatory for admission to all undergraduate courses in central universities, some universities have decided to waive it for postgraduate courses this year.
1. Algebra
2. Calculus
3. Integration and its Applications
4. Differential Equations
5. Probability Distributions
6. Linear Programming
7. Relations and Functions
8. Inverse Trigonometric Functions
9. Matrices
10. Determinant
11. Continuity and Differentiability
12. Applications of Derivatives
13. Integrals
14. Applications of the Integrals
15. Differential equations
16. Vectors
17. Three-dimensional Geometry
18. Linear Programming
19. Probability
20. Numbers, Quantification and Numerical Applications
21. Matrices and types of matrices
22. Equality of matrices, Transpose of a matrix, Symmetric and Skew symmetric matrix
23. Higher order derivatives
24. Marginal Cost and Marginal Revenue using derivatives
25. Maxima and Minima
26. Probability Distribution
27. Index Numbers
28. Construction of index numbers
29. Test of Adequacy of Index Numbers
30. Population and Sample
31. Parameter and statistics and Statistical Interferences
32. Time Series
33. Components of Time Series
34. Time Series analysis for univariate data
35. Perpetuity, Sinking Funds
36. Valuation of Bonds
37. Calculation of EMI
38. Linear method of Depreciation
39. Introduction and related terminology
40. Mathematical formulation of Linear Programming Problem
41. Different types of Linear Programming Problems
42. Graphical Method of Solution for problems in two Variables
43. Feasible and Infeasible Regions
44. Feasible and infeasible solutions, optimal feasible solutions
These were some of the subject-wise topics that will be covered in the CUET Entrance Syllabus.
Hitesh Sir’s Classes are designed to help students improve their Maths skills, build confidence and achieve better results in exams.
Address: First Floor Shop no 30, Hotmart The Aranya Market, Sector 119, Noida
|
{"url":"https://hiteshsirclasses.in/cuet/","timestamp":"2024-11-11T07:19:59Z","content_type":"text/html","content_length":"173287","record_id":"<urn:uuid:d0691c86-43c1-47e9-b5a0-29c44904efde>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00625.warc.gz"}
|
How competitive are math PhD programs?
PhD programs are competitive in general. “Ranks” 15-20 are still quite competitive. You really should not be looking at places based on ranking, in my opinion.
What is the highest level math course?
The official titles of the course are Honors Abstract Algebra (Math 55a) and Honors Real and Complex Analysis (Math 55b). Previously, the official title was Honors Advanced Calculus and Linear Algebra.
Roughly: good grades (3.8+ GPA) in difficult courses, good test scores (80+ percentile on math GRE subject test [not the regular GRE math, which you should get a ~perfect score on without studying]),
strong research background and good letters corresponding to it.
How hard is it to get into a math PhD program?
So, yes, it is unbelievably difficult to go to a top graduate school for mathematics. It would require a near perfect GPA, 6 or more graduate courses, and research (all done at a top undergrad
program). If you go to a top undergrad program, move quickly into proof courses.
Is it worth getting a PhD in mathematics?
Probably not. If you don’t want to be a professor, then you probably don’t want to be a PhD student, since they involve doing pretty similar stuff, and in most other lines of work, 5-6 years of life
experience will get you more benefit than a PhD.
Is a masters in math hard?
Depends entirely on the courses you took in your degree. Of course, grad school is inherently difficult, but grad programs are prepared to take in students with a variety of backgrounds. For a
graduate degree, a master’s in pure math does not make much sense either.
Is maths a good degree?
If you’re a talented mathematician, a maths degree can be a good option. The fact that there is a right answer to questions means that it’s possible to achieve high marks, most courses offer the
chance as you progress to specialise in the areas that most interest you, and your skills will be useful in many careers.
Is mathematics a useless degree?
It’s not useless and even if you aren’t in a standard maths career like finance, quant, modeller, data science or programmer etc you will probably use your skills some way as it is a very canonical
and generalist degree.
Is a maths degree worth it?
Math degrees can lead to some very successful careers, but it will be a lot of work and might require you to get a graduate or other advanced degree. According to the Department of Education, math
and science majors tend to make significantly more money and get better jobs than most other degrees.
What are the top 5 math careers?
14 high-paying jobs for people who love math:
• Economist
• Astronomer
• Operations research analyst
• Actuary (median salary: $110,560)
• Mathematical science teacher, postsecondary (median salary: $77,290)
• Physicist (median salary: $118,500)
• Statistician (median salary: $84,440)
• Mathematician (median salary: $112,560)
Are mathematicians in demand?
Job Outlook Overall employment of mathematicians and statisticians is projected to grow 33 percent from 20, much faster than the average for all occupations. Businesses will need these workers to
analyze the increasing volume of digital and electronic data.
Does NASA hire mathematicians?
Of course the space industry hires mathematicians. You won’t see many job titles or job postings that say “mathematician,” but look at the skills being asked for. Practical applications like your
applied math degree rather than theoretical development is probably the better option.
How much do mathematicians get paid?
How Much Does a Mathematician Make? Mathematicians made a median salary of $101,9. The best-paid 25 percent made $126,070 that year, while the lowest-paid 25 percent made $73,490.
What can I do with a PhD in mathematics?
Doctorate (PhD), Mathematics: average salary by job:
• Assistant Professor, Postsecondary / Higher Education
• Data Scientist
• Professor, Postsecondary / Higher Education
• Associate Professor, Postsecondary / Higher Education
• Mathematician
• Senior Software Engineer
• Postdoctoral Research Associate
Which country is best for PhD in mathematics?
• Canada
• China (Mainland)
• Crimea
• Germany
• Hong Kong SAR
• Kosovo
• Kosovo, Republic of
• Macau SAR
How long is a PhD in mathematics?
between 3 and 5 years
How much does a PhD in mathematics make?
To give you some numbers, from the US, the AMS Survey gives data on starting salaries for Math PhDs. You can also get some data from Payscale on average salaries (e.g., Math PhDs, EE PhDs and
Engineering Bachelors): AMS Median industry starting salary (2016 Math PhD): ~$106,000. Payscale Average Math PhD salary: …
Can you get a PhD in math online?
While PhD programs in math are rarely available online, interested graduate students may consider an online master’s degree in math or math education.
Can I do PhD in maths?
Ph. D. Mathematics is the program of choice for students who wish to pursue a career in a mathematical research field. The minimum duration of this course is 2-years, whereas you can complete this
course in a maximum time span of 3-5 years.
|
{"url":"https://eyebulb.com/how-competitive-are-math-phd-programs/","timestamp":"2024-11-11T21:35:18Z","content_type":"text/html","content_length":"112642","record_id":"<urn:uuid:1d986300-5298-4fa1-a476-ed14699ffc2b>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00404.warc.gz"}
|
A hyperparameter is a variable that we need to set before applying a learning algorithm to a dataset. The challenge with hyperparameters is that there are no magic numbers that work everywhere. The best numbers depend on each task and each dataset. Generally speaking, we can break hyperparameters down into two categories. The first category is optimizer hyperparameters.
These are the variables related more to the optimization and training process than to the model itself. These include the learning rate, the minibatch size, and the number of training iterations or epochs.
The second category is model hyperparameters. These are the variables that are more involved in the structure of the model. These include the number of layers and hidden units, and model-specific hyperparameters for architectures like RNNs.
Learning Rate
The learning rate is the most important hyperparameter. Even if you apply models that other people built to your own dataset, you’ll find that you’ll probably have to try a number of different values for the learning rate to get the model to train properly. If you took care to normalize the inputs to your model, then a good starting point is usually 0.01, and the usual suspects for learning rates are the powers of ten around it (0.1, 0.01, 0.001, and so on). If you try one and your model doesn’t train, you can try the others. Which of the others should you try? That depends on the behavior of the training error. To better understand this, we’ll need to look at the intuition of the learning rate. We saw that when we use gradient descent to train a neural network model, the training task boils down to decreasing the error value calculated by a loss function as much as we can. During a learning step, we calculate the loss, then find the gradient.
Let’s assume the simplest case, in which our model has only one weight. The gradient will tell us which way to nudge the current weight so that our predictions become more accurate.
This is a simple example with only one parameter and an ideal convex error curve. Things are more complicated in the real world: your models are likely to have hundreds or thousands of parameters, each with its own error curve that changes as the values of the other weights change. And the learning rate has to shepherd all of them to the best values that produce the least error. To make matters even more difficult for us, we don’t actually have any guarantees that the error curves are clean U-shapes. They might, in fact, be more complex shapes with local minima that the learning algorithm can mistake for the best values and converge on.
Now that we looked at the intuition of the learning rates, and the indications that the training error gives us that can help us tune the learning rate, let’s look at one specific case we can often
face when tuning the learning rate. Think of the case where we chose a reasonable learning rate. It manages to decrease the error, but up to a point, after which it’s unable to descend, even though
it didn’t reach the bottom yet. It would be stuck oscillating between values that still have a better error value than when we started training, but are not the best values possible for the model.
This scenario is where it’s useful to have our training algorithm decrease the learning rate throughout the training process. This is a technique called learning rate decay. Optimizers with adaptive learning rates include AdamOptimizer and AdagradOptimizer.
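As an illustrative sketch of a decay schedule (the function name and constants are assumptions, not a specific library's API; adaptive optimizers instead adjust per-parameter rates automatically):

```python
def decayed_lr(initial_lr, step, decay_rate=0.5, decay_steps=1000):
    """Smooth exponential learning-rate decay:
    lr = initial_lr * decay_rate ** (step / decay_steps)."""
    return initial_lr * decay_rate ** (step / decay_steps)

# The rate halves every `decay_steps` training steps:
print(decayed_lr(0.01, 0))      # prints 0.01
print(decayed_lr(0.01, 1000))   # prints 0.005
print(decayed_lr(0.01, 2000))   # prints 0.0025
```

Other common choices are step-wise (staircase) decay and 1/t decay; all of them shrink the step size so training can settle into a minimum it would otherwise oscillate around.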
Minibatch Size
Minibatch size is another hyper parameter that no doubt you’ve run into a number of times already. It has an effect on the resource requirements of the training process but also impacts training
speed and the number of iterations in a way that might not be as trivial as you may think. It’s important to review a little bit of terminology here first. Historically there has been debate on whether it is better to do online (also called stochastic) training, where you fit a single example of the dataset to the model during a training step: using only one example, you do a forward pass, calculate the error, then backpropagate and set adjusted values for all your parameters, repeating this for each example in the dataset. The alternative is to feed the entire dataset to the training step and calculate the gradient using the error generated by looking at all the examples in the dataset; this is called batch training. The abstraction commonly used today is to set a minibatch size, so online training is when the minibatch size is one, and batch training is when the minibatch size is the same as the number of examples in the training set. And we can set the
minibatch size to any value between these two values. The recommended starting values for your experimentation are between one and a few hundred with 32 often being a good candidate. A larger
minibatch size allows computational boosts that utilizes matrix multiplication, in the training calculations. But that comes at the expense of needing more memory. In practice, small minibatch sizes
have more noise in their error calculations, and this noise is often helpful in preventing the training process from stopping at local minima on the error curve rather than the global minima that
creates the best model.
An experimental result on the effect of minibatch size on convolutional neural nets (figure omitted) shows that, using the same learning rate, the accuracy of the model decreases as the minibatch size grows.
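The minibatch abstraction itself is easy to sketch (illustrative Python: `batch_size=1` gives online/stochastic training, `batch_size=len(data)` gives full-batch training):

```python
def minibatches(examples, batch_size=32):
    """Yield successive slices of the training set, one per training step.
    The last batch may be smaller when the sizes don't divide evenly."""
    for i in range(0, len(examples), batch_size):
        yield examples[i:i + batch_size]

data = list(range(100))
sizes = [len(b) for b in minibatches(data, 32)]
print(sizes)  # prints [32, 32, 32, 4]
```

In a real training loop each yielded batch would be fed through the forward pass, and the gradient averaged over the batch before the weight update.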
Number of Training Iterations / Epochs
To choose the right number of iterations or epochs for our training step, the metric we should keep our eyes on is the validation error. The intuitive manual way is to have the model train for as many epochs or iterations as it takes, as long as the validation error keeps decreasing. Luckily, however, we can use a technique called early stopping to determine when to stop training a model. Early stopping roughly works by monitoring the validation error and stopping the training when it stops decreasing.
Number of Hidden Units / Layers
Let’s now talk about the hyperparameters that relates to the model itself rather than the training or optimization process. The number of hidden units, in particular, is the hyperparameter I felt was
the most mysterious when I started learning about machine learning. The main requirement here is to set a number of hidden units that is “large enough”. For a neural network to learn to approximate a
function or a prediction task, it needs to have enough “capacity” to learn the function. The more complex the function, the more learning capacity the model will need. The number and architecture of
the hidden units is the main measure for a model’s learning capacity. If we provide the model with too much capacity, however, it might tend to overfit and just try to memorize the training set. If
you find your model overfitting your data, meaning that the training accuracy is much better than the validation accuracy, you might want to try to decrease the number of hidden units. You could also
utilize regularization techniques like dropout or L2 regularization. So, as far as the number of hidden units is concerned, the more, the better: a little larger than the ideal number is not a problem, but a much larger value can often lead to the model overfitting. So, if your model is not training, add more hidden units and track the validation error. Keep adding hidden units until the validation error starts getting worse. Another heuristic involving the first hidden layer is that setting it to a number larger than the number of inputs has been observed to be beneficial in a number of tests. What about the number of layers? Andrej Karpathy tells us that in practice, it’s often the case that a three-layer neural net will outperform a two-layer net, but going even deeper rarely helps much more. The exception to this is convolutional neural networks, where the deeper they are, the better they perform.
LSTM Vs GRU
“These results clearly indicate the advantages of the gating units over the more traditional recurrent units. Convergence is often faster, and the final solutions tend to be better. However, our
results are not conclusive in comparing the LSTM and the GRU, which suggests that the choice of the type of gated recurrent unit may depend heavily on the dataset and corresponding task.”
Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling by Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, Yoshua Bengio
“The GRU outperformed the LSTM on all tasks with the exception of language modelling”
An Empirical Exploration of Recurrent Network Architectures by Rafal Jozefowicz, Wojciech Zaremba, Ilya Sutskever
“Our consistent finding is that depth of at least two is beneficial. However, between two and three layers our results are mixed. Additionally, the results are mixed between the LSTM and the GRU, but
both significantly outperform the RNN.”
Visualizing and Understanding Recurrent Networks by Andrej Karpathy, Justin Johnson, Li Fei-Fei
“Which of these variants is best? Do the differences matter? Greff, et al. (2015) do a nice comparison of popular variants, finding that they’re all about the same. Jozefowicz, et al. (2015) tested
more than ten thousand RNN architectures, finding some that worked better than LSTMs on certain tasks.”
Understanding LSTM Networks by Chris Olah
“In our [Neural Machine Translation] experiments, LSTM cells consistently outperformed GRU cells. Since the computational bottleneck in our architecture is the softmax operation we did not observe
large difference in training speed between LSTM and GRU cells. Somewhat to our surprise, we found that the vanilla decoder is unable to learn nearly as well as the gated variant.”
Massive Exploration of Neural Machine Translation Architectures by Denny Britz, Anna Goldie, Minh-Thang Luong, Quoc Le
Resource and Reference
If you want to learn more about hyperparameters, these are some great resources on the topic:
More specialized sources:
Categories: Natural Language Processing
Tags: aiDeep learning
|
{"url":"https://eng.ftech.ai/?p=642","timestamp":"2024-11-13T13:03:50Z","content_type":"text/html","content_length":"61832","record_id":"<urn:uuid:e6e4e182-eb86-4f32-bcab-5fe472a92ba0>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00861.warc.gz"}
|
Fast solution of parabolic problems in the tensor train/quantized tensor train format with initial application to the Fokker-Planck equation
In this paper we propose two schemes of using the so-called quantized tensor train (QTT)-approximation for the solution of multidimensional parabolic problems. First, we present a simple one-step
implicit time integration scheme using a solver in the QTT-format of the alternating linear scheme (ALS) type. As the second approach, we use the global space-time formulation, resulting in a large
block linear system, encapsulating all time steps, and solve it at once in the QTT-format. We prove the QTT-rank estimate for certain classes of multivariate potentials and respective solutions in
(x, t) variables. The log-linear complexity of storage and the solution time is observed in both spatial and time grid sizes. The method is applied to the Fokker-Planck equation arising from the
beads-springs models of polymeric liquids.
• Density matrix renormalization group
• Dumbbell model
• Fokker-Planck equation
• Higher dimensions
• Parabolic problems
• QTT-format
• Tensor methods
ASJC Scopus subject areas
• Computational Mathematics
• Applied Mathematics
Dive into the research topics of 'Fast solution of parabolic problems in the tensor train/quantized tensor train format with initial application to the Fokker-Planck equation'. Together they form a
unique fingerprint.
|
{"url":"https://researchportal.bath.ac.uk/en/publications/fast-solution-of-parabolic-problems-in-the-tensor-trainquantized-","timestamp":"2024-11-02T17:23:59Z","content_type":"text/html","content_length":"57752","record_id":"<urn:uuid:1525d3ff-dc77-4ff0-b2d9-b72e82e1f180>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00193.warc.gz"}
|
What is 37/46 as a percent? | Thinkster Math
First, let’s go over what a fraction represents. The number above the line is called the numerator, while the number below the line is called the denominator. The fraction shows how many portions of
the number there are, in relation to how many would make up the whole. For instance, in the fraction 37/46, we could say that the value is 37 portions, out of a possible 46 portions to make up the whole.
For percentages, the difference is that we want to know how many portions there are if there are 100 portions possible. “Percent” means “per hundred”. For example, if we look at the percentage 25%,
that means we have 25 portions of the possible 100 portions. Re-writing this in fraction form, we see 25/100.
The first step in converting a fraction to a percentage is to adjust the fraction so that the denominator is 100. To do this, you first divide 100 by the denominator:
$\frac{100}{46} \approx 2.174$
We can then adjust the whole fraction using this number, like so:
$\frac{37 \times 2.174}{46 \times 2.174} \approx \frac{80.435}{100}$
Reading this as a fraction, we can say that we have 80.435 portions of a possible 100 portions.
Re-writing this as a percentage, we can see that 37/46 as a percentage is 80.435%
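The same arithmetic can be checked in a couple of lines of Python (a sketch of the two steps above, rounding to three decimal places):

```python
numerator, denominator = 37, 46
scale = 100 / denominator          # the multiplier that makes the denominator 100
percent = numerator * scale        # equivalently, numerator / denominator * 100
print(round(scale, 3))             # prints 2.174
print(round(percent, 3))           # prints 80.435
```

This also shows why the shortcut "divide and multiply by 100" gives the same answer as rescaling the fraction.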
|
{"url":"https://hellothinkster.com/math-questions/percentages/what-is-37-46-as-a-percent","timestamp":"2024-11-09T10:41:51Z","content_type":"text/html","content_length":"99968","record_id":"<urn:uuid:cea62a5b-5ba9-4d70-82df-7f07bd5f5f6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00477.warc.gz"}
|
The Free Monad is something I’ve been having a great deal of difficulty wrapping my head around. It’s one of those Haskell concepts that ends up being far simpler than any of the articles on the Net
would have you think. So, here’s a whirlwind tour of this Monad and how it can be super handy.
First, imagine you’re building a robot to walk through a maze. The robot is programmed to go forward until it can’t go forward anymore, and then check a set of instructions to learn if it should turn
left, turn right, or shutdown. A possible data type to model such instructions could be:
Here’s what our processing function might look like:
instrs = [L, R, L, S]
interpret :: [Directive] -> IO ()
interpret = mapM_ process
where process L = putStrLn "Going left"
process R = putStrLn "Going right"
process S = putStrLn "Saw shutdown, stopping"
And the output, as expected:
ghci> interpret instrs
Going left
Going right
Going left
Saw shutdown, stopping
Easy as pie, right? But a lot of the simplicity here is because the example is simplistic. What if we want to vary the operations depending on hints from the caller? So let’s trade a little bit of
simplicity up front, for a lot more expressiveness (and a return to simplicity) further on down the road…
Enter the Free Monad
The first step toward using the Free Monad is to make our Directive type recursive, and give it a Functor instance:

data FDirective next = FL next | FR next | FS

instance Functor FDirective where
  fmap f (FL x) = FL (f x)
  fmap f (FR x) = FR (f x)
  fmap _ FS     = FS

We will now chain directives together using the Free data type, from Control.Monad.Free (in the free package on Hackage). Here’s what the Free data type looks like:

data Free f a = Pure a | Free (f (Free f a))

And our set of instructions encoded using it:

Free (FL (Free (FR (Free (FL (Free FS))))))
Pretty ugly, right? But it’s easy to pattern match on this using a recursive function, giving us another interpreter for robotic instructions:
interpret' :: Free FDirective a -> IO ()
interpret' (Free (FL f)) = putStrLn "Going left" >> interpret' f
interpret' (Free (FR f)) = putStrLn "Going right" >> interpret' f
interpret' (Free FS) = putStrLn "Saw shutdown, stopping"
interpret' (Pure _) = error "Improper termination"
Now, why go through all this mess rather than use a list? To gain the power of monads, almost for free. All we have to do is add a few more helper functions:
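The helper functions are missing from the extraction. The conventional way to build them is with liftF from Control.Monad.Free, which lifts one functor layer into the Free monad:

```haskell
import Control.Monad.Free (Free, liftF)

left, right :: Free FDirective ()
left  = liftF (FL ())
right = liftF (FR ())

-- FS has no continuation, so shutdown is polymorphic in its result
shutdown :: Free FDirective a
shutdown = liftF FS
```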
And we get this:
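The do-notation program itself was lost. Given the instrs4 used in the check right below, it was presumably:

```haskell
instrs4 :: Free FDirective a
instrs4 = do
  left
  right
  left
  shutdown
```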
Check to make sure the output is the same:
ghci> interpret' instrs4
Going left
Going right
Going left
Saw shutdown, stopping
The new runRobot works! We’ve gone from a list that used brackets and commas, to a list that uses just newlines. But we’ve gained something along the way: we can now express logic directly in the
robot’s programming:
instrs5 :: Bool -> Free FDirective a
instrs5 goLeftAlways = do
if goLeftAlways
then left
else right
And check again:
ghci> interpret' (instrs5 True)
Going left
Going left
Going left
Saw shutdown, stopping
As the logic gets more complicated, it would be much harder to do – and less optimal in many ways – if we were still using lists to sequence instructions.
What the Free Monad therefore gives us is the ability to create imperative-style DSLs, for which we can write any number of different interpreters. Consider it another power tool in your
meta-programming toolbox.
Another bonus is that the interpreter ignores any further instructions after the call to shutdown; we also get an error if the user forgets to shutdown. And all of this for free, just by using the
Free Monad. (Although I still don’t know what the adjective “Free” means in the term “Free Monad”. It has something to do with mathematics, but that will just have to wait for another day).
|
{"url":"https://newartisans.com/2012/08/meta-programming-with-the-free-monad/","timestamp":"2024-11-06T23:07:03Z","content_type":"text/html","content_length":"13675","record_id":"<urn:uuid:63316278-c6cf-426c-b35f-f2232a112413>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00596.warc.gz"}
|
Show that the differential equation (x2−y2)dx+2xydy=0 is hom... | Filo
Question asked by Filo student
OR Show that the differential equation (x^2 − y^2) dx + 2xy dy = 0 is homogeneous and solve it. Sol. The given differential equation is (x^2 − y^2) dx + 2xy dy = 0.
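The worked solution did not survive extraction. A standard derivation (a reconstruction, not the tutor's video solution):

```latex
(x^2 - y^2)\,dx + 2xy\,dy = 0
\;\Longrightarrow\;
\frac{dy}{dx} = \frac{y^2 - x^2}{2xy} = F\!\left(\frac{y}{x}\right),
```

so the right-hand side depends on y/x alone and the equation is homogeneous. Substituting y = vx, so dy/dx = v + x dv/dx:

```latex
v + x\frac{dv}{dx} = \frac{v^2 - 1}{2v}
\;\Longrightarrow\;
x\frac{dv}{dx} = -\frac{1 + v^2}{2v}
\;\Longrightarrow\;
\int \frac{2v}{1 + v^2}\,dv = -\int \frac{dx}{x},
```

which gives ln(1 + v^2) = −ln x + ln C, i.e. x(1 + v^2) = C. With v = y/x the general solution is x^2 + y^2 = Cx.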
Updated On Jan 26, 2023
Topic Algebra
Subject Mathematics
Class Class 12
Answer Type Video solution: 1
Upvotes 127
Avg. Video Duration 5 min
|
{"url":"https://askfilo.com/user-question-answers-mathematics/or-show-that-the-differential-equation-is-homogeneous-and-33393635303733","timestamp":"2024-11-09T16:16:50Z","content_type":"text/html","content_length":"448120","record_id":"<urn:uuid:218acc70-2d26-43d3-a55f-258c299817e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00763.warc.gz"}
|
Yes, We Do Make the Choice to Use the Internet
Example forum view, from PhpBB. (Photo credit: Wikipedia)
There was one comment left on my post about
taking responsibility for our feelings
while I was away that I wanted to address in some detail. This comment seemed to take issue with my suggestion that someone who routinely feels outraged when they spend time online might benefit from
spending less time online.
I thought this was an obvious thing to suggest. If someone were to tell me that they spent the last 6 hours watching a marathon of their favorite reality TV show and that they were getting sick of
it, I'd probably suggest that they take a break and do something else for a while. But I suppose this sort of solution was not nearly as obvious as I thought, at least not when it comes to the Internet:
Frankly, I find your "we are making the choice to do so" argument re: the internet to be extremely disingenuous. One might argue that people reading newspapers in the 1960s were "choosing" to do
so as well. Heck, they even had to go outside to pick the paper up! The internet is the medium of communication now. It's real life. It's not a fairyland where people happen to wander into
What I wrote certainly was not intended to be disingenuous. Yes, someone in the 1960s who read newspapers was in fact choosing to do so. Someone in the 1980s who watched television was choosing to do
so. I certainly knew people in the 80s who did not own televisions even though they could have afforded to do so. And today, someone who spends time on the Internet is choosing to do so. While these
statements are true, none of them gets at what I was suggesting in the post. So here's yet another stab at it.
Vanity Searches
Suppose I set up a Google alert so that I receive an email every time someone writes about me. Also suppose that I use a Twitter monitoring tool to do something similar (i.e., making sure I see every
tweet someone writes in which I am mentioned). With me so far? Now, my taking these steps is a choice insofar as anything else I do is a choice. Not doing these things does not prevent me from using
the Internet for all sorts of other things (e.g., keeping up with various news stories).
Now suppose that I have these alerts set up like I described, and I become extremely angry and upset almost every day because they show me that people are saying negative things about me. Does it
really seem disingenuous to suggest that I might disable these alerts once it becomes clear that I find their use so upsetting? And yes, if I find that the time I spend on the Internet doing whatever
it is that I am doing there is routinely associated with strong negative feelings with which I am poorly equipped to cope, why wouldn't I consider reducing the time I spend there or changing what I
am doing with my time there?
Try a different sort of example. Suppose I frequently visit fundamentalist Christian Internet forums. Further suppose that I become outraged whenever someone there expresses creationist beliefs to
the point where my outrage begins to cause me problems (e.g., insomnia, frequent rumination). Would it be disingenuous in some way to suggest that I might spend less time in such forums? I think not.
I am making a choice to seek this sort of information out in much the same way I am making the choice to use various vanity monitoring tools above. If I cannot cope with the consequences of my
behavior, perhaps I should change my behavior.
The Slymepit
Some people seem to delight in visiting the Internet forum known as
the Slymepit
, finding something objectionable, and then claiming that it constitutes some sort of harassment. I am no expert on the Slymepit. Having visited it twice briefly, I am hesitant to pass any sort of
judgment on it. But for the sake of argument, let's say that it contains material most people would find offensive. If I am the one regularly visiting it to find such material and then share my
outrage about it with others, don't I have at least some responsibility for the fact that I am choosing to visit it? If the material really bothered me, why would I keep coming back to it? Isn't this
an obvious question that I should be asked?
To sum up, we each make decisions about how we spend our time. If I am being negatively affected by how I spend my time, it makes sense that those concerned about me might ask me why I keep doing
things that produce distress. Why, they might ask, do I continue to visit places on the Internet that bother me so much? I think it is a good question and one which should be asked of some people
more often.
|
{"url":"https://www.atheistrev.com/2013/03/yes-we-do-make-choice-to-use-internet.html","timestamp":"2024-11-04T19:06:33Z","content_type":"application/xhtml+xml","content_length":"53097","record_id":"<urn:uuid:a087b45a-2ac7-41cd-9477-719df9a8f2d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00131.warc.gz"}
|
Support Vector Machines: A Geometric Point of View
Speaker: Prof. Sergios Theodoridis
Date: 07/05/2010
University: Univ. of Athens
Room : A56
Time: 3:00am
Abstract: Support Vector Machines have been established as one of the major classification and regression tools for Pattern Recognition and Signal Analysis. Over the last decade a number of
theoretical arguments have been developed in order to justify their enhanced performance. The most widely known scenario is to look at them as maximum margin classifiers. Another approach is via
learning theory arguments and the structural risk minimization principle, which leads to an optimal trade off between performance and complexity. An alternative path is to look at the cost function,
associated with the SVMs, as a regularized minimizer that asymptotically tends to the Bayesian classifier. A less known viewpoint is the geometric one that leads to the notion of reduced convex
hulls. For the non-separable class case, the SVM solution is shown to be equivalent with computing the minimum distance between two reduced versions of the original convex hulls that "encircle" the
two classes (for the two class case). In this talk I will focus on the geometric approach and new results will be discussed concerning a) novel, necessary for our case, theorems concerning the
structure and properties of the reduced convex hulls (RCH) and b) novel algorithms for computing the minimum distance between the resulting RCHs. This problem is far from being trivial, since
existing algorithms, which compute the minimum distance between convex hulls, rely on their respective extreme points. However, computing the extreme points of a reduced convex hull, as we have
shown, is a computationally hard task of a combinatorial nature. A basic projection theorem, that we have shown, will be discussed that bypasses the combinatorial burden of the task and opens the way
to employ geometric minimum distance algorithms to the SVM task. Most importantly, this theorem "respects" inner products, thus allowing the well-known kernel trick to be easily incorporated into
the algorithmic schemes, making them appropriate for the general nonlinear non-separable problem. The derived geometric algorithms are much more efficient compared to the classical and widely used
SMO algorithm and its versions. A number of tests with well known test beds have shown that, sometimes, a gain of an order of magnitude in the number of kernel computations, for similar error rates,
can be achieved. Furthermore, the new schemes are closer to our intuitive understanding of an iterative algorithm in simple geometric arguments. The reported results are the outcome of a joint work
with Dr M. Mavroforakis.
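Not from the talk, but the geometric viewpoint it describes (the hard-margin SVM as a minimum-distance problem between the classes' convex hulls) can be illustrated with a simple Frank-Wolfe iteration over convex-combination weights. This is a toy sketch, not the speaker's RCH algorithm:

```python
import numpy as np

def hull_distance(A, B, iters=2000):
    """Frank-Wolfe minimization of ||A^T u - B^T v||^2 over convex
    weights u, v (one weight per row/point of A and B).  The segment
    realizing this distance is bisected by the max-margin hyperplane."""
    m, n = len(A), len(B)
    u = np.full(m, 1.0 / m)
    v = np.full(n, 1.0 / n)
    for t in range(iters):
        w = A.T @ u - B.T @ v                 # difference of the two hull points
        su = np.zeros(m); su[np.argmin(A @ w)] = 1.0     # best vertex of hull A
        sv = np.zeros(n); sv[np.argmin(-(B @ w))] = 1.0  # best vertex of hull B
        gamma = 2.0 / (t + 2)                 # standard Frank-Wolfe step size
        u = (1 - gamma) * u + gamma * su
        v = (1 - gamma) * v + gamma * sv
    return np.linalg.norm(A.T @ u - B.T @ v), u, v

# two separated unit squares; the hulls' minimum distance is 2
A = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
B = A + np.array([3., 0.])
d, u, v = hull_distance(A, B)
```

Shrinking the simplex constraints (capping each weight below 1) yields the reduced convex hulls the abstract discusses, which handle the non-separable case.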
|
{"url":"https://www.madgik.di.uoa.gr/events/support-vector-machines-geometric-point-view","timestamp":"2024-11-04T21:06:44Z","content_type":"text/html","content_length":"40198","record_id":"<urn:uuid:30ee420c-3381-4321-a7c0-1223fd55c337>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00159.warc.gz"}
|
A Time-Marching MFS Scheme for Heat Conduction Problems
Valtchev, Svilen S. ; Roberty, Nilson C.
Engineering Analysis with Boundary Elements, 32(6) (2008), 480-493
In this work we consider the numerical solution of a heat conduction problem for a material with non-constant properties. By approximating the time derivative of the solution through a finite
difference, the transient equation is transformed into a sequence of inhomogeneous Helmholtz-type equations. The corresponding elliptic boundary value problems are then solved numerically by a
meshfree method using fundamental solutions of the Helmholtz equation as shape functions. Convergence and stability of the method are addressed. Some of the advantages of this scheme are the absence
of domain or boundary discretizations and/or integrations. Also, no auxiliary analytical or numerical methods are required for the derivation of the particular solution of the inhomogeneous elliptic
problems. Numerical simulations for 2D domains are presented. Smooth and non-smooth boundary data will be considered.
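The time-marching idea can be written out explicitly. With a backward-Euler step (a generic sketch; the paper's scheme may differ in details), the heat equation u_t = alpha * Laplacian(u) becomes a modified Helmholtz problem at each step:

```latex
\frac{u^{n+1} - u^{n}}{\Delta t} = \alpha \nabla^{2} u^{n+1}
\quad\Longrightarrow\quad
\left( \nabla^{2} - \frac{1}{\alpha \Delta t} \right) u^{n+1}
  = -\frac{u^{n}}{\alpha \Delta t},
```

an inhomogeneous Helmholtz-type equation whose solution can be approximated meshfree with Helmholtz fundamental solutions, as the abstract describes.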
|
{"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=4&member_id=81&doc_id=1445","timestamp":"2024-11-07T12:05:21Z","content_type":"text/html","content_length":"8955","record_id":"<urn:uuid:51cdfd51-c9c4-45c7-93d3-f49f11b58675>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00422.warc.gz"}
|
How do you simplify h^3 / h^-6? | Socratic
How do you simplify #h^3 / h^-6#?
1 Answer
The answer is ${h}^{9}$
$\frac{{h}^{3}}{{h}^{- 6}} = {h}^{3 - \left(- 6\right)} = {h}^{3 + 6} = {h}^{9}$.
When dividing variables with exponents on the same base, you subtract the exponent in the denominator from the exponent in the numerator. $\frac{{a}^{m}}{{a}^{n}} = {a}^{m - n}$
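A quick numeric spot-check of the rule a^m / a^n = a^(m−n) with m = 3, n = −6 (not part of the original answer), taking h = 2:

```python
h = 2.0
lhs = h**3 / h**-6   # 8 / (1/64) = 512.0
rhs = h**9           # 512.0
```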
|
{"url":"https://socratic.org/questions/how-do-you-simplify-h-3-h-6","timestamp":"2024-11-02T11:12:42Z","content_type":"text/html","content_length":"32154","record_id":"<urn:uuid:1925b40d-5201-4239-8be9-38f9ac072244>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00894.warc.gz"}
|
What is Gear ratio? [How to calculate Gear Ratio with Formula]
What Is Gear Ratio? It’s Formula and Calculation on Gear Ratio
In this post, you will learn what the gear ratio is and how to calculate it.
Gear Ratio
The gear ratio is the ratio of the number of turns the input shaft makes to the number of turns the output shaft makes. In other words, the gear ratio is the ratio between the number of teeth on two gears that are meshed together, or two sprockets connected with a common roller chain, or the circumferences of two pulleys connected with a drive belt.
How Gears Transmit Power
The tooth and the wheel are the basic working parts of all types of gears. Different types of gear are used to transfer energy in different directions. For instance, when two gears of different sizes mesh and rotate, the pinion turns faster and with less torque than the larger gear.
Gear teeth are principally carved on wheels, cylinders, or cones. Many devices that we use in our day-to-day life work on the principle of gears.
Often gears that are meshed together will be of different sizes. In this case,
• The smaller gear is referred to as the pinion and
• The larger one is simply referred to as the gear.
Gear is different from a pulley. Gear is a round wheel that has teeth that mesh with other gear teeth, allowing the force to be fully transferred without slippage.
To overcome the problem of slippage as in belt drives, gear is used which produces a positive drive with uniform angular velocity. When two or more gears mesh together the arrangement is called a
gear set or a gear train.
Gear Ratio Calculation
For example, a pinion with 18 teeth is mounted on a motor shaft and is meshed with a larger gear that has 54 teeth.
During operation, the pinion makes three complete revolutions for every single revolution of the larger gear.
This relationship in which the gear turns at one-third of the pinion speed is a result of the number of teeth on the pinion and the larger gear. This relationship is called the gear teeth – pinion
teeth ratio or the gear ratio.
This ratio can be expressed as the number of gear teeth divided by the number of pinion teeth. So in this example, since there are 54 teeth on the larger gear and 18 teeth on the pinion, there is a ratio of 54 to 18, or 3 to 1. This means the pinion turns at three times the speed of the gear.
Often more than one gear set is used in a gearbox; multiple gear sets may be used in place of one large set because they take up less space.
However, the gear ratio can still be used to determine the output of a gearbox.
Example of Gear Ratio
This illustration consists of two gear sets. The first gear set has a pinion with 10 teeth and a gear with 30 teeth. The second gear set consists of a pinion with 10 teeth and a gear with 40 teeth.
In our example, the input shaft is turned by an external device such as a motor, and the output shaft is connected to the machine being driven, such as a pump or a fan.
The input shaft and output shaft are connected by the intermediate shaft.
Now by using the gear ratio formula we looked at earlier, we can determine the ratio across the gears. The first gear set is 30 over 10 or 3 to 1. And that the ratio across the second gear set is 40
over 10 or 4 to 1. This information can be used to determine the ratio across the entire series of gears.
That’s done by multiplying the ratio of the first gear set by the ratio of the second gear set.
So 3/1 times 4/1 results in a ratio of 12/1. This means that for every 12 revolutions of the input shaft, the output shaft completes one revolution. In other words, the motor shaft turns 12 times faster than the pump shaft.
So far we've looked at how speed changes across a gear set, and we've seen how this change can be described by the gear ratio.
Gear ratios can be used to determine the speed of rotation of a gear set if the input or output speed of the gear set is known.
|
{"url":"https://www.theengineerspost.com/gear-ratio/","timestamp":"2024-11-06T08:22:17Z","content_type":"text/html","content_length":"147417","record_id":"<urn:uuid:08d533bd-e46b-4aff-98cf-cce107eb7529>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00020.warc.gz"}
|
Program: B.S., Mathematics
Program Description
The B.S. degree in Mathematics is designed for students who want to pursue occupational careers involving applied mathematics or want to prepare for graduate work in applied mathematics.
Double Major
A student pursuing either a B.A. or a B.S. degree may combine a second major with Mathematics. In this circumstance, upon approval of an advisor, 6 units of upper division electives may be satisfied
by courses in the second major. The remaining electives must be taken in the Department of Mathematics. Under certain rare conditions, the physics requirement in the lower division core may be
replaced by appropriate coursework in the second major. Prior approval must be obtained from an advisor and the department chair for this latter occurrence.
Program Requirements
In addition to University residence requirements for a bachelor’s degree, the student must complete a minimum of 18 units of upper division Mathematics in residence at CSUN with the approval of
a Mathematics advisor. Students in B.A. degree programs must fulfill the University requirement of at least 40 units of upper division coursework overall.
It is assumed that the student has a facility in mathematics normally gained by recent completion of four years of high school mathematics through trigonometry and “Mathematical Analysis.” Because of
the variation in curricula at the high school level, it is necessary to obtain satisfactory scores on the Mathematics Placement Test (MPT) to enter the first mathematics course in the program, MATH
150A. Without satisfactory scores, a student will need to complete additional coursework.
1. Lower Division Core for All Programs (23-24 units)
Students must complete the lower division core and one of the Mathematics options, and they must have at least a 2.0 GPA for all upper division units required in the major.
2. Upper Division Required Courses (30 units)
MATH 320 Foundations of Higher Mathematics (3)
MATH 340 Introductory Probability (3)
MATH 351 Differential Equations (3)
MATH 382/L Introduction to Scientific Computing and Lab (2/1)
MATH 440A Mathematical Statistics I (3)
MATH 440B Mathematical Statistics II (3)
MATH 450A Advanced Calculus I (3)
MATH 462 Advanced Linear Algebra (3)
MATH 483 Mathematical Modeling (3)
MATH 494 Practical Experience in Mathematics (3)
3. Upper Division Electives (9 units)
Choose 9 units from among (1) all upper division Math courses (excluding MATH 310, 310L, 311, 312, 331, 391 and 490); and (2) approved courses in other departments. At least 3 units must be in
Mathematics. Recommended courses: COMP 431 or COMP 465; FIN 303, FIN 431 or FIN 434; ECON 409; MATH 366, MATH 442, MATH 450B, MATH 480, MATH 481A, MATH 481B, MATH 481C, MATH 481D, MATH 482, MATH 540
or MATH 542ABCD; MKT 346; PSY 420; SOM 409, SOM 467 or SOM 591.
All classes taken outside the Mathematics department must have the approval of an advisor prior to enrollment, and students must either meet prerequisites or obtain permission of instructor.
Note: Early completion of MATH 340 and MATH 440A is recommended. Courses outside the Mathematics department are encouraged.
4. General Education (48 units)
Undergraduate students must complete 48 units of General Education as described in this Catalog, including 3 units of coursework meeting the Ethnic Studies (ES) graduation requirement.
12 units are satisfied by the following courses in the major: PHYS 220A satisfies B1 Physical Science; PHYS 220AL satisfies B3 Science Laboratory Activity; MATH 150A satisfies Basic Skills B4
Mathematics/Quantitative Reasoning; MATH 320 satisfies B5 Scientific Inquiry and Quantitative Reasoning; and COMP 110/L satisfies E Lifelong Learning.
Total Units in the Major/Option: 62-63
General Education Units: 36
Additional Units: 21-22
Total Units Required for the B.S. Degree: 120
Department of Mathematics
Chair: Rabia Djellouli
Live Oak Hall (LO) 1300
(818) 677-2721
Student Learning Outcomes
Students shall be able to:
1. Devise proofs of basic results concerning sets and number systems.
2. Rigorously establish fundamental analytic properties and results, such as limits, continuity, differentiability, and integrability.
3. Demonstrate facility with the objects, terminology, and concepts of linear algebra.
4. Demonstrate facility with the terminology, use of symbols, and concepts of probability.
5. Write simple computer programs to perform computations arising in the mathematical sciences.
|
{"url":"https://catalog.csun.edu/archive/2021/academics/math/programs/bs-mathematics-ii/statistics/","timestamp":"2024-11-02T17:40:30Z","content_type":"text/html","content_length":"20125","record_id":"<urn:uuid:a53806f5-369e-4f4d-b643-ff83afb89e02>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00480.warc.gz"}
|
Using Quantum Mechanics to Do Computation
An overview of quantum computers and quantum computation.
Computers are amazing: they let us do a wide range of calculations in a split second. But unfortunately, there is a limit to what they can do. This is why some researchers are turning toward the new and exciting field of quantum computing.
Quantum computing began in 1980, when the physicist Paul Benioff proposed a quantum model of the Turing machine. Richard Feynman later suggested that a quantum computer would have the capability to simulate things that a classical computer cannot.
The big difference between a classical computer and a quantum computer is what they use to do computation. A classical computer uses bits, while a quantum computer uses quantum bits, or qubits.
But what is the difference between a bit and a qubit?
A bit is the most basic unit in a computer. It can only take one of two values, 0 or 1, and those values correspond to an electrical signal in the computer (signal on = 1, signal off = 0). Bits are grouped together into bytes to store data and execute instructions.
A qubit, on the other hand, can also be represented as a 1 or a 0, but it can additionally be both at once (a superposition). Instead of an electrical signal, a qubit can be made of many things; the only condition is that it must be able to enter a superposition, for example an atomic nucleus or a subatomic particle.
Superposition is one of the fundamental principles of quantum mechanics. It states that any two (or more) states of a quantum object can be put together (superposed) to create a new valid state. This principle is best understood through the Schrödinger's cat thought experiment.
The thought experiment goes as follows. Imagine a hypothetical cat in a sealed box, together with a vial of poison, a Geiger counter, a radioactive material, and a hammer. The amount of radioactive material is small enough that there is only a 50/50 chance of its decay being detected. If the Geiger counter detects a sign of radiation, the hammer smashes the vial, releasing the poison and killing the cat. Until the box is opened, the cat can be considered to be in a superposition, because it may be both alive and dead.
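Superposition can be illustrated numerically in the state-vector picture (an illustrative sketch, not from the article): applying a Hadamard gate to the |0⟩ state gives a state that is measured as 0 or 1 with equal probability.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # the |0> basis state
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

psi = H @ ket0                 # equal superposition (|0> + |1>)/sqrt(2)
probs = np.abs(psi) ** 2       # Born rule: measurement probabilities
```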
Spooky action at a distance
Entanglement is a quantum mechanical phenomenon in which the states of two or more quantum objects are correlated even when the objects are separated. This allows a measurement made on one of the entangled objects to influence the others.
This phenomenon was described by Einstein as spooky action at a distance.
“Spooky action at a distance.”
― Albert Einstein
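Entanglement can be sketched the same way (again an illustration, not from the article): a Hadamard followed by a CNOT turns |00⟩ into the Bell state (|00⟩ + |11⟩)/√2, whose two measurement outcomes are perfectly correlated.

```python
import numpy as np

ket00 = np.array([1, 0, 0, 0], dtype=complex)  # two qubits, both |0>

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamard on the first qubit, then CNOT: the Bell state (|00> + |11>)/sqrt(2)
bell = CNOT @ np.kron(H, I2) @ ket00
probs = np.abs(bell) ** 2      # only 00 and 11 are ever observed, each 1/2
```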
As stated before, a quantum computer can perform computations and solve problems that would be impossible on a classical computer. One of the most promising uses of a quantum computer is modeling chemical reactions.
Because of the astoundingly large number of states in a molecule, it is extremely difficult to model molecules. The number is in fact so large that even our strongest supercomputers have difficulty modeling even small molecules. But for a quantum computer, this task is much easier.
|
{"url":"https://penyel-djegnene.medium.com/using-quantum-mechanic-to-do-computation-195bacc9d184?source=user_profile_page---------8-------------2c03229742ec---------------","timestamp":"2024-11-08T11:08:22Z","content_type":"text/html","content_length":"107917","record_id":"<urn:uuid:27419806-7cc1-4529-9de6-abb35e578dc5>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00830.warc.gz"}
|
S&I's idea of EQ?
• To: JINX@OZ.AI.MIT.EDU
• Subject: S&I's idea of EQ?
• From: Jonathan A Rees <JAR@MC.LCS.MIT.EDU>
• Date: Thu, 13 Mar 86 18:46:39 EST
• cc: RRRS-AUTHORS@MC.LCS.MIT.EDU
• In-reply-to: Msg of 12 Mar 1986 10:11 EST (Wed) from Bill Rozas <JINX%OZ.AI.MIT.EDU at xx.lcs.mit.edu>
Date: 12 Mar 1986 10:11 EST (Wed)
From: Bill Rozas <JINX%OZ.AI.MIT.EDU at xx.lcs.mit.edu>
If map is not known, the time to look its value up at runtime is
comparable to the time which it takes to close the procedure. In
particular, in MIT-Scheme (because of first class environments, etc),
looking up map can take considerably longer than closing the
lambda-expression, and the latter time is usually negligible. I think
that the small time difference which can be gained in this case is not
very interesting.
This is an empirical question for which I don't have any data, but my
intuition is that the way procedures are used and implemented in T, the
consing overhead here would be unacceptable, probably high enough to
make T want to be incompatible with Scheme in yet one more way, if
Scheme were changed. The fact that the T implementation "coalesces"
procedures is exploited very heavily in the implementation and, I
suspect, in the way some users write code. Not all Scheme
implementations have MIT scheme's high variable lookup overhead; consing
and space are not as cheap in most implementations as in MIT scheme; and
not all compilers or users choose to do as much analysis and procedure
integration as you believe is appropriate.
|
{"url":"https://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1986/msg00066.html","timestamp":"2024-11-11T10:27:55Z","content_type":"text/html","content_length":"3995","record_id":"<urn:uuid:982c21fc-3c49-486e-b3bb-3254a8a0cbb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00350.warc.gz"}
|
Machine Learning - Cracklogic
Machine Learning
Below are some useful Machine Learning question and answer.
Which ONE of the following is a regression task?
A) Predict the age of a person
B) Predict the country from where the person comes from
C) Predict whether the price of petroleum will increase tomorrow
D) Predict whether a document is related to science
Answer: A
Which of the following are supervised learning problems?
A) Grouping people in a social network.
B) Predicting credit approval based on historical data
C) Predicting rainfall based on historical data
D) all of the above
Answer: B and C
Which of the following are classification tasks? (Mark all that apply)
A) Find the gender of a person by analyzing his writing style
B) Predict the price of a house based on floor area, a number of rooms etc.
C) Predict whether there will be abnormally heavy rainfall next year
D) Predict the number of copies of a book that will be sold this month
Answer: A, C
Which of these are categorical features?
A) A height of a person
B) Price of petroleum
C) Mother tongue of a person
D) Amount of rainfall in a day
Answer: C
Occam’s razor is an example of
A) Inductive bias
B) Preference bias
Answer: A
How does generalization performance change with increasing size of the training set?
A) Improves
B) Deteriorates
C) No Change
D) None
Answer: A
In regression the output is
A) Discrete.
B) Continuous and always lies in a finite range.
C) Continuous.
D) Maybe discrete or continuous.
Answer: C
In linear regression the parameters are
A) strictly integers
B) always lies in the range [0,1]
C) any value in the real space
D) any value in the complex space
Answer: C
Which of the following is true for a decision tree?
A) A decision tree is an example of a linear classifier.
B) The entropy of a node typically decreases as we go down a decision tree.
C) Entropy is a measure of purity.
D) An attribute with lower mutual information should be preferred to other attributes.
Answer: B
Given a list of 14 examples including 9 positive and 5 negative examples, the entropy of the dataset with respect to this classification is
A) 0.940
B) 0.06
C) 0.50
D) 0.22
Answer: A
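The 0.940 figure follows directly from the binary entropy formula H = -(p₊ log₂ p₊ + p₋ log₂ p₋); a quick check:

```python
from math import log2

def entropy(pos, neg):
    """Shannon entropy (base 2) of a binary class split."""
    total = pos + neg
    h = 0.0
    for count in (pos, neg):
        p = count / total
        if p > 0:
            h -= p * log2(p)
    return h

# 9 positive and 5 negative examples out of 14
print(f"{entropy(9, 5):.3f}")  # 0.940
```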
Decision trees can be used for the following type of datasets:
I. The attributes are categorical
II. The attributes are numeric valued and continuous
III. The attributes are discrete-valued numbers
A) In case I only
B) In case of II only
C) In cases II and III only
D) In cases I, II and III
Answer: D
One of the most common uses of Machine Learning today is in the domain of Robotics. Robotic tasks include a multitude of ML methods tailored towards navigation, robotic control and a number of other
tasks. Robotic control includes controlling the actuators available to the robotic system. An example of this is the control of a painting arm in automotive industries. The robotic arm must be able
to paint every corner in the automotive parts while minimizing the quantity of paint wasted in the process. Which of the following learning paradigms would you select for training such a robotic arm?
A) Supervised learning
B) Unsupervised learning
C) Combination of supervised and unsupervised learning
D) Reinforcement learning
Answer: D
In a K-NN algorithm, given a set of training examples and a value k smaller than the size of the training set, the algorithm predicts the class of a test example to be the
A) Most frequent class among the classes of the k closest training examples.
B) Least frequent class among the classes of the k closest training examples.
C) Class of the closest point.
D) Most frequent class among the classes of the k farthest training examples.
Answer: A
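The majority-vote rule in option A can be sketched in a few lines; the 1-D distance and the toy data below are purely illustrative:

```python
from collections import Counter

def knn_predict(train, x, k):
    """Majority vote among the k closest training examples (1-D data)."""
    # Sort (feature, label) pairs by distance to the query point
    nearest = sorted(train, key=lambda ex: abs(ex[0] - x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy data: (feature value, class label)
train = [(1.0, 'A'), (1.5, 'A'), (3.0, 'B'), (5.0, 'B'), (5.5, 'B')]
print(knn_predict(train, 2.0, k=3))  # A
```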
In collaborative Filtering based Recommendation, the items are recommended based on which of the following?
A) Similar users
B) Similar items
C) Both A and B
D) None
Answer: A
Which of the following are advantages of a large value of k in the K-NN algorithm?
A) Less sensitive to noise.
B) Better probability estimates for discrete classes.
C) Larger training sets allow larger values of k.
D) All of the above.
Answer: D
For which of the following cases may dimensionality reduction be used?
A) Data Compression
B) Data Visualization
C) To prevent overfitting
D) Both A and B
Answer: D
Which of the following is the limitation of Collaborative Filtering?
A) Over specialization
B) Cold start
C) Both A and B
D) None
Answer: B
Which of the following statements is true about PCA?
(i) We must standardize the data before applying PCA.
(ii) We should select the principal components which explain the highest variance
(iii) We should select the principal components which explain the lowest variance
(iv) We can use PCA for visualizing the data in lower dimensions
A. (i), (ii) and (iv)
B. (ii) and (iv)
C. (iii) and (iv)
D. (i) and (iii)
Answer: A
In feature selection, which of the following techniques can be used to find a subset of features?
A) Sequential forward search
B) Sequential backward search
C) Both A and B
D) None of A or B
Answer: C
[True or False] If the Pearson correlation between two variables is zero, their values can still be related to each other.
A) TRUE
B) FALSE
Answer: A
Bayesian Network is a graphical model that efficiently encodes the joint probability distribution for a large set of variables.
A) True
B) False
Answer: A
A fair coin is tossed three times and a T (for tails) or H (for heads) is recorded, giving us a list of length 3. Let X be the random variable which is zero if no T has another T adjacent to it, and
is one otherwise. Let Y denote the random variable that counts the number of T's in the three tosses.
Find P(X=1, Y=2).
B) 2/8
C) 5/8
D) 7/8
Answer: B
Two cards are drawn at random from a deck of 52 cards without replacement. What is the probability of drawing a 2 and an Ace in that order?
A) 4/51
B) 1/13
C) 4/256
D) 4/663
Answer: D
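The answer is the product of the two conditional probabilities, 4/52 for drawing a 2 first and 4/51 for then drawing an Ace; exact arithmetic confirms it:

```python
from fractions import Fraction

# P(first card is a 2) = 4/52; P(second card is an Ace | a 2 was removed) = 4/51
p = Fraction(4, 52) * Fraction(4, 51)
print(p)  # 4/663
```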
A and B throw alternately a pair of dice. A wins if he throws 6 before B throws 7 and B wins if she throws 7 before A throws 6. If A begins, his chance of winning would be:
A) 30/61
B) 31/61
C) 1/2
D) 6/7
Answer: A
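The 30/61 result comes from summing the geometric series over rounds in which both players miss; a short exact-arithmetic check:

```python
from fractions import Fraction

p6 = Fraction(5, 36)  # P(sum = 6): (1,5),(2,4),(3,3),(4,2),(5,1)
p7 = Fraction(6, 36)  # P(sum = 7): six ordered pairs

# A wins immediately, or both miss a round and the game restarts:
# P(A) = p6 + (1 - p6)(1 - p7) P(A)  =>  P(A) = p6 / (1 - (1 - p6)(1 - p7))
p_A = p6 / (1 - (1 - p6) * (1 - p7))
print(p_A)  # 30/61
```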
Diabetic Retinopathy is a disease that affects 80% of people who have diabetes for more than 10 years. 5% of the Indian population has been suffering from diabetes for more than 10 years. Answer the
following questions. What is the joint probability of finding an Indian suffering from Diabetes for more than 10 years and also has Diabetic Retinopathy?
A) 0.024
B) 0.040
C) 0.076
D) 0.005
Answer: B
Which of the following is false about support vectors?
A) The support vectors are the subset of data points that determine the max-margin separator.
B) The Lagrangian multipliers corresponding to the support vectors are non-zero.
C) The support vectors are used to decide which side of the separator a test case is on.
D) The max-margin separator is a non-linear combination of the support vectors.
Answer: D
Consider a binary classification problem. Suppose I have trained a model on a linearly separable training set, and now I get a new labeled data point which is correctly classified by the model, and
far away from the decision boundary. If I now add this new point to my earlier training set and re-train, in which cases is the learnt decision boundary likely to change?
A) When my model is a perceptron.
B) When my model is logistic regression.
C) When my model is an SVM.
D) When my model is Gaussian discriminant analysis.
Answer: B and D
After training an SVM, we can discard all examples which are not support vectors and still classify new examples. True or False?
A) TRUE
B) FALSE
Answer: A
If g(z) is the sigmoid function, then its derivative with respect to z may be written in term of g(z) as
A) g(z)(1-g(z))
B) g(z)(1+g(z))
C) -g(z)(1+g(z))
D) g(z)(g(z)-1)
Answer: A
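Option A can be verified numerically by comparing the closed form g(z)(1 - g(z)) against a central-difference estimate of the derivative:

```python
import math

def g(z):
    """Sigmoid (logistic) function."""
    return 1.0 / (1.0 + math.exp(-z))

# Compare the analytic derivative g(z)(1 - g(z)) with a
# central-difference estimate at an arbitrary point.
z, h = 0.7, 1e-6
analytic = g(z) * (1.0 - g(z))
numeric = (g(z + h) - g(z - h)) / (2 * h)
print(abs(analytic - numeric) < 1e-8)  # True
```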
Which of the following are true when comparing ANNs and SVMs?
A) ANN error surface has multiple local minima while SVM error surface has only one minima
B) After training, an ANN might land on a different minimum each time, when initialized with random weights during each run.
C) In training, ANN’s error surface is navigated using a gradient descent technique while SVM’s error surface is navigated using convex optimization solvers.
D) As shown for Perceptron, there are some classes of functions that cannot be learnt by an ANN. An SVM can learn a hyperplane for any kind of distribution.
Answer: A, B, C
Which of the following is not a kernel function?
A) K(xi, xj) = xi·xj
B) K(xi, xj) = (1 - xi·xj)^3
C) K(xi, xj) = exp(-‖xi - xj‖^2 / (2σ^2))
D) K(xi, xj) = tanh(β0 xi·xj + β1)
Answer: B
Which of the following is true about SMO algorithm (multiple answers)?
A) The SMO can efficiently solve the primal problem.
B) The SMO can efficiently solve the dual problem
C) The SMO solves the optimization problem by co-ordinate ascent.
D) The SMO solves the optimization problem by coordinate descent.
Answer: B, C
Which of the following is/are true about the Perceptron classifier?
A) It can learn an OR function
B) It can learn an AND function
C) The obtained separating hyperplane depends on the order in which the points are presented in the training process.
D) For a linearly separable problem, there exists some initialization of the weights which might lead to non-convergent cases.
Answer: A, B, and C
The back-propagation learning algorithm applied to a two-layer neural network
A) always finds the globally optimal solution.
B) finds a locally optimal solution which may be globally optimal.
C) never finds the globally optimal solution.
D) finds a locally optimal solution which is never globally optimal
Answer: B
Class 11 Physics Model Paper 3 Solution - FBISE Past Papers
Class 11 Physics Model Paper 3 Solution
Class 11 Physics FBISE Model Paper 3 Solution is given below. You can download the paper by clicking on the download button just below the paper.
Class 11 Model Papers | Class 12 Model Papers
Class 11 Physics Model Paper 3 Solution
Price per Millimeter Calculator
The Price per Millimeter Calculator works out the price of each millimeter from the total length and the total price paid for that length.
Therefore, to calculate the price per millimeter, we need the total price, the total length, and the length measurement type. The length measurement type can be millimeters,
centimeters, meters, or kilometers.
Please enter the price, the total length, and the measurement type in the box below to get the price per millimeter.
To calculate the price per millimeter, we divide the total price by the total length. The price is rounded to the nearest cent if necessary.
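The calculation above can be sketched in a few lines; the helper name and conversion table are illustrative, not the site's actual code:

```python
# Millimeters per supported length unit
MM_PER_UNIT = {"mm": 1, "cm": 10, "m": 1000, "km": 1000000}

def price_per_mm(total_price, total_length, unit):
    """Divide the total price by the total length in millimeters."""
    length_mm = total_length * MM_PER_UNIT[unit]
    return round(total_price / length_mm, 2)  # round to the nearest cent

print(price_per_mm(120.00, 2, "m"))  # $120 for 2 m of material -> 0.06 per mm
```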
Price per Minute Calculator
Here is a similar calculator you may find interesting.
Simultaneous Graph Representation Problems
University of Waterloo
Many graphs arising in practice can be represented in a concise and intuitive way that conveys their structure. For example: A planar graph can be represented in the plane with points for vertices
and non-crossing curves for edges. An interval graph can be represented on the real line with intervals for vertices and intersection of intervals representing edges. The concept of ``simultaneity''
applies for several types of graphs: the idea is to find representations for two graphs that share some common vertices and edges, and ensure that the common vertices and edges are represented the
same way. Simultaneous representation problems arise in any situation where two related graphs should be represented consistently. A main instance is for temporal relationships, where an old graph
and a new graph share some common parts. Pairs of related graphs arise in many other situations. For example, two social networks that share some members; two schedules that share some events,
overlap graphs of DNA fragments of two similar organisms, circuit graphs of two adjacent layers on a computer chip etc. In this thesis, we study the simultaneous representation problem for several
graph classes. For planar graphs the problem is defined as follows. Let G1 and G2 be two graphs sharing some vertices and edges. The simultaneous planar embedding problem asks whether there exist
planar embeddings (or drawings) for G1 and G2 such that every vertex shared by the two graphs is mapped to the same point and every shared edge is mapped to the same curve in both embeddings. Over
the last few years there has been a lot of work on simultaneous planar embeddings, which have been called `simultaneous embeddings with fixed edges'. A major open question is whether simultaneous
planarity for two graphs can be tested in polynomial time. We give a linear-time algorithm for testing the simultaneous planarity of any two graphs that share a 2-connected subgraph. Our algorithm
also extends to the case of k planar graphs, where each vertex [edge] is either common to all graphs or belongs to exactly one of them. Next we introduce a new notion of simultaneity for intersection
graph classes (interval graphs, chordal graphs etc.) and for comparability graphs. For interval graphs, the problem is defined as follows. Let G1 and G2 be two interval graphs sharing some vertices I
and the edges induced by I. G1 and G2 are said to be `simultaneous interval graphs' if there exist interval representations of G1 and G2 such that any vertex of I is assigned to the same interval in
both the representations. The `simultaneous representation problem' for interval graphs asks whether G1 and G2 are simultaneous interval graphs. The problem is defined in a similar way for other
intersection graph classes. For comparability graphs and any intersection graph class, we show that the simultaneous representation problem for the graph class is equivalent to a graph augmentation
problem: given graphs G1 and G2, sharing vertices I and the corresponding induced edges, do there exist edges E' between G1 - I and G2 - I such that the graph G1 ∪ G2 ∪ E' belongs to the graph class?
This equivalence implies that the simultaneous representation problem is closely related to other well-studied classes in the literature, namely, sandwich graphs and probe graphs. We give efficient
algorithms for solving the simultaneous representation problem for interval graphs, chordal graphs, comparability graphs and permutation graphs. Further, our algorithms for comparability and
permutation graphs solve a more general version of the problem when there are multiple graphs, any two of which share the same common graph. This version of the problem also generalizes probe graphs.
Graph Theory, Graph Algorithms, Simultaneous Representation, Planar Graphs, Interval Graphs, Comparability Graphs, Chordal Graphs, Permutation Graphs, Sandwich Graphs, Probe Graphs
If f(x) = x² for x ≤ c and f(x) = ax + b for x > c is differentiable at x = c, find a and b
Question asked by Filo student
If f(x) = x² for x ≤ c and f(x) = ax + b for x > c is differentiable at x = c, find a and b. Or, if …, prove that … (Question 9.)
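For the first part (taking the standard reading f(x) = x² for x ≤ c and f(x) = ax + b for x > c), the two pieces must agree in value and slope at x = c:

```latex
\text{continuity at } x = c:\quad c^2 = ac + b, \qquad
\text{matching derivatives:}\quad 2c = a
\;\Rightarrow\; a = 2c, \quad b = c^2 - ac = c^2 - 2c^2 = -c^2.
```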
Question Text: If f(x) = x² for x ≤ c and f(x) = ax + b for x > c is differentiable at x = c, find a and b. Or, if …, prove that … (Question 9.)
Updated On Feb 16, 2023
Topic Limit, Continuity and Differentiability
Subject Mathematics
Class Class 12
Answer Type Video solution: 1
Upvotes 86
Avg. Video Duration 9 min
VE Simulation: Sloping Aquifer
In this example we consider a synthetic sloping aquifer, created in this tutorial. The topography of the top surface and the geological layers in the model is generated by combining the membrane
function (MATLAB logo) and a sinusoidal surface with random perturbations.
Here, CO2 is injected into the aquifer for a period of 30 years. Thereafter we simulate the migration of the CO2 in a post-injection period of 720 years.
The simulation is done using the vertical average/equilibrium framework.
Video of the simulation
Construct stratigraphic and petrophysical model
Description of how the model is created is provided in a separate tutorial.
Set time and fluid parameters
T = 750*year();
stopInject = 30*year();
dT = 1*year();
dTplot = 5*dT;
% Make fluid structures, using data that are reasonable at p = 300 bar
fluidVE = initVEFluid(Gt, 'mu' , [0.056641 0.30860] .* centi*poise, ...
'rho', [686.54 975.86] .* kilogram/meter^3, ...
'sr', 0.2, 'sw', 0.1, 'kwm', [0.2142 0.85]);
gravity on
Set well and boundary conditions
We use one well placed down the flank of the model, perforated in the bottom layer. Injection rate is 1.4e3 m^3/day of supercritical CO2. Hydrostatic boundary conditions are specified on all outer boundaries.
% Set well in 3D model
wellIx = [G.cartDims(1:2)/5, G.cartDims([3 3])];
rate = 2.8e3*meter^3/day;
W = verticalWell([], G, rock, wellIx(1), wellIx(2), ...
wellIx(3):wellIx(4), 'Type', 'rate', 'Val', rate, ...
'Radius', 0.1, 'comp_i', [1,0], 'name', 'I');
% Well and BC in 2D model
wellIxVE = find(Gt.columns.cells == W(1).cells(1));
wellIxVE = find(wellIxVE - Gt.cells.columnPos >= 0, 1, 'last' );
WVE = addWell([], Gt, rock2D, wellIxVE, ...
'Type', 'rate','Val',rate,'Radius',0.1);
% bcIxVE (indices of the outer boundary faces) is assumed computed as in the
% full tutorial; the pressure value is the hydrostatic water pressure.
bcVE = addBC([], bcIxVE, 'pressure', ...
             Gt.faces.z(bcIxVE)*fluidVE.rho(2)*norm(gravity));
bcVE = rmfield(bcVE,'sat');
bcVE.h = zeros(size(bcVE.face));
% for 2D/vertical average, we need to change the definition of the wells
for i=1:numel(WVE)
   WVE(i).compi = NaN;
   WVE(i).h = Gt.cells.H(WVE(i).cells);
end
Prepare simulations
Compute inner products and instantiate solution structure
SVE = computeMimeticIPVE(Gt, rock2D, 'Innerproduct','ip_simple');
preComp = initTransportVE(Gt, rock2D);
sol = initResSol(Gt, 0);
sol.wellSol = initWellSol(W, 300*barsa());
sol.h = zeros(Gt.cells.num, 1);
sol.max_h = sol.h;
Prepare plotting
We will make a composite plot that consists of several parts:
• a 3D plot of the plume
• a pie chart of trapped versus free volume
• a plane view of the plume from above
• two cross-sections in the x/y directions through the well
Details of the plotting are provided in the full tutorial, runSlopingAquifer.m, accompanying the VE module.
Main loop
Run the simulation using a sequential splitting with pressure and transport computed in separate steps. The transport solver is formulated with the height of the CO2 plume as the primary unknown
and the relative height (or saturation) must therefore be reconstructed.
t = 0;
while t < T
% Advance solution: compute pressure and then transport
sol = solveIncompFlowVE( sol, Gt, SVE, rock, fluidVE, ...
'bc', bcVE, 'wells', WVE);
sol = explicitTransportVE(sol, Gt, dT, rock, fluidVE, ...
'bc', bcVE, 'wells', WVE, 'preComp', preComp);
% Reconstruct 'saturation' defined as s=h/H, where h is the height of
% the CO2 plume and H is the total height of the formation
sol.s = height2Sat(sol, Gt, fluidVE);
assert( max(sol.s(:,1))<1+eps && min(sol.s(:,1))>-eps );
t = t + dT;
% Check if we are to stop injecting
if t >= stopInject
   WVE = []; bcVE = [];
end
% Compute trapped and free volumes of CO2
freeVol = ...
trappedVol = ...
totVol = trappedVol + freeVol;
% Plotting (...) details are provided in the full tutorial
% runSlopingAquifer.m accompanying the VE module.
end
9th Class Maths Notes Chapter 5 Co-Ordinate Geometry
Students can go through AP Board 9th Class Maths Notes Chapter 5 Co-Ordinate Geometry to understand and remember the concepts easily.
AP State Board Syllabus 9th Class Maths Notes Chapter 5 Co-Ordinate Geometry
→ To locate the exact position of a point on a number line we need only a single reference.
→ To describe the exact position of a point on a Cartesian plane we need two references.
→ Rene Descartes a French mathematician developed the new branch of mathematics called Co-ordinate Geometry.
→ The two perpendicular lines taken in any direction are referred to as co-ordinate axes.
→ The horizontal line is called X – axis.
→ The vertical line is called Y – axis.
→ The meeting point of the axes is called the origin.
→ The distance of a point from Y – axis is called the x co-ordinate or abscissa.
→ The distance of a point from X – axis is called the y co-ordinate or ordinate.
→ The co-ordinates of origin are (0, 0).
→ The co-ordinate plane is divided into four quadrants namely Q1, Q2, Q3, Q4 i.e., first, second, third and fourth quadrants respectively.
→ The signs of co-ordinates of a point are as follows.
Q1: (+, +) Q2: (-, +) Q3: (-, -) Q4: (+, -).
→ The x co-ordinate of a point on Y – axis is zero.
→ The y co-ordinate of a point on X – axis is zero.
→ Equation of X – axis is y = 0
→ Equation of Y – axis is x = 0
→ In a co-ordinate plane (x1, y1) ≠ (x2, y2) unless x1 = x2 and y1 = y2.
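The sign table above translates directly into a small helper (illustrative only):

```python
def quadrant(x, y):
    """Return the quadrant of a point, following the sign table above."""
    if x > 0 and y > 0:
        return "Q1"
    if x < 0 and y > 0:
        return "Q2"
    if x < 0 and y < 0:
        return "Q3"
    if x > 0 and y < 0:
        return "Q4"
    return "on an axis"  # x = 0 or y = 0

print(quadrant(-3, 5))  # Q2
print(quadrant(4, 0))   # on an axis
```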
Studies of correlated systems with non-trivial topology in real and reciprocal space
Materials featuring strong electronic correlations are prone to the emergence of unexpected new properties offering novel functionalities for applications. Many decades of research have led to a deep
and well-developed understanding of the effects of strong electronic correlations based on concepts such as generalized rigidities and symmetry breaking. Over the past decade the basic notions of
topology, describing generic geometrical properties that remain unchanged under gradual (elastic) deformations, have become key issues in condensed-matter physics. The underlying gradual variations
of the physical properties of interest may be driven by the effects of spin-orbit coupling and/or geometric frustration. Prominent examples for topological excitations in real space include the
formation of skyrmions in chiral magnets, which are classified by an integral winding number, or fractionalized excitations such as monopoles in spin ice. Similarly, topological invariants in
reciprocal space are the key characteristic of the celebrated discovery of topological insulators and Weyl semimetals, but are also important for the characterization of topological superconductors.
Pressing challenges in research on the implications of strong electronic correlations for functionalities concern the effects of non-trivial topological winding in real and reciprocal space. In bulk
materials these pertain to new forms of topological phases with coupled spin, orbital, charge, and lattice degrees of freedom. Work focusing on the identification of such phenomena will ultimately
permit to address the topological properties of coupled degrees of freedom and/or coupling of non-trivial topologies in real and reciprocal space. Project Area E (Topological Aspects) bundles these
activities in terms of five experimental and two theoretical projects.
The choice of activities in Project Area E defines a platform comprising the preparation of bulk materials and thin films, and the characterization of real space topology (projects E1, E4, and E6),
specific techniques to track the topology in reciprocal space (projects E2, E3, and E4) and, perhaps most importantly, the theoretical framework in terms of general notions and specific experimental
signatures (projects E5 and E7). Further questions on topological aspects of correlated matter will also be addressed in the projects of areas F and G, such as Raman, neutron, dielectric, and
ferromagnetic resonance spectroscopy of topological materials, as well as nuclear magnetic resonance. Likewise the evolution of topological properties under extreme conditions, such as high
pressures, large magnetic fields, or reduced dimensionality in tailored systems are specifically addressed in the other areas.
How much does an empty 10 mL graduated cylinder weigh?
Generally, an empty 10 mL graduated cylinder has a mass of about 25.4 grams.
Is there a 10ml graduated cylinder?
Plastic Graduated Cylinder Sets A convenient set of 7 graduated PP cylinders in sizes 10ml, 25ml, 50ml, 100ml, 250ml, 500ml and 1000ml. Also available as a set of 4 in sizes 10ml, 25ml, 50ml and
How many sig figs does a 10 mL graduated cylinder have?
Burets are very precise tools for measuring volume. Our lab is equipped with burets that measure to the nearest 0.05 mL, so a volume greater than 1 mL will have 3 significant digits, and a volume
greater than 10 mL will have 4 significant digits. You always estimate one more digit than you can read from the lines.
How do you read a 10ml cylinder?
What is the mass of 10 mL of liquid A?
The mass of 10 mL of a liquid is 10.112 g.
What is the density of 10mL of water?
(Water – SX – 10mL ±0.00003 g/cm³) Calibrated at 15°C, 20°C, 25°C; 0.9982 is the typical density value in g/cm³.
How can I measure 10 mL?
Official answer 10mL equals two teaspoons (2tsp). A tablespoon is three times bigger than a teaspoon and three teaspoons equal one tablespoon (1Tbsp or 1Tb). One tablespoon also equals 15mL.
What is the precision of a 10 mL graduated cylinder?
A 10 ml graduated cylinder can be used in chemistry labs for measuring liquids to an accuracy of 0.1ml (0.1cc) at the 10ml mark based on its calibration error of 1% at full scale. It is the most
economical way to measure a 10ml volume; more accurate ways include pipets and burets.
How do you measure a graduated cylinder?
How many significant digits is 10?
The number “10.” is said to have two significant digits, or significant figures, the 1 and the 0. The number 1.0 also has two significant digits. So does the number 130, but 10 and 100 only have one
“sig fig” as written. Zeros that only hold places are not considered to be significant.
How many significant figures are in a graduated cylinder?
If a person needed only a rough estimate of volume, the beaker volume is satisfactory (2 significant figures), otherwise one should use the graduated cylinder (3 significant figures) or better yet,
the buret (4 significant figures).
What does a graduated cylinder measure in unit?
A graduated cylinder measures in milliliters, which is a measure of volume. The English system equivalent is pints, quarts, and gallons.
What is the volume between each graduation on the 10 graduated cylinder?
In the 10-mL graduated cylinder, first subtract 8 mL – 6 mL = 2 mL. Next, count that there are ten intervals between the labeled graduations. Therefore, the scale increment is 2 mL/10 graduations =
0.2 mL/graduation.
What is the smallest scale increment for the 10 mL graduated cylinder shown in Figure 6?
If you look at a 10mL graduated cylinder, for example, the smallest graduation is a tenth of a milliliter (0.1mL). That means when you read the volume, you can estimate to the hundredths place.
How do you find the mass of a liquid in a graduated cylinder?
Mass the fluid, find its volume, and divide mass by volume. To mass the fluid, weigh it in a container, pour it out, weigh the empty container, and subtract the mass of the empty container from the
full container. To find the volume of the fluid, you simply measure it very carefully in a graduated cylinder.
How do you find the mass of water in a graduated cylinder?
How do you calculate mass from mL?
1. mass = volume x density.
2. mass = 30 ml x 0.790 g/ml.
3. mass = 23.7 g.
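The three steps above amount to a one-line computation; a quick check:

```python
def mass_from_volume(volume_ml, density_g_per_ml):
    # mass = volume x density
    return volume_ml * density_g_per_ml

# The worked example above: 30 ml of a liquid with density 0.790 g/ml
print(round(mass_from_volume(30, 0.790), 1))  # 23.7 (grams)
```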
What is the mass of 10 mL H2O at STP?
Solution: First method
First case
Weight of 10 mL of H2 at STP = (10 × 2)/22400 ≈ 0.0009 g
Weight of 5 mL of O2 at STP = (5 × 32)/22400 ≈ 0.0071 g
Second case
Weight of 200 mL of H2 at STP = (200 × 2)/22400 ≈ 0.0179 g
Weight of oxygen taken away from …
How do I calculate density?
What is the formula for density? The formula for density is the mass of an object divided by its volume. In equation form, that's d = m/v, where d is the density, m is the mass and v is the volume
of the object. The standard units are kg/m³.
What does 10mg mL mean?
Milligrams per milliliter (mg/mL) is a measurement of a solution’s concentration. In other words, it’s the amount of one substance dissolved in a specific volume of a liquid. For example, a salt
water solution of 7.5 mg/mL has 7.5 milligrams of salt in each milliliter of water.
Is a teaspoon 5 or 10 mL?
The teaspoon is a unit of volume equal to one-third of a tablespoon. One teaspoon is equal to around 4.9 milliliters, but in nutrition labeling, one teaspoon is equal to exactly 5 milliliters. The
teaspoon is a US customary unit of volume.
Is a 10mL or 100mL more precise?
Answer and Explanation: The volume measurements we make using a 10-mL graduated cylinder are more precise than measurements made using a 100-mL graduated cylinder.
Is a 10 mL or 50 mL graduated cylinder more precise?
Most 50 ml graduated cylinders have markings spaced every milliliter while 10 ml graduates have markings every tenth of a milliliter. If we measure a small volume of liquid in a 10 ml graduate, our
measurement should be more accurate than if a 50 ml graduate were used.
How do you calculate precision?
Consider a model that predicts 150 examples for the positive class, 95 are correct (true positives), meaning five were missed (false negatives) and 55 are incorrect (false positives). We can
calculate the precision as follows: Precision = TruePositives / (TruePositives + FalsePositives)
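Applying the formula to the example above (150 positive predictions, 95 true positives, 55 false positives):

```python
def precision(true_positives, false_positives):
    """precision = TP / (TP + FP)"""
    return true_positives / (true_positives + false_positives)

print(round(precision(95, 55), 3))  # 0.633
```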
How do you read a 50 mL graduated cylinder?
Breaking Beams - Activity
Quick Look
Grade Level: 8 (7-9)
Time Required: 45 minutes
Expendable Cost/Group: US $2.00
Group Size: 2
Activity Dependency: None
Subject Areas: Physical Science, Physics
NGSS Performance Expectations:
Students learn about stress and strain by designing and building beams using polymer clay. They compete to find the best beam strength to beam weight ratio, and learn about the trade-offs
engineers make when designing a structure. This engineering curriculum aligns to Next Generation Science Standards (NGSS).
Cracks result when excessive stress is placed on a beam, causing it to break.
Engineering Connection
Engineers consider the forces of stress and strain in their choice of design and materials. Civil engineers often use a system of beams and columns in their structural design, to keep us safe in
our homes and schools. Engineers specify the exact materials from which objects and structures should be made, so that walls support the weight of the roof, airplanes fly safely at high altitude,
wheels do not fall off, chairs support the weight of people, bridges support the loads that travel across them, shopping carts support groceries, strollers support children, and so on.
Learning Objectives
After this activity, students should be able to:
□ Recognize various engineered beam designs.
□ Identify instances of elastic and plastic deformation.
□ Describe the process of how engineers and scientists conduct materials testing to determine the ultimate tensile strength of a beam.
□ Perform data collection and analysis (ranking).
The activity setup to stress test student-designed beams.
Educational Standards
Each TeachEngineering lesson or activity is correlated to one or more K-12 science, technology, engineering or math (STEM) educational standards.
All 100,000+ K-12 STEM standards covered in TeachEngineering are collected, maintained and packaged by the Achievement Standards Network (ASN), a project of D2L (www.achievementstandards.org).
In the ASN, standards are hierarchically structured: first by source; e.g., by state; within source by type; e.g., science or mathematics; within type by subtype, then by grade, etc.
NGSS Performance Expectation
MS-ETS1-2. Evaluate competing design solutions using a systematic process to determine how well they meet the criteria and constraints of the problem. (Grades 6 - 8)
This activity focuses on the following Three Dimensional Learning aspects of NGSS:
□ Science & Engineering Practices: Evaluate competing design solutions based on jointly developed and agreed-upon design criteria.
□ Disciplinary Core Ideas: There are systematic processes for evaluating solutions with respect to how well they meet the criteria and constraints of a problem.
□ Recognize and represent proportional relationships between quantities. (Grade 7)
□ Buildings generally contain a variety of subsystems. (Grades 6 - 8)
□ Engage in a research and development process to simulate how inventions and innovations have evolved through systematic tests and refinements. (Grades 6 - 8)
□ Quantities can be expressed and compared using ratios and rates. (Grade 6)
□ Fluently add, subtract, multiply, and divide multidigit decimals using standard algorithms for each operation. (Grade 6)
□ Predict and evaluate the movement of an object by examining the forces applied to it (Grade 8)
□ Identify the distinguishing characteristics between a chemical and a physical change (Grade 8)
Materials List
Each group needs:
□ 2-oz cube of polymer clay (from the divided block; see Before the Activity)
□ string or rope (to wrap around the beam several times, about 2 ft.)
□ weights to hang on the string (up to 100 pounds for a 7-inch long beam)
□ scale to measure beam weight
Engineers use beams to support the weight of a structure. Beams hold up floors and walls, dams and bridges — in fact, almost every structure you can think of has beams in it. Beams are typically
the horizontal support; columns or pillars are usually the vertical support. Since engineers use beams so much, they do a lot of work to figure out what the best kind of beam is for a given job.
A solid rectangle support beam is a simple and effective design. However, the weight of a solid beam is tremendous! If we tried to construct buildings and bridges with these beams, their weight
would be enormous and a lot of material and money would be wasted unnecessarily. So, engineers have come up with clever designs to reduce beam weight.
Three types of beams: solid, hollow and I-beam (left to right). The hollow and I-beam can support nearly as much load as the solid beam, but they are much lighter.
The three types of beam designs shown in the drawing are all the same length, width and height. However, the hollow rectangle beam and the I-beam weigh less than half as much as the solid beam.
Even though they weigh less, they can almost hold the same amount of weight as the solid beam! This means they have a much higher beam strength to beam weight ratio (written, beam strength : beam
weight), and are more efficient and cost-effective to use in construction projects.
Why do the hollow beam and I-beam perform as well as the solid beam? Due to the principles of stress and strain, the greatest tensile and compressive stresses are realized on the tops and bottoms
of beams while the neutral axis (middle of the beam) experiences no stresses. This allows engineers to take away material from the inside of the beam where the stresses are minimal. In this
activity, you will design and build your own beam to find a good beam strength : beam weight ratio — you want to build a lightweight beam that can hold a lot of weight. As you make your design,
think about the stress and strain on the beam; remember to keep material on the top and bottom surfaces where the stresses are the greatest!
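To see numerically why removing material near the neutral axis costs so little strength, here is a small sketch comparing a solid rectangular cross-section with a hollow one of the same outer size. The dimensions and wall thickness below are hypothetical, chosen only for illustration; bending stiffness is taken as proportional to the second moment of area, I = b·h³/12 for a solid rectangle.

```python
# Compare a solid rectangular beam with a hollow one of the same outer
# dimensions. Bending stiffness scales with the second moment of area;
# cross-sectional area stands in for beam weight.

def rect_I(b, h):
    """Second moment of area of a solid b x h rectangle about its neutral axis."""
    return b * h**3 / 12

b, h = 2.0, 2.0   # hypothetical outer width and height (inches)
wall = 0.25       # hypothetical wall thickness of the hollow beam

I_solid = rect_I(b, h)
I_hollow = rect_I(b, h) - rect_I(b - 2 * wall, h - 2 * wall)

area_solid = b * h
area_hollow = b * h - (b - 2 * wall) * (h - 2 * wall)

print(f"stiffness kept: {I_hollow / I_solid:.0%}")   # 68%
print(f"material used:  {area_hollow / area_solid:.0%}")  # 44%
```

With these made-up numbers, the hollow beam keeps roughly two-thirds of the stiffness while using less than half the material, which is exactly the trade-off the hollow and I-beam designs exploit.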
Before the Activity
□ Gather materials.
□ Divide the 1.75 lbs of clay into 2-oz cubes (1¼ in cube size), resulting in 14 equal cubes.
□ For the stress testing, make sure you have a place, such as between two level tables, desks or chairs, to rest the beams and add weights.
□ If you want to cure the polymer clay with the students, preheat the oven to 130°C (275°F).
□ Make two long, thin lengths of polymer clay for a demonstration. Cure one piece in the oven, but not the other piece.
With the Students
1. Ask students to vote with a show of hands on the following question, "Do engineers construct buildings with solid beams or hollow beams?" Tally responses on the board. Tell them they will
find out more about what engineers do in this activity.
2. Explain the concepts of stress, strain and deformation introduced in this lesson. Use an uncured length of polymer clay to demonstrate plastic deformation by putting it across a gap and
showing that it bends, but does not spring back to its original shape, after a weight is added to it and removed. Use the cured length of polymer clay to demonstrate elastic deformation by
showing that the polymer clay returns to its original shape after the weight is removed. Challenge the students to design a beam that is very strong, but does not weigh very much.
3. Divide the class into groups of two students each.
4. Give each team a 2-oz cube of polymer clay with which to make their own beam. Not all 2 ounces must be used. Explain that using less clay may increase their strength : beam weight ratio.
5. Have the students design a 7-inch long beam to span a 6-inch gap. Ask the "junior engineer" students to sketch their ideas for various beam designs before constructing the one they predict
will have the best strength : beam weight ratio. The beams can be square, rectangular, circular, I-shaped, triangular or any other shape they think will be successful.
6. During beam construction, suggest that students use a pencil point to help join any vertical clay slabs to any horizontal clay slabs (perpendicular surfaces) of a beam design, such as the
example I-beam in the photograph. This reduces any gaps between the two surfaces, which would weaken the beam.
Use a pencil to join clay slab surfaces of a beam.
7. Follow the directions on the packaging to cure the students' clay beams by baking them in an oven. This can be done at the end of day 1 or overnight, if desired. Typically, curing requires
baking at 130°C (275°F) for 15 minutes for every ¼-in thickness. For example, a ½-in thick beam requires 30 minutes to cure.
8. To complete the curing process, let the beams cool to room temperature.
9. Weigh and record each team's beam design.
10. To test the beam strengths, straddle each beam across a six-inch gap (such as between two level tables, desks or chairs).
11. Tie several loops of string or rope around the beam, which helps to distribute the weight and provide a place to attach weights.
12. Add weight until the beam breaks. Record the maximum amount of weight each beam held (= ultimate strength).
13. Back at their desks, have the students calculate the strength : beam weight ratio, such as 12 oz / 2 oz = 6. Which beams had the highest strength : beam weight ratio? Are they the same three
beams that held the most weight? Which beams would be preferred for construction purposes?
14. Announce the winning team design as the beam with the highest strength : beam weight ratio. Have the winning team (and runner-up, if time allows) present their design concept to the rest of
the class.
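The ratio calculation and ranking in steps 13 and 14 can be sketched in a few lines; the beam weights and loads below are made-up example values, not measured data:

```python
# Hypothetical results: weight held at failure and beam weight, in ounces.
beams = {
    "solid":  {"held_oz": 12, "weight_oz": 2.0},
    "hollow": {"held_oz": 11, "weight_oz": 1.0},
    "I-beam": {"held_oz": 10, "weight_oz": 0.8},
}

# Strength : beam weight ratio for each design.
for name, data in beams.items():
    data["ratio"] = data["held_oz"] / data["weight_oz"]

winner = max(beams, key=lambda name: beams[name]["ratio"])
print(winner, beams[winner]["ratio"])  # I-beam 12.5
```

Note that with these numbers the solid beam holds the most weight but the I-beam wins on ratio, which mirrors the discussion question in step 13.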
Pre-Activity Assessment
Voting: Ask students to vote on the following question with a show of hands. Tally the responses on the board.
□ Do engineers construct buildings with solid beams or I- beams? (Answer: I-beams, because their strength : beam weight ratio is higher.)
Activity Embedded Assessment
Sketching: Have students sketch their ideas for various beam designs before constructing one they predict will have the best strength : beam weight ratio.
Calculation / Pairs Check: Have the student groups calculate the strength : beam weight ratio for their beam. Have them check their calculations with a neighbor, giving all students time to finish.
Post-Activity Assessment
Presentation: Have the winning team (and runner-up if time allows) present their design to the rest of the class. Ask them to explain why they think their design worked the best.
Informal Discussion: Solicit, integrate and summarize student responses.
□ Ask the students to discuss why the beam strength : beam weight ratio is important to engineers.
□ Ask the students to think of situations in which the different styles of beams made by the class groups might work better than others. (For example, if a team made a circular beam, it might
work better as a vertical column support for holding up a bridge instead of a horizontal load support for cars going across a bridge.)
□ Ask the students to come up with different types of materials for beams in different situations. (Example: Would you use concrete to make a beam in a playground toy? Why or why not?)
Safety Issues
□ The cured clay will be hot when it comes out of the oven.
□ Do NOT bake clay in a microwave oven.
□ Do NOT bake clay at a temperature higher than recommended on the package.
Troubleshooting Tips
To avoid beams breaking before loading, make sure there are no cracks or gaps in the clay before curing.
Some brands of polymer clay are hard to manipulate because of their firmness. Firmer clays will also not bend/break as easily after baking. Sculpey and Fimo Soft brands work well.
Polymer clay does not actually completely harden until it has cooled.
If the clay is cured too long it will become brittle and break more easily. Follow the instructions for curing clay on the package of clay.
If unable to obtain large weights (up to 100 pounds for a 7-inch long beam), increase the required length of the beam and gap, which will lower the overall strength of the beams, so lighter
weights will work just as effectively.
Activity Extensions
On their own, have the students research four different styles of beams and model them out of clay. Ask them to:
□ Label the forces (stresses) acting on each beam.
□ Label purposes for which each beam is commonly used.
□ Label from what material the beam is usually made.
□ Place their beams in order of beam strength : beam weight ratio. Ask them if this order makes sense in terms of the purpose for which the beam is usually used.
Activity Scaling
□ For upper grades, have the students hypothesize at what point on the beam the most amount of stress and strain occurs. How can they prove this? Ask them to be creative and come up with a way
to show where the stress and strain is occurring on the beam. (Note: The most compressive and tensile stress on a beam is on the top and bottom of the beam.)
More Curriculum Like This
Middle School Lesson
Stressed and Strained
Students are introduced to the concepts of stress and strain with examples that illustrate the characteristics and importance of these forces in our everyday lives. They explore the factors that
affect stress, why engineers need to know about it, and the ways engineers describe the strength of mater...
Middle School Lesson
Strong as the Weakest Link
To introduce the two types of stress that materials undergo — compression and tension — students examine compressive and tensile forces and learn about bridges and skyscrapers. They construct
their own building structure using marshmallows and spaghetti to see which structure can hold the most weigh...
High School Lesson
Mechanics of Elastic Solids
Students calculate stress, strain and modulus of elasticity, and learn about the typical engineering stress-strain diagram (graph) of an elastic material.
Middle School Lesson
Designing Bridges
Students learn about the types of possible loads, how to calculate ultimate load combinations, and investigate the different sizes for the beams (girders) and columns (piers) of simple bridge
design. Additionally, they learn the steps that engineers use to design bridges.
Sculpey Clay:http://www.sculpey.com/
© 2004 by Regents of the University of Colorado
Ben Heavner; Chris Yakacki; Malinda Schaefer Zarske; Denise Carlson
Supporting Program
Integrated Teaching and Learning Program, College of Engineering, University of Colorado Boulder
The contents of this digital library curriculum were developed under a grant from the Fund for the Improvement of Postsecondary Education (FIPSE), U.S. Department of Education, and National
Science Foundation GK-12 grant no 0338326. However, these contents do not necessarily represent the policies of the Department of Education or National Science Foundation, and you should not
assume endorsement by the federal government.
Last modified: August 4, 2020
|
{"url":"https://www.teachengineering.org/activities/view/cub_mechanics_lesson07_activity1","timestamp":"2024-11-09T16:45:31Z","content_type":"text/html","content_length":"100362","record_id":"<urn:uuid:8928eed5-279d-44cb-901f-fc72e415f535>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00049.warc.gz"}
|
Global Network of Domestic Election Monitors
1. A Gentle Introduction to Summarizing Data
In this tutorial we are going to define some common terms and concepts including the basic types, or categories, of data. Then we'll learn how to describe a dataset. By the end you will be prepared
to take the concepts here and use them to summarize the polling station list in the next module.
We begin by learning some common terms used in examining data.
A dataset contains information about "individuals". Each "individual" is called an "observation" or "case". In most datasets, each row contains information about an individual. In Module 1, we looked
at a list of polling stations. In that dataset each row contained information about an individual polling station.
Any characteristic of an individual (i.e., observation) is called a variable. Some variables, like gender and job title, simply place individuals into categories. Others, like height or number of
registered voters, take on numerical values for which we can do arithmetic. Next we'll take a closer look at the different types of data.
Data is stored as different types, which are sometimes referred to as the "level of measurement". We need to understand the type of data because it helps us figure out how to properly summarize it.
There are three types of data:
1. Categorical or Nominal: These are data that have several categories and are not numerical (e.g., gender, ethnicity, constituency). For example, an election observation form might ask "Were you
permitted to observe all day?" where the answer option is either "Yes" or "No". An election management body (EMB) might release a list of officials who are assigned to each polling station, and
that list might contain the name and position of the official. The "position" variable is likely to be categorical data (e.g., President, Deputy President, and Secretary).
2. Ordinal: These are data with categories that go in a specific order or rank. For example, on many election observation forms, there is a question that asks "How many people were assisted to
vote?" where the answer options are "None", "Few", "Some", or "Many". "Many" is more than "Some", which is more than "Few", which is more than "None".
3. Continuous or Interval: This kind of data has a continuous range of numbers. All data values are possible. For example, an election day observation form may ask for the number of registered
voters for each polling station or the number of votes received for each candidate.
By first understanding what type of data a variable is, we can then decide how to best summarize or describe that variable.
Why do we summarize? We summarize data to "simplify" the data and quickly identify what looks "normal" and what looks odd. The distribution of a variable shows what values the variable takes and how
often the variable takes these values.
The two most useful ways of describing the distribution of data are:
1. The typical: This describes the center--or middle--of the data. This way of describing the center is also called a "measure of central tendency".
2. The spread of the values around the center: This describes how densely the data is distributed around the center. This is also called a "measure of dispersion".
These two ways of describing the data are also referred to as descriptive statistics.
The three common ways of looking at the center are average (also called mean), mode and median. All three summarize a distribution of the data by describing the typical value of a variable (average),
the most frequently repeated number (mode), or the number in the middle of all the other numbers in a data set (median).^[1] In this module, we are going to focus on the average. The average is the
most appropriate way to measure the center for interval/continuous data (e.g, numbers of registered voters). To calculate the average, we add up all the numbers for a variable and then divide by how
many numbers there are. Put another way the average (mean) is the sum divided by the count.
Simple Example
In the example dataset below, we have information about the names of some animals. We also have measurements of the height of each animal. The dataset has two variables -- name and height -- and five
observations. Here is the dataset:
Here we've made a quick chart that plots the height of each animal:
To calculate the Average height (in cm) we sum up all the values and divide by the total count of observations:
Average height = (181 + 175 + 159 + 177 + 165) ÷ 5 = 857 ÷ 5 = 171.4
The average value for height is 171.4 centimeters. Here we have added a reference line marking the average on our chart so we can see how that looks:
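The same average can be computed in a couple of lines (heights taken from the dataset above):

```python
heights = [181, 175, 159, 177, 165]  # the five animal heights in cm

# Average (mean) = sum divided by count.
average = sum(heights) / len(heights)
print(average)  # 171.4
```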
Looking at the spread of the distribution of data tells us about the amount of variation, or diversity, within the data. The three measures of the spread of the data are the range, the standard
deviation, and the variance.
This is the difference between the largest and the smallest values. It is the distance between the extremes. To calculate the range, we take the maximum value and subtract the minimum value.
In our height dataset, what is the largest value (maximum)? 181 cm
In the same example, what is the smallest value (minimum)? 159 cm
The range in our small dataset of heights is 181 - 159 = 22 cm
Here we added some reference lines on the chart to indicate the maximum and minimum:
In practical terms, the animal with the maximum value is the tallest, and the animal with the minimum value is the shortest. So Harry the Horse is the tallest, and Fran the Fox is the shortest.
While the range gives us the endpoints (i.e., extremes), it does not tell us anything about how tightly or loosely the data are distributed between those two endpoints. We also do not know whether
more of the data is closer to the average, the maximum or the minimum. From our chart, it looks like just over half of the animals are tall (i.e., above the average height).
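The range calculation looks like this with the same data:

```python
heights = [181, 175, 159, 177, 165]  # animal heights in cm

# Range = maximum value minus minimum value.
tallest = max(heights)
shortest = min(heights)
print(tallest, shortest, tallest - shortest)  # 181 159 22
```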
Two other related measures of dispersion--the variance and the standard deviation--can help us answer these questions. They provide a numerical summary of how much the data are scattered.
Standard Deviation
The standard deviation provides us with a standard way of knowing what is normal^[2] given the average. A really helpful attribute of the standard deviation is that it is expressed in the same units
as the data itself. The standard deviation is like an "index of variability," because it is proportional to the scatter of the data. The standard deviation is larger for more diverse distributions
(i.e., the data are widely scattered). The standard deviation is smaller for less diverse distributions (i.e., the data are clustered together).
The standard deviation is very useful for understanding the spread of a variable. For most "normally" distributed data, generally almost all of the values will be within three standard deviations of
the average. In statistics, this is sometimes referred to as the 68-95-99.7 rule. About 68.27% of the values lie within 1 standard deviation of the average (mean). Similarly, approximately 95.45% of
the values lie within 2 standard deviations of the mean. Nearly all (99.73%) of the values lie within 3 standard deviations of the mean.
A diagram of the 68-95-99.7 rule from wikipedia
In Module 3, we use Excel to summarize the data in the 2008 polling station list.
In the sample animal heights dataset, we've calculated the standard deviation for heights. It is 9.1 cm^[3]. On the chart we have shaded the area to show what data is within three standard deviations
(9.1 x 3) of the average. Any data within this range is "normal."
The standard deviation gives us a standardized way of knowing what is normal, what is extra large or what is extra small. We know that Fran the Fox is short. When we consider the standard deviation
and that nearly all (99.73%) of all values are generally within 3 standard deviations, we can conclude that Fran is short but not abnormally short.
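The check on Fran can be reproduced with Python's standard library. Note that `statistics.stdev` computes the sample standard deviation, which is the 9.1 cm figure used in this module:

```python
import statistics

heights = [181, 175, 159, 177, 165]  # animal heights in cm

mean = statistics.mean(heights)   # 171.4
sd = statistics.stdev(heights)    # sample standard deviation, about 9.1

# Is Fran (159 cm) within three standard deviations of the average?
lower = mean - 3 * sd
upper = mean + 3 * sd
print(round(sd, 1), lower < 159 < upper)  # 9.1 True
```

So Fran's 159 cm is well inside the "normal" band of mean ± 3 standard deviations.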
Similar to standard deviation, variance measures how tightly or loosely numbers are spread around the average. So, a larger variance means data is spread further out from the average, and a smaller
variance means they are more tightly grouped around the average. The variance is the average of the squared differences (or deviations) of each number from the average (the mathematical formula is at
the end of this note). We are not going to focus on the formula in this module, but it's important to understand that variance provides the basis for calculating the standard deviation.
Test your knowledge by answering these questions:
1. What is an observation?
2. How are the terms "observations" and "variable" related to each other?
3. What is the purpose of describing or summarizing a dataset?
4. What are the three types of data (also called levels of measurement)?
5. List the two most useful ways of describing the distribution of data?
6. Is Fran the Fox abnormally short?
If you want to perform your own calculations, here is the heights dataset. The data along with some calculations are available as an Excel file or an Open Spreadsheets file.
Here are the two formulas for Standard Deviation, explained in the Standard Deviation Formulas section at the Math is Fun site.
The Population^[4] Standard Deviation:
σ = sqrt( (1/N) × Σ (xᵢ − μ)² )
The Sample Standard Deviation:
s = sqrt( (1/(N−1)) × Σ (xᵢ − x̄)² )
Here N is the number of data points, μ (or x̄) is the average, and Σ means "sum over all data points". It looks complicated, but the important change is to divide by N−1 (instead of N) when
calculating a Sample Variance. (Remember that the Standard Deviation is just the square root of the Variance, so the formula for calculating the Variance is the same formula above but without the
square root part.)
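For readers who want to see the N versus N−1 difference in action, here is a from-scratch version of both calculations applied to the heights dataset used throughout this module:

```python
import math

heights = [181, 175, 159, 177, 165]  # animal heights in cm
mean = sum(heights) / len(heights)
squared_diffs = [(x - mean) ** 2 for x in heights]

population_variance = sum(squared_diffs) / len(heights)       # divide by N
sample_variance = sum(squared_diffs) / (len(heights) - 1)     # divide by N - 1

print(round(math.sqrt(population_variance), 1))  # 8.1
print(round(math.sqrt(sample_variance), 1))      # 9.1
```

The sample version (9.1 cm) matches the value quoted earlier in the module; the population version comes out slightly smaller.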
All animal images copyright Dashikka/Shutterstock.
1. To find the position of the median in a sorted list, the formula is "( [the number of data points] + 1) ÷ 2", but you don't have to use the formula. If you prefer, you can just count in from both ends of the list until you meet
in the middle. The mode is the number that is repeated more often than any other number. In a series of values of 2, 3, 4, 5, 4, 4, 6, 10, 12; the mode would be 4. ↩︎
2. It is helpful to think of "normal" in probabilistic terms, where normal means something is highly possible or very typical. ↩︎
3. We are skipping the calculation for the standard deviation for this module, because we want to focus on it as a concept and not get caught up in the formula. The formula for the standard
deviation and variance are at the end of this module for those who may want it. ↩︎
4. The term "population" means you are summarizing the entire (i.e., whole) dataset. The term "sample" means that you working with a smaller subset (i.e., a sample) of the larger dataset (i.e.,
population). ↩︎
|
{"url":"https://openelectiondata.net/uk/academy/a-gentle-introduction-to-summarizing-data/","timestamp":"2024-11-07T22:32:52Z","content_type":"text/html","content_length":"28804","record_id":"<urn:uuid:0a9fca57-e9b6-494c-837e-59adb15d80c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00260.warc.gz"}
|
How Many Amps Does A Gaming PC Use? An Efficient Guide to Optimization
Do you want to know how many amps a gaming pc consumes? You’ve come to the correct place because we’ll answer this question in-depth in this article.
Hundreds of millions of individuals use their personal computers to play games. Many people run many systems at the same time.
In that situation, you’ll want to know how much power your gaming computer consumes from the circuit because you don’t want to overpower the course by drawing more energy than it can handle.
If you’re going to use the very same circuit to run numerous systems, you’ll want to be sure the course can handle it.
Your computers may be destroyed if you do not do so. That is something that no one desires. As a result, we will discuss this critical topic in this article. So, how much power does a gaming PC use?
Let’s get started…
How Many Amperes Does a Gaming Computer Use?
Surprisingly, there isn’t a simple answer to this subject. This is due to several different factors acting in it simultaneously.
So, we will attempt to clarify all of the complexities in this part to provide you with an answer.
What is A Gaming Computer?
We’ll need to clarify a few things before we get into the meat of the discussion. For example, what exactly is a gaming computer?
Many people have their ideas on what constitutes a gaming PC.
However, we believe that the most basic description is a computer to play games. Or it could be a computer you designed primarily for gaming.
In that case, the GPU (also called the video card or graphics card) would be one of the most significant distinctions from a conventional office or home computer.
A dedicated GPU is required for a perfect gaming machine. It’s also possible that you’re using RGB peripherals.
Furthermore, the display you are using may have high specifications. You might also take advantage of numerous displays.
And, of course, your PC’s overall configuration would be more advanced than a standard PC. Otherwise, there would be congestion in the system. These are some of the most significant distinctions
between a regular computer and a gaming computer.
Factors to Consider Before Calculating the Amps
We’ve been talking a lot about the amperes. But do you have any idea what an amp is?
Electric current is measured in amps or amperes. To figure out how much power your gaming PC consumes, we’ll need to consider a few other factors.
Without such criteria, there is no way to answer this question accurately. Also, knowing the exact amount of power your gaming PC uses is one of the most critical variables.
The second step is to figure out your area’s standard voltage supply rate. We can calculate the total ampere drawn by your game PC if you know these two numbers. So, in the following parts, we’ll
discuss those.
How Many Watts a Gaming Computer Uses?
You should already be aware that, once again, we don't have a definitive answer. Why?
Because we don't know your PC's specifications, the first step is yours: determine the wattage of your PSU (Power Supply Unit).
PSUs come in a variety of shapes and sizes. On the other hand, the wattage rating would be printed on the PSU panel.
Alternatively, Google the model of your power supply, and you’ll discover all the information you need.
Additionally, if you have the PSU’s box, you will find the information there as well. For the sake of ease, we’ll assume you have a 500-watt power supply.
What is The Standard Voltage Supply in Your Area?
The voltage rating in your location is the next factor you’ll need to know.
Again, for the fastest and best results, do a quick Google search for the supply voltage in your country. And you will immediately know this.
For the record, the supply voltage in the United States and Canada is 120 volts. Japan, Taiwan, parts of Saudi Arabia, and some other countries use roughly 100 to 127 volts.
The voltage supply in the United Kingdom is 230 volts. The supply voltage in various parts of South Asia is between 220 and 230 volts.
We’ll assume that your area’s supply voltage is 120 volts for our comfort.
How Many Amps Your Gaming Computer Draw?
You already know what your power supply’s supplied voltage and wattage are. We all know that our computer doesn’t always consume the highest wattage.
That means just because your PSU is rated at 500 watts, your PC is not constantly drawing 500 watts from the circuit. We will, however, calculate using the maximum watt rating as a precaution.
If you know these two numbers (voltage and wattage), you can calculate the amps using a simple formula: amps = watts ÷ volts.
So, if your power supply is rated at 500 watts and the supply voltage in your area is 120 volts:
500 watts ÷ 120 volts ≈ 4.17 amps is the maximum current your PC will draw.
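That calculation is easy to script; the 500 W and 120 V figures are the example values used above:

```python
def amps(watts, volts):
    """Current drawn, using the amps = watts / volts formula from the text."""
    return watts / volts

print(round(amps(500, 120), 2))  # 4.17
```

Rounding aside, this matches the figure worked out above, and the same function works for any PSU rating and supply voltage.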
We hope you’ve figured out what you’re looking for. You’ll be able to calculate the ampere your computer requires on your own now.
Sum Up
You should now be able to compute the amp using watts and voltage. You will obtain the correct solution to this question if you know the simple formula watts/volts=amps.
There is another option if you don’t want to go through all the calculations.
In that situation, you’ll need to purchase an ‘Ammeter.’ This is a tool for determining the amount of current or amps in an electric circuit. It’s straightforward to use and doesn’t cost a lot of
It can also be used for other purposes. Having an Ammeter on hand could be helpful at times.
If you learned something new today about how many amps a gaming PC uses, please don't forget to share this article with others. If you have any questions or suggestions for us, please feel free to
write them in the comment section down below.
Thank you very much.
Dense graphs with a large triangle cover have a large triangle packing
It is well known that a graph with m edges can be made triangle-free by removing (slightly less than) m/2 edges. On the other hand, there are many classes of graphs which are hard to make
triangle-free, in the sense that it is necessary to remove roughly m/2 edges in order to eliminate all triangles. We prove that dense graphs that are hard to make triangle-free have a large packing
of pairwise edge-disjoint triangles. In particular, they have more than m(1/4+cβ) pairwise edge-disjoint triangles where β is the density of the graph and c ≤ 1 is an absolute constant. This improves
upon a previous m(1/4-o(1)) bound which follows from the asymptotic validity of Tuza's conjecture for dense graphs. We conjecture that such graphs have an asymptotically optimal triangle packing of
size m(1/3-o(1)). We extend our result from triangles to larger cliques and odd cycles.
ASJC Scopus subject areas
• Theoretical Computer Science
• Statistics and Probability
• Computational Theory and Mathematics
• Applied Mathematics
Practice: Angle Relationships in Triangles

A degree is a unit of measurement used to measure angles. There are 360 degrees in one full rotation (one complete circle), and the degree symbol is a little circle following the number.

Triangle Angle Sum Theorem. The sum of the measures of the interior angles of a triangle is 180°. In a Euclidean space this sum is invariably equal to a straight angle: 180°, \(\pi\) radians, two right angles, or a half-turn. So if you know two of the angles, add them together and subtract the sum from 180° to find the third.

Worked example: the angles of a triangle measure \(2x, 3x,\) and \(4x\) degrees. Then \(2x + 3x + 4x = 180,\) so \(9x = 180\) and \(x = 20;\) the angles are 40°, 60°, and 80°.

Corollaries. The acute angles of a right triangle are complementary, since \(180 - 90 = 90;\) consequently, a triangle can have at most one right angle. If a triangle is equiangular, each angle measures 60°.

Exterior Angle Theorem. The measure of an exterior angle of a triangle is equal to the sum of its two remote interior angles. An exterior angle is also supplementary to its adjacent interior angle: if \(\angle a = 30^{\rm{o}},\) its corresponding exterior angle measures 150°.

Third Angles Theorem. If two angles of one triangle are equal to two angles of another triangle, then the third angles are also equal.

Angle-side relationships. In any triangle, the largest angle lies opposite the longest side and the smallest angle lies opposite the shortest side; if two sides of a triangle are unequal, the angle opposite the longer side is greater. In a right triangle, an unknown angle can be computed from two known sides using the trigonometric ratios, e.g. \(\sin \theta = \frac{\text{opposite}}{\text{hypotenuse}}.\)

Triangle Inequality Theorem. The sum of the lengths of any two sides of a triangle is greater than the length of the third side. The theorem also serves as a test of whether three given side lengths can form a triangle at all.

Checking a right angle. The side lengths of a right triangle satisfy the Pythagorean theorem, \(\text{hypotenuse}^2 = \text{perpendicular}^2 + \text{base}^2,\) so a right angle (for example, the corner of a garden bed) can be verified by checking this relation. Pythagoras established this relationship between the sides of a right-angled triangle; although the theorem may have been known a thousand years earlier, he was the first to prove it.

Angles formed by a transversal. When a transversal \(t\) cuts two lines \(a\) and \(b\): corresponding angles lie on the same side of the transversal and on the same sides of the two lines; alternate angles are nonadjacent angles that lie on opposite sides of the transversal between the lines; co-interior angles lie on the same side of the transversal between the lines; vertically opposite angles are always equal. When \(a\) and \(b\) are parallel, corresponding angles are equal, alternate angles are equal, and co-interior angles are supplementary. For example, 60° and 120° are supplementary because \(60 + 120 = 180,\) while 27° and 63° are complementary because \(27 + 63 = 90.\)

Law of sines. Also called the sine rule: the ratio of the length of a side to the sine of the angle opposite it is the same for all three sides, \(\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}.\) To use it, you must know either two angles and a side length, or two side lengths and an angle opposite one of them.

Law of cosines. \(b^2 = a^2 + c^2 - 2ca \cos B,\) and cyclically for the other sides. It is used when the lengths of two sides and the included angle, or the lengths of all three sides, are known. Observe the similarity to the Pythagorean theorem: in a right triangle \(\angle C = 90^{\rm{o}}\) and \(\cos 90^{\rm{o}} = 0,\) so the third term vanishes.

Worked example: the sides of a triangle are \(5\,\text{cm}, 7\,\text{cm},\) and \(8\,\text{cm};\) find the middle-sized angle. The middle-sized angle lies opposite the middle-sized side, i.e. the 7-cm side. By the cosine law, \(\cos A = \frac{5^2 + 8^2 - 7^2}{2 \times 5 \times 8} = \frac{25 + 64 - 49}{80} = \frac{40}{80} = 0.5,\) so \(A = \cos^{-1}(0.5) = 60^{\rm{o}}.\)

Projection formula. Each side is the sum of the projections of the other two sides onto it: \(a = b \cos C + c \cos B,\) and cyclically.

Exercise: prove, using the projection rule, that \(a(b^2 + c^2) \cos A + b(c^2 + a^2) \cos B + c(a^2 + b^2) \cos C = 3abc.\)
Solution: the left-hand side expands and regroups as \(ab(b \cos A + a \cos B) + bc(c \cos B + b \cos C) + ca(c \cos A + a \cos C),\) which by the projection formula equals \(ab \cdot c + bc \cdot a + ca \cdot b = 3abc.\)

Law of tangents. This law relates two sides of a triangle to the tangents of half the difference of the opposite angles: \(\tan \frac{B - C}{2} = \frac{b - c}{b + c} \cot \frac{A}{2},\) and cyclically. Proof: with \(\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = k,\) we have \(\frac{b - c}{b + c} = \frac{\sin B - \sin C}{\sin B + \sin C} = \frac{2 \cos \frac{B+C}{2} \sin \frac{B-C}{2}}{2 \sin \frac{B+C}{2} \cos \frac{B-C}{2}} = \cot \frac{B+C}{2} \tan \frac{B-C}{2}.\) Since \(\frac{B+C}{2} = \frac{\pi}{2} - \frac{A}{2},\) it follows that \(\cot \frac{B+C}{2} = \frac{1}{\cot \frac{A}{2}},\) and therefore \(\tan \frac{B - C}{2} = \frac{b - c}{b + c} \cot \frac{A}{2}.\)

m-n theorem. Let \(D\) be a point on \(BC\) dividing it in the ratio \(BD:DC = m:n,\) and let \(\angle ADC = \theta,\) \(\angle BAD = \alpha,\) \(\angle DAC = \beta.\) Then \((m + n) \cot \theta = m \cot \alpha - n \cot \beta = n \cot B - m \cot C.\) Proof: since \(\angle ADB = 180^{\rm{o}} - \theta,\) we have \(B = \theta - \alpha\) and \(C = 180^{\rm{o}} - (\theta + \beta).\) In \(\Delta ABD,\) \(\frac{BD}{\sin \alpha} = \frac{AD}{\sin(\theta - \alpha)};\) in \(\Delta ADC,\) \(\frac{DC}{\sin \beta} = \frac{AD}{\sin(\theta + \beta)}.\) Dividing, \(\frac{m \sin \beta}{n \sin \alpha} = \frac{\sin(\theta + \beta)}{\sin(\theta - \alpha)}.\) Cross-multiplying and expanding, \(m \sin \beta(\sin \theta \cos \alpha - \cos \theta \sin \alpha) = n \sin \alpha(\sin \theta \cos \beta + \cos \theta \sin \beta),\) and dividing through by \(\sin \alpha \sin \beta \sin \theta\) gives \(m \cot \alpha - m \cot \theta = n \cot \beta + n \cot \theta,\) i.e. \((m + n) \cot \theta = m \cot \alpha - n \cot \beta.\)

Related results. The triangle midsegment theorem: a segment that joins the midpoints of two sides of a triangle is parallel to the third side and half as long. The angle bisector theorem: a point on an angle bisector is equidistant from the two sides of the angle. The perpendicular bisectors of the sides of a triangle meet at its circumcenter.
Moveout, velocity, and stacking
Next: INTERPOLATION AS A MATRIX Up: Reproducible Documents
In this chapter we handle data as though the earth had no dipping reflectors. The earth model is one of stratified layers with velocity a (generally increasing) function of depth. We consider
reflections from layers, which we process by normal moveout correction (NMO). The NMO operation is an interesting example of many general principles of linear operators and numerical analysis.
Finally, using NMO, we estimate the earth's velocity with depth and we stack some data, getting a picture of an earth with dipping layers. This irony, that techniques developed for a stratified earth
can give reasonable images of non-stratified reflectors, is one of the ``lucky breaks'' of seismic processing. We will explore the limitations of this phenomenon in the chapter on dip-moveout.
First, a few words about informal language. The inverse to velocity arises more frequently in seismology than the velocity itself. This inverse is called the ``slowness.'' In common speech, however,
the word ``velocity'' is a catch-all, so what is called a ``velocity analysis'' might actually be a plane of slowness versus time.
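The normal moveout correction itself is easy to sketch. For a flat layer, a reflection recorded at offset x arrives at time t(x) = sqrt(t0² + x²/v²), where t0 is the zero-offset traveltime; NMO correction maps each recorded sample back to its zero-offset time. The following is an illustrative sketch only (uniform velocity, nearest-neighbour interpolation — not the chapter's actual implementation):

```python
import math

def nmo_correct(trace, offset, velocity, dt):
    """Map one offset trace to zero-offset time via the hyperbolic
    moveout relation t(x) = sqrt(t0**2 + (x/v)**2)."""
    n = len(trace)
    out = [0.0] * n
    for i in range(n):
        t0 = i * dt                                        # desired zero-offset time
        t = math.sqrt(t0 ** 2 + (offset / velocity) ** 2)  # recorded (moved-out) time
        j = round(t / dt)                                  # nearest recorded sample
        if j < n:
            out[i] = trace[j]
    return out
```

Stacking then amounts to summing the NMO-corrected traces of a common-midpoint gather sample by sample.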
Path-dependent Martingale Problems and Additive Functionals
April 2019
Publication type:
Paper in peer-reviewed journals
Stochastics and Dynamics, vol. 19 (4), pp. 1950027 (39 pages)
Keywords :
Path-dependent martingale problems; path-dependent additive functionals.
The paper introduces and investigates the natural extension to the path-dependent setup of the usual concept of canonical Markov class introduced by Dynkin, which is at the basis of the theory of Markov processes. That extension, indexed by starting paths rather than starting points, will be called a path-dependent canonical class. Associated with this is the generalization of the notions of semigroup and of additive functional to the path-dependent framework. A typical example of such a family is constituted by the laws $(\mathbb{P}^{s,\eta})_{(s,\eta)\in\mathbb{R}\times\Omega}$, where, for a fixed time $s$ and a fixed path $\eta$ defined on $[0,s]$, $\mathbb{P}^{s,\eta}$ is the (unique) solution of a path-dependent martingale problem or, more specifically, a weak solution of a path-dependent SDE with jumps with initial path $\eta$. In a companion paper we apply those results to study path-dependent analysis problems associated with BSDEs.
author={Adrien Barrasso and Francesco Russo},
title={Path-dependent Martingale Problems and Additive Functionals},
doi={10.1142/S0219493719500278},
journal={Stochastics and Dynamics},
year={2019},
volume={19},
number={4},
pages={1950027 (39 pages)},
Slope and Intercept Worksheet - 15 Worksheets.com
Slope and Intercept
Worksheet Description
This worksheet is designed to help students practice graphing lines by identifying the slope and y-intercept from given linear equations. Each of the four exercises provides a linear equation in
slope-intercept form, y = mx + b, where m is the slope and b is the y-intercept. Students are expected to extract these values from the equation, plot the y-intercept on the graph, use the slope to
find another point, and then draw the line through the coordinates. There are spaces provided for students to write down the slope (m) and the y-intercept (b) for each equation before graphing.
The worksheet is structured to teach students how to interpret and graph linear equations. It focuses on the concept that the coefficient of x represents the slope and the constant term represents
the y-intercept. Students learn to apply these values to plot the starting point of the line (y-intercept) and to use the slope to determine the direction and steepness of the line. By completing the
worksheet, students reinforce their understanding of the relationship between algebraic equations and their graphical representations on a coordinate plane.
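The procedure the worksheet drills — read off m and b, plot the intercept, then step by the slope — can be sketched in a couple of lines (an illustrative sketch with an invented function name, not part of the worksheet itself):

```python
def line_points(m, b, xs):
    """Points (x, y) on the line y = m*x + b for the given x values."""
    return [(x, m * x + b) for x in xs]

# For y = 2x + 1: start at the y-intercept (0, 1),
# then one unit right and m = 2 units up gives (1, 3).
print(line_points(2, 1, [0, 1]))  # → [(0, 1), (1, 3)]
```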
Data Structure Model Questions and Answers Paper
16 May 2023
Free download links for the Data Structure Model Questions and Answers Papers are enclosed below. Candidates starting their preparation for the Data Structure Model papers can use these links to download the papers in PDF along with the answers. The Data Structure Model Papers are kept updated here. Many applicants search the Internet for the Data Structure Model Question Papers and syllabus; for those candidates, we provide the links here. Improve your knowledge by referring to the Data Structure Model Question papers.
Model Questions and Answers on Data Structure
1. Pre order is nothing but
(a) depth-first order
(b) topological order
(c) breadth-first order
(d) linear order
2. The depth of a complete binary tree with n nodes is
(a) log (n + 1)-1
(b) log (n)
(c) log (n-1) +1
(d) log n + 1
3. The number of possible ordered trees with 3 nodes A, B, C is
(a) 16
(b) 6
(c) 12
(d) 10
4. The number of swappings needed to sort the numbers 8, 22, 7, 9, 31, 19, 5, 13 in ascending order, using bubble sort is
(a) 11
(b) 13
(c) 12
(d) 14
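Since the number of swaps in bubble sort equals the number of inversions in the input, the answer to question 4 can be verified with a short instrumented sort (an illustrative check, not part of the original paper):

```python
def bubble_sort_swaps(items):
    """Sort a copy with bubble sort and count the swaps performed."""
    a = list(items)
    swaps = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return swaps

print(bubble_sort_swaps([8, 22, 7, 9, 31, 19, 5, 13]))  # → 14, option (d)
```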
5. Given 2 sorted lists of sizes m and n respectively, the number of comparisons needed in the worst case by the merge sort algorithm will be
(a) mn
(b) max (m, n)
(c) m+n-1
(d) min (m, n)
6. Which of the following traversal techniques lists the nodes of a binary search tree in ascending order?
(a) Post order
(b) Pre order
(c) In order
(d) None of these
7. The average successful search time taken by binary search on a sorted array of 10 items is
(a) 2.6
(b) 2.8
(c) 2.7
(d) 2.9
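The average in question 7 follows by counting the comparisons binary search makes for each of the 10 possible targets (an illustrative check; the count assumes the usual midpoint convention mid = (lo + hi) // 2):

```python
def comparisons(arr, target):
    """Number of three-way comparisons until binary search finds target."""
    lo, hi, count = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1
        if arr[mid] == target:
            return count
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return count

arr = list(range(10))
print(sum(comparisons(arr, t) for t in arr) / 10)  # → 2.9, option (d)
```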
8. The initial configuration of a queue is a, b, c, d. To get the configuration d, c, b, a, one needs a minimum of
(a) 2 deletions and 3 additions
(b) 3 deletions and 3 additions
(c) 3 deletions and 2 additions
(d) 3 deletions and 4 additions
9. The following sequence of operations is performed on stack push (1), push (2), pop, push (1), push (2) pop, pop, pop, push (2), pop. The sequence of popped out values are
(a) 2, 2, 1, 1, 2
(b) 2, 1, 2, 2, 1
(c) 2, 2, 1, 2, 2
(d) 2, 1, 2, 2, 2
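Question 9 can be replayed mechanically with a list used as a stack (an illustrative check, not part of the original paper):

```python
def run_ops(ops):
    """Apply push/pop operations; return the sequence of popped values."""
    stack, popped = [], []
    for op in ops:
        if op == "pop":
            popped.append(stack.pop())
        else:  # op is a ("push", value) pair
            stack.append(op[1])
    return popped

ops = [("push", 1), ("push", 2), "pop", ("push", 1), ("push", 2),
       "pop", "pop", "pop", ("push", 2), "pop"]
print(run_ops(ops))  # → [2, 2, 1, 1, 2], option (a)
```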
10. A hash function defined as f(key) = key mod 7, with linear probing, is used to insert the keys 37, 38, 72, 48, 98, 11, 56 into a table indexed from 0 to 6. 11 will be stored in the location
(a) 3
(b) 5
(c) 4
(d) 6
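The placements in question 10 can be traced with a minimal open-addressing table (an illustrative check):

```python
def insert_with_linear_probing(keys, size=7):
    """Insert keys into a table of the given size using h(k) = k mod size
    and linear probing; return the final slot of each key."""
    table = [None] * size
    slot = {}
    for k in keys:
        i = k % size
        while table[i] is not None:
            i = (i + 1) % size   # probe the next slot, wrapping around
        table[i] = k
        slot[k] = i
    return slot

slots = insert_with_linear_probing([37, 38, 72, 48, 98, 11, 56])
print(slots[11])  # → 5, option (b)
```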
11. The average successful search time for sequential search on ‘n’ items is
(a) n/2
(b) (n-1)/2
(c) (n+1)/2
(d) log(n) + 1
12. The running time of an algorithm T(n), where n is the input size, is given by
T(n) = 8T(n/2) + qn, if n > 1
= p, if n = 1
where p, q are constants. The order of the algorithm is
(a) n²
(b) n³
(c) n^{n}
(d) n
13. The running time T(n), where n is the input size of a recursive algorithm, is given as follows:
T(n) = C + T(n-1), if n > 1
= D, if n ≤ 1
The order of this algorithm is
(a) n^{2}
(b) n
(c) n^{3}
(d) n^{n}
14. There are 4 different algorithms A1, A2, A3, A4 to solve a given problem with the order log(n), log log (n), nlogn, n/logn respectively, which is the best algorithm?
(a) A1
(b) A4
(c) A2
(d) A3
15. The number of possible binary trees with 4 nodes is
(a) 12
(b) 13
(c) 15
(d) 14
16. The time complexity of an algorithm T(n), where n is the input size, is given by
T(n) = T(n-1) + 1/n, if n > 1
= 1, otherwise
(a) logn
(b) n
(c) n^{2}
(d) nn
17. A text is made up of the characters a, b, c, d, e each occurring with the probability .12, .4, .15, .08 and .25 respectively. The optimal coding technique will have the average length of
(a) 2.15
(b) 2.3
(c) 3.01
(d) 1.78
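For question 17, the average length of the optimal (Huffman) code equals the sum of the internal-node weights created during merging, which a few lines with a heap can compute (an illustrative check):

```python
import heapq

def huffman_average_length(probs):
    """Average codeword length of a Huffman code: repeatedly merge the two
    smallest weights; the running sum of merged weights is the answer."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

print(round(huffman_average_length([0.12, 0.4, 0.15, 0.08, 0.25]), 2))  # → 2.15, option (a)
```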
18. The running time of an algorithm is given by
T(n) = T(n-1) + T(n-2) – T(n-3), if n>3
= n, otherwise
The order is
(a) n
(b) logn
(c) n^{n}
(d) n^{2}
19. Ackermann's function
(a) has quadratic time complexity
(b) can't be solved iteratively
(c) has exponential time complexity
(d) has logarithmic time complexity
20. The way a card game player arranges his cards as he picks them up one by one, is an example of
(a) bubble sort
(b) insertion sort
(c) selection sort
(d) merge sort
21. The average number of comparison performed by the merge sort algorithm, in merging two sorted lists of length 2 is
(a) 8/3
(b) 11/7
(c) 8/5
(d) 11/6
22. Which of the following sorting methods will be the best if number of swappings done, is the only measure of efficiency?
(a) Bubble sort
(b) Insertion sort
(c) Selection sort
(d) All of the above
23. You are asked to sort 15 randomly arranged numbers. You should prefer
(a) bubble sort
(b) quick sort
(c) merge sort
(d) heap sort
24. The maximum number of comparisons needed to sort 7 items using radix sort is
(a) 280
(b) 40
(c) 47
(d) 38
25. Which of the following sorting algorithm has the worst time complexity of n log n?
(a) Heap sort
(b) Quick sort
(c) Insertion sort
(d) Selection sort
26. Which of the following sorting methods sorts a given set of items that is already in order or in reverse order with equal speed?
(a) Heap sort
(b) Quick sort
(c) Selection sort
(d) Insertion sort
27. Which of the following algorithms solves the all pair shortest path problem?
(a) Floyd’s algorithm
(b) None of these
(c) Dijkstra algorithm
(d) Prim’s algorithm
28. Merge sort uses
(a) divide and conquer strategy
(b) heuristic search
(c) back tracking approach
(d) greedy approach
29. The principle of locality justifies the use of
(a) Interrupts
(b) DMA
(c) Polling
(d) cache memory
30. In merging two sorted lists of sizes m and n into a sorted list of size (m + n), we require comparisons of
(a) O (m)
(b) O (m + n)
(c) O (n)
(d) O (log m + log n)
31. A binary tree T has n leaf nodes. The number of nodes of degree 2 in T is
(a) log₂n
(b) n-1
(c) 2^{n}
(d) n
32. The minimum number of edges in a connected cyclic graph on n vertices is
(a) n-1
(b) n
(c) n + 1
(d) none of the above
33. The postfix expression for the infix expression A + B*(C+D)/F + D*E is
(a) AB + CD + * F/D + E*
(b) A* B+ CD/F * DE ++
(c) ABCD+*F/+DE* +
(d) A+ BCD/F* DE ++
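Question 33 can be checked with a bare-bones shunting-yard conversion (illustrative only; it handles single-letter operands and the operators +, -, *, / with left associativity):

```python
def to_postfix(expr):
    """Convert an infix expression to postfix via the shunting-yard algorithm."""
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    out, stack = [], []
    for tok in expr.replace(" ", ""):
        if tok.isalnum():
            out.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                out.append(stack.pop())
            stack.pop()              # discard the "("
        else:                        # operator: pop equal/higher precedence first
            while stack and stack[-1] != "(" and prec[stack[-1]] >= prec[tok]:
                out.append(stack.pop())
            stack.append(tok)
    while stack:
        out.append(stack.pop())
    return "".join(out)

print(to_postfix("A + B*(C+D)/F + D*E"))  # → ABCD+*F/+DE*+, option (c)
```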
34. The minimum number of colours needed to colour a graph with n vertices and 2 edges is
(a) 4
(b) 2
(c) 3
(d) 1
35. Stack is useful for implementing
(a) radix sort
(b) recursion
(c) breadth first search
(d) depth first search
36. In a circularly linked list organization, insertion of a record involves the modification of
(a) 1 pointer
(b) 3 pointers
(c) no pointer
(d) 2 pointers
37. Which of the following is useful in traversing a given graph by breadth first search?
(a) set
(b) List
(c) Stack
(d) queue
38. Which of the following is useful in implementing quick sort?
(a) set
(b) List
(c) Stack
(d) queue
39. Queue can be used to implement
(a) radix sort
(b) recursion
(c) stack
(d) list
40. The process of accessing data stored in a tape is similar to manipulating data on a
(a) stack
(b) queue
(c) list
(d) heap
41. The maximum degree of any vertex in a simple graph with n vertices is
(a) n
(b) n-1
(c) n + 1
(d) 2n-1
42. Which of the following algorithm design technique is used in the quick sort algorithm?
(a) Dynamic programming
(b) Divide and Conquer
(c) Back tracking
(d) Greedy method
43. The number of edges in a regular graph of degree d and n vertices is
(a) maximum of n, d
(b) n+d
(c) nd
(d) nd/2
44. A 3-ary tree is a tree in which every internal node has exactly 3 children. The number of leaf nodes in such a tree with 6 internal nodes will be
(a) 10
(b) 23
(c) 17
(d) 13
45. Sorting is useful for
(a) report generation
(b) making searching easier and efficient
(c) responding to queries easily
(d) all of these
46. The information about an array used in a program will be stored in
(a) symbol table
(b) system table
(c) activation record
(d) dope vector
47. The linked list implementation of sparse matrices is superior to the generalized dope vector method because it is
(a) conceptually easier
(b) completely dynamic
(c) efficient in accessing an entry
(d) efficient if the sparse matrix is a band matrix
48. The average search time of hashing, with linear probing will be less if the load factor
(a) is far less than one
(b) is far greater than one
(c) equals one
(d) none of the above
49. Which of the following remarks about Trie-indexing are true
(a) It is an m-ary tree
(b) Successful searches should terminate in leaf nodes
(c) Unsuccessful searches may terminate at any level of the tree structure
(d) All of these
50. Stacks can’t be used to
(a) evaluate an arithmetic expression in post fix form
(b) implement recursion
(c) convert a given arithmetic expression in infix form to its equivalent post fix form
(d) allocate resources by the operating system
51. Which of the following abstract data types can be used to represent a many to many relation?
(a) Tree
(b) Graph
(c) Plex
(d) both (b) and (c) above
A tournament approach to pattern avoiding matrices
We consider the following Turán-type problem: given a fixed tournament H, what is the least integer t = t(n,H) so that adding t edges to any n-vertex tournament results in a digraph containing a copy of H? Similarly, what is the least integer t = t(T[n],H) so that adding t edges to the n-vertex transitive tournament results in a digraph containing a copy of H? Besides proving several results on these problems, our main contributions are the following. First, Pach and Tardos conjectured that if M is an acyclic 0/1 matrix, then any n × n matrix with n(log n)^O(1) entries equal to 1 contains the pattern M. We show that this conjecture is equivalent to the assertion that t(T[n],H) = n(log n)^O(1) if and only if H belongs to a certain (natural) family of tournaments. Second, we propose an approach for determining whether t(n,H) = n(log n)^O(1). This approach combines expansion in sparse graphs with certain structural characterizations of H-free tournaments. Our result opens the door to using structural graph-theoretic tools to settle the Pach–Tardos conjecture.
Bibliographical note
Publisher Copyright:
© 2017, Hebrew University of Jerusalem.
Block Core Fill Calculator - Online Calculators
Enter the values in the required fields to use our basic and advanced block core fill calculator online.
The Block Core Fill Calculator determines the volume of material needed to fill the hollow cores of concrete blocks. This calculation is important in construction projects where concrete, grout, or other fill materials are used to strengthen and stabilize concrete block structures.
Formula & Variables
The Block Core Fill Calculator uses a straightforward formula, V = L × W × H, to determine the volume of block core fill:
• V: Volume of the block core fill
• L: Length of the block
• W: Width of the block
• H: Height of the block
How to Calculate ?
1. Measure the Block Dimensions:
Measure the length (L), width (W), and height (H) of the block that you need to fill.
2. Multiply the Dimensions:
Multiply the length (L) by the width (W), and then multiply the result by the height (H) to calculate the volume (V).
3. Adjust Units if Needed:
Ensure the measurements are in the same units before performing the calculation, and convert the result to cubic feet or cubic meters as needed.
Solved Calculations :
Example 1:
• Length (L) = 2 feet
• Width (W) = 1.5 feet
• Height (H) = 4 feet
Calculation Instructions
Step 1: V = $L \times W \times H$ Start with the formula.
Step 2: V = $2 \times 1.5 \times 4$ Replace L with 2 feet, W with 1.5 feet, and H with 4 feet.
Step 3: V = $3 \times 4$ Multiply 2 feet by 1.5 feet to get 3 cubic feet.
Step 4: V = 12 cubic feet Multiply by the height to get the total volume.
The total volume is 12 cubic feet.
Example 2:
• Length (L) = 0.6 meters
• Width (W) = 0.3 meters
• Height (H) = 1 meter
Calculation Instructions
Step 1: V = $L \times W \times H$ Start with the formula.
Step 2: V = $0.6 \times 0.3 \times 1$ Replace L with 0.6 meters, W with 0.3 meters, and H with 1 meter.
Step 3: V = $0.18 \times 1$ Multiply 0.6 meters by 0.3 meters to get 0.18 cubic meters.
Step 4: V = 0.18 cubic meters Multiply by the height to get the total volume.
The total volume is 0.18 cubic meters.
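The two worked examples above reduce to a one-line function. The sketch below is illustrative (the function name is invented, and all inputs are assumed to be in the same unit):

```python
def block_core_fill_volume(length, width, height):
    """Volume of core fill, V = L * W * H, in consistent units."""
    return length * width * height

print(block_core_fill_volume(2, 1.5, 4))    # Example 1: 12 cubic feet
print(block_core_fill_volume(0.6, 0.3, 1))  # Example 2: ~0.18 cubic meters
```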
What is Block Core Fill ?
A Block Core Fill Calculator is a valuable tool for estimating the amount of concrete or grout needed to fill the hollow cores of concrete blocks. Filling the cores increases their structural strength and stability, particularly in load-bearing walls. The calculation depends on the size of the block, the number of cores to fill, and the required cement ratio. For example, 200 series blocks have a higher fill rate than smaller blocks. The calculator allows users to input the block dimensions, number of blocks, and desired fill ratio to quickly determine how much core fill is required per square meter or per block.
It is useful for builders, contractors, and DIYers working on large-scale construction projects that use concrete blocks. Whether you're filling standard 8x8x16 blocks, 200 series blocks, or larger options like 300 series H-blocks, the Block Core Fill Calculator keeps the process simple and provides an exact measurement.
Final Words:
Time Efficiency: Calculating block core fill volume manually can be time-consuming and prone to errors. The calculator streamlines the process, saving builders time and ensuring accuracy in their
Calibration Procedure for Water Distribution Systems: Comparison among Hydraulic Models
Faculty of Science and Technology, Free University of Bozen-Bolzano, Piazza Università 5, 39100 Bolzano, Italy
Department of Civil, Environmental and Mechanical Engineering, University of Trento, via Mesiano 77, 38123 Trento, Italy
Author to whom correspondence should be addressed.
Submission received: 8 April 2020 / Revised: 6 May 2020 / Accepted: 14 May 2020 / Published: 16 May 2020
Proper hydraulic simulation models, which are fundamental to analyse a water distribution system, require a calibration procedure. This paper proposes a multi-objective procedure to calibrate water demands and pipe roughness distribution in the context of an ill-posed problem, where the number of measurements is smaller than the number of variables. The proposed methodology is a two-step procedure based on a genetic algorithm. First, several runs of the calibrator are performed and the corresponding pressure and flow-rate values are averaged to overcome the non-uniqueness of the solutions. Second, the final calibrated model is obtained by running the calibrator with the average values of the previous step as the reference condition. The procedure therefore yields physically based hydraulic parameters. Moreover, several hydraulic models are investigated to assess their performance in this optimisation procedure. The considered models are based either on demands concentrated at nodes or distributed along pipes, and either on a demand-driven or a pressure-driven approach. Results show the reliability of the final calibrated model in the context of the ill-posed problem, and the overall better performance of the pressure-driven approach with distributed demand in scarce pressure conditions.
1. Introduction
Nowadays, hydraulic simulation models are widely used for analysing the behaviour of water distribution systems. Due to the high degree of uncertainty and to the lack of details of the system,
reliable management may be achieved only with an accurately calibrated model.
Calibration of water distribution models is a process that adjusts network parameters, such as pipe roughness and nodal demand [
], to minimize the differences between simulation results and real measurements. In order to be reliable, a hydraulic model requires a calibration process [
] that modifies the most sensitive parameters.
A comprehensive literature review of the water distribution network (WDN) model calibration is proposed in [
], where the calibration methods are classified as generally as possible in three different categories. Firstly, iterative and trial and error procedures, where unknown parameters are updated at each
iteration [
]. This approach has a slow convergence rate and typically can handle only small problems. Secondly, explicit methods which are based on the solution of an extended set of steady-state equations [
]. This extended set of equations is composed of initial equations plus additional ones derived from measurements available. Thirdly, implicit methods that are based on optimization techniques. These
latter have to minimise one or more objective functions considering two constraints: energy and mass equation, that are implicit in the hydraulics of the problem, and the range for the chosen
variables. Several different approaches have been proposed in the literature, for instance, based on a single-objective heuristic algorithm [
] or multi-objective [
]. Furthermore, the considered calibration variables have a wide range of possible parameters, such as nodal demand and pipe roughness [
], or valve status and leak parameters [
For example, Meirelles et al. [
] proposed a meta-model based on an artificial neural network to forecast pressure at the network nodes. Afterwards, the calibration was performed by using a Particle Swarm Optimization to estimate
pipes roughness minimising the objective function written as the difference among simulated and forecasted pressure. Do et al. [
] proposed a framework to estimate near-real time demand in a WDN. A predictor-corrector methodology is applied to predict the hydraulics of a water network, and then a particle filter-based model is
used to calibrate water demands. Zhou et al. [
] developed a self-adaptive system based on Kalman filter technique to develop a dual calibration of both pipe roughness and nodal water demands in a water distribution system.
The uncertainty of the results from WDN modelling is caused by many factors, which can be classified according to Hutton et al. [
] in structural, measurements and parameter uncertainty. Structural uncertainty is related to the representation of the real system, such as model aggregation or skeletonisation. Measurement
uncertainty concerns the inability of measurement devices to capture the temporal and spatial variation of consumer demand and to errors related to the measure itself. Parameter uncertainty refers to
the errors of the choice of variables used to model the system. Another source of uncertainty is related to the presence of leakages in the distribution network, which has been widely studied in
literature [
]. According to Kang and Lansey [
], pipes roughness and water demands are the most uncertain input parameters in a hydraulic model because they are not directly measurable. Moreover, given also the general lack of information
regarding the hydraulic state of the networks, the calibration problem is typically ill-posed, meaning that the number of measurements is much smaller than the number of variables.
Recently Do et al. [
] proposed an approach to deal with an ill-posed calibration problem by using multiple runs of a genetic algorithm model. It was found that a good solution can be achieved in spite of the
non-uniqueness of the solutions, by averaging the hydraulic simulation results of the several runs. A similar approach was proposed by Letting et al. [
], which proposed an approach based on a particle swarm optimisation. Since the stochastic nature of the calibration problem both Do et al. [
] and Letting et al. [
] made multiple runs of their optimisation algorithms and used the average of the solutions as a more accurate result.
Besides the calibration procedure, also the hydraulic modelling approach plays a crucial role in the accuracy of the results. In literature, most of the works are based on the EPANET2 [
] hydraulic solver [
]. The numerical solver adopted in this program is based on the Todini and Pilati [
] algorithm, which proposed a direct solution of the equations of mass conservation at the nodes and energy conservation along pipes of the WDNs. A solution is guaranteed by the convexity of the
system of equations [
], but since the problem is partially non-linear, a linearization is performed and achieved through Newton-Raphson gradient technique. The resulting linear system is solved with an iterative
procedure to find the nodal heads and pipe flow rates. This is called Global Gradient Algorithm (GGA).
The original GGA adopts a nodal demand driven (NDD) approach. NDD means that the water demands spread along the WDN is assumed lumped at the nodes of the network, and always fully satisfied. These
assumptions can lead to inaccuracy in the model, especially in cases where the network has a deficit of pressure and is skeletonised. Therefore, many authors [
] modified the GGA scheme to manage scarce pressure conditions through a formulation of the water demand that depends on pressure. These approaches (hereafter NPD, nodal pressure driven) were still developed
with the water demand concentrated at the network nodes. However, models which simulate the demand as uniformly distributed along the pipes, contrary to the node-concentrated, have been proposed to
properly represent the demand distribution [
]. These approaches (hereafter DDD distributed demand driven) preserve the energy balance. Recently, a pressure driven distributed (DPD) implementation [
] manages to address both the demand driven approach and the concentrated demand issues.
The aim of the work is twofold: first, to propose a methodology to calibrate the water demands and pipe roughness of a WDN through an optimisation procedure in a context of scarce measurements; and second, to assess the influence of the hydraulic modelling on the calibration procedure by comparing the results obtained using four different hydraulic modelling approaches. In order to achieve this aim, a
static condition is considered, that is the calibration process of the roughness and water demand distribution considering a set of known flow rate at (some) pipes and pressure at (some) nodes values
as measured at a given moment. In other words, the optimisation process does not consider the hourly variation of demand flow and also of heads at the tanks through an Extended Period Simulation
(EPS, see [
]) because it is beyond the scope of the work.
In this paper, the authors propose a two-step multi-objective procedure to calibrate water demands and pipe roughness distribution. The purpose of the work is not to solve the ill-posed problem but
to propose a suitable solution among all the possible ones, which can be a solid starting point for managing a network with scarce measurements. In the first part, 100 runs of the non-dominated
sorting genetic algorithm II (NSGA-II) [
] calibrator are performed in order to collect a set of pressure values at the nodes and flow rate values along the pipes. Then, with the aim to overcome the non-uniqueness of the solutions problem,
average distribution of pipes flow rate and nodes pressure has been calculated by averaging the corresponding values of the 100 runs.
In the second part, the final calibration of the WDN is performed considering as the new reference condition the average values of the set of pressures and flow rate values obtained at the previous
step. Therefore, in this last run, the number of variables (i.e., water demand and pipe roughness) and the number of equations is the same. The hydraulic consistency of the final solution is
guaranteed by the use of optimisation with appropriate fixed boundaries, thus avoiding the problem of possible non-physical results deriving from the direct solution of the deterministic problem.
This last step allows to obtain a model physically based on a set of pipe roughness and water demands. To verify the proposed approach and to accurately reproduce a real-world scenario, a reference
condition is built with the spatial distribution of the withdrawals in order to represent a realistic distribution of pipes connection. Moreover, the calibration procedure is carried out with a
scarce amount of measurements.
The proposed calibration procedure is tested by means of different water distribution modelling approaches, which are based either on concentrated or distributed demands, but also either on demand
driven or pressure driven demand approaches. Specifically, a sequence of models is used: (1) NDD, (2) NPD, (3) DDD and (4) DPD. These approaches are implemented on the GGA to perform the simulation
for the comparison in the calibration processes.
2. Methodology
The calibration of WDNs is formulated as an optimisation problem using NSGA-II. The algorithm is chosen due to its ability to effectively solve non-linear and complex optimisation engineering
problems. The variables for the decisional process of the genetic algorithm are pipes roughness and average daily demand. The parameters of the algorithm are selected to ensure the stability of the
solutions: this is achieved after multiple runs of the calibration process, and it is decided to use a 25% probability of performing a polynomial mutation and a 90% probability of performing a simulated binary crossover.
Figure 1
shows the flowchart of the proposed methodology characterised by two steps. Due to the stochastic nature of the optimisation procedure, the problem does not have a unique solution [
]. In the first part of the methodology, 100 runs of the algorithm are performed as proposed in [
]. For every cycle of the multi-objective genetic algorithm, firstly the initialisation of a random population with the shuffle of the random seed is performed. Then, the initialisation variables are
used as input for the hydraulic model, which returns the nodal hydraulic heads and the pipe flow rates for each chromosome.
Afterwards, the two objective functions, which are expressed as the difference among measured and simulated values of pressure at the nodes and flow rate along the pipes, are evaluated. The selection
process starts with the non-dominated sorting criteria based on the tournament selection procedure. The population, divided in half from the previous step, has to encounter the two genetic operators.
These latter are the simulated binary crossover, whose aim is to combine the variables between different chromosomes, and the polynomial mutation that has to guarantee the variability with random
addition. As a result, a new generation is created, and a new iteration starts. In this first part, a population size of 300 and a generation number of 500 are adopted in the genetic algorithm.
After 100 runs, the solution that minimises the Euclidean distance to the best point, which in this case is the origin of the objective space, is selected for each run. Then, the hydraulic output of every selected solution is collected in order to have a set of 100 pressure values for each node and 100 flow rate values for each pipe. Both sets of pressure and flow rate values are then averaged to obtain a single solution. These average values are a good estimation of the reference condition [ ], but they are not directly reproducible through a hydraulic simulation, i.e., they do not correspond to a set of roughness values and a set of water demands. To overcome this problem, the authors propose a second step where the WDN is calibrated with another run of the NSGA-II, using objective functions written as the difference between averaged and simulated pressures at the nodes and flow rates along the pipes. Thus, the number of equations and the number of variables is the same. The population and generation numbers of this second step are 300 and 1000, respectively, and the selection criterion is the same as in the first step. This process is carried out for the four different hydraulic approaches proposed.
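As an illustration, the per-run selection and the averaging step above can be sketched as follows. The Pareto fronts and hydraulic outputs here are random stand-ins for actual NSGA-II runs (not the authors' data), and the network sizes (23 nodes, 34 pipes) follow the Apulian model described later:

```python
import numpy as np

rng = np.random.default_rng(42)

def select_closest_to_origin(pareto_front):
    """Pick the non-dominated solution with minimum Euclidean
    distance to the origin of the objective space (FO1, FO2)."""
    d = np.linalg.norm(pareto_front, axis=1)
    return int(np.argmin(d))

# Toy stand-in for 100 calibration runs: each run yields a Pareto
# front of (FO1, FO2) values plus the hydraulic outputs (pressures,
# flow rates) associated with each front member.
n_runs, n_nodes, n_pipes = 100, 23, 34
pressures, flows = [], []
for _ in range(n_runs):
    front = rng.uniform(0.0, 5.0, size=(30, 2))          # (FO1, FO2) pairs
    head = rng.uniform(10.0, 20.0, size=(30, n_nodes))   # nodal pressures
    q = rng.uniform(20.0, 100.0, size=(30, n_pipes))     # pipe flow rates
    k = select_closest_to_origin(front)
    pressures.append(head[k])
    flows.append(q[k])

# Step-two targets: average over the 100 selected solutions.
p_avg = np.mean(pressures, axis=0)   # one value per node
q_avg = np.mean(flows, axis=0)       # one value per pipe
print(p_avg.shape, q_avg.shape)      # (23,) (34,)
```

These averaged vectors are then fed back to a final NSGA-II run as pseudo-measurements, which restores a physically consistent set of roughness values and demands.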
2.1. Non-Uniqueness of the Solutions
The optimisation problem has a stochastic nature due to the lack of detailed information that affects most WDNs. In particular, the uncertainty in the values of the network parameters (e.g., water demands and pipe roughness) generates differences between the models and the real WDN. To overcome this problem, it is necessary to use measurements taken from the real network to calibrate the model. However, for practical and economic reasons, it is impossible to have a pressure measurement at each node and a flow rate measurement in each pipe. The solutions obtained for each run of the calibration procedure through the optimisation algorithm represent a set of possible solutions, since a single run might converge to a local minimum that is far from the best solution.
In order to overcome this problem, an approach similar to the one proposed in [ ] is used, and 100 runs are performed to develop the procedure. Despite the lack of information, it was shown that a good solution could be found by using the average of the 100 solutions. However, the average pressure and flow rate values are not reproducible through a hydraulic simulation because the corresponding pipe roughness values and water demands are unknown. To overcome this limitation, a last step is performed using the average values of the 100 runs as measurement data. Since the average pressure at each node and the average flow rate in each pipe are known, the number of equations and the number of variables is the same. The final solution is thus composed of the set of roughness values and water demands that reproduce the average values.
2.2. Hydraulic Models
The traditional modelling of WDNs concerns a system of energy and mass balances to provide the nodal hydraulic heads and pipe flow rates.
The NDD approach involves the water demand aggregated at the network nodes, which is fully satisfied [ ] independently of the pressure condition, leading to the following mass balance equation at the $k$-th node:

$\sum_i Q_{ik} - \sum_j Q_{kj} - q_k = 0$ (1)

where $i$ and $j$ are the upstream and downstream pipe nodes, respectively, $Q$ is the pipe flow rate, and $q_k$ is the nodal water demand. The corresponding energy balance equation of the $ij$-th pipe reads as:
$h_i - h_j = r_{ij} L_{ij} Q_{ij} |Q_{ij}|^{n-1}$ (2)

where $L_{ij}$ represents the pipe length, $r_{ij}$ represents a coefficient that depends on the head loss mathematical formulation, and $h$ is the nodal hydraulic head. The Darcy-Weisbach expression, with $n$ equal to 2, is used in this work. This approach has become the standard for WDN hydraulic models.
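A minimal sketch of the energy balance in Equation (2); the resistance coefficient and pipe length below are made-up illustrative values, not taken from the paper:

```python
def head_loss(r_ij, L_ij, Q_ij, n=2.0):
    """Energy balance, Eq. (2): h_i - h_j = r * L * Q * |Q|^(n-1).
    With n = 2 this corresponds to the Darcy-Weisbach form; the sign
    of the result follows the sign of the flow rate."""
    return r_ij * L_ij * Q_ij * abs(Q_ij) ** (n - 1.0)

# Hypothetical pipe: lumped resistance r = 0.002, length L = 500 m.
print(head_loss(0.002, 500.0, 0.05))   # ~0.0025 m of head loss
print(head_loss(0.002, 500.0, -0.05))  # ~-0.0025 m (reversed flow)
```

Writing the flow term as $Q|Q|^{n-1}$ rather than $Q^n$ keeps the head loss odd in $Q$, so reversed flows produce a head loss of the opposite sign.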
For a proper simulation in scarce pressure conditions, the NPD approach has been proposed [ ]. In particular, real water demands depend on the available hydraulic pressure at the nodes. Thus, the mass balance equation at the $k$-th node results in:

$\sum_i Q_{ik} - \sum_j Q_{kj} - q_k(h_k) = 0$ (3)
The energy balance equation remains Equation (2) because the flow rate along the pipe is still a constant value.
However, this approach considers water demands as aggregated at the network nodes, even though the real withdrawals are distributed along the network pipes [ ]. Some authors [ ] have integrated a uniformly distributed demand along the pipe into the GGA scheme, while keeping the water demand independent of pressure. This demand representation leads to a linear flow rate variation along the pipe. In this case, the mass balance at the $k$-th node can be computed as:

$\sum_i ( Q_{ik} - p_{ik} L_{ik} ) - \sum_j Q_{kj} = 0$ (4)

where $p_{ik}$ represents the water demand uniformly distributed along the pipes. The corresponding energy balance equation in the $ij$-th pipe reads as:

$h_i - h_j = \frac{r_{ij}}{p_{ij}} \frac{|Q_{ij}(0)|^{n+1} - |Q_{ij}(L_{ij})|^{n+1}}{n+1}$ (5)

where $Q_{ij}(0)$ and $Q_{ij}(L_{ij})$ denote the flow rates at the upstream and downstream ends of the pipe.
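Assuming the linear flow variation $Q(x) = Q_0 - p\,x$ implied by a uniformly distributed demand $p$, the closed-form integrated head loss can be verified against a numerical quadrature of the pointwise Darcy-Weisbach loss. The parameter values below are illustrative only:

```python
import numpy as np

def dh_closed_form(r, p, Q0, L, n=2.0):
    """Integrated head loss for a linearly varying flow Q(x) = Q0 - p*x:
    dh = (r/p) * (|Q(0)|^(n+1) - |Q(L)|^(n+1)) / (n+1)."""
    QL = Q0 - p * L
    return r / p * (abs(Q0) ** (n + 1) - abs(QL) ** (n + 1)) / (n + 1)

def dh_numeric(r, p, Q0, L, n=2.0, m=100000):
    """Midpoint-rule integration of r * Q(x) * |Q(x)|^(n-1) over [0, L]."""
    dx = L / m
    x = (np.arange(m) + 0.5) * dx
    Q = Q0 - p * x
    return np.sum(r * Q * np.abs(Q) ** (n - 1.0)) * dx

r, p, Q0, L = 0.002, 1e-4, 0.08, 400.0   # hypothetical pipe values
print(dh_closed_form(r, p, Q0, L))
print(dh_numeric(r, p, Q0, L))           # should match closely
```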
Recently, a model that combines both the distributed demand and the pressure driven approach has been proposed [ ]. Without pressure deficit, the DPD can be simplified to the mathematical formulation of the DDD, i.e., with the water demand uniformly distributed and independent of pressure. However, in pressure deficit conditions, the DPD approach is able to solve a pressure driven simulation with uniformly distributed water demand. In order to do that, the actual water demand function is approximated with a second-order polynomial function as:

$p_{ij}(\epsilon) = w_{ij,1}(h_{ij}) \epsilon^2 + w_{ij,2}(h_{ij}) \epsilon + w_{ij,3}(h_{ij})$ (6)

where $p_{ij}(\epsilon)$ represents the water demand function along the $ij$-th pipe with spatial coordinate $\epsilon$, and $w_{ij,1}(h_{ij})$, $w_{ij,2}(h_{ij})$ and $w_{ij,3}(h_{ij})$ are three coefficients dependent on the pressure in the $ij$-th pipe. For additional details see Appendix A in [ ]. The mass balance at the $k$-th node can be read as:
$\sum_i \left( Q_{ik} - \int_0^{L_{ik}} p_{ik}(\epsilon)\, d\epsilon \right) - \sum_j Q_{kj} = 0$ (7)
Hence, the energy balance equation over the $ij$-th pipe is directly integrable, and can be read as:

$h_i - h_j = \int_0^{L_{ij}} r_{ij}\, Q_{ij}(x) |Q_{ij}(x)|^{n-1}\, dx$ (8)

In the case of a second-order polynomial water demand function, the complete integrated expression can be found in [
]. Summarising, the hydraulic approaches adopted in this paper concern the NDD based on Equations (1) and (2), the NPD based on Equations (2) and (3), the DDD based on Equations (4) and (5) and the
DPD based on Equations (7) and (8).
In the four methodologies, the Darcy-Weisbach equation is used to model the energy losses. In both the NPD and DPD models, the pressure-demand relationship used at the $k$-th node is:

$q_k = q_k^0 \, \frac{\exp(\alpha_k + \beta_k p_k)}{1 + \exp(\alpha_k + \beta_k p_k)}$ (9)
where $q_k^0$ represents the water requested at the $k$-th node, $p_k$ is the pressure at the $k$-th node, and $\alpha_k$ and $\beta_k$ are coefficients defined as in [ ], which read as:

$\alpha_k = \frac{-4.595\, p_r - 6.907\, p_{min}}{p_r - p_{min}}, \quad \beta_k = \frac{11.502}{p_r - p_{min}}$ (10)
In Equation (10), $p_{min}$ represents the minimum hydraulic pressure below which the outflow is zero (in our case it is fixed to 0) and $p_r$ is the hydraulic pressure at which the water request is fully satisfied (fixed to 30 m).
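The pressure-demand relationship of Equations (9) and (10) can be sketched as follows, with $p_{min} = 0$ and $p_r = 30$ m as stated in the text. Note that the logistic form delivers roughly 1% of the requested demand at $p_{min}$ rather than exactly zero:

```python
import math

def demand_fraction(p, p_min=0.0, p_r=30.0):
    """Pressure-demand relationship, Eqs. (9)-(10): fraction of the
    requested demand q_k0 actually delivered at pressure p (metres)."""
    alpha = (-4.595 * p_r - 6.907 * p_min) / (p_r - p_min)
    beta = 11.502 / (p_r - p_min)
    e = math.exp(alpha + beta * p)
    return e / (1.0 + e)

for p in (0.0, 15.0, 30.0):
    # roughly 1% delivered at p_min, ~76% halfway, ~99.9% at p_r
    print(p, round(demand_fraction(p), 3))
```

The constants satisfy $\alpha_k + \beta_k p_{min} = -4.595$ and $\alpha_k + \beta_k p_r = 6.907$, which pins the sigmoid close to 0 and 1 at the two reference pressures.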
2.3. Decision Variables
The variables involved in the calibration process are the pipe roughness and the water demand. Since typically only poor information regarding pipe roughness is available for a real WDN, a wide variable range between 0.1 mm and 1 mm is selected. The chosen range is intended to cover the possible roughness values for a common steel pipe.
Regarding the demand, it can be represented as lumped at the nodes or distributed along the pipes in the case of nodal demand models (e.g., NDD and NPD) and distributed demand models (e.g., DDD and DPD), respectively. This results in a different number of calibration variables for the two types of models. Consequently, the range of the demand variables is set according to Equation (11), to allow the heuristic procedure to converge in a reasonable computational time. As the total water demand of the network is known, the bounds for the demand distributed along the pipes, $p_{ij}$, and for the nodal concentrated demand, $q_k$, are calculated as follows:
$0 < p_{ij} < \frac{4\, Q_{in}}{\sum_{ij=1}^{N_p} L_{ij}}, \quad 0 < q_k < \frac{4\, Q_{in}}{N_n}$ (11)
where $Q_{in}$ is the total water flow entering the network, $L_{ij}$ is the length of the $ij$-th pipe, $N_n$ is the total number of nodes in the network and $N_p$ is the total number of pipes. The discretisation of the variable increments is problem dependent and has to be defined case by case. In this work, an increment step of $10^{-2}$ mm is adopted for the pipe roughness, $10^{-4}$ L/(s m) for the water demands uniformly distributed along the pipes and $10^{-2}$ L/s for the nodal water demands. The chromosome is then built as the sequence of the variables, starting with the roughness of each pipe, followed by the water demands. The latter are defined at each node if the hydraulic approach has nodal concentrated demands, and at each pipe if the hydraulic approach has demands distributed along the pipes.
2.4. Objective Functions
The calibration is defined as a heuristic optimisation problem where two objective functions have to be minimised. The best expression for an objective function is currently an open question [ ]. Different forms were tested, and the expressions in Equations (12) and (13) were selected. Each consists of the sum of the absolute differences between the field-observed and simulated values of the nodal pressures and pipe flow rates at the measurement points.
$FO_1 = \sum_k | P_{k,m} - P_{k,c} |$ (12)

$FO_2 = \sum_{ij} | Q_{ij,m} - Q_{ij,c} |$ (13)
where $P_{k,m}$ and $Q_{ij,m}$ are, respectively, the measured pressure value at the $k$-th node and the measured flow rate value in the $ij$-th pipe, while $P_{k,c}$ and $Q_{ij,c}$ are the calculated pressure value at the $k$-th node and the calculated flow rate value in the $ij$-th pipe. These dimensional objective functions are chosen to simplify the comparison among different hydraulic approaches in the proposed calibration procedure.
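Equations (12) and (13) translate directly into code. In the example below, the measured values are taken from Table 1, while the simulated values are hypothetical stand-ins:

```python
def fo1(P_measured, P_computed):
    """Eq. (12): sum of absolute pressure errors at the measurement nodes (m)."""
    return sum(abs(m - c) for m, c in zip(P_measured, P_computed))

def fo2(Q_measured, Q_computed):
    """Eq. (13): sum of absolute flow rate errors at the monitored pipes (L/s)."""
    return sum(abs(m - c) for m, c in zip(Q_measured, Q_computed))

# Measured data from Table 1 (nodes 4, 13, 16, 23 and pipe 34)
# against hypothetical simulated values:
P_m = [17.92, 13.37, 16.55, 13.57]
P_c = [17.80, 13.50, 16.55, 13.40]
print(fo1(P_m, P_c))             # 0.12 + 0.13 + 0.00 + 0.17, i.e. ~0.42 m
print(fo2([240.82], [239.90]))   # ~0.92 L/s
```

Because the two functions keep their physical units, their values can be compared across the four hydraulic approaches without rescaling.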
3. Test Case
In this paper, the so-called Apulian network [ ] is used for testing the proposed calibration procedure. Figure 2a shows the original Apulian network layout, which consists of 1 reservoir, 23 nodes and 34 pipes. This network is a skeletonised representation of the real WDN. It is selected because it can be considered a medium-small network with a non-complex topology. In addition, it is affected by a pressure deficit, which is a common condition in many distribution systems. More information about this network can be found in [ ]. In order to have measurements for the calibration procedure, a detailed network (Figure 2b) has been built following the procedure in Section 3.1. It consists of 1 reservoir, 238 nodes and 268 pipes. Since the paper is focused on the ill-posed calibration problem, only a low number of measurement points is selected, according to Section 3.1.
3.1. Data Generation and Sensor Placement
The purpose of this subsection is to describe the procedure adopted for generating the reference network and for positioning the sensors. The former is necessary to compare the results of the different models during the proposed calibration procedure, while the latter enables the calibration to be performed with a few known variables at some points of the network.
Hydraulic models are representations of real WDNs, where the withdrawals of the users are spread throughout the distribution network. Generally, the distance between consecutive withdrawal points depends on the urban population density and its spatial distribution, and can range from a few dozen to hundreds of metres. Therefore, a reference network is generated starting from the original Apulian model, where all the nodes have a random distance between 20 and 45 m. The amount of water demand is distributed randomly, with the only constraint being the mass balance at every single pipe [ ]. Moreover, the roughness values are fixed considering a random pipe age between 20 and 50 years, using the formulation of Colebrook and White reported in [ ].
The method used to select the measurement locations has been proposed by [ ]; four nodes and one pipe are selected to monitor pressure and flow rate, respectively. The sensor placement methodology locates the best measurement points based on a two-stage analysis. The first stage is a sensitivity analysis of the nodes to leakages and consists of calculating a sensitivity matrix by placing a known leakage in each pipe and recording the pressure for each hour of a day. The sensitivity matrix is built for each hour of the day, so that each element represents the percentage variation of the pressure at the measurement node with respect to the nominal case, where no leakage is placed in the network. Then a feature reduction is performed by calculating four performance indexes, representing the mean of the mean percentage pressure variations across the different leakage positions, the variance across the day, the mean across the whole day and the variance across the whole day. Through a principal component analysis, the most sensitive nodes are extracted. The second stage is a correlation analysis, whose aim is to find the most sensitive and uncorrelated locations. Further details about the procedure can be found in [ ].
Table 1 presents the reference data used for the calibration problem. In addition, a pressure driven approach is used to obtain reference pressure and flow rate values closer to reality. In particular, a nodal demand representation is adopted due to the correspondence between withdrawals and connection pipes in the reference network.
3.2. Results and Discussion
This section presents the results of the proposed calibration procedure using the four hydraulic approaches. To achieve a properly calibrated model, the simulated pressure at each node and the flow rate in each pipe have to match the reference network, which represents the actual behaviour of the system.
A total of 100 runs of the optimisation algorithm is considered enough to converge to a stable solution, as can be recognised in Figure 3a,b, where the pressure and flow rate Mean Absolute Error (MAE) are reported, respectively, as a function of the number of runs for the DPD approach. The plots show that after about 30 runs the MAE values stabilise towards an almost constant value.
The results, in terms of the difference between the values in the reference network and in the calibrated models, are reported as Box and Whiskers plots. Figure 4 reports the pressure absolute error at each node, defined as the difference between the pressure at the node from the calibration procedure and the reference pressure value at the node, and Figure 5 the flow rate absolute error related to the flow rate in each pipe.
Figure 4 shows the pressure results at each node of the 100 runs for the four hydraulic models in the different panels. It is observed that all the 100 runs of the four approaches converged to an optimum, since in the nodes taken as measurement points (4, 13, 16 and 23) the simulated pressure is the same as in the reference network (i.e., the reference solution). As expected, the simulated pressure at the other nodes fluctuates around the reference values due to the non-uniqueness of the solutions. This fluctuation appears to increase for the hydraulic models formulated with a demand driven approach (compare Figure 4a,c with Figure 4b,d). For instance, the variability of the simulated pressure at node 12 has a range of 15.9 m with the NDD model and 15.6 m with the DDD. On the contrary, the pressure obtained at the same node with a pressure driven approach has a variability of 4.87 m with the NPD and 4.84 m with the DPD. This behaviour can be ascribed to the inability of the demand driven models to simulate WDNs in pressure deficit conditions.
Comparing Figure 4a,c with Figure 4b,d, it can be seen that the average values of the pressure driven approaches approximate the reference solution better than those of the demand driven ones. In fact, the line representing the error of the average values of the NPD and DPD shows a smaller error excursion at each node compared to the NDD and DDD ones. Moreover, the MAE of the average values compared with the reference solution is reported in the second column of Table 2. For each demand representation (i.e., concentrated at the nodes and distributed along the pipes), the pressure driven approach halves the error compared to the demand driven one. It can be noticed that the lowest MAE for the averaged values is reached by the DPD.
In addition, the solutions that, among the 100 runs, achieved the lowest MAE compared with the reference network are also reported for each hydraulic approach. These are called the best solutions, and their MAE values are shown in the first column of Table 2. It is worth noting that the best solutions are not identifiable during a real calibration because the reference network is, by definition, unknown. In this case, the best solutions are useful as a comparison to test the performance of the proposed methodology.
Then, a final solution is achieved by using as measurement data the mean values, over the 100 runs of the calibrator, of both the node pressures and the pipe flow rates. This is performed for the four hydraulic approaches, and the MAE values are reported in the third column of Table 2. The importance of this last step is to obtain a model that is physically based, even if the error of these final solutions is slightly worse compared to that of the average values. For instance, the final DPD model makes a mean error of 0.24 m (1.5%) at each node, compared to the 0.19 m (1.2%) of the average values.
On the flow rate side, Figure 5 shows the flow rate error at each pipe for each hydraulic approach in the different panels. The convergence of the solutions can be noticed at the measurement point (pipe 34), where the flow rate matches the reference solution. Since only one flow measurement is available, the ill-posed nature of the problem is more evident. Consequently, the dispersion of the flow rate error is generally higher than the pressure fluctuation. For instance, the reference flow rate in pipe 17 is 62.34 L/s. The 100 runs of the calibration ranged from 42 L/s to 96 L/s with the NDD approach, from 38 L/s to 87 L/s with the NPD, from 26 L/s to 110 L/s with the DDD and from 30 L/s to 87 L/s with the DPD. Despite the significant dispersion of the 100-run results, the average values are a good estimator. In fact, the considered pipe presents average flow rate values of 67.9 L/s with the NDD, 66.6 L/s with the NPD, 64.5 L/s with the DDD and 63.45 L/s with the DPD.
The Apulian network presents flow rate inversions in some locations (e.g., pipes 20, 21, 23). The reference network is built as a real system with a dense spatial distribution of the withdrawals, while the models used by the calibrator are skeletonised. For this reason, none of the models formulated with water demands aggregated at the nodes (i.e., NDD and NPD) is able to match the real solution in the pipes affected by flow inversion. Although the distributed approaches (i.e., DDD and DPD) are capable of detecting the flow inversion, it remains a hard task due to the lack of measurements. In fact, the average values present their worst performance in the pipes affected by flow rate inversion. The averaged values of the 100 runs of both distributed approaches achieved a better result compared to the concentrated ones. Table 3 shows how the distributed demand representation improves the performance of the calibrated model in terms of flow rate. Specifically, the DPD achieved the lowest MAE by a significant margin.
Table 2 and Table 3 show that the MAEs of the final solution with the DPD approach are smaller than the MAEs of the best solution. Given that the best solution is obtained by an optimisation process on a few known values, in some points the estimated flow rate and/or pressure values can be far from the local "true" value. Conversely, since the average values are obtained by averaging the results (pressures at the nodes in Table 2 and flow rates in the pipes in Table 3) over the whole network for the 100 runs, they can show lower MAE values with respect to the best solutions. In other words, calibrating with respect to the average values allows a more robust and better-performing solution to be achieved than calibrating only on the few measurement points in a context of lack of measurements.
A comparison between the best solution, the average values among the 100 runs and the final solution is shown in Figure 6. This figure presents the three different solutions for each hydraulic approach, divided into four groups. The absolute errors regarding the pressure at each node are reported as Box and Whisker plots in Figure 6a, and the absolute errors regarding the flow rate at each pipe in Figure 6b. It is clear that the four approaches reach different performance due to the two breakthroughs introduced in WDN modelling in the last decade, namely the distributed pipe demand and the pressure driven approach. The NDD approach, which presents neither of these improvements, shows the worst performance in terms of Mean Absolute Error and error dispersion. On the contrary, the NPD and DDD approaches, which present only one of the two improvements, perform better than the NDD for both pressure and flow rate results. Finally, the DPD approach, which assumes demands both distributed along the pipes and pressure driven, achieves the lowest error both in terms of dispersion and mean.
The final solution shown in Figure 6 has a performance comparable to the average values. Furthermore, the final solution obtained with the DPD approach yields a mean pressure error of 0.24 m at the nodes and a mean flow rate error of 2.55 L/s at each pipe, being a good estimation of the reference solution.
To evaluate the adequacy of using the average of the 100 runs in the proposed calibration procedure, the statistical t-test is applied. This test verifies whether the average value of the distribution of the 100 runs deviates significantly from the reference solution. The test is applied to each collected set of pressures and flow rates at each node and pipe, respectively. Given a significance level α of 0.01 and 99 degrees of freedom, the null hypothesis is rejected in 25% of the cases for the pressure distributions and in 44% of the cases for the flow rate distributions. In general, the mean is a good estimator of the reference solution, but in the cases where the test is rejected, the distribution of the 100 runs does not represent the reference solution. This happens due to the lack of measurement points of the ill-posed problem. It is also worth noting that most of the flow rate cases where the test is rejected are affected by the flow inversion problem previously described. The t-test has also been applied to the other hydraulic approaches, highlighting worse results: the null hypothesis rejection rates are 60% and 80% for the NDD approach, 65% and 76% for the NPD approach and 47% and 58% for the DDD approach, for the pressure and flow rate distributions respectively.
To highlight the reliability of the average values for the DPD calibrated model, the confidence intervals are calculated by multiplying the standard error of the mean by the inverse value of the t-distribution with 0.99 probability and 99 degrees of freedom. Figure 7a,b report the results with the confidence intervals for the calibrated pressures at the nodes and the calibrated flow rates in the pipes, respectively.
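The t-test and confidence interval computation described above can be sketched as follows. The sample of 100 pressure values and the reference value are synthetic stand-ins, and the critical value is taken from standard t-tables rather than computed; this is a sketch under those assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the 100 calibrated pressure values at one node,
# compared against an assumed reference value of 15.0 m.
samples = rng.normal(loc=15.1, scale=0.5, size=100)
reference = 15.0
n = len(samples)

mean = samples.mean()
sem = samples.std(ddof=1) / np.sqrt(n)   # standard error of the mean

# One-sample t statistic against the reference value.
t_stat = (mean - reference) / sem

# Two-sided critical value of the t-distribution for alpha = 0.01
# and 99 degrees of freedom (tabulated: t_{0.995, 99} ~= 2.626).
T_CRIT = 2.626
rejected = abs(t_stat) > T_CRIT

# Confidence interval: mean +/- SEM * critical value.
ci = (mean - T_CRIT * sem, mean + T_CRIT * sem)
print(rejected, ci)
```

Repeating this per node and per pipe yields the rejection percentages and the interval bands shown in Figure 7.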
As reported in Figure 7a, the reference solution is well bounded by the confidence intervals. On the contrary, Figure 7b shows a higher uncertainty, in accordance with the presence of just one flow rate measurement. However, as demonstrated by the t-test, the final model calibrated with the DPD approach can be considered a consistent solution in a context of lack of measurements. For the other hydraulic approaches, the statistical t-test shows significantly inferior performance, meaning that, as already reported, the DPD performs better compared to the other approaches.
4. Test Case 2
To test the robustness of the proposed calibration, a second test case based on a larger network, the Modena one [ ], is proposed. According to Section 3.1, a detailed network is built and shown in Figure 8b, whereas Figure 8a shows the original Modena network layout, which consists of 4 reservoirs, 267 nodes and 317 pipes. Following the procedure described in Section 3.1, ten pressure sensors and four flow rate sensors are placed.
Results and Discussion
The second test case aims to test the capability of the proposed methodology to handle larger networks. The pressures at the nodes and the flow rates in the pipes resulting from the calibration procedure are displayed in Figure 9a,b, respectively. Only the DPD approach is selected for this analysis, due to the best performance already highlighted in the previous test case.
Although the Modena network is larger than the Apulian one, also in this case 100 runs of the optimisation algorithm are considered enough to converge to a stable solution. Figure 9a shows the pressure distribution of the 100 runs, the reference solution, the average values and the final solution. It is worth noting that the final solution follows the behaviour of the reference one, leading to a good approximation in the context of an ill-posed problem. The mean absolute percentage error of the final calibrated model related to the pressure at each node is 4.4% (i.e., a mean pressure error of 0.9 m at each node), which can be considered a robust result for such a large network. Figure 9b displays the flow rate distribution. As for the pressure distribution, the final calibrated model closely resembles the flow rates of the reference solution.
The absolute errors regarding the pressure at each node are reported as Box and Whisker plots in Figure 10a, and those regarding the flow rate at each pipe in Figure 10b. The computational effort required for the 100 runs on this network is approximately three times higher than the time required for the Apulian one. Nevertheless, the final calibrated model can be considered a consistent starting point for network management.
5. Conclusions
This study proposes a multi-objective procedure to deal with the ill-posed calibration problem in WDNs. The whole calibration process has been developed considering a reduced number of measurements, as typically happens in reality. A procedure based on sensitivity and correlation analysis has been used to choose the optimal positions for the pressure sensors. To overcome the non-uniqueness of the solution, 100 runs of the calibrator have been performed to obtain the average values. A final solution has then been proposed, achieved by using the pressures and flows from the average values as measurements during a last run of the calibrator. This allows a model with almost the same performance as the average values of the 100 runs to be obtained. To test the appropriateness of the averaging, a Student t-test has been performed. The final solution of the proposed calibration methodology overcomes the non-uniqueness problem while also being physically based on a set of pipe roughness values and water demands. To evaluate the calibration procedure, a test case based on the Apulian network has been used. In particular, the reference network has been built in order to resemble a real system with a realistic spatial distribution of the withdrawals. In addition, to test the robustness of the proposed procedure, a second test case based on the larger Modena network has been proposed.
A comparison among the calibration processes of four WDN simulation approaches ((1) nodal demand driven; (2) nodal pressure driven; (3) distributed demand driven; (4) distributed pressure driven) has been carried out. It has been proved that the selection of a more reliable hydraulic approach to simulate a real system can significantly improve the result of the calibration. In particular, the better performance of the distributed pressure driven approach emerged. Future efforts will address the problem of the computational requirements, which are intensive for a large network like the Modena one, and will also involve the problems of leakage presence and measurement noise in a real network. Nevertheless, this calibration procedure is replicable in WDNs due to its capability to address the problems of lack of measurements and pressure deficit conditions, which are common in real systems worldwide.
Author Contributions
Conceptualization, A.Z., A.M., S.S. and M.R.; data curation, A.Z.; funding acquisition, M.R.; investigation, A.Z.; methodology, A.Z., A.M. and S.S.; software, A.Z.; supervision, M.R.; validation,
A.Z. and A.M.; writing-original draft, A.Z.; writing-review and editing, A.Z., A.M., S.S. and M.R. All authors have read and agreed to the published version of the manuscript.
Part of this research has been carried out under the project "Applied Thermo-Fluid Dynamics Laboratories, Applied Research Infrastructures for Companies and Industry in South Tyrol" (FESR1029), financed by the European Regional Development Fund (ERDF) Investment for Growth and Jobs Programme 2014–2020 and the Autonomous Province of Bolzano. This work has been also partially carried out within the Research project "AI-ALPEN", CUP: B26J16000300003, funded by the PAB (Autonomous Province of Bozen, Italy) for University Research-2014.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 2. Apulian network: (a) model network where the calibration processes are launched (number of nodes = 23, number of pipes = 34); (b) reference network where the measurements are taken (number
of nodes = 238, number of pipes = 268).
Figure 3. Behaviour of the Mean Absolute Errors (MAEs), calculated as the difference between average values and values of the reference network, of the distributed pressure driven (DPD) approach,
with respect to the number of runs. In panel (a) is displayed the MAE related to the pressures and in panel (b) is displayed the MAE related to the flow rates.
Figure 4. Pressure errors (i.e., the difference between simulated and reference values) at each node of the 100 runs for each hydraulic approach: (a) NDD; (b) NPD; (c) DDD; (d) DPD. The error of the
average values (i.e., average values line) is also displayed in each plot.
Figure 5. Flow rate errors (i.e., the difference between simulated and reference values) at each pipe of the 100 runs for each hydraulic approach: (a) NDD; (b) NPD; (c) DDD; (d) DPD. The error of the
average values (i.e., average values line) is also displayed in each plot.
Figure 6. Comparison of the solutions. (a) Pressure absolute errors; (b) flow rate absolute errors. Error distribution of the best solution among the 100 runs (left box); average values (middle box);
final solution (right box).
Figure 7. Resulting solution in the Apulian network with the DPD approach. In panel (a) are displayed the pressure distribution of the 100 runs, the reference values, the average values and the final
one with confidence intervals. The flow rate distribution of the 100 runs, the reference values, the average values and the final one with confidence intervals are displayed in panel (b).
Figure 8. Modena network: (a) model network where the calibration processes are launched; (b) reference network where the measurements are taken.
Figure 9. Resulting calibration of the Modena network with the DPD approach. In panel (a) is displayed the pressure distribution of the 100 runs, the reference values, the average values and the
final one. The flow rate distribution of the 100 runs is displayed in panel (b).
Figure 10. Absolute errors calculated as the absolute difference between reference values and simulated values in the final calibrated model of the Modena network. Panel (a) is related to the
pressure at each node and panel (b) to the flow rate at each pipe.
Table 1. Measured flow rate (on the bottom) and measured pressure (on the top) data for the Apulian network.
Node (ID) Pressure (m)
4 17.92
13 13.37
16 16.55
23 13.57
Pipe ID Flow Rate (L/s)
34 240.82
Table 2. MAE of the simulated pressure for the Best solution, the Average values, and the Final solution with respect to the reference network.

Approach    Best Solution (m)    Average Values (m)    Final Solution (m)
NDD         0.57                 0.60                  0.62
NPD         0.28                 0.34                  0.34
DDD         0.52                 0.41                  0.46
DPD         0.28                 0.19                  0.24
Table 3. MAE of the simulated flow rate for the Best solution, the Average values, and the Final solution with respect to the reference network.

Approach    Best Solution (L/s)    Average Values (L/s)    Final Solution (L/s)
NDD         4.60                   4.66                    4.76
NPD         4.29                   4.33                    4.34
DDD         4.95                   2.98                    3.63
DPD         3.57                   2.49                    2.55
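The MAE values in Tables 2 and 3 measure the average absolute difference between simulated and reference values across all nodes (or pipes). A minimal sketch of the computation is shown below; the reference pressures are the measured values from Table 1, while the simulated pressures are hypothetical placeholders, not results from the paper.

```python
def mean_absolute_error(simulated, reference):
    """Mean absolute error between two equal-length sequences of values."""
    if len(simulated) != len(reference):
        raise ValueError("sequences must have the same length")
    return sum(abs(s - r) for s, r in zip(simulated, reference)) / len(simulated)

# Reference nodal pressures (m) from Table 1; simulated values are hypothetical.
reference_p = [17.92, 13.37, 16.55, 13.57]
simulated_p = [17.80, 13.50, 16.40, 13.70]

mae = mean_absolute_error(simulated_p, reference_p)
print(f"MAE = {mae:.4f} m")
```

In the paper this error is computed over every node (for pressure) and every pipe (for flow rate) of the calibrated model against the reference network.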
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Zanfei, A.; Menapace, A.; Santopietro, S.; Righetti, M. Calibration Procedure for Water Distribution Systems: Comparison among Hydraulic Models. Water 2020, 12, 1421. https://doi.org/10.3390/w12051421
Section 7.1: Geometry - The Nature of Mathematics - 13th Edition
7.1 Outline
A. Greek (Euclidean) geometry
1. undefined terms
a. point
b. line
c. plane
d. surface
2. categories
a. traditional (Euclidean) geometry
b. transformational geometry
3. Euclid’s postulates
a. postulate
b. axiom
c. theorem
d. five postulates
4. parallel lines
5. non-Euclidean geometries
6. straightedge
7. line segment
8. congruent figures
9. construct a figure
a. construct a circle
b. construct a line parallel to a given line through a given point
B. Transformational geometry
1. transformation
2. reflection
3. line of symmetry
C. Similarity
1. definition
2. similar
7.1 Essential Ideas
Geometry can be separated into two categories:
1. Traditional (which is the geometry of Euclid)
2. Transformational (which is more algebraic than the traditional approach)
When Euclid was formalizing traditional geometry, he based it on the following five postulates:
1. A straight line can be drawn from any point to any other point.
2. A straight line extends infinitely far in either direction.
3. A circle can be described with any point as center and with a radius equal to any finite straight line drawn from the center.
4. All right angles are equal to each other.
5. Given a straight line and any point not on this line, there is one and only one line through that point that is parallel to the given line.
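The outline notes that transformational geometry takes a more algebraic approach than Euclid's. As an illustrative sketch of that idea, a reflection across the x-axis can be written as a simple coordinate rule applied to each point of a figure; the triangle's vertices below are arbitrary example points.

```python
def reflect_x_axis(point):
    """Reflect a 2-D point (x, y) across the x-axis: (x, y) -> (x, -y)."""
    x, y = point
    return (x, -y)

# A triangle and its mirror image; the x-axis is the line of symmetry.
triangle = [(1, 2), (4, 5), (6, 1)]
image = [reflect_x_axis(p) for p in triangle]
print(image)  # [(1, -2), (4, -5), (6, -1)]
```

Note that reflecting a point twice returns the original point, so a reflection is its own inverse, and the original figure and its image are congruent.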
Introduction to Graphs of Polynomial Functions
By the end of this lesson, you will be able to:
• Recognize characteristics of graphs of polynomial functions.
• Use factoring to find zeros of polynomial functions.
• Identify zeros and their multiplicities.
• Determine end behavior.
• Understand the relationship between degree and turning points.
• Graph polynomial functions.
• Use the Intermediate Value Theorem.
The revenue in millions of dollars for a fictional cable company from 2006 through 2013 is shown in the table below.
Year                  2006   2007   2008   2009   2010   2011   2012   2013
Revenue ($ millions)  52.4   52.8   51.2   49.5   48.6   48.6   48.7   47.1
The revenue can be modeled by the polynomial function
R(t) = -0.037t^4 + 1.414t^3 - 19.777t^2 + 118.696t - 205.332
where R represents the revenue in millions of dollars and t represents the year, with t = 6 corresponding to 2006. Over which intervals is the revenue for the company increasing? Over which intervals
is the revenue for the company decreasing? These questions, along with many others, can be answered by examining the graph of the polynomial function. We have already explored the local behavior of
quadratics, a special case of polynomials. In this section we will explore the local behavior of polynomials in general.
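Before turning to graphs, the increasing and decreasing intervals can be estimated numerically by evaluating R(t) at consecutive years. The sketch below uses the coefficients of the model above; comparing R(t+1) with R(t) gives a year-by-year direction (a coarser view than the calculus-free graphical analysis this section develops).

```python
def revenue(t):
    """R(t) in millions of dollars, where t = 6 corresponds to 2006."""
    return -0.037 * t**4 + 1.414 * t**3 - 19.777 * t**2 + 118.696 * t - 205.332

# Compare consecutive years to see where the model is rising or falling.
for t in range(6, 13):
    direction = "increasing" if revenue(t + 1) > revenue(t) else "decreasing"
    print(f"{2000 + t}-{2000 + t + 1}: {direction}")
```

For example, R(6) ≈ 52.3 matches the 2006 table value of 52.4 closely; the model, being a fit, does not reproduce every small fluctuation in the data.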