Simplifying Expressions With Square Roots
Learning Outcomes
• Simplify expressions with square roots
• Simplify variable expressions with square roots
Square Roots and the Order of Operations
When using the order of operations to simplify an expression that has square roots, we treat the radical sign as a grouping symbol. We simplify any expressions under the radical sign before
performing other operations.
Simplify: (a) [latex]\sqrt{25}+\sqrt{144}[/latex] (b) [latex]\sqrt{25+144}[/latex]
(a) Use the order of operations.
Simplify each radical. [latex]5+12[/latex]
Add. [latex]17[/latex]
(b) Use the order of operations.
Add under the radical sign. [latex]\sqrt{169}[/latex]
Simplify. [latex]13[/latex]
Notice the different answers in parts (a) and (b) of the example above. It is important to follow the order of operations correctly. In (a), we took each square root first and then added them. In
(b), we added under the radical sign first and then found the square root.
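The same distinction can be checked numerically. The short sketch below is illustrative only (it assumes Python with the standard math module, which is not part of the original lesson):

import math

# (a) Take each square root first, then add.
a = math.sqrt(25) + math.sqrt(144)   # 5 + 12

# (b) Add under the radical first, then take the square root.
b = math.sqrt(25 + 144)              # sqrt(169)

print(a)  # 17.0
print(b)  # 13.0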
Simplify Variable Expressions with Square Roots
The expressions with square roots that we have looked at so far have not had any variables. What happens when we have to find the square root of a variable expression?
Consider [latex]\sqrt{9{x}^{2}}[/latex], where [latex]x\ge 0[/latex]. Can you think of an expression whose square is [latex]9{x}^{2}?[/latex]
[latex]\begin{array}{ccc}\hfill {\left(?\right)}^{2}& =& 9{x}^{2}\hfill \\ \hfill {\left(3x\right)}^{2}& =& 9{x}^{2}\text{ so }\sqrt{9{x}^{2}}=3x\hfill \end{array}[/latex]
When we use a variable in a square root expression, for our work, we will assume that the variable represents a non-negative number. In every example and exercise that follows, each variable in a
square root expression is greater than or equal to zero.
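As an illustrative aside (not part of the original lesson), a computer algebra system reaches the same answer once the variable is declared non-negative; the sketch below assumes SymPy is installed:

import sympy as sp

# Declare x as non-negative, matching the convention used in this section.
x = sp.symbols('x', nonnegative=True)

# sqrt(9*x**2) simplifies to 3*x because x is known to be non-negative.
print(sp.simplify(sp.sqrt(9 * x**2)))  # expected output: 3*x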
Simplify: [latex]\sqrt{{x}^{2}}[/latex], where [latex]x\ge 0[/latex]
Simplify: [latex]\sqrt{16{x}^{2}}[/latex], where [latex]x\ge 0[/latex]
Simplify: [latex]-\sqrt{81{y}^{2}}[/latex], where [latex]y\ge 0[/latex]
Simplify: [latex]\sqrt{36{x}^{2}{y}^{2}}[/latex], where [latex]x\ge 0[/latex] and [latex]y\ge 0[/latex]
8.7 Exponents and Scientific Notation
Lesson 1
• I can use exponents to describe repeated multiplication.
• I understand the meaning of a term with an exponent.
Lesson 2
• I can explain and use a rule for multiplying powers of 10.
Lesson 3
• I can explain and use a rule for raising a power of 10 to a power.
Lesson 4
• I can evaluate $10^0$ and explain why it makes sense.
• I can explain and use a rule for dividing powers of 10.
Lesson 5
• I can use the exponent rules with negative exponents.
• I know what it means if 10 is raised to a negative power.
Lesson 6
• I can use the exponent rules for bases other than 10.
Lesson 7
• I can change an expression with a negative exponent into an equivalent expression with a positive exponent.
• I can choose an appropriate exponent rule to rewrite an expression to have a single exponent.
Lesson 8
• I can use and explain a rule for multiplying terms that have different bases but the same exponent.
Lesson 9
• Given a very large or small number, I can write an expression equal to it using a power of 10.
Lesson 10
• I can subdivide and label a number line between 0 and a power of 10 with a positive exponent into 10 equal intervals.
• I can plot a multiple of a power of 10 on such a number line.
• I can write a large number as a multiple of a power of 10.
Lesson 11
• I can subdivide and label a number line between 0 and a power of 10 with a negative exponent into 10 equal intervals.
• I can plot a multiple of a power of 10 on such a number line.
• I can write a small number as a multiple of a power of 10.
Lesson 12
• I can apply what I learned about powers of 10 to answer questions about real-world situations.
Lesson 13
• I can tell whether or not a number is written in scientific notation.
Lesson 14
• I can multiply and divide numbers given in scientific notation.
• I can use scientific notation and estimation to compare very large or very small numbers.
Lesson 15
• I can add and subtract numbers given in scientific notation.
Lesson 16
• I can use scientific notation to compare different amounts and answer questions about real-world situations.
Coordinate Planes and Graphing Vocabulary Card Set
Introduce coordinate planes to your students with this set of 18 vocabulary cards.
Teach Coordinate Planes and Graphing Vocabulary
Are your students beginning their coordinate planes and graphing unit? If so, you may have noticed there are many vocabulary words they need to learn! Not only do students need to know these words,
but they must also identify them in a mathematical sense. This may be by a math symbol, image, or a number sentence used to represent each term.
This resource includes 18 coordinate plane vocabulary cards, each with a definition and an image that matches the word.
Words included are:
• formula
• rule
• evaluate
• coordinate plane
• x-axis
• y-axis
• ordered pair
• graph
• plot
• point
• line
• x-coordinate
• y-coordinate
• origin
• variable
• quadrant
• coordinate
• intersect
How to Use Math Vocabulary Cards
There are many different ways you can use vocabulary cards in your classroom! Here are a few of our favorites:
• Bulletin Board / Word Wall – Reserve a bulletin board just for math-related vocabulary and content. Change the words and display every unit so students always have access to the latest
mathematical vocabulary.
• Vocabulary Pictionary – Play a game where students draw a picture of the vocabulary word for others to guess.
• “Quiz, Quiz, Trade” – Give each student a vocabulary card. Students stand and find a partner, and then quiz each other about their words. They then trade and repeat with a new partner.
Easily Prepare This Resource for Your Students
Use the dropdown arrow on the Download button to choose between the PDF or editable Google Slides version of this resource.
Longevity Tip: Print on cardstock for added durability and longevity. Place all pieces in a folder or large envelope for easy access.
This resource was created by Cassandra Friesen, a teacher in Colorado and Teach Starter Collaborator.
Ready for more Coordinate Plane Resources?
Teach Starter has many more resources to help your students master coordinate planes. Here are a few of our favorites!
• Practice plotting on coordinate grids with this set of differentiated mystery pictures.
• Practice plotting ordered pairs and describing the process for graphing with this match-up activity.
• Practice reading input-output tables and plotting points in the first quadrant with this set of task cards.
Poisson distribution
Probability mass function
The horizontal axis is the index k, the number of occurrences. λ is the expected number of occurrences, which need not be an integer. The vertical axis is the probability of k occurrences given λ.
The function is defined only at integer values of k. The connecting lines are only guides for the eye.
Cumulative distribution function
The horizontal axis is the index k, the number of occurrences. The CDF is discontinuous at the integers of k and flat everywhere else because a variable that is Poisson distributed takes on only
integer values.
Parameters: λ > 0 (real) — rate
Support: ${\displaystyle k\in \mathbb {N} \cup \{0\}}$
PMF: ${\displaystyle {\frac {\lambda ^{k}e^{-\lambda }}{k!}}}$
CDF: ${\displaystyle {\frac {\Gamma (\lfloor k+1\rfloor ,\lambda )}{\lfloor k\rfloor !}}}$, or ${\displaystyle e^{-\lambda }\sum _{i=0}^{\lfloor k\rfloor }{\frac {\lambda ^{i}}{i!}}\ }$, or ${\displaystyle Q(\lfloor k+1\rfloor ,\lambda )}$ (for ${\displaystyle k\geq 0}$, where ${\displaystyle \Gamma (x,y)}$ is the upper incomplete gamma function, ${\displaystyle \lfloor k\rfloor }$ is the floor function, and Q is the regularized gamma function)
Mean: ${\displaystyle \lambda }$
Median: ${\displaystyle \approx \lfloor \lambda +1/3-0.02/\lambda \rfloor }$
Mode: ${\displaystyle \lceil \lambda \rceil -1,\lfloor \lambda \rfloor }$
Variance: ${\displaystyle \lambda }$
Skewness: ${\displaystyle \lambda ^{-1/2}}$
Excess kurtosis: ${\displaystyle \lambda ^{-1}}$
Entropy: ${\displaystyle \lambda [1-\log(\lambda )]+e^{-\lambda }\sum _{k=0}^{\infty }{\frac {\lambda ^{k}\log(k!)}{k!}}}$
(for large ${\displaystyle \lambda }$) ${\displaystyle {\frac {1}{2}}\log(2\pi e\lambda )-{\frac {1}{12\lambda }}-{\frac {1}{24\lambda ^{2}}}-{\frac {19}{360\lambda ^{3}}}+O\left({\frac {1}{\lambda ^{4}}}\right)}$
MGF: ${\displaystyle \exp(\lambda (e^{t}-1))}$
CF: ${\displaystyle \exp(\lambda (e^{it}-1))}$
PGF: ${\displaystyle \exp(\lambda (z-1))}$
Fisher information: ${\displaystyle {\frac {1}{\lambda }}}$
In probability theory and statistics, the Poisson distribution (French pronunciation: [pwasɔ̃]; in English often rendered /ˈpwɑːsɒn/), named after French mathematician Siméon Denis Poisson, is a
discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space if these events occur with a known constant rate and
independently of the time since the last event.^[1] The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume.
For instance, an individual keeping track of the amount of mail they receive each day may notice that they receive an average number of 4 letters per day. If receiving any particular piece of mail
does not affect the arrival times of future pieces of mail, i.e., if pieces of mail from a wide range of sources arrive independently of one another, then a reasonable assumption is that the number
of pieces of mail received in a day obeys a Poisson distribution.^[2] Other examples that may follow a Poisson include the number of phone calls received by a call center per hour and the number of
decay events per second from a radioactive source.
The Poisson distribution is popular for modelling the number of times an event occurs in an interval of time or space.
The Poisson distribution may be useful to model events such as
• The number of meteorites greater than 1 meter diameter that strike Earth in a year
• The number of patients arriving in an emergency room between 10 and 11 pm
• The number of photons hitting a detector in a particular time interval
Assumptions: When is the Poisson distribution an appropriate model?
The Poisson distribution is an appropriate model if the following assumptions are true.
• k is the number of times an event occurs in an interval and k can take values 0, 1, 2, ….
• The occurrence of one event does not affect the probability that a second event will occur. That is, events occur independently.
• The rate at which events occur is constant. The rate cannot be higher in some intervals and lower in other intervals.
• Two events cannot occur at exactly the same instant; instead, at each very small sub-interval exactly one event either occurs or does not occur.
• The actual probability distribution is given by a binomial distribution and the number of trials is sufficiently bigger than the number of successes one is asking about (see Related distributions).
If these conditions are true, then k is a Poisson random variable, and the distribution of k is a Poisson distribution.
Probability of events for a Poisson distribution
An event can occur 0, 1, 2, … times in an interval. The average number of events in an interval is designated ${\displaystyle \lambda }$ (lambda). Lambda is the event rate, also called the rate
parameter. The probability of observing k events in an interval is given by the equation
${\displaystyle P(k{\text{ events in interval}})=e^{-\lambda }{\frac {\lambda ^{k}}{k!}}}$
• ${\displaystyle \lambda }$ is the average number of events per interval
• e is the number 2.71828... (Euler's number) the base of the natural logarithms
• k takes values 0, 1, 2, …
• k! = k × (k − 1) × (k − 2) × … × 2 × 1 is the factorial of k.
This equation is the probability mass function (PMF) for a Poisson distribution.
Notice that this equation can be adapted if, instead of the average number of events ${\displaystyle \lambda }$, we are given a time rate ${\displaystyle r}$ for the events to happen. Then ${\
displaystyle \lambda =rt}$ (with ${\displaystyle r}$ in units of 1/time), and
${\displaystyle P(k{\text{ events in interval }}t)=e^{-rt}{\frac {(rt)^{k}}{k!}}}$
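As an illustrative sketch (not part of the original article), the probability mass function can be evaluated directly from this formula; SciPy's poisson.pmf is used only as a cross-check and is assumed to be available:

import math
from scipy.stats import poisson

def poisson_pmf(k, lam):
    """P(k events in an interval) = lam**k * exp(-lam) / k!"""
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 2.5
for k in range(4):
    # Direct formula and library value should agree to floating-point precision.
    print(k, poisson_pmf(k, lam), poisson.pmf(k, lam))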
Examples of probability for Poisson distributions
On a particular river, overflow floods occur once every 100 years on average. Calculate the probability of k = 0, 1, 2, 3, 4, 5, or 6 overflow floods in a 100-year interval, assuming the Poisson
model is appropriate.
Because the average event rate is one overflow flood per 100 years, λ = 1
${\displaystyle P(k{\text{ overflow floods in 100 years}})={\frac {\lambda ^{k}e^{-\lambda }}{k!}}={\frac {1^{k}e^{-1}}{k!}}}$
${\displaystyle P(k=0{\text{ overflow floods in 100 years}})={\frac {1^{0}e^{-1}}{0!}}={\frac {e^{-1}}{1}}\approx 0.368}$
${\displaystyle P(k=1{\text{ overflow flood in 100 years}})={\frac {1^{1}e^{-1}}{1!}}={\frac {e^{-1}}{1}}\approx 0.368}$
${\displaystyle P(k=2{\text{ overflow floods in 100 years}})={\frac {1^{2}e^{-1}}{2!}}={\frac {e^{-1}}{2}}\approx 0.184}$
The table below gives the probability for 0 to 6 overflow floods in a 100-year period.
k P(k overflow floods in 100 years)
0 0.368
1 0.368
2 0.184
3 0.061
4 0.015
5 0.003
6 0.0005
Ugarte and colleagues report that the average number of goals in a World Cup soccer match is approximately 2.5 and the Poisson model is appropriate.^[3]
Because the average event rate is 2.5 goals per match, λ = 2.5.
${\displaystyle P(k{\text{ goals in a match}})={\frac {2.5^{k}e^{-2.5}}{k!}}}$
${\displaystyle P(k=0{\text{ goals in a match}})={\frac {2.5^{0}e^{-2.5}}{0!}}={\frac {e^{-2.5}}{1}}\approx 0.082}$
${\displaystyle P(k=1{\text{ goal in a match}})={\frac {2.5^{1}e^{-2.5}}{1!}}={\frac {2.5e^{-2.5}}{1}}\approx 0.205}$
${\displaystyle P(k=2{\text{ goals in a match}})={\frac {2.5^{2}e^{-2.5}}{2!}}={\frac {6.25e^{-2.5}}{2}}\approx 0.257}$
The table below gives the probability for 0 to 7 goals in a match.
k P(k goals in a World Cup soccer match)
0 0.082
1 0.205
2 0.257
3 0.213
4 0.133
5 0.067
6 0.028
7 0.010
Once in an interval events: The special case of λ = 1 and k = 0
Suppose that astronomers estimate that large meteorites (above a certain size) hit the earth on average once every 100 years (λ = 1 event per 100 years), and that the number of meteorite hits follows
a Poisson distribution. What is the probability of k = 0 meteorite hits in the next 100 years?
${\displaystyle P(k={\text{0 meteorites hit in next 100 years}})={\frac {1^{0}e^{-1}}{0!}}={\frac {1}{e}}\approx 0.37}$
Under these assumptions, the probability that no large meteorites hit the earth in the next 100 years is roughly 0.37. The remaining 1 − 0.37 = 0.63 is the probability of 1, 2, 3, or more large
meteorite hits in the next 100 years. In an example above, an overflow flood occurred once every 100 years (λ = 1). The probability of no overflow floods in 100 years was roughly 0.37, by the same calculation.
In general, if an event occurs on average once per interval (λ = 1), and the events follow a Poisson distribution, then P(0 events in next interval) = 0.37. In addition, P(exactly one event in next
interval) = 0.37, as shown in the table for overflow floods.
Examples that violate the Poisson assumptions
The number of students who arrive at the student union per minute will likely not follow a Poisson distribution, because the rate is not constant (low rate during class time, high rate between class
times) and the arrivals of individual students are not independent (students tend to come in groups).
The number of magnitude 5 earthquakes per year in a country may not follow a Poisson distribution if one large earthquake increases the probability of aftershocks of similar magnitude.
Among patients admitted to the intensive care unit of a hospital, the number of days that the patients spend in the ICU is not Poisson distributed because the number of days cannot be zero. The
distribution may be modeled using a Zero-truncated Poisson distribution.
Count distributions in which the number of intervals with zero events is higher than predicted by a Poisson model may be modeled using a Zero-inflated model.
Poisson regression and negative binomial regression
Poisson regression and negative binomial regression are useful for analyses where the dependent (response) variable is the count (0, 1, 2, …) of the number of events or occurrences in an interval.
History
The distribution was first introduced by Siméon Denis Poisson (1781–1840) and published, together with his probability theory, in 1837 in his work Recherches sur la probabilité des jugements en
matière criminelle et en matière civile ("Research on the Probability of Judgments in Criminal and Civil Matters").^[4] The work theorized about the number of wrongful convictions in a given country
by focusing on certain random variables N that count, among other things, the number of discrete occurrences (sometimes called "events" or "arrivals") that take place during a time-interval of given
length. The result had been given previously by Abraham de Moivre (1711) in De Mensura Sortis seu; de Probabilitate Eventuum in Ludis a Casu Fortuito Pendentibus in Philosophical Transactions of the
Royal Society, p. 219.^[5]^:157 This makes it an example of Stigler's law and it has prompted some authors to argue that the Poisson distribution should bear the name of de Moivre.^[6]^[7]
A practical application of this distribution was made by Ladislaus Bortkiewicz in 1898 when he was given the task of investigating the number of soldiers in the Prussian army killed accidentally by
horse kicks; this experiment introduced the Poisson distribution to the field of reliability engineering.^[8]
Definition
A discrete random variable X is said to have a Poisson distribution with parameter λ > 0, if, for k = 0, 1, 2, ..., the probability mass function of X is given by:^[9]
${\displaystyle \!f(k;\lambda )=\Pr(X=k)={\frac {\lambda ^{k}e^{-\lambda }}{k!}},}$
The positive real number λ is equal to the expected value of X and also to its variance^[10]
${\displaystyle \lambda =\operatorname {E} (X)=\operatorname {Var} (X).}$
The Poisson distribution can be applied to systems with a large number of possible events, each of which is rare. How many such events will occur during a fixed time interval? Under the right
circumstances, this is a random number with a Poisson distribution.
The conventional definition of the Poisson distribution contains two terms that can easily overflow on computers: λ^k and k!. The fraction of λ^k to k! can also produce a rounding error that is very
large compared to e^−λ, and therefore give an erroneous result. For numerical stability the Poisson probability mass function should therefore be evaluated as
${\displaystyle \!f(k;\lambda )=\exp \left\{{k\ln \lambda -\lambda -\ln \Gamma (k+1)}\right\},}$
which is mathematically equivalent but numerically stable. The natural logarithm of the Gamma function can be obtained using the lgamma function in the C (programming language) standard library (C99
version), the gammaln function in MATLAB or SciPy, or the log_gamma function in Fortran 2008 and later.
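A minimal sketch of the stable form in Python, using the standard library's lgamma for ln Γ(k + 1) (the same role the text assigns to lgamma in C or gammaln in MATLAB/SciPy); the parameter values are only illustrative:

import math

def poisson_pmf_stable(k, lam):
    # exp(k*ln(lam) - lam - ln(Gamma(k+1))) avoids overflowing lam**k and k!.
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

# Naive floating-point evaluation of lam**k / k! would overflow here,
# but the log-space form returns the probability directly.
print(poisson_pmf_stable(1000, 1000.0))  # roughly 0.0126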
Descriptive statistics
• The mean absolute deviation of a Poisson-distributed random variable about its mean is
${\displaystyle \operatorname {E} |X-\lambda |=2\exp(-\lambda ){\frac {\lambda ^{\lfloor \lambda \rfloor +1}}{\lfloor \lambda \rfloor !}}.}$
• The mode of a Poisson-distributed random variable with non-integer λ is equal to ${\displaystyle \scriptstyle \lfloor \lambda \rfloor }$, which is the largest integer less than or equal to λ.
This is also written as floor(λ). When λ is a positive integer, the modes are λ and λ − 1.
• All of the cumulants of the Poisson distribution are equal to the expected value λ. The nth factorial moment of the Poisson distribution is λ^n.
• The expected value of a Poisson process is sometimes decomposed into the product of intensity and exposure (or more generally expressed as the integral of an "intensity function" over time or
space, sometimes described as “exposure”).^[11]^[12]
Bounds for the median (ν) of the distribution are known and are sharp:^[13]
${\displaystyle \lambda -\ln 2\leq \nu <\lambda +{\frac {1}{3}}.}$
Higher moments
The higher moments ${\displaystyle m_{k}}$ of the Poisson distribution about the origin are Touchard polynomials in λ:
${\displaystyle m_{k}=\sum _{i=0}^{k}\lambda ^{i}\left\{{\begin{matrix}k\\i\end{matrix}}\right\},}$
where the {braces} denote Stirling numbers of the second kind.^[14] The coefficients of the polynomials have a combinatorial meaning. In fact, when the expected value of the Poisson distribution
is 1, then Dobinski's formula says that the nth moment equals the number of partitions of a set of size n.
Sums of Poisson-distributed random variables
If ${\displaystyle X_{i}\sim \operatorname {Pois} (\lambda _{i})\,i=1,\dotsc ,n}$ are independent, and ${\displaystyle \lambda =\sum _{i=1}^{n}\lambda _{i}}$, then ${\displaystyle Y=\left(\sum _
{i=1}^{n}X_{i}\right)\sim \operatorname {Pois} (\lambda )}$.^[15] A converse is Raikov's theorem, which says that if the sum of two independent random variables is Poisson-distributed, then so
are each of those two independent random variables.^[16]
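A quick Monte Carlo sanity check of this additivity property (an illustrative sketch assuming NumPy; the rates and sample size are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 2.0, 3.5
n = 100_000

# Sum of independent Poisson(2.0) and Poisson(3.5) samples.
y = rng.poisson(lam1, n) + rng.poisson(lam2, n)

# A Poisson(5.5) variable has mean and variance both equal to 5.5.
print(y.mean(), y.var())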
Other properties
• The Kullback–Leibler divergence ${\displaystyle D_{\text{KL}}{\bigl (}\operatorname {Pois} (\lambda )\parallel \operatorname {Pois} (\lambda _{0}){\bigr )}}$ between two Poisson distributions is given by
${\displaystyle D_{\text{KL}}(\lambda \mid \lambda _{0})=\lambda _{0}-\lambda +\lambda \log {\frac {\lambda }{\lambda _{0}}}.}$
• Bounds for the tail probabilities of a Poisson random variable ${\displaystyle X\sim \operatorname {Pois} (\lambda )}$ can be derived using a Chernoff bound argument.^[18]
${\displaystyle P(X\geq x)\leq {\frac {e^{-\lambda }(e\lambda )^{x}}{x^{x}}},{\text{ for }}x>\lambda }$,
${\displaystyle P(X\leq x)\leq {\frac {e^{-\lambda }(e\lambda )^{x}}{x^{x}}},{\text{ for }}x<\lambda .}$
Poisson races
Let ${\displaystyle X\sim \operatorname {Pois} (\lambda )}$ and ${\displaystyle Y\sim \operatorname {Pois} (\mu )}$ be independent random variables, with ${\displaystyle \lambda <\mu }$, then we have
${\displaystyle {\frac {e^{-({\sqrt {\mu }}-{\sqrt {\lambda }})^{2}}}{(\lambda +\mu )^{2}}}-{\frac {e^{-(\lambda +\mu )}}{2{\sqrt {\lambda \mu }}}}-{\frac {e^{-(\lambda +\mu )}}{4\lambda \mu }}\
leq P(X-Y\geq 0)\leq e^{-({\sqrt {\mu }}-{\sqrt {\lambda }})^{2}}}$
The upper bound is proved using a standard Chernoff bound.
The lower bound can be proved by noting that ${\displaystyle P(X-Y\geq 0\mid X+Y=i)}$ is the probability that ${\displaystyle Z\geq {\frac {i}{2}}}$, where ${\displaystyle Z\sim \operatorname {Bin} \
left(i,{\frac {\lambda }{\lambda +\mu }}\right)}$, which is bounded below by ${\displaystyle {\frac {1}{(i+1)^{2}}}e^{\left(-iD\left(0.5\|{\frac {\lambda }{\lambda +\mu }}\right)\right)}}$, where ${\
displaystyle D}$ is relative entropy (See the entry on bounds on tails of binomial distributions for details). Further noting that ${\displaystyle X+Y\sim \operatorname {Pois} (\lambda +\mu )}$, and
computing a lower bound on the unconditional probability gives the result. More details can be found in the appendix of.^[19]
Related distributions
• If ${\displaystyle X_{1}\sim \mathrm {Pois} (\lambda _{1})\,}$ and ${\displaystyle X_{2}\sim \mathrm {Pois} (\lambda _{2})\,}$ are independent, then the difference ${\displaystyle Y=X_{1}-X_{2}}$
follows a Skellam distribution.
• If ${\displaystyle X_{1}\sim \mathrm {Pois} (\lambda _{1})\,}$ and ${\displaystyle X_{2}\sim \mathrm {Pois} (\lambda _{2})\,}$ are independent, then the distribution of ${\displaystyle X_{1}}$
conditional on ${\displaystyle X_{1}+X_{2}}$ is a binomial distribution.
Specifically, given ${\displaystyle X_{1}+X_{2}=k}$, ${\displaystyle \!X_{1}\sim \mathrm {Binom} (k,\lambda _{1}/(\lambda _{1}+\lambda _{2}))}$.
More generally, if X[1], X[2],..., X[n] are independent Poisson random variables with parameters λ[1], λ[2],..., λ[n] then
given ${\displaystyle \sum _{j=1}^{n}X_{j}=k,}$ ${\displaystyle X_{i}\sim \mathrm {Binom} \left(k,{\frac {\lambda _{i}}{\sum _{j=1}^{n}\lambda _{j}}}\right)}$. In fact, ${\displaystyle \{X_
{i}\}\sim \mathrm {Multinom} \left(k,\left\{{\frac {\lambda _{i}}{\sum _{j=1}^{n}\lambda _{j}}}\right\}\right)}$.
• If ${\displaystyle X\sim \mathrm {Pois} (\lambda )\,}$ and the distribution of ${\displaystyle Y}$, conditional on X = k, is a binomial distribution, ${\displaystyle Y\mid (X=k)\sim \mathrm
{Binom} (k,p)}$, then the distribution of Y follows a Poisson distribution ${\displaystyle Y\sim \mathrm {Pois} (\lambda \cdot p)\,}$. In fact, if ${\displaystyle \{Y_{i}\}}$, conditional on X =
k, follows a multinomial distribution, ${\displaystyle \{Y_{i}\}\mid (X=k)\sim \mathrm {Multinom} \left(k,p_{i}\right)}$, then each ${\displaystyle Y_{i}}$ follows an independent Poisson
distribution ${\displaystyle Y_{i}\sim \mathrm {Pois} (\lambda \cdot p_{i}),\rho (Y_{i},Y_{j})=0}$.
• The Poisson distribution can be derived as a limiting case to the binomial distribution as the number of trials goes to infinity and the expected number of successes remains fixed — see law of
rare events below. Therefore, it can be used as an approximation of the binomial distribution if n is sufficiently large and p is sufficiently small. There is a rule of thumb stating that the
Poisson distribution is a good approximation of the binomial distribution if n is at least 20 and p is smaller than or equal to 0.05, and an excellent approximation if n ≥ 100 and np ≤ 10.^[20]
${\displaystyle F_{\mathrm {Binomial} }(k;n,p)\approx F_{\mathrm {Poisson} }(k;\lambda =np)\,}$
• The Poisson distribution is a special case of the discrete compound Poisson distribution (or stuttering Poisson distribution) with only a parameter.^[21]^[22] The discrete compound Poisson
distribution can be deduced from the limiting distribution of univariate multinomial distribution. It is also a special case of a compound Poisson distribution.
• For sufficiently large values of λ, (say λ>1000), the normal distribution with mean λ and variance λ (standard deviation ${\displaystyle {\sqrt {\lambda }}}$) is an excellent approximation to the
Poisson distribution. If λ is greater than about 10, then the normal distribution is a good approximation if an appropriate continuity correction is performed, i.e., if P(X ≤ x), where x is a
non-negative integer, is replaced by P(X ≤ x + 0.5).
${\displaystyle F_{\mathrm {Poisson} }(x;\lambda )\approx F_{\mathrm {normal} }(x;\mu =\lambda ,\sigma ^{2}=\lambda )\,}$
• Variance-stabilizing transformation: When a variable is Poisson distributed, its square root is approximately normally distributed with expected value of about ${\displaystyle {\sqrt {\lambda }}}
$ and variance of about 1/4.^[23]^[5]^:163 Under this transformation, the convergence to normality (as λ increases) is far faster than the untransformed variable. Other, slightly more
complicated, variance stabilizing transformations are available,^[5]^:163 one of which is Anscombe transform. See Data transformation (statistics) for more general uses of transformations.
• If for every t > 0 the number of arrivals in the time interval [0, t] follows the Poisson distribution with mean λt, then the sequence of inter-arrival times are independent and identically
distributed exponential random variables having mean 1/λ.^[24]
• The cumulative distribution functions of the Poisson and chi-squared distributions are related in the following ways:^[5]^:171
${\displaystyle F_{\text{Poisson}}(k;\lambda )=1-F_{\chi ^{2}}(2\lambda ;2(k+1))\quad \quad {\text{ integer }}k,}$
${\displaystyle \Pr(X=k)=F_{\chi ^{2}}(2\lambda ;2(k+1))-F_{\chi ^{2}}(2\lambda ;2k).}$
Applications of the Poisson distribution can be found in many fields related to counting:^[25]
The Poisson distribution arises in connection with Poisson processes. It applies to various phenomena of discrete properties (that is, those that may happen 0, 1, 2, 3, ... times during a given
period of time or in a given area) whenever the probability of the phenomenon happening is constant in time or space. Examples of events that may be modelled as a Poisson distribution include:
• The number of soldiers killed by horse-kicks each year in each corps in the Prussian cavalry. This example was made famous by a book of Ladislaus Josephovich Bortkiewicz (1868–1931).
• The number of yeast cells used when brewing Guinness beer. This example was made famous by William Sealy Gosset (1876–1937).^[27]
• The number of phone calls arriving at a call centre within a minute. This example was made famous by A.K. Erlang (1878 – 1929).
• Internet traffic.
• The number of goals in sports involving two competing teams.^[28]
• The number of deaths per year in a given age group.
• The number of jumps in a stock price in a given time interval.
• Under an assumption of homogeneity, the number of times a web server is accessed per minute.
• The number of mutations in a given stretch of DNA after a certain amount of radiation.
• The proportion of cells that will be infected at a given multiplicity of infection.
• The number of bacteria in a certain amount of liquid.^[29]
• The arrival of photons on a pixel circuit at a given illumination and over a given time period.
• The targeting of V-1 flying bombs on London during World War II investigated by R. D. Clarke in 1946.^[30]^[31]
Gallagher in 1976 showed that the counts of prime numbers in short intervals obey a Poisson distribution provided a certain version of an unproved conjecture of Hardy and Littlewood is true.^[32]
Law of rare events
The rate of an event is related to the probability of an event occurring in some small subinterval (of time, space or otherwise). In the case of the Poisson distribution, one assumes that there
exists a small enough subinterval for which the probability of an event occurring twice is "negligible". With this assumption one can derive the Poisson distribution from the Binomial one, given only
the information of expected number of total events in the whole interval. Let this total number be ${\displaystyle \lambda }$. Divide the whole interval into ${\displaystyle n}$ subintervals ${\
displaystyle I_{1},\dots ,I_{n}}$ of equal size, such that ${\displaystyle n}$ > ${\displaystyle \lambda }$ (since we are interested in only very small portions of the interval this assumption is
meaningful). This means that the expected number of events in an interval ${\displaystyle I_{i}}$ for each ${\displaystyle i}$ is equal to ${\displaystyle \lambda /n}$. Now we assume that the
occurrence of an event in the whole interval can be seen as a Bernoulli trial, where the ${\displaystyle i^{th}}$ trial corresponds to looking whether an event happens at the subinterval ${\
displaystyle I_{i}}$ with probability ${\displaystyle \lambda /n}$. The expected number of total events in ${\displaystyle n}$ such trials would be ${\displaystyle \lambda }$, the expected number of
total events in the whole interval. Hence for each subdivision of the interval we have approximated the occurrence of the event as a Bernoulli process of the form ${\displaystyle {\textrm {B}}(n,\
lambda /n)}$. As we have noted before we want to consider only very small subintervals. Therefore, we take the limit as ${\displaystyle n}$ goes to infinity. In this case the binomial distribution
converges to what is known as the Poisson distribution by the Poisson limit theorem.
In several of the above examples—such as, the number of mutations in a given sequence of DNA—the events being counted are actually the outcomes of discrete trials, and would more precisely be
modelled using the binomial distribution, that is
${\displaystyle X\sim {\textrm {B}}(n,p).\,}$
In such cases n is very large and p is very small (and so the expectation np is of intermediate magnitude). Then the distribution may be approximated by the less cumbersome Poisson distribution
${\displaystyle X\sim {\textrm {Pois}}(np).\,}$
This approximation is sometimes known as the law of rare events,^[33] since each of the n individual Bernoulli events rarely occurs. The name may be misleading because the total count of success
events in a Poisson process need not be rare if the parameter np is not small. For example, the number of telephone calls to a busy switchboard in one hour follows a Poisson distribution with the
events appearing frequent to the operator, but they are rare from the point of view of the average member of the population who is very unlikely to make a call to that switchboard in that hour.
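The quality of the binomial-to-Poisson approximation is easy to inspect numerically. The sketch below is illustrative only (it assumes SciPy); the parameters n = 1000 and p = 0.003 give np = 3:

from scipy.stats import binom, poisson

n, p = 1000, 0.003
lam = n * p  # expected number of successes

for k in range(6):
    # Binomial probability and its Poisson approximation, side by side.
    print(k, binom.pmf(k, n, p), poisson.pmf(k, lam))

For these parameters the two columns agree closely.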
The word law is sometimes used as a synonym of probability distribution, and convergence in law means convergence in distribution. Accordingly, the Poisson distribution is sometimes called the law of
small numbers because it is the probability distribution of the number of occurrences of an event that happens rarely but has very many opportunities to happen. The Law of Small Numbers is a book by
Ladislaus Bortkiewicz (Bortkevitch)^[34] about the Poisson distribution, published in 1898.
Poisson point process
The Poisson distribution arises as the number of points of a Poisson point process located in some finite region. More specifically, if D is some region of a space, for example Euclidean space R^d, for
which |D|, the area, volume or, more generally, the Lebesgue measure of the region is finite, and if N(D) denotes the number of points in D, then
${\displaystyle P(N(D)=k)={\frac {(\lambda |D|)^{k}e^{-\lambda |D|}}{k!}}.}$
Other applications in science
In a Poisson process, the number of observed occurrences fluctuates about its mean λ with a standard deviation ${\displaystyle \sigma _{k}={\sqrt {\lambda }}}$. These fluctuations are known as
Poisson noise or (particularly in electronics) as shot noise.
The correlation of the mean and standard deviation in counting independent discrete occurrences is useful scientifically. By monitoring how the fluctuations vary with the mean signal, one can
estimate the contribution of a single occurrence, even if that contribution is too small to be detected directly. For example, the charge e on an electron can be estimated by correlating the
magnitude of an electric current with its shot noise. If N electrons pass a point in a given time t on the average, the mean current is ${\displaystyle I=eN/t}$; since the current fluctuations should
be of the order ${\displaystyle \sigma _{I}=e{\sqrt {N}}/t}$ (i.e., the standard deviation of the Poisson process), the charge ${\displaystyle e}$ can be estimated from the ratio ${\displaystyle t\
sigma _{I}^{2}/I}$.
An everyday example is the graininess that appears as photographs are enlarged; the graininess is due to Poisson fluctuations in the number of reduced silver grains, not to the individual grains
themselves. By correlating the graininess with the degree of enlargement, one can estimate the contribution of an individual grain (which is otherwise too small to be seen unaided). Many other
molecular applications of Poisson noise have been developed, e.g., estimating the number density of receptor molecules in a cell membrane.
For a Poisson process with rate λ, the number ${\displaystyle N_{t}}$ of events observed in a time interval of length t satisfies
${\displaystyle \Pr(N_{t}=k)=f(k;\lambda t)={\frac {e^{-\lambda t}(\lambda t)^{k}}{k!}}.}$
In Causal Set theory the discrete elements of spacetime follow a Poisson distribution in the volume.
Generating Poisson-distributed random variables
A simple algorithm to generate random Poisson-distributed numbers (pseudo-random number sampling) has been given by Knuth (see References below):
algorithm poisson random number (Knuth):
    init:
        Let L ← e^−λ, k ← 0 and p ← 1.
    do:
        k ← k + 1.
        Generate uniform random number u in [0,1] and let p ← p × u.
    while p > L.
    return k − 1.
The complexity is linear in the returned value k, which is λ on average. There are many other algorithms to improve this. Some are given in Ahrens & Dieter, see § References below.
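A direct Python transcription of Knuth's procedure, shown only as an illustration (in practice a library generator such as numpy.random.Generator.poisson would normally be used):

import math
import random

def knuth_poisson(lam):
    """Draw one Poisson(lam) sample with Knuth's multiplicative method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:          # corresponds to the "while p > L" test above
            return k - 1

samples = [knuth_poisson(4.2) for _ in range(10_000)]
print(sum(samples) / len(samples))  # sample mean should be close to 4.2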
For large values of λ, the value of L = e^−λ may be so small that it is hard to represent. This can be solved by a change to the algorithm which uses an additional parameter STEP such that e^−STEP
does not underflow:
algorithm poisson random number (Junhao, based on Knuth):
    init:
        Let λLeft ← λ, k ← 0 and p ← 1.
    do:
        k ← k + 1.
        Generate uniform random number u in (0,1) and let p ← p × u.
        while p < 1 and λLeft > 0:
            if λLeft > STEP:
                p ← p × e^STEP
                λLeft ← λLeft − STEP
            else:
                p ← p × e^λLeft
                λLeft ← 0
    while p > 1.
    return k − 1.
The choice of STEP depends on the threshold of overflow. For double precision floating point format, the threshold is near e^700, so 500 shall be a safe STEP.
Other solutions for large values of λ include rejection sampling and using Gaussian approximation.
Inverse transform sampling is simple and efficient for small values of λ, and requires only one uniform random number u per sample. Cumulative probabilities are examined in turn until one exceeds u.
algorithm Poisson generator based upon the inversion by sequential search:^[35]
Let x ← 0, p ← e^−λ, s ← p.
Generate uniform random number u in [0,1].
while u > s do:
x ← x + 1.
p ← p * λ / x.
s ← s + p.
return x.
"This algorithm ... requires expected time proportional to λ as λ→∞. For large λ, round-off errors proliferate, which provides us with another reason for avoiding large values of λ."^[35]
Parameter estimation
Maximum likelihood
Given a sample of n measured values k[i] = 0, 1, 2, ..., for i = 1, ..., n, we wish to estimate the value of the parameter λ of the Poisson population from which the sample was drawn. The maximum
likelihood estimate is ^[36]
${\displaystyle {\widehat {\lambda }}_{\mathrm {MLE} }={\frac {1}{n}}\sum _{i=1}^{n}k_{i}.\!}$
Since each observation has expectation λ, so does the sample mean. Therefore, the maximum likelihood estimate is an unbiased estimator of λ. It is also an efficient estimator, i.e. its estimation
variance achieves the Cramér–Rao lower bound (CRLB). Hence it is minimum-variance unbiased. Also it can be proven that the sum (and hence the sample mean as it is a one-to-one function of the sum) is
a complete and sufficient statistic for λ.
To prove sufficiency we may use the factorization theorem. Consider partitioning the probability mass function of the joint Poisson distribution for the sample into two parts: one that depends solely
on the sample ${\displaystyle \mathbf {x} }$ (called ${\displaystyle h(\mathbf {x} )}$) and one that depends on the parameter ${\displaystyle \lambda }$ and the sample ${\displaystyle \mathbf {x} }$
only through the function ${\displaystyle T(\mathbf {x} )}$. Then ${\displaystyle T(\mathbf {x} )}$ is a sufficient statistic for ${\displaystyle \lambda }$.
${\displaystyle P(\mathbf {x} )=\prod _{i=1}^{n}{\frac {\lambda ^{x_{i}}e^{-\lambda }}{x_{i}!}}={\frac {1}{\prod _{i=1}^{n}x_{i}!}}\times \lambda ^{\sum _{i=1}^{n}x_{i}}e^{-n\lambda }}$
Note that the first term, ${\displaystyle h(\mathbf {x} )}$, depends only on ${\displaystyle \mathbf {x} }$. The second term, ${\displaystyle g(T(\mathbf {x} )|\lambda )}$, depends on the sample only
through ${\displaystyle T(\mathbf {x} )=\sum _{i=1}^{n}x_{i}}$. Thus, ${\displaystyle T(\mathbf {x} )}$ is sufficient.
To find the parameter λ that maximizes the probability function for the Poisson population, we can use the logarithm of the likelihood function:
{\displaystyle {\begin{aligned}\ell (\lambda )&=\ln \prod _{i=1}^{n}f(k_{i}\mid \lambda )\\&=\sum _{i=1}^{n}\ln \!\left({\frac {e^{-\lambda }\lambda ^{k_{i}}}{k_{i}!}}\right)\\&=-n\lambda +\left
(\sum _{i=1}^{n}k_{i}\right)\ln(\lambda )-\sum _{i=1}^{n}\ln(k_{i}!).\end{aligned}}}
We take the derivative of ${\displaystyle \ell }$ with respect to λ and compare it to zero:
${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} \lambda }}\ell (\lambda )=0\iff -n+\left(\sum _{i=1}^{n}k_{i}\right){\frac {1}{\lambda }}=0.\!}$
Solving for λ gives a stationary point.
${\displaystyle \lambda ={\frac {\sum _{i=1}^{n}k_{i}}{n}}}$
So λ is the average of the k[i] values. Obtaining the sign of the second derivative of L at the stationary point will determine what kind of extreme value λ is.
${\displaystyle {\frac {\partial ^{2}\ell }{\partial \lambda ^{2}}}=-\lambda ^{-2}\sum _{i=1}^{n}k_{i}}$
Evaluating the second derivative at the stationary point gives:
${\displaystyle {\frac {\partial ^{2}\ell }{\partial \lambda ^{2}}}=-{\frac {n^{2}}{\sum _{i=1}^{n}k_{i}}}}$
which is the negative of n times the reciprocal of the average of the k[i]. This expression is negative when the average is positive. If this is satisfied, then the stationary point maximizes the
probability function.
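The fact that the maximum likelihood estimate is just the sample mean can be seen in a small simulation (an illustrative sketch assuming NumPy; the chosen rate is arbitrary):

import numpy as np

rng = np.random.default_rng(42)
true_lambda = 3.7
k = rng.poisson(true_lambda, size=10_000)

lambda_mle = k.mean()   # the MLE of lambda is the sample mean
print(lambda_mle)       # should be close to 3.7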
For completeness, a family of distributions is said to be complete if and only if ${\displaystyle E(g(T))=0}$ implies that ${\displaystyle P_{\lambda }(g(T)=0)=1}$ for all ${\displaystyle \lambda }$.
If the individual ${\displaystyle X_{i}}$ are iid ${\displaystyle \mathrm {Po} (\lambda )}$, then ${\displaystyle T(\mathbf {x} )=\sum _{i=1}^{n}X_{i}\sim \mathrm {Po} (n\lambda )}$. Knowing the
distribution we want to investigate, it is easy to see that the statistic is complete.
${\displaystyle E(g(T))=\sum _{t=0}^{\infty }g(t){\frac {(n\lambda )^{t}e^{-n\lambda }}{t!}}=0}$
For this equality to hold, ${\displaystyle g(t)}$ must be 0. This follows from the fact that none of the other terms will be 0 for all ${\displaystyle t}$ in the sum and for all possible values of $
{\displaystyle \lambda }$. Hence, ${\displaystyle E(g(T))=0}$ for all ${\displaystyle \lambda }$ implies that ${\displaystyle P_{\lambda }(g(T)=0)=1}$, and the statistic has been shown to be complete.
Confidence interval
The confidence interval for the mean of a Poisson distribution can be expressed using the relationship between the cumulative distribution functions of the Poisson and chi-squared distributions. The
chi-squared distribution is itself closely related to the gamma distribution, and this leads to an alternative expression. Given an observation k from a Poisson distribution with mean μ, a confidence
interval for μ with confidence level 1 – α is
${\displaystyle {\tfrac {1}{2}}\chi ^{2}(\alpha /2;2k)\leq \mu \leq {\tfrac {1}{2}}\chi ^{2}(1-\alpha /2;2k+2),}$
or equivalently,
${\displaystyle F^{-1}(\alpha /2;k,1)\leq \mu \leq F^{-1}(1-\alpha /2;k+1,1),}$
where ${\displaystyle \chi ^{2}(p;n)}$ is the quantile function (corresponding to a lower tail area p) of the chi-squared distribution with n degrees of freedom and ${\displaystyle F^{-1}(p;n,1)}$ is
the quantile function of a Gamma distribution with shape parameter n and scale parameter 1.^[5]^:171^[37] This interval is 'exact' in the sense that its coverage probability is never less than the
nominal 1 – α.
When quantiles of the Gamma distribution are not available, an accurate approximation to this exact interval has been proposed (based on the Wilson–Hilferty transformation):^[38]
${\displaystyle k\left(1-{\frac {1}{9k}}-{\frac {z_{\alpha /2}}{3{\sqrt {k}}}}\right)^{3}\leq \mu \leq (k+1)\left(1-{\frac {1}{9(k+1)}}+{\frac {z_{\alpha /2}}{3{\sqrt {k+1}}}}\right)^{3},}$
where ${\displaystyle z_{\alpha /2}}$ denotes the standard normal deviate with upper tail area α / 2.
For application of these formulae in the same context as above (given a sample of n measured values k[i] each drawn from a Poisson distribution with mean λ), one would set
${\displaystyle k=\sum _{i=1}^{n}k_{i},\!}$
calculate an interval for μ = nλ, and then derive the interval for λ.
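A sketch of the exact interval above using SciPy's chi-squared quantile function chi2.ppf; the observed count and confidence level are illustrative, and the lower endpoint is taken as 0 when k = 0:

from scipy.stats import chi2

def poisson_exact_ci(k, alpha=0.05):
    """Exact 1 - alpha confidence interval for the Poisson mean given a count k."""
    lower = 0.5 * chi2.ppf(alpha / 2, 2 * k) if k > 0 else 0.0
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (k + 1))
    return lower, upper

print(poisson_exact_ci(10))  # 95% interval for mu when k = 10 events were observed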
Bayesian inference
In Bayesian inference, the conjugate prior for the rate parameter λ of the Poisson distribution is the gamma distribution.^[39] Let
${\displaystyle \lambda \sim \mathrm {Gamma} (\alpha ,\beta )\!}$
denote that λ is distributed according to the gamma density g parameterized in terms of a shape parameter α and an inverse scale parameter β:
${\displaystyle g(\lambda \mid \alpha ,\beta )={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}\;\lambda ^{\alpha -1}\;e^{-\beta \,\lambda }\qquad {\text{ for }}\lambda >0\,\!.}$
Then, given the same sample of n measured values k[i] as before, and a prior of Gamma(α, β), the posterior distribution is
${\displaystyle \lambda \sim \mathrm {Gamma} \left(\alpha +\sum _{i=1}^{n}k_{i},\beta +n\right).\!}$
The posterior mean E[λ] approaches the maximum likelihood estimate ${\displaystyle {\widehat {\lambda }}_{\mathrm {MLE} }}$ in the limit as ${\displaystyle \alpha \to 0,\ \beta \to 0}$, which follows
immediately from the general expression of the mean of the gamma distribution.
The posterior predictive distribution for a single additional observation is a negative binomial distribution,^[40] sometimes called a Gamma–Poisson distribution.
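A minimal sketch of the conjugate update (the prior hyperparameters and data below are arbitrary; SciPy's gamma distribution is assumed for the posterior summaries, and note that SciPy parameterizes it with a scale rather than a rate):

from scipy.stats import gamma

# Prior: lambda ~ Gamma(shape=alpha, rate=beta)
alpha_prior, beta_prior = 2.0, 1.0

# Observed counts k_i from the Poisson population.
counts = [3, 5, 2, 4, 6, 3]

alpha_post = alpha_prior + sum(counts)
beta_post = beta_prior + len(counts)

posterior = gamma(a=alpha_post, scale=1.0 / beta_post)
print(posterior.mean())          # posterior mean of lambda
print(posterior.interval(0.95))  # central 95% credible interval for lambda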
Simultaneous estimation of multiple Poisson means
Suppose ${\displaystyle X_{1},X_{2},\dots ,X_{p}}$ is a set of independent random variables from a set of ${\displaystyle p}$ Poisson distributions, each with a parameter ${\displaystyle \lambda _
{i}}$, ${\displaystyle i=1,\dots ,p}$, and we would like to estimate these parameters. Then, Clevenson and Zidek show that under the normalized squared error loss ${\displaystyle L(\lambda ,{\hat {\
lambda }})=\sum _{i=1}^{p}\lambda _{i}^{-1}({\hat {\lambda }}_{i}-\lambda _{i})^{2}}$, when ${\displaystyle p>1}$, then, similar as in Stein's famous example for the Normal means, the MLE estimator $
{\displaystyle {\hat {\lambda }}_{i}=X_{i}}$ is inadmissible.^[41]
In this case, a family of minimax estimators is given for any ${\displaystyle 0<c\leq 2(p-1)}$ and ${\displaystyle b\geq (p-2+p^{-1})}$ as^[42]
${\displaystyle {\hat {\lambda }}_{i}=\left(1-{\frac {c}{b+\sum _{i=1}^{p}X_{i}}}\right)X_{i},\qquad i=1,\dots ,p.}$
Bivariate Poisson distribution
This distribution has been extended to the bivariate case.^[43] The generating function for this distribution is
${\displaystyle g(u,v)=\exp[(\theta _{1}-\theta _{12})(u-1)+(\theta _{2}-\theta _{12})(v-1)+\theta _{12}(uv-1)]}$
${\displaystyle \theta _{1},\theta _{2}>\theta _{12}>0\,}$
The marginal distributions are Poisson(θ[1]) and Poisson(θ[2]) and the correlation coefficient is limited to the range
${\displaystyle 0\leq \rho \leq \min \left\{{\frac {\theta _{1}}{\theta _{2}}},{\frac {\theta _{2}}{\theta _{1}}}\right\}}$
A simple way to generate a bivariate Poisson distribution ${\displaystyle X_{1},X_{2}}$ is to take three independent Poisson distributions ${\displaystyle Y_{1},Y_{2},Y_{3}}$ with means ${\
displaystyle \lambda _{1},\lambda _{2},\lambda _{3}}$ and then set ${\displaystyle X_{1}=Y_{1}+Y_{3},X_{2}=Y_{2}+Y_{3}}$. The probability function of the bivariate Poisson distribution is
{\displaystyle {\begin{aligned}&\Pr(X_{1}=k_{1},X_{2}=k_{2})\\={}&\exp \left(-\lambda _{1}-\lambda _{2}-\lambda _{3}\right){\frac {\lambda _{1}^{k_{1}}}{k_{1}!}}{\frac {\lambda _{2}^{k_{2}}}{k_
{2}!}}\sum _{k=0}^{\min(k_{1},k_{2})}{\binom {k_{1}}{k}}{\binom {k_{2}}{k}}k!\left({\frac {\lambda _{3}}{\lambda _{1}\lambda _{2}}}\right)^{k}\end{aligned}}}
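The common-component construction just described is easy to simulate; the sketch below (illustrative only, assuming NumPy) checks that the covariance of the two coordinates is close to λ[3]:

import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, lam3 = 1.0, 2.0, 0.5
n = 200_000

y1 = rng.poisson(lam1, n)
y2 = rng.poisson(lam2, n)
y3 = rng.poisson(lam3, n)   # shared component that induces the correlation

x1, x2 = y1 + y3, y2 + y3
print(np.cov(x1, x2)[0, 1])  # should be close to lam3 = 0.5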
Computer software for the Poisson distribution
Poisson distribution using R
The R function dpois(x, lambda) calculates the probability that there are x events in an interval, where the argument "lambda" is the average number of events per interval.
For example,
dpois(x=0, lambda=1) = 0.3678794
dpois(x=1, lambda=2.5) = 0.2052125
The following R code creates a graph of the Poisson distribution from x= 0 to 8, with lambda=2.5.
x = 0:8
px = dpois(x, lambda=2.5)
plot(x, px, type="h", xlab="Number of events k", ylab="Probability of k events", ylim=c(0,0.5), pty="s", main="Poisson distribution \n Probability of events for lambda = 2.5")
Poisson distribution using Excel
The Excel function POISSON( x, mean, cumulative ) calculates the probability of x events where mean is lambda, the average number of events per interval. The argument cumulative specifies the
cumulative distribution.
For example,
=POISSON(0, 1, FALSE) = 0.3678794
=POISSON(1, 2.5, FALSE) = 0.2052125
Poisson distribution using Mathematica
Mathematica supports the univariate Poisson distribution as PoissonDistribution[${\displaystyle \lambda }$],^[44] and the bivariate Poisson distribution as MultivariatePoissonDistribution[${\
displaystyle \theta _{12}}$,{ ${\displaystyle \theta _{1}-\theta _{12}}$, ${\displaystyle \theta _{2}-\theta _{12}}$}],^[45] including computation of probabilities and expectation, sampling,
parameter estimation and hypothesis testing.
References
1. ^ Frank A. Haight (1967). Handbook of the Poisson Distribution. New York: John Wiley & Sons.
2. ^ "Statistics | The Poisson Distribution". Umass.edu. 2007-08-24. Retrieved 2014-04-18.
3. ^ Ugarte, MD; Militino, AF; Arnholt, AT (2016), Probability and Statistics with R (Second ed.), CRC Press, ISBN 978-1-4665-0439-4
4. ^ S.D. Poisson, Probabilité des jugements en matière criminelle et en matière civile, précédées des règles générales du calcul des probabilitiés (Paris, France: Bachelier, 1837), page 206.
5. ^ ^a ^b ^c ^d ^e ^f ^g ^h ^i Johnson, N.L., Kotz, S., Kemp, A.W. (1993) Univariate Discrete distributions (2nd edition). Wiley. ISBN 0-471-54897-9
6. ^ Stigler, Stephen M. (1982). "Poisson on the poisson distribution". Statistics & Probability Letters. 1: 33–35. doi:10.1016/0167-7152(82)90010-4.
7. ^ Hald, A.; de Moivre, Abraham; McClintock, Bruce (1984). "A. de Moivre: 'De Mensura Sortis' or 'On the Measurement of Chance'". International Statistical Review / Revue Internationale de
Statistique. 52 (3): 229–262. doi:10.2307/1403045. JSTOR 1403045.
8. ^ Ladislaus von Bortkiewicz, Das Gesetz der kleinen Zahlen [The law of small numbers] (Leipzig, Germany: B.G. Teubner, 1898). On page 1, Bortkiewicz presents the Poisson distribution. On pages
23–25, Bortkiewicz presents his famous analysis of "4. Beispiel: Die durch Schlag eines Pferdes im preussischen Heere Getöteten." (4. Example: Those killed in the Prussian army by a horse's
9. ^ Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers, Roy D. Yates, David Goodman, page 60.
10. ^ For the proof, see : Proof wiki: expectation and Proof wiki: variance
11. ^ Some Poisson models, Vose Software, retrieved 2016-01-18
12. ^ Helske, Jouni (2015-06-25), KFAS: Exponential family state space models in R (PDF), Comprehensive R Archive Network, retrieved 2016-01-18
13. ^ Choi KP (1994) On the medians of Gamma distributions and an equation of Ramanujan. Proc Amer Math Soc 121 (1) 245–251
14. ^ Riordan, John (1937). "Moment recurrence relations for binomial, Poisson and hypergeometric frequency distributions". Annals of Mathematical Statistics. 8 (2): 103–111. doi:10.1214/aoms/
1177732430. Also see Haight (1967), p. 6.
15. ^ E. L. Lehmann (1986). Testing Statistical Hypotheses (second ed.). New York: Springer Verlag. ISBN 978-0-387-94919-2. page 65.
16. ^ Raikov, D. (1937). On the decomposition of Poisson laws. Comptes Rendus de l'Académie des Sciences de l'URSS, 14, 9–11. (The proof is also given in von Mises, Richard (1964). Mathematical
Theory of Probability and Statistics. New York: Academic Press.)
17. ^ Laha, R. G. & Rohatgi, V. K. (1979-05-01). Probability Theory. New York: John Wiley & Sons. p. 233. ISBN 978-0-471-03262-5.
18. ^ Michael Mitzenmacher & Eli Upfal (2005-01-31). Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press. p. 97. ISBN 978-0521835404.
19. ^ NIST/SEMATECH, '6.3.3.1. Counts Control Charts', e-Handbook of Statistical Methods, accessed 25 October 2006
20. ^ Huiming, Zhang; Yunxiao Liu; Bo Li (2014). "Notes on discrete compound Poisson model with applications to risk theory". Insurance: Mathematics and Economics. 59: 325–336. doi:10.1016/
21. ^ Huiming, Zhang; Bo Li (2016). "Characterizations of discrete compound Poisson distributions". Communications in Statistics - Theory and Methods. 45 (22): 6789–6802. doi:10.1080/
22. ^ McCullagh, Peter; Nelder, John (1989). Generalized Linear Models. London: Chapman and Hall. ISBN 978-0-412-31760-6. page 196 gives the approximation and higher order terms.
23. ^ S. M. Ross (2007). Introduction to Probability Models (ninth ed.). Boston: Academic Press. ISBN 978-0-12-598062-3. pp. 307–308.
24. ^ Paul J. Flory (1940). "Molecular Size Distribution in Ethylene Oxide Polymers". Journal of the American Chemical Society. 62 (6): 1561–1565. doi:10.1021/ja01863a066.
25. ^ Philip J. Boland (1984). "A Biographical Glimpse of William Sealy Gosset". The American Statistician. 38 (3): 179–183. doi:10.1080/00031305.1984.10483195.
26. ^ Dave Hornby. "Football Prediction Model: Poisson Distribution". “calculate the probability of outcomes for a football match, which in turn can be turned into odds that we can use to identify
value in the market.”
27. ^ "Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation". Food
Microbiology. 60: 49–53. 2016-12-01. doi:10.1016/j.fm.2016.05.019. ISSN 0740-0020.
28. ^ Clarke, R. D. (1946). "An application of the Poisson distribution". Journal of the Institute of Actuaries. 72: 481.
29. ^ Aatish Bhatia (2012-12-21). "What does randomness look like?". Wired. “Within a large area of London, the bombs weren’t being targeted. They rained down at random in a devastating, city-wide
game of Russian roulette.”
30. ^ P.X., Gallagher (1976). "On the distribution of primes in short intervals". Mathematika. 23: 4–9. doi:10.1112/s0025579300016442.
31. ^ A. Colin Cameron; Pravin K. Trivedi (1998). Regression Analysis of Count Data. ISBN 9780521635677. Retrieved 2013-01-30. “(p.5) The law of rare events states that the total number of events
will follow, approximately, the Poisson distribution if an event may occur in any of a large number of trials but the probability of occurrence in any given trial is small.”
32. ^ Edgeworth, F. Y. (1913). "On the use of the theory of probabilities in statistics relating to society". Journal of the Royal Statistical Society. 76 (2): 165–193. doi:10.2307/2340091. JSTOR
33. ^ ^a ^b Devroye, Luc (1986). "Discrete Univariate Distributions" (PDF). Non-Uniform Random Variate Generation. New York: Springer-Verlag. p. 505.
34. ^ Paszek, Ewa. "Maximum Likelihood Estimation – Examples".
35. ^ Garwood, F. (1936). "Fiducial Limits for the Poisson Distribution". Biometrika. 28 (3/4): 437–442. doi:10.1093/biomet/28.3-4.437.
36. ^ Breslow, NE; Day, NE (1987). Statistical Methods in Cancer Research: Volume 2—The Design and Analysis of Cohort Studies. Paris: International Agency for Research on Cancer. ISBN
37. ^ Fink, Daniel (1997). A Compendium of Conjugate Priors.
38. ^ Gelman; et al. (2005). Bayesian Data Analysis (2nd ed.). p. 60.
39. ^ Clevenson, M. L.; Zidek, J. V. (1975). "Simultaneous Estimation of the Means of Independent Poisson Laws". Journal of the American Statistical Association. 70 (351a): 698–705. doi:10.1080/
40. ^ Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis (2nd ed.). Springer.
41. ^ Loukas, S.; Kemp, C. D. (1986). "The Index of Dispersion Test for the Bivariate Poisson Distribution". Biometrics. 42 (4): 941–948. doi:10.2307/2530708. JSTOR 2530708.
42. ^ "Wolfram Language: PoissonDistribution reference page". wolfram.com. Retrieved 2016-04-08.
43. ^ "Wolfram Language: MultivariatePoissonDistribution reference page". wolfram.com. Retrieved 2016-04-08.
Further reading
• Shanmugam, Ramalingam (2013). "Informatics about fear to report rapes using bumped-up Poisson model". American Journal of Biostatistics. 3 (1): 17–29. doi:10.3844/amjbsp.2013.17.29.
Schwarz Triangle -- from Wolfram MathWorld
The Schwarz triangles are spherical triangles which, by repeated reflection in their sides, lead to a set of congruent spherical triangles covering the sphere a finite number of times.
Schwarz triangles are specified by triples of numbers
The others can be derived from | {"url":"https://mathworld.wolfram.com/SchwarzTriangle.html","timestamp":"2024-11-06T15:12:02Z","content_type":"text/html","content_length":"55147","record_id":"<urn:uuid:a1fa789e-f33a-4aad-a636-55c2c8baf8ee>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00423.warc.gz"} |
Paper Title
Design Optimization and Flow Analysis of Cross Flow Turbine (500 W)
In a hydro-power project, the water turbine is one of the most important parts of generating electricity. The main purposes of this thesis are to improve the living standard in rural areas and to
reduce the use of non-renewable energy. Cross-flow turbines are commonly used in low-head small hydro systems. A cross-flow turbine is a unique type of hydro turbine in that the flow passes twice
through the runner; the power is therefore extracted in two stages. The feasibility of the hydraulic turbine, the theory of the cross-flow turbine, the detailed design calculations, and the model
design of a 500 W cross-flow turbine are described. This paper contains a complete set of detail drawings for manufacturing a 500 W cross-flow turbine. It can be used at sites where the head is
1.5 m and the flow rate is 0.045 m³/s. For the given capacity, the turbine diameter and length are 330 mm and 295 mm. The performance of the cross-flow turbine can be predicted by numerical
analysis. The runner speed is 141 rpm and the number of blades is 21. The nozzle inlet angle is 16° and the inlet blade angle is 30°. The objective of this work is to characterize the two-stage
power transfer in the runner. Moreover, the purpose of this study is to test the performance of a low-cost cross-flow turbine and to observe its efficiency. The maximum efficiency of the turbine
is 85%. The model design of this turbine consists of preparing the blades, runner, nozzle and other components. All components of this turbine are
made by mild steel. Keywords - Cross-Flow Turbine, Model design, two stage power, maximum efficiency | {"url":"http://iraj.in/journal/IJMPE/abstract.php?paper_id=20724","timestamp":"2024-11-07T01:09:25Z","content_type":"text/html","content_length":"2715","record_id":"<urn:uuid:08046b8b-5226-4fd5-aa14-5a2d658e9592>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00542.warc.gz"} |
Linear Pair of Angles—Definition, Axiom, Examples - Grade Potential Cleveland, OH
Linear Pair of Angles: Definition, Axiom, Examples
The linear pair of angles is a significant topic in geometry. With its many real-life applications, you may be surprised at how useful this figure can be. Even if you think it has no place in your
everyday life, it is worth grasping the ideas well enough to handle those exams at school.
To save you time and make this information easy to access, here is an introductory look at the properties of a linear pair of angles, with diagrams and examples to support your own
study sessions. We will also discuss a few real-world and geometric uses.
What Is a Linear Pair of Angles?
Linearity, angles, and intersections are concepts that stay relevant as you move on to more complicated theorems and proofs in geometry. We will answer this question with a simple definition
in this section.
A linear pair of angles is the term given to two angles that sit on a straight line and whose measures add up to 180 degrees.
To put it simply, a linear pair of angles is two angles that lie along the same line and together make up a straight line. The sum of the angles in a linear pair is always a straight
angle equal to 180 degrees.
It is important to note that the angles of a linear pair are always adjacent: they share a common vertex and a common arm. This means they always sit on a straight line and are always
supplementary angles.
It is also worth making clear that, even though the angles of a linear pair are always adjacent, adjacent angles are not always a linear pair.
The Linear Pair Axiom
With the definition out of the way, let's study the two axioms closely so that any example you are given makes sense.
First, let's define what an axiom is. It is a mathematical postulate or assumption that is accepted without proof because it is considered clear and self-evident. A linear pair of angles has two
axioms associated with it.
The first axiom states that if a ray stands on a line, the adjacent angles formed make a straight angle; in other words, they form a linear pair.
The second axiom states that if two angles form a linear pair, then the non-common arms of the two angles make a straight angle between them, that is, a straight line.
Examples of Linear Pairs of Angles
To imagine these axioms better, here are a few drawn examples with their respective explanations.
Example One
In this example, we have two angles that are next to each other. As you can see in the diagram, the adjacent angles form a linear pair because the sum of their measures equals 180 degrees, making
them supplementary. They are also adjacent angles, because they share a side and a common vertex.
Angle A: 75 degrees
Angle B: 105 degrees
Sum of Angles A and B: 75 + 105 = 180
Example Two
In this example, two lines intersect, creating four angles. Not every pair of these angles is a linear pair, but each angle and the one adjacent to it form a linear pair.
∠A: 30 degrees
∠B: 150 degrees
∠C: 30 degrees
∠D: 150 degrees
In this example, the linear pairs are:
∠A and ∠B
∠B and ∠C
∠C and ∠D
∠D and ∠A
Example Three
This example shows three lines meeting at a point. Let's check the axiom and properties of linear pairs here.
∠A: 150 degrees
∠B: 50 degrees
∠C: 160 degrees
None of the angle combinations add up to 180 degrees. As a result, we can conclude that this example contains no linear pair unless we extend one of the lines into a straight line.
Applications of Linear Pair of Angles
Now that we have gone through what linear pairs are and looked at some examples, let's see how this idea is used in geometry and the real world.
In Real-Life Scenarios
There are several uses of linear pairs of angles in the real world. One familiar example is architects, who use these axioms in their daily work to check whether two lines are perpendicular or
form a straight angle.
Construction and building professionals also draw on this topic to make their work simpler. They use linear pairs of angles to ensure that two adjacent walls form a 90-degree angle
with the ground.
Engineers also apply linear pairs of angles frequently, for example when calculating the loads on beams and trusses.
In Geometry
Linear pairs of angles also play a role in geometry proofs. A common proof that uses linear pairs is the alternate interior angles theorem, which states that if two parallel lines are cut by a
transversal, the alternate interior angles formed are congruent.
The proof of the vertical angles theorem also depends on linear pairs of angles. While the adjacent angles are supplementary and add up to 180 degrees, the opposite (vertical) angles are always
equal to one another. Because of these two rules, you only need to determine the measure of one angle to work out the measures of the rest.
The concept of linear pairs is also used in more complicated applications, such as working out the angles in polygons. It's important to understand the basics of linear pairs so you are
prepared for more advanced geometry.
As you can see, linear pairs of angles are a relatively easy concept with some interesting applications. Next time you're out and about, see if you can spot some linear pairs! And if you're
taking a geometry class, be on the lookout for how linear pairs can be useful in proofs.
Better Your Geometry Skills with Grade Potential
Geometry is fun and useful, especially if you are interested in architecture or construction.
However, if you're having difficulty understanding linear pairs of angles (or any concept in geometry), consider signing up for a tutoring session with Grade Potential. One of our experienced
instructors will assist you grasp the material and ace your next test. | {"url":"https://www.clevelandinhometutors.com/blog/linear-pair-of-angles-definition-axiom-examples","timestamp":"2024-11-05T13:11:35Z","content_type":"text/html","content_length":"78276","record_id":"<urn:uuid:545af61d-4704-48d6-b475-512aafaec528>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00605.warc.gz"} |
Jeongyoun Ahn
May 02, 2022
Abstract: Compositional data, such as human gut microbiomes, consist of non-negative variables of which only the values relative to the other variables are available. Analyzing compositional data
such as human gut microbiomes therefore needs a careful treatment of the geometry of the data. A common geometrical understanding of compositional data is via a regular simplex. The majority of
existing approaches rely on log-ratio or power transformations to overcome the innate simplicial geometry. In this work, based on the key observation that compositional data are projective in
nature, and on the intrinsic connection between projective and spherical geometry, we re-interpret the compositional domain as the quotient topology of a sphere modded out by a group action. This
re-interpretation allows us to understand the function space on compositional domains in terms of that on spheres and to use spherical harmonics theory along with reflection group actions for
constructing a compositional Reproducing Kernel Hilbert Space (RKHS). This construction of an RKHS for compositional data opens up wide research avenues for future methodological developments. In
particular, well-developed kernel embedding methods can now be introduced to compositional data analysis. The polynomial nature of the compositional RKHS has both theoretical and computational
benefits. The wide applicability of the
proposed theoretical framework is exemplified with nonparametric density estimation and kernel exponential family for compositional data. | {"url":"https://www.catalyzex.com/author/Jeongyoun%20Ahn","timestamp":"2024-11-04T12:06:59Z","content_type":"text/html","content_length":"46434","record_id":"<urn:uuid:9bca6240-c598-4360-b71e-0a21aa296a46>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00282.warc.gz"} |
Grandchildren 27181 - math word problem (27181)
Grandchildren 27181
Grandma baked buns, which she wanted to divide fairly among her grandchildren. If she gave everyone 5 buns, she would have 2 buns left. If she gave each grandchild 6 buns, she would be missing 3
buns. How many grandchildren does grandma have? How many buns did she bake?
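One way to set this up, writing g for the number of grandchildren and b for the number of buns: b = 5g + 2 and b = 6g − 3. Equating the two expressions gives 5g + 2 = 6g − 3, so g = 5 and
b = 5·5 + 2 = 27. Grandma therefore has 5 grandchildren and baked 27 buns.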
Correct answer:
You need to know the following knowledge to solve this word math problem:
Grade of the word problem:
Related math problems and questions: | {"url":"https://www.hackmath.net/en/math-problem/27181","timestamp":"2024-11-13T10:56:00Z","content_type":"text/html","content_length":"55162","record_id":"<urn:uuid:630fd885-08ed-4ef1-9987-c6abcfe096c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00582.warc.gz"} |
OpenStax College Physics, Chapter 28, Problem 61 (Problems & Exercises)
What is $\gamma$ for a proton having a mass energy of 938.3 MeV accelerated through an effective potential of 1.0 TV (teravolt) at Fermilab outside Chicago?
Question by
is licensed under
CC BY 4.0
Solution video
OpenStax College Physics, Chapter 28, Problem 61 (Problems & Exercises)
Video Transcript
This is College Physics Answers with Shaun Dychko. A proton is being accelerated through a potential difference of 1 teravolt which is 1 times 10 to the 12 volts. and it has a mass energy, which is
mc squared of 938.3 megaelectron volts and the question is, what is gamma? So we can figure out gamma knowing the kinetic energy and the rest energy of a proton. So kinetic energy is going to be the
energy that is given to the proton due to its acceleration through this potential difference and that kinetic energy will be its charge times the number of joules per coulomb, which is, you know, the
more base unit of voltage is joules per coulomb so it's energy per charge and then we are multiplying this by charge to get energy. So that's all going to become kinetic energy. So we have a
expression for kinetic energy in terms of gamma and then another one, in terms of voltage and so can we equate these two and we do that here on this line. So we have qV equals gamma minus 1 times the
rest energy. We'll divide both sides by mc squared and we get gamma minus 1, after we switch the sides around, equals qV over mc squared and then add 1 to both sides and you get gamma is qV over mc
squared plus 1 and now we substitute in the numbers. So that's the charge of a proton, which is the elementary charge—1.60 times 10 to the minus 19 coulombs— times 1 teravolt divided by the rest
energy in electron volts so it's times 10 to the 6 electron volts here; times 10 to the 6 is the prefix mega. And then we multiply by 1.6 times 10 to the minus 19 joules per electron volt in order to
make the units joules in the denominator here. And then add 1 and we get 1070 is the Lorentz factor. | {"url":"https://collegephysicsanswers.com/openstax-solutions/what-gamma-proton-having-mass-energy-9383-mev-accelerated-through-effective","timestamp":"2024-11-04T02:18:10Z","content_type":"text/html","content_length":"163671","record_id":"<urn:uuid:3e8e702a-6c20-404e-a824-f5e37fadd824>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00497.warc.gz"} |
The present study shows, in a large life span sample, that sleep does not using SPSS statistical package (version 25; IBM Corp., Armonk, NY, USA). we executed a linear regression of the generalized
linear in activity 1,
Linear Regression: Related Procedures whether an item is defective or not, use the Logistic Regression procedure, available in SPSS® Statistics If your data are not independent-for example, if you
observe the same person under several
used the SPSS program Clementine version 9. The program You must cite this article if you use its information in other circumstances. An example of citing this article is: Ronny Gunnarsson. Signs
test IBM SPSS Statistics Premium Grad Pack 25.0 Academic (Mac Download - 12 Month License). Retail: $16,300.00 SAVE For example, your class might require access to binary logistics or a regression
The model summary table shows some statistics for each model. The adjusted r-square column shows that it increases from 0.351 to 0.427 by adding a third predictor. Example: Simple Linear Regression
in SPSS. Suppose we have the following dataset that shows the number of hours studied and the exam score received by 20 students: Use the following steps to perform simple linear regression on this
dataset to quantify the relationship between hours studied and exam score: Step 1: Visualize the data. Example: Logistic Regression in SPSS Use the following steps to perform logistic regression in
SPSS for a dataset that shows whether or not college basketball players got drafted into the NBA (draft: 0 = no, 1 = yes) based on their average points per game and division level. The Linear
Regression Analysis in SPSS This example is based on the FBI’s 2006 crime statistics.
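For readers who want to try the same idea outside SPSS, here is a rough sketch of the simple linear regression example in R, with made-up hours-studied and exam-score values (the page's
20-student dataset is not shown above):
# Illustrative data only; the original example's dataset is not reproduced here.
hours <- c(1, 2, 2, 3, 4, 4, 5, 6, 7, 8)
score <- c(52, 55, 60, 61, 65, 68, 70, 74, 80, 85)
fit <- lm(score ~ hours)   # simple linear regression: score on hours studied
summary(fit)               # slope = estimated change in score per extra hour studied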
SPSS is published by SPSS Inc. SPSS (Statistical Package for the Social Sciences). The seven steps below show you how to analyse your data using multiple regression in SPSS Statistics when none of the eight
Examples using the statistical .
Also compares results with bivariate correlation. In this guide, you will learn how to estimate a multiple regression model with interactions in SPSS using a practical example to illustrate the
process. Readers are provided links to the example dataset and encouraged to replicate this example.
See the full list at spss-tutorials.com
Example spss RQ question:. Each method is also accompanied by a worked out example, SPSS and SAS input, and an example of how to write up the results. EQS code is used for the Smoking. Risk factor.
Association. Relationship of interest. Example SPSS had the following output from a simple linear regression: Exercise:. classified by the activity field of “regression” – Swedish-English
dictionary. Example for determination of deterioration factors by using linear regression; example research question and null hypothesis, SPSS procedures, display and examples of SPSS output with
accompanying analysis and interpretations. The course is aimed at university students as well as practitioners, regardless of academic background. The purpose of the course is to give a practical
introduction to regression. Reading this will give you an understanding of Cox regression and how to use it. data sets.
This example uses in fact the ENTER method Simple linear regression in SPSS statstutor Community Project Common Applications: Regression is used to (a) look for significant relationships between
two The variable we are using to predict the other variable's value is called the independent variable (or sometimes, the predictor variable). For example, you could Example of Very Simple Path
Analysis via Regression (with correlation matrix input) One of the nice things about SPSS is that it will allow you to start with a 8 Jan 2015 Here is an example regression command with several
optional parameters. REGRESSION.
by M. A. Randa · 2004 · Cited by 203 — For example, the abundances of Vibrio cholerae and Vibrio parahaemolyticus, two of the environmental factors, followed by a standardized multiple regression. All
the statistical analyses were performed using the SPSS software package. Regression models in medical sciences, 3 ECTS (online).
So let’s see how to complete an ordinal regression in SPSS, using our example of NC English levels as the outcome and looking at gender as an explanatory variable.. Data preparation. Before we get
started, a couple of quick notes on how the SPSS ordinal regression procedure works with the data, because it differs from logistic regression.First, for the dependent (outcome) variable, SPSS
For small samples the t-values are not valid and the Wald statistic should be used instead. Wald is basically t² which is Chi-Square distributed with df=1. However, SPSS gives the significance levels
of each coefficient. I demonstrate how to perform a multiple regression in SPSS. This is the in-depth video series.
Example of Very Simple Path Analysis via Regression (with correlation matrix input) Using data from Pedhazur (1997) Certainly the three most important sets of decisions leading to a path analysis
are: 1. Which causal variables to include in the model 2. How to order the causal chain of those variables 3.
Delivery in 7-10 working days. Buy Regression Analysis by Example by Samprit Chatterjee, Ali S. Hadi at Bokus.com. Discovering Statistics Using IBM SPSS Statistics. How to perform a simple linear regression analysis
using SPSS Statistics.
For small samples the t-values are not valid and the Wald statistic should be used instead. Wald is basically t² which is Chi-Square distributed with df=1. | {"url":"https://investeringarifgo.web.app/79300/33928.html","timestamp":"2024-11-10T06:26:33Z","content_type":"text/html","content_length":"10762","record_id":"<urn:uuid:e02315cb-3cd7-411b-b94c-c16e55453ff1>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00740.warc.gz"} |
Un-update Bayesian models to their prior-to-data state — unupdate
Un-update Bayesian models to their prior-to-data state
As posteriors are priors that have been updated after observing some data, the goal of this function is to un-update the posteriors to obtain models representing the priors. These models can then be
used to examine the prior predictive distribution, or to compare priors with posteriors.
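A minimal usage sketch (illustrative only, assuming rstanarm is installed; the model and data below are not from the package documentation):
library(rstanarm)
library(bayestestR)
fit <- stan_glm(mpg ~ wt, data = mtcars, refresh = 0)
prior_model <- unupdate(fit)        # same model, rolled back to its prior-only state
describe_posterior(prior_model)     # summarises the priors rather than the posterior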
unupdate(model, verbose = TRUE, ...)
# S3 method for class 'stanreg'
unupdate(model, verbose = TRUE, ...)
# S3 method for class 'brmsfit'
unupdate(model, verbose = TRUE, ...)
# S3 method for class 'brmsfit_multiple'
unupdate(model, verbose = TRUE, newdata = NULL, ...)
# S3 method for class 'blavaan'
unupdate(model, verbose = TRUE, ...) | {"url":"https://easystats.github.io/bayestestR/reference/unupdate.html","timestamp":"2024-11-03T17:03:24Z","content_type":"text/html","content_length":"14017","record_id":"<urn:uuid:6ec6a5da-2324-465b-a7f2-aa65280a444a>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00623.warc.gz"} |
Juan R.
What do you want to work on?
About Juan R.
Algebra, Algebra 2, Calculus, Calculus BC, Geometry, Midlevel (7-8) Math, Physics, Pre-Calculus, Trigonometry
Bachelors in Physics, General from Universidad del Valle
Math - Calculus
pretty sure he thought i was an idiot but calculus is awful
Math - Linear Algebra
very helpful and describes clearly
Math - Calculus
Thanks so much! | {"url":"https://testprepservices.princetonreview.com/academic-tutoring/tutor/juan%20r--8702267","timestamp":"2024-11-11T03:34:36Z","content_type":"application/xhtml+xml","content_length":"203208","record_id":"<urn:uuid:dd4cd2e8-bbc4-44a2-bcd2-dfc5067d8dea>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00158.warc.gz"} |
Portfolio Optimization | R-bloggers
[This article was first published on Trading and travelling and other things » R, and kindly contributed to R-bloggers].
Changing tracks, I want to now look at portfolio optimization. Although this is very different from developing trading strategies, it is useful to know how to construct minimum-variance portfolios
and the like, if only for curiosity’s sake. Also, just a -I hope unnecessary- note, portfolio optimization and parameter optimization (which I covered in the last post) are two completely different things.
Minimum-variance portfolio optimization has a lot of problems associated with it, but it makes for a good starting point as it is the most commonly discussed optimization technique in
classroom-finance. One of my biggest issues is with the measurement of risk via volatility. Security out-performance contributes as much to volatility -hence risk- as security under-performance,
which ideally shouldn’t be the case.
First, install the package tseries:
install.packages("tseries")
The function of interest is portfolio.optim(). I decided to write my own function to enter in a vector of tickers, start and end dates for the dataset, min and max weight constraints and
short-selling constraints. This function first processes the data and then passes it to portfolio.optim to determine the minimum variance portfolio for a given level of return. It then cycles through
increasingly higher returns to check how high the Sharpe ratio can go.
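Before the full function, here is a minimal, self-contained portfolio.optim() call on simulated returns, as a point of reference (illustrative only; the wrapper function below works with
downloaded price data for the chosen tickers instead):
# Minimal sketch of the core portfolio.optim() call, using simulated returns.
library(tseries)
set.seed(42)
rets <- matrix(rnorm(500 * 4, mean = 0.0005, sd = 0.01), ncol = 4,
               dimnames = list(NULL, c("A", "B", "C", "D")))
opt <- portfolio.optim(rets,
                       pm = mean(rets),        # target portfolio return
                       shorts = FALSE,         # no short selling
                       reslow = rep(0, 4),     # lower weight bounds
                       reshigh = rep(0.5, 4))  # upper weight bounds
opt$pw                                         # optimal weights
opt$ps                                         # portfolio standard deviation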
Here is the code with comments:
minVarPortfolio = function(tickers, start='2000-01-01', end=Sys.Date(), lowestWeight, highestWeight, short=FALSE) {
# Load up the package
library(tseries)
#Initialize all the variables we will be using. returnMatrix is
#initialized as a vector, with length equal to one of the input
#ticker vectors (dependent on the start and end dates).
#Sharpe is set to 0. The weights vector is set equal in
#length to the number of tickers. The portfolio is set to
#NULL. A 'constraint' variable is created to pass on the
#short parameter to the portfolio.optim function. And vectors
#are created with the low and high weight restrictions, which
#are then passed to the portfolio.optim function as well. ##
#This is a for-loop which cycles through the tickers, calculates
#their return, and stores the returns in a matrix, adding
#the return vector for each ticker to the matrix
for(i in 1:length(tickers)){
#This for-loop cycles through returns to test the portfolio.optim function
#for the highest Sharpe ratio.
for(j in 1:100){
#Stores the log of the return in retcalc
print(paste("Ret Calc:",retcalc))
#Tries to see if the specified return from retcalc can result
#in an efficient portfolio
#If the portfolio exists, it is compared against previous portfolios
#for different returns using the #Sharpe ratio. If it has the highest
#Sharpe ratio, it is stored and the old one is discarded.
print('Not Null')
print(paste('Sharpe:', sharpe))
This code works fine except for when the restrictions are too strict, the portfolio.optim function can’t find a minimum variance portfolio. This happens if the optimum portfolio has negative returns,
which my code doesn’t test for. For this reason, I wanted to try out other ways of finding the highest Sharpe portfolio. There are numerous tutorials out there on how to do this. Some of them are:
After I run my function, with the following tickers and constraints:
matrix = minVarPortfolio(c('NVDA', 'YHOO', 'GOOG', 'CAT', 'BNS', 'POT', 'STO', 'MBT', 'SNE'), lowestWeight=0, highestWeight=0.2, start='2000-01-01', end='2013-06-01')
This is the output I get:
[1] "Sharpe: 0.177751547083007"
tickers "NVDA" "YHOO" "GOOG"
weights "-1.58276161084957e-19" "2.02785605793095e-17" "0.2"
tickers "CAT" "BNS" "POT"
weights "0.104269676769825" "0.2" "0.2"
tickers "STO" "MBT"
weights "0.189985091184918" "0.105745232045257"
tickers "SNE"
weights "-2.85654465380669e-17"
The ‘e-XX’ weights basically indicate a weighting of zero on that particular security (NVDA, YHOO and SNE above). In the next post I will look at how all this can be done using a package called
‘fPortfolio’. Happy trading! | {"url":"https://www.r-bloggers.com/2013/06/portfolio-optimization/","timestamp":"2024-11-09T23:57:09Z","content_type":"text/html","content_length":"95768","record_id":"<urn:uuid:7352d285-be56-4b94-99f3-aae0198f97c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00774.warc.gz"} |
Blood lead levels and math learning in first year of school: An association for concern
Lead is a well-known neurotoxicant that continues to affect children's cognition and behavior. With the aim to examine the associations of lead exposure with math performance in children at the
beginning of formal schooling, we conducted a cross-sectional study of first-grade students from 11 schools in Montevideo, Uruguay. Math abilities were assessed with tests from the Batería III
Woodcock-Muñoz (Calculation, Math Facts Fluency, Applied Problems, Math Calculation Skills and Broad Maths). Separate generalized linear models (GLM) tested the association of blood lead level (BLL)
and each math ability, adjusting for key covariates including age and sex, maternal education, household assets and Home Observation for Measurement of the Environment Inventory score. In a
complete-case sample of 252 first-grade students (age 67–105 months, 45% girls), mean ± SD blood lead level was 4.0 ± 2.2 μg/dL. Covariate-adjusted logistic models were used to examine the association
between childhood BLLs and the odds of low math performance. BLL was negatively associated with scores on the Calculation test (β (95% CI): −0.18 (−0.33, −0.03)), Math Calculation Skills (−1.26
(−2.26, −0.25)), and Broad Maths cluster scores (−0.88 (−1.55, −0.21)). Similarly, performance on the Calculation test, as well as cluster scores for Broad Maths and Math Calculation Skills differed
between children with BLLs <5 and ≥ 5 μg/dL (p < 0.01), being lower in children with higher BLLs. Finally, considering the likelihood of low test performance, each 1 μg/dL higher B–Pb was related to
27% higher likelihood for Maths Facts Fluency, 30% for Broad Math and Math Calculation Skills, and 31% for Calculation (p < 0.05). These results suggest that lead exposure is negatively associated
with several basic skills that are key to math learning. These findings further suggest that the cognitive deficits related to lead exposure impact student achievement at very early stages of formal schooling.
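As a rough illustration of the modelling approach described above (simulated data and illustrative variable names, not the study's actual dataset or code):
# Simulated sketch of the covariate-adjusted GLM / logistic models described in the abstract.
set.seed(1)
n <- 252
dat <- data.frame(
  bll = rlnorm(n, log(4), 0.5),        # blood lead level, ug/dL (simulated)
  age = runif(n, 67, 105),             # age in months
  sex = rbinom(n, 1, 0.45),
  maternal_educ = sample(1:5, n, TRUE),
  home_score = rnorm(n, 50, 10)
)
dat$calc_score <- 100 - 0.2 * dat$bll + rnorm(n, 0, 5)
dat$low_perf   <- rbinom(n, 1, plogis(-1 + 0.25 * dat$bll))
m_lin <- glm(calc_score ~ bll + age + sex + maternal_educ + home_score,
             family = gaussian(), data = dat)    # test score model
m_log <- glm(low_perf ~ bll + age + sex + maternal_educ + home_score,
             family = binomial(), data = dat)    # odds of low performance
exp(coef(m_log)["bll"])                          # odds ratio per 1 ug/dL higher BLL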
Profundice en los temas de investigación de 'Blood lead levels and math learning in first year of school: An association for concern'. En conjunto forman una huella única. | {"url":"https://investigadores.ucu.edu.uy/es/publications/blood-lead-levels-and-math-learning-in-first-year-of-school-an-as","timestamp":"2024-11-02T22:10:14Z","content_type":"text/html","content_length":"56686","record_id":"<urn:uuid:f8a0de84-88fa-41c4-b073-0d88831ffcf5>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00492.warc.gz"} |
EPSRC Reference: EP/J021784/1
Title: Non-homogeneous random walks
Principal Investigator: Wade, Professor AR
Other Investigators:
Researcher Co-Investigators:
Project Partners:
Department: Mathematical Sciences
Organisation: Durham, University of
Scheme: First Grant - Revised 2009
Starts: 31 March 2013 Ends: 30 June 2014 Value (£): 91,911
EPSRC Research Topic Classifications: Mathematical Analysis Statistics & Appl. Probability
EPSRC Industrial Sector Classifications: No relevance to Underpinning Sectors
Related Grants:
│Panel Date │Panel Name │Outcome │
Panel History: ├───────────┼───────────────────────────────────────────────────┼─────────┤
│04 Jul 2012│Mathematics Prioritisation Panel Meeting July 2012 │Announced│
Summary on Grant Application Form
Random walks are fundamental models in stochastic process theory that exhibit deep connections to important areas of pure and applied mathematics and enjoy broad applications across the sciences and
beyond. Generally, a random walk is a stochastic process describing the motion of a particle (or random walker) in space. The particle's trajectory is represented by a series of random jumps at
discrete instants in time. Fundamental questions for these models involve the long-time asymptotic behaviour of the walker.
Random walks have a rich history involving several disciplines. Classical one-dimensional random walks were first studied several hundred years ago as models for games of chance, such as the
so-called gambler's ruin problem. In his 1900 thesis, Louis Bachelier applied similar reasoning to his model of stock prices. Many-dimensional random walks were first studied at around the same time,
arising from work of pioneers of science in diverse applications such as acoustics (Lord Rayleigh's theory of sound developed from about 1880), biology (Karl Pearson's 1906 theory of random migration
of species), and statistical physics (Einstein's theory of Brownian motion developed during 1905-08). The mathematical importance of the random walk problem became clear after Polya's work in the
1920s, and over the last 60 years or so beautiful connections have emerged linking random walk theory to influential areas of mathematics such as harmonic analysis, potential theory, combinatorics,
and spectral theory. Random walk models have continued to find new and important applications in many highly active domains of modern science; specific recent developments include for example
modelling of microbe locomotion in microbiology, polymer conformation in molecular chemistry, and financial systems in economics.
Spatially homogeneous random walks, in which the probabilistic nature of the jumps is the same regardless of the present spatial location of the walker, are the subject of a substantial literature.
In many modelling applications, the classical assumption of spatial homogeneity is unrealistic: the behaviour of the random walker may depend on the present location in space. Applications thus
motivate the study of non-homogeneous random walks. Moreover, mathematical motivation arises naturally from the point of view of deepening our understanding, via rigorous mathematical proofs, of
fundamental research problems: concretely, non-homogeneous random walks are the natural setting in which to probe near-critical behaviour and obtain a finer understanding of phase transitions present
in the classical random walk models.
The proposed research is part of a broad research programme to analyse near critical stochastic systems. Non-homogeneous random walks can typically not be studied by the techniques generally used for
homogeneous random walks: new methods (and, just as importantly, new intuitions) are required. Naturally, the analysis of near-critical systems is more challenging and delicate than that for systems
that are far from criticality. The methodology is based on martingale ideas. The methods are robust and powerful, and it is to be expected that methods developed during the project will be applicable
to many other near-critical models, including those with applications across modern probability theory and beyond, to areas such as queueing theory, interacting particle systems, and random media.
Key Findings
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Potential use in non-academic contexts
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Description This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Date Materialised
Sectors submitted by the Researcher
This information can now be found on Gateway to Research (GtR) http://gtr.rcuk.ac.uk
Project URL:
Further Information:
Organisation Website: | {"url":"https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/J021784/1","timestamp":"2024-11-02T11:05:32Z","content_type":"application/xhtml+xml","content_length":"26143","record_id":"<urn:uuid:9ef43ef6-f6fe-402c-b857-6077444dc353>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00038.warc.gz"} |
Sammie Courington, Abilene, TX Texas currently in Brownwood, TX USA
Sammie Courington
Profile Updated: July 19, 2012
Residing In: Brownwood, TX USA
Spouse/Partner: Leslie Courington
Dee Dee, May God bless you with a very happy birthday.
Feb 25, 2022 at 2:08 PM
Posted on: Mar 04, 2021 at 6:22 AM
Dale, hope that you have a very happy birthday.
Oct 22, 2020 at 11:49 AM
Ann, hope you're having a very happy birthday.
Aug 15, 2020 at 6:50 AM
Joanna, hope that you have a very happy birthday.
Aug 08, 2019 at 6:52 PM
Governor, hope that you have a very happy birthday tomorrow. I will be driving the bus for the Tennis Team all day tomorrow and will not have an opportunity to wish you a happy birthday then,
Mar 04, 2019 at 12:32 PM
Dale, hope that you have a very happy birthday.
Nov 28, 2018 at 3:32 PM
Posted on: Nov 27, 2018 at 3:33 AM
Oct 22, 2018 at 7:46 PM
Ann, hope that you've been having a very happy birthday.
Aug 09, 2018 at 3:13 PM
Governor, hope that you are having a very happy birthday.
May 28, 2019 at 2:10 PM
Posted on: May 28, 2018 at 4:17 PM
Ann, hope that you are having a very happy birthday and a safe and enjoyable Memorial Day.
Apr 22, 2018 at 4:24 PM
Carolyn, hope that you've been having a very happy birthday.
Apr 22, 2018 at 4:23 PM
Jenny, hope that you've been having a very happy birthday.
Aug 08, 2017 at 7:42 PM
Governor, hope that you have a very happy birthday.
Jul 10, 2017 at 7:42 PM
Carolyn, hope that you are having a very happy birthday. | {"url":"https://www.ahstx1972.com/class_profile.cfm?member_id=3823451","timestamp":"2024-11-07T12:14:49Z","content_type":"application/xhtml+xml","content_length":"77652","record_id":"<urn:uuid:4f4869a2-bdcc-4b6f-b898-9bbda91b20e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00288.warc.gz"} |
The Cosmic Era P3 - Battle Positions
The ZAFT are attacking the Orb Union! There are stations, numbered from , that need to be defended. For it to be secure, the Orb Union needs to have at least troops at each station. Unfortunately,
due to the radar-jamming effects of the Neutron Jammer, the Orb Union cannot order their troops to move between stations. The Orb Union will send waves of troops, each of which sends troops to each
of the stations . All stations start with troops.
The Orb Union wants you to help them find the number of stations that are not secure.
Input Specification
The first line will contain the integer , the number of stations.
The second line will contain the integer , the minimum number of troops required to defend a station.
The third line will contain the integer , the number of waves of troops.
The next lines will contain 3 space-separated integers. These integers will be in the order , , .
Output Specification
Output the total number of stations that have less than troops.
Sample Input
Sample Output
Explanation for Sample Output
Station 1 has 1 troop, station 2 has 3 troops, station 3 has 5 troops and station 4 has 0 troops. Station 4 is the only station with less than 1 troop, so the output is 1.
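One standard way to apply all the waves efficiently (a sketch, not an official editorial; it assumes each wave's three integers are l, r, t, meaning "add t troops to every station from l to r")
is a difference array: add t at index l, subtract t just past index r, then take a single prefix sum at the end.
# Difference-array sketch in R (1-indexed stations).
count_insecure <- function(n, k, waves) {
  d <- numeric(n + 1)
  for (w in waves) {                    # each wave is c(l, r, t)
    d[w[1]] <- d[w[1]] + w[3]
    d[w[2] + 1] <- d[w[2] + 1] - w[3]
  }
  troops <- cumsum(d)[1:n]              # one pass recovers every station's total
  sum(troops < k)                       # stations that are not secure
}
# Made-up input: 4 stations, at least 1 troop needed, two waves.
count_insecure(4, 1, list(c(1, 2, 1), c(2, 3, 2)))   # 1 (only the last station is insecure)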
• Is there a better than linear time way to sum the values in a sub-array of an array?
□ the problem type is data structures, not implementation. there's probably a better way to solve it. also, look at the time restrictions lol | {"url":"https://dmoj.ca/problem/seed3","timestamp":"2024-11-12T01:04:35Z","content_type":"text/html","content_length":"33698","record_id":"<urn:uuid:44ecad3b-2f49-482b-a5b9-86f189506a85>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00843.warc.gz"} |
James bond is looking at the top of a tower, at a distance of 133 m. He is 1.6 m. tall and has to look about 25 degrees above eye level. How do you find the height of the tower? | Socratic
James bond is looking at the top of a tower, at a distance of 133 m. He is 1.6 m. tall and has to look about 25 degrees above eye level. How do you find the height of the tower?
1 Answer
tan ${25}^{\circ}$ = h/133 $\implies$ h = 133 tan ${25}^{\circ}$ m $\implies$ Height of
the tower = [ 1.6 + 133 tan ${25}^{\circ}$ ] m = (1.6 + 133 × 0.4663) m
= 63.6189 m.
We assume h is the height of the tower above James Bond's eye level.
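A quick numeric check in R:
1.6 + 133 * tan(25 * pi / 180)   # 63.6189 m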
1599 views around the world | {"url":"https://socratic.org/questions/james-bond-is-has-to-look-about-25-degrees-above-eye-level-james-bond-is-1-6-met","timestamp":"2024-11-12T00:54:55Z","content_type":"text/html","content_length":"33437","record_id":"<urn:uuid:028fb83b-59a4-4cb7-9c4a-63c1e4aaa572>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00058.warc.gz"} |
Bilinear Interpolation Calculator
Bilinear Interpolation Calculator: Bilinear interpolation is a method used to estimate values within a two-dimensional grid of known values. By performing linear interpolation first in one direction
and then in the perpendicular direction, this technique is useful for tasks like image scaling and geographic data analysis, where accurate estimation within a grid is needed.
The formula for bilinear interpolation is:
f(x, y) = (1 - x) * (1 - y) * Q11 + x * (1 - y) * Q21 + (1 - x) * y * Q12 + x * y * Q22
• Q11, Q12, Q21, and Q22 are the known values at the four corners of the grid.
• x and y are the relative distances from the grid points in the x and y directions.
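A small sketch of the same formula in R (assuming x and y are already the relative distances within the cell, between 0 and 1, as described above):
# Bilinear interpolation on a unit cell: Q11 = f(0,0), Q21 = f(1,0),
# Q12 = f(0,1), Q22 = f(1,1); x and y are relative distances in [0, 1].
bilinear <- function(x, y, Q11, Q21, Q12, Q22) {
  (1 - x) * (1 - y) * Q11 + x * (1 - y) * Q21 +
  (1 - x) * y * Q12 + x * y * Q22
}
bilinear(0.5, 0.5, Q11 = 10, Q21 = 20, Q12 = 30, Q22 = 40)   # 25, the cell-centre estimate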
How to Use the Calculator
To use this bilinear interpolation calculator, input the x and y coordinates where you want to estimate the value, and provide the known values at the four corners of the surrounding grid points.
Click 'Calculate' to obtain the interpolated value. Use 'Clear' to reset the form and start a new calculation.
Corner Points Coordinates
Corner Points Values
Interpolated Point Coordinates
Frequently Asked Questions
What is bilinear interpolation?
Bilinear interpolation is a method to estimate unknown values within a grid using linear interpolation in two dimensions. It’s widely used in computer graphics and data analysis to interpolate values
between known data points, offering a smooth approximation for intermediate values.
How is bilinear interpolation used in image processing?
In image processing, bilinear interpolation is employed to resize images by estimating pixel values at non-integer coordinates. It improves image quality by providing smoother transitions compared to
simpler methods like nearest-neighbor interpolation, making resized images look more natural.
What is the difference between bilinear and bicubic interpolation?
Bilinear interpolation uses four neighboring pixels for estimation, whereas bicubic interpolation considers sixteen pixels. Bicubic interpolation provides more accurate and smoother results but
requires more computational resources, making it suitable for higher-quality image resizing.
Can bilinear interpolation be used for non-uniform grids?
Bilinear interpolation assumes a uniform grid, making it less suitable for non-uniform grids. For irregular grids, techniques such as spline interpolation or other advanced methods should be used to
accurately estimate values based on non-uniformly spaced data points.
What is the importance of the formula in bilinear interpolation?
The formula is crucial as it defines how to weight the known values at the corners of a grid to estimate an unknown value. It ensures that interpolation is linear in both directions, allowing for
accurate estimation based on surrounding known data.
Is bilinear interpolation suitable for all types of data?
Bilinear interpolation is effective for data that changes linearly within a grid. However, it may not be the best choice for data with non-linear trends or significant discontinuities. For such data,
more complex interpolation methods or modeling approaches might be required.
How does the x and y input affect the result?
The x and y inputs represent the relative position within the grid cell. They determine how the surrounding known values are weighted to calculate the estimated value. Accurate input of x and y
ensures that the result reflects the position correctly within the grid.
Can this calculator handle negative values?
Yes, the calculator can process negative values for both coordinates and known values. Ensure that values are entered correctly, as the bilinear interpolation formula will correctly handle and
compute results based on any numerical input within the specified range.
What are some common applications of bilinear interpolation?
Bilinear interpolation is commonly used in image resizing, geographic data analysis, and various scientific computations. It is particularly useful in scenarios where you need to estimate values
within a regular grid, providing a practical and straightforward solution for many applications.
How accurate is bilinear interpolation?
Bilinear interpolation provides a good approximation for many applications, especially when the data changes linearly. While it’s less accurate than more complex methods like bicubic interpolation,
it balances simplicity and performance, making it suitable for a wide range of practical uses. | {"url":"https://calculatordna.com/bilinear-interpolation-calculator/","timestamp":"2024-11-06T18:02:16Z","content_type":"text/html","content_length":"89322","record_id":"<urn:uuid:5613f23f-f49f-41ae-af5f-02029d8914d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00726.warc.gz"} |
Shell-Type Structural Elements (3D Only)
The shell-type structural elements include shell elements, geogrid elements and liner elements. The mechanical behavior of these elements can be divided into the structural response of the shell
material itself, and the way the element interacts with the grid. The structural response of the shell material is common to all shell-type elements, and is described in this section. Specific
behaviors that differ for each element type are described in the section for that type.
Like all structural elements (and unlike zones), individual shell-type elements are identified by their component-id numbers. Groups of shell-type elements are identified by id numbers. Individual
structural nodes and links are also identified by component-id numbers. Nodes and links can also be selected by the id number of the elements they are attached to.
Mechanical Behavior
Each shell-type structural element (shell, geogrid, or liner) is defined by its geometric and material properties. A shell-type element is assumed to be a triangle of uniform thickness lying between
three nodal points. An arbitrarily curved structural shell can be modeled as a faceted surface composed of a collection of shell-type elements. Each shell-type element behaves either as an elastic
material with no failure limit or as a plastic material. One can introduce a plastic-hinge line (across which a discontinuity in rotation may develop) along the edges between shell-type elements,
using the same double-node procedure as applied to beams (see Plastic Hinge Formation (with shell elements)). Each shell-type element provides a different means of interacting with the grid (see
Shell Structural Elements, Geogrid Structural Elements, and Liner Structural Elements). The structural response of the shell is controlled by the constitutive model and the type of finite element.
There are five finite elements available: 2 membrane elements, 1 plate-bending element and 2 shell elements. The general properties of these finite elements are described in Shell Finite Elements.
Because these are all thin-shell finite elements, shell-type elements are suitable for modeling thin-shell structures in which the displacements caused by transverse-shearing deformations can be
neglected. Thick-shell structures should be modeled with zones.
There are three coordinate systems associated with each shell-type element. A material coordinate system is used to specify orthotropic and anisotropic elastic material properties, and a surface
coordinate system (providing a continuous description of the shell mid-surface spanning adjacent shell-type elements) is used to recover stresses — see Stresses in Shells. The element coordinate
system is defined by the locations of its three nodal points (labeled 1, 2 and 3 in Figure 1) such that
1. the element lies in the \(xy\)-plane,
2. the \(x\)-axis is directed from node-1 to node-2, and
3. the \(z\)-axis is normal to the element plane and positive on the “outside” of the shell surface. (The two sides of each element are designated as outside and inside.)
The element coordinate system cannot be modified.
Response Quantities
Stress quantities, which include stress resultants and stresses acting in the shell, can be recovered for all shell-type elements (see Stress Recovery Procedure). The stress resultants are expressed
in a surface coordinate system that provides a continuous description of the shell mid-surface spanning adjacent shell-type elements. The stresses for an elastic material are expressed in the global
coordinate system, while the stresses for a plastic material are expressed in the surface coordinate system. The stress quantities can be accessed via FISH, and
1. listed with the structure shell list resultants, structure shell list stress, structure shell list stress-bounds, structure shell list stress-principal, structure shell list plastic-state and
structure shell list plastic-stress commands, or the equivalent for geogrid or liner elements,
2. monitored with the structure shell history (or liner/geogrid) command, and
3. plotted with the Shell, Geogrid, or Liner plot items.
The surface coordinate system is established, and the stress resultants are recovered, with the structure shell recover (or liner/geogrid) command. This information is stored with the model and is
accessible via FISH. The field variations are displayed as contours using the {Structural Element:Shell} (or liner/geogrid) plot item. If the {Use Engine Data} box is checked, then the recovered
information (which is stored within the computational model — or engine) is displayed. If the {Use Engine Data} box is not checked, then the recovery process is triggered on the fly by the plot item,
and in particular, the surface system is regenerated by projecting a surface-X vector (specified in the {SurfX} box) on the shell-type elements in the plot. To ensure that the manually recovered data
is plotted, one should check the {Use Engine Data} box.
Shell Plasticity Plot Items
The stresses, plastic state and integration-point layout of shells with a plastic material model can be plotted with the , and plot items as follows.
1. The {Contour Stress (plastic-max/min)} item colors each element with a plastic material model based on the maximum or minimum stress within the element. The 2D plane-stress principal values (\(\
sigma_1\) and \(\sigma_2\)) are found for the stress at the centroid of each layer of integration points, and the largest and smallest of these values over all layers are taken as the maximum and
minimum stresses. The maximum and minimum stress within an element with an elastic material model can be displayed with the {Contour Stress (elastic-max/min)} plot item. See the structure shell
list stress-bounds (or liner/geogrid) command for a detailed description of how these stresses are computed.
2. The {Contour Stress (plastic)} item displays the specified component (\(\sigma_{xx}\), \(\sigma_{yy}\), or \(\sigma_{xy}\)) of the stress in the surface coordinate system at the centroid of the
integration point layer nearest to the specified depth (where depth equals depth factor times one-half thickness) of each element with a plastic material model. The surface coordinate system must
be set for all nodes used by the shells in the plot. The shells for which this is not the case have their stress component displayed as zero — in the current implementation a warning message is
displayed and the plot is deactivated. The surface coordinate system is set via the structure shell recover surface (or liner/geogrid) command.
3. The {Label Plastic Integration} item displays the integration-point layout (number of integration points through the thickness and over the element area) for each element with a plastic material
4. The {Label Plastic State} item displays the yield state (never yielded, tension-p, shear-p, etc.) for the specified integration point on the layer of integration points nearest to the specified
depth (where depth equals depth factor times one-half thickness) for each element with a plastic material model.
5. The {Label Plastic Yield} item shows the percentage of integration points that have yielded within each element with a plastic material model. The yield boundary is specified as a percentage
between zero and one hundred, and the elements are colored to denote those that have not yielded and those whose yield percentage is above and below the yield boundary.
Shell-Type Properties
The properties of the shell material consist of constitutive model properties and the following three additional properties.
1. density, mass density, \(\rho\) [M/L^3] (needed if dynamic mode or gravity is active)
2. thermal-expansion, thermal-expansion coefficient, \(\alpha_t\) [1/T]. Used with the thermal option to allow shell-type elements to experience strains arising from thermal expansion (as explained
3. thickness, shell thickness, \(t\) [L]
Shell Elastic Constitutive Models
The elastic constitutive models update the internal element forces by multiplying nodal displacement increments by an element stiffness matrix. The stiffness matrix is expressed in closed form, and
thus, it is not necessary to perform numerical integration. The shell-type elements model general shell behavior as a superposition of membrane and bending actions via the five 3-noded triangular
finite elements described in Shell Finite Elements. The material properties of the three shell elastic constitutive models are expressed via the membrane and bending material-rigidity matrices
(1)\[\begin{split}\begin{split} \big[ {\mathbf D}_m \big] &= \int_{-{t_m/2}}^{+{t_m/2}} \big[ {\mathbf E}_m \big]\,dz = t_m\,\big[ {\mathbf E}_m \big] \\[12pt] \big[ {\mathbf D}_b \big] &= \int_{-
{t_b/2}}^{+{t_b/2}} \big[ {\mathbf E}_b]\,z^2dz = {{t_b^3} \over {12}} \big[ {\mathbf E}_b \big] \end{split}\end{split}\]
respectively, where \(t_m\) and \(t_b\) are the shell thicknesses used for membrane and bending actions, respectively, and \(\big[ {\mathbf{E}_m} \big]\) and \(\big[ {\mathbf{E}_b} \big]\) are
material-stiffness matrices that relate stresses to strains via the constitutive relations
(2)\[\begin{split}\begin{split} \begin{Bmatrix} \boldsymbol\sigma_m \end{Bmatrix} &= {\begin{Bmatrix} \sigma_x \\ \sigma_y \\ \tau_{xy} \end{Bmatrix}}_m = \begin{bmatrix} {\bf E}_m \end{bmatrix} \
begin{Bmatrix} \boldsymbol\varepsilon \end{Bmatrix} = \begin{bmatrix} c_{11}^m & c_{12}^m & c_{13}^m \\ & c_{22}^m & c_{23}^m \\ sym. & & c_{33}^m \end{bmatrix} \begin{Bmatrix} \varepsilon_x \\ \
varepsilon_y \\ \gamma_{xy} \end{Bmatrix} \\[12pt] \begin{Bmatrix} \boldsymbol\sigma_b \end{Bmatrix} &= {\begin{Bmatrix} \sigma_x \\ \sigma_y \\ \tau_{xy} \end{Bmatrix}}_b = \begin{bmatrix} {\mathbf
E}_b \end{bmatrix} \begin{Bmatrix} \boldsymbol\varepsilon \end{Bmatrix} = \begin{bmatrix} c_{11}^b & c_{12}^b & c_{13}^b \\ & c_{22}^b & c_{23}^b \\ sym. & & c_{33}^b \end{bmatrix} \begin{Bmatrix} \
varepsilon_x \\ \varepsilon_y \\ \gamma_{xy} \end{Bmatrix} \end{split}\end{split}\]
The material-rigidity matrices are used to form the finite element stiffness matrices (\(\big[ {\mathbf{D}_m} \big]\) is used by the CST and CST hybrid finite elements, and \(\big[ {\mathbf{D}_b} \
big]\) is used by the DKT finite element) and to recover stress resultants. The stresses are obtained from the stress resultants by
(3)\[\begin{split}\begin{gather} \begin{Bmatrix} \boldsymbol\sigma_m \end{Bmatrix} = \frac{1}{t_m} \begin{Bmatrix} N_x \\ N_y \\ N_{xy} \end{Bmatrix}, \quad \sigma_z = 0 \\[12pt] \begin{Bmatrix} \
boldsymbol\sigma_b \end{Bmatrix} = \frac{12}{t_b^3} \begin{Bmatrix} M_x \\ M_y \\ M_{xy} \end{Bmatrix} z, \quad \begin{Bmatrix} \boldsymbol\sigma_s \end{Bmatrix} = \begin{Bmatrix} \tau_{zx} \\ \tau_
{zy} \end{Bmatrix} = \frac{3}{2t_m} \biggl( 1 - (2z/t_m)^2 \biggr) \begin{Bmatrix} Q_x \\ Q_y \end{Bmatrix} \end{gather}\end{split}\]
In the following discussion, we use \(\big[ {\mathbf{E}} \big]\) when referring to relations that apply to both \(\big[ {\mathbf{E}_b} \big]\) and \(\big[ {\mathbf{E}_m} \big]\). For isotropic
elastic material properties, \(E\), \(v\) and \(t\) must be specified.[1] For orthotropic and anisotropic elastic material properties, the material coordinate system, \(\big[ {\mathbf{E}_m} \big]\),
\(\big[ {\mathbf{E}_b} \big]\) and \(t\) must be specified, and \(t_m\) and \(t_b\) may also be specified. For most cases, \(\big[ {\mathbf{E}_m} \big]\) \(=\) \(\big[ {\mathbf{E}_b} \big]\) \(=\) \
(\big[ {\mathbf{E}} \big]\) and \(t_m = t_b = t\); however, when modeling equivalent or transformed orthotropic shells (with elastic properties equal to the average properties of components of the
original shell) and controlling the membrane and bending rigidities independently, it may be necessary to set \(\big[ {\mathbf{E}_m} \big]\) \(\neq\) \(\big[ {\mathbf{E}_b} \big]\) and \(t_m \neq t_b
Isotropic Elastic Material Properties
For the case of an isotropic elastic shell, \(\big[ {\mathbf{E}_m} \big]\) \(=\) \(\big[ {\mathbf{E}_b} \big]\) \(=\) \(\big[ {\mathbf{E}} \big]\), and the six constants of \(\big[ {\mathbf{E}} \big]
\) are related to the two elastic constants of Young’s modulus, \(E\), and Poisson’s ratio, \(\nu\), by
(4)\[\begin{split}\begin{split} c_{11} &= c_{22} = {E \over 1 - \nu^2} \\ c_{33} &= {E \over 2(1 + \nu)} \\ c_{12} &= \nu\left( {E \over 1 - \nu^2} \right) \\ c_{13} &= c_{23} = 0 \end{split}\end
The \(c_{ij}\) are invariant constants and retain the same values in any orthogonal coordinate system.
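As a quick numerical illustration (not part of the original documentation), the isotropic stiffness constants of Equation (4) and the rigidities of Equation (1) can be assembled as follows:
# Isotropic plane-stress stiffness [E] (Eq. 4) and the membrane/bending
# rigidities [Dm] = t*[E], [Db] = (t^3/12)*[E] (Eq. 1).
shell_rigidities <- function(E, nu, t) {
  Em <- matrix(c(E/(1 - nu^2),    nu*E/(1 - nu^2), 0,
                 nu*E/(1 - nu^2), E/(1 - nu^2),    0,
                 0,               0,               E/(2*(1 + nu))),
               nrow = 3, byrow = TRUE)
  list(Dm = t * Em, Db = t^3/12 * Em)
}
shell_rigidities(E = 200e9, nu = 0.3, t = 0.1)   # e.g. a 0.1 m thick steel-like shell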
Orthotropic and Anisotropic Elastic Material Properties
Under the assumptions of linear elasticity, the general constitutive matrix of material stiffness coefficients is symmetric and can be expressed in terms of 21 independent elastic constants. An
orthotropic material has three preferred directions of elastic symmetry, and its material-stiffness matrix can be expressed in terms of nine independent elastic constants:
(5)\[\begin{split}\begin{Bmatrix} \sigma_{x'} \\ \sigma_{y'} \\ \sigma_{z'} \\ \tau_{x'y'} \\ \tau_{x'z'} \\ \tau_{y'z'} \end{Bmatrix} = \begin{bmatrix} C'_{11} & C'_{12} & C'_{13} & 0 & 0 & 0 \\ &
C'_{22} & C'_{23} & 0 & 0 & 0 \\ & & C'_{33} & 0 & 0 & 0 \\ & & & C'_{44} & 0 & 0 \\ & sym. & & & C'_{55} & 0 \\ & & & & 0 & C'_{66} \end{bmatrix} \begin{Bmatrix} \varepsilon_{x'} \\ \varepsilon_{y'}
\\ \varepsilon_{z'} \\ \gamma_{x'y'} \\ \gamma_{x'z'} \\ \gamma_{y'z'} \end{Bmatrix}\end{split}\]
in which \(x'y'z'\) are the principal directions of orthotropy. This relation describes a three-dimensional orthotropic continuum. The relation can be restricted to describe an orthotropic shell by
enforcing the Kirchhoff thin-plate conditions of plane stress \((\sigma_{z'} = 0)\) and no transverse-shear strain \((\gamma_{x'z'} = \gamma_{y'z'} = 0)\), so that the material-stiffness matrix can
be expressed in terms of four independent elastic constants:
(6)\[\begin{split}\begin{Bmatrix} \boldsymbol\sigma' \end{Bmatrix} = \begin{Bmatrix} \sigma_{x'} \\ \sigma_{y'} \\ \tau_{x'y'} \end{Bmatrix} = \begin{bmatrix} \mathbf{E}' \end{bmatrix} \begin{Bmatrix} \boldsymbol\varepsilon' \end{Bmatrix} = \begin{bmatrix} c'_{11} & c'_{12} & 0 \\ & c'_{22} & 0 \\ sym. & & c'_{33} \end{bmatrix} \begin{Bmatrix} \varepsilon_{x'} \\ \varepsilon_{y'} \\ \gamma_{x'y'} \end{Bmatrix} \\ \textrm{(orthotropic shell)}\end{split}\]
in which the shell mid-surface lies in the \(x' y'\)-plane. The material-stiffness matrix expressed in the principal directions (denoted here by \(\big[ {\mathbf{E}}' \big]\)) has four independent
constants that define an orthotropic shell.
The material-stiffness matrix of an anisotropic shell can be expressed in terms of six independent elastic constants:
(7)\[\begin{split}\begin{Bmatrix} \boldsymbol\sigma' \end{Bmatrix} = \begin{Bmatrix} \sigma_{x'} \\ \sigma_{y'} \\ \tau_{x'y'} \end{Bmatrix} = \begin{bmatrix} \mathbf{E}' \end{bmatrix} \begin{Bmatrix} \boldsymbol\varepsilon' \end{Bmatrix} = \begin{bmatrix} c'_{11} & c'_{12} & c'_{13} \\ & c'_{22} & c'_{23} \\ sym. & & c'_{33} \end{bmatrix} \begin{Bmatrix} \varepsilon_{x'} \\ \varepsilon_{y'} \\ \gamma_{x'y'} \end{Bmatrix} \\ \textrm{(anisotropic shell)}\end{split}\]
in which the shell mid-surface lies in the \(x'y'\)-plane. The material-stiffness matrix expressed in the material directions (denoted here by \(\big[ {\mathbf{E}}' \big]\)) has six independent
constants that define an anisotropic shell.
Consider a shell-type element with its local coordinate system \(xyz\) rotated with respect to the \(x'\)-axis by an angle \(\beta\) as shown in Figure 2. The strain-transformation matrix, \(\big[ \mathbf{T}_\varepsilon \big]\), relates the strains in the two systems via
(8)\[\begin{Bmatrix} \boldsymbol\varepsilon' \end{Bmatrix} = \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix} \begin{Bmatrix} \boldsymbol\varepsilon \end{Bmatrix}\]
\(\big[ \mathbf{T}_\varepsilon \big]\) can be expressed in terms of \(\beta\) as
(9)\[\begin{split}\begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix} = \begin{bmatrix} c^2 & s^2 & cs \\ s^2 & c^2 & -cs \\ -2cs & 2cs & c^2 - s^2 \end{bmatrix},\quad \begin{matrix} c = \cos\beta \\ s = \sin\beta \end{matrix}\end{split}\]
The stresses in the two systems are related via
(10)\[\begin{Bmatrix} \boldsymbol\sigma \end{Bmatrix} = \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix}^\textrm{T} \begin{Bmatrix} \boldsymbol\sigma' \end{Bmatrix}\]
Substituting Eq. (6) or (7) and Eq. (8) into the above yields
(11)\[\begin{Bmatrix} \boldsymbol\sigma \end{Bmatrix} = \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix}^\textrm{T} \begin{bmatrix} \mathbf{E}' \end{bmatrix} \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix} \begin{Bmatrix} \boldsymbol\varepsilon \end{Bmatrix}\]
in which
(12)\[\begin{bmatrix} \mathbf{E} \end{bmatrix} = \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix}^\textrm{T} \begin{bmatrix} \mathbf{E}' \end{bmatrix} \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix}\]
We have thus, from Eqs. (12) and (1), the expressions
(13)\[\begin{split}\begin{split} \begin{bmatrix} \mathbf{D}_m \end{bmatrix} &= t_m \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix}^\textrm{T} \begin{bmatrix} {\mathbf{E}_m}' \end{bmatrix} \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix} \\[12pt]
\begin{bmatrix} \mathbf{D}_b \end{bmatrix} &= \frac{t_b^3}{12} \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix}^\textrm{T} \begin{bmatrix} {\mathbf{E}_b}' \end{bmatrix} \begin{bmatrix} \mathbf{T}_\varepsilon \end{bmatrix} \end{split}\end{split}\]
for the material-rigidity matrices of a shell in which the material directions are not aligned with the \(x\) and \(y\) axes. For an orthotropic shell, \(\big[ {\mathbf{E}}' \big]\) has two zero
terms, but these terms will not, in general, be zero for \(\big[ {\mathbf{E}} \big]\) when the principal directions of orthotropy are not aligned with the \(x\) and \(y\) axes.
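To make the rotation in Eqs. (9) and (12) concrete, the following NumPy sketch (again an illustration only, with invented orthotropic constants and an invented angle) builds \(\big[ \mathbf{T}_\varepsilon \big]\) for a given \(\beta\), forms \(\big[ \mathbf{E} \big] = \big[ \mathbf{T}_\varepsilon \big]^\textrm{T} \big[ \mathbf{E}' \big] \big[ \mathbf{T}_\varepsilon \big]\), and then the rigidities of Eq. (13); note the nonzero \(c_{13}\) and \(c_{23}\) terms that appear once \(\beta \neq 0\).

import numpy as np

def T_eps(beta):
    # strain-transformation matrix of Eq. (9)
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c*c,    s*s,   c*s],
                     [s*s,    c*c,  -c*s],
                     [-2*c*s, 2*c*s, c*c - s*s]])

# invented orthotropic plane-stress constants c'_11, c'_12, c'_22, c'_33 [Pa]
E_prime = np.array([[40.0e9,  3.0e9, 0.0],
                    [ 3.0e9, 10.0e9, 0.0],
                    [ 0.0,    0.0,   4.0e9]])

beta = np.radians(30.0)
T = T_eps(beta)
E_rot = T.T @ E_prime @ T                     # Eq. (12)
t_m, t_b = 0.02, 0.02
D_m = t_m * E_rot                             # Eq. (13), membrane rigidity
D_b = t_b**3 / 12.0 * E_rot                   # Eq. (13), bending rigidity
print(np.round(E_rot / 1e9, 3))               # nonzero c_13, c_23 terms appear for beta != 0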
Determination of Orthotropic Elastic Material Properties
The material-stiffness matrices \(\big[ {\mathbf{E}_m}' \big]\) and \(\big[ {\mathbf{E}_b}' \big]\) can be expressed in terms of the nine elastic constants in Eq. (5) via
(14)\[\begin{split}\begin{split} c'_{11} &= C'_{11} - {{C'_{13} C'_{13}} \over {C'_{33}}} \\ c'_{22} &= C'_{22} - {{C'_{23} C'_{23}} \over {C'_{33}}} \\ c'_{33} &= C'_{44} \\ c'_{12} &= C'_{12} -
{{C'_{13} C'_{23}} \over {C'_{33}}} \end{split}\end{split}\]
They can also be expressed in terms of the effective Poisson’s ratios \((\nu_{x'}, \nu_{y'})\), effective moduli \((E_{x'}\) and \(E_{y'})\) and shear modulus \((G)\) for orthotropic plates (Ugural
[1981], p. 141):
(15)\[\begin{split}\begin{split} c'_{11} &= {{E_{x'}} \over {1 - \nu_{x'} \nu_{y'}}} \\ c'_{22} &= {{E_{y'}} \over {1 - \nu_{x'} \nu_{y'}}} \\ c'_{33} &= G \\ c'_{12} &= {{E_{x'} \nu_{y'}} \over {1 -
\nu_{x'} \nu_{y'}}} = {{E_{y'} \nu_{x'}} \over {1 - \nu_{x'} \nu_{y'}}} \end{split}\end{split}\]
Ugural (1981) states that the orthotropic plate moduli and Poisson’s ratios are obtained by tension and shear tests, as in the case of isotropic materials.
When it is not possible to determine the orthotropic plate moduli and Poisson’s ratios experimentally, an equivalent or transformed orthotropic plate (with elastic properties equal to the average
properties of components of the original plate) can be used, for which the orthotropic membrane elastic constants are approximated with the relations in Eq. (15), and the orthotropic bending elastic
constants are approximated with the rigidities:
(16)\[\begin{split}\begin{split} {c_{11}^b}' &= {12 \over t_b^3} D_{x'} \\ {c_{22}^b}' &= {12 \over t_b^3} D_{y'} \\ {c_{33}^b}' &= {12 \over t_b^3} G_{x'y'} \\ {c_{12}^b}' &= {12 \over t_b^3} D_{x'y'} \\ \end{split}\end{split}\]
\(D_{x'}\), \(D_{y'}\), \(D_{x'y'}\) and \(G_{x'y'}\) represent the flexural rigidities and the torsional rigidity of an orthotropic plate, respectively. Rigidities for some commonly encountered
cases are given in Figure 3. These cases include a reinforced concrete slab with orthogonal steel bars, a plate reinforced by equidistant stiffeners, a plate reinforced by a set of equidistant ribs,
and a corrugated plate. A verification problem demonstrating how to compute the rigidities for a stiffened sheet is given here.
*Used with the permission of McGraw-Hill Publishing Company.
elastic Shell Model Properties
Use the following keywords with the structure shell property (or liner/geogrid) command to set these properties of the shell isotropic elastic constitutive model.
young f
Young’s modulus, \(E\)
poisson f
Poisson’s ratio, \(\nu\)
orthotropic Shell Model Properties
Use the following keywords with the structure shell property (or liner/geogrid) command to set these properties of the shell orthotropic elastic constitutive model.
anisotropic Shell Model Properties
Use the following keywords with the structure shell property (or liner/geogrid) command to set these properties of the shell anisotropic elastic constitutive model.
Shell Plastic Constitutive Models
The plastic constitutive models update the internal element forces by integrating the internal stress over the shell volume using a numerical integration scheme. Stiffness matrices are not formed.
There is a group of integration points distributed throughout the volume of each shell element as shown in Figure 4. The stress state satisfies the plane-stress condition such that the only non-zero
components are \(\sigma_{xx}\), \(\sigma_{yy}\) and \(\sigma_{xy}\). The total stress at each integration point is updated by the shell plastic constitutive model during each timestep. There are
three shell plastic constitutive models, each of which is a plane-stress version of the corresponding 3D model used by the zones. The constitutive models are von Mises, Mohr Coulomb, and Strain
Softening/Hardening Mohr Coulomb. The first model is suitable for modeling steel shells, and the remaining two models are suitable for modeling concrete shells. Plasticity is provided by the DKT, CST
and DKT-CST finite elements. The formulation of nonlinear structural elements that provide shell plasticity is provided in Potyondy (2022).
The integration-point layout is shown in Figure 4. The three-point triangular integration scheme with integration points at the mid points of element edges (Cowper, 1973) is used for integration over
the element area. Gauss quadrature is used for integration through the thickness; the integration points are offset from the element surface, and thus, the scheme does not directly account for
yielding on the surface as soon as it occurs.
Burgoyne and Crisfield (1990) tested the overall performance of numerical procedures for the integration of stresses through the thickness of plates and shells when there are discontinuities in the
stresses. They concluded that if integration is always required over the same range, then use Gauss quadrature, and use as high an order formula as possible, being aware that a law of diminishing
return applies once nine integration points are used. This statement applies to the integration through the thickness; therefore, these integration points are to be distributed through the thickness
to coincide with the Gauss abscissae for a given integration order. Their conclusion is supported by measuring the integration error for typical bending problems with yield representative of concrete
and steel (see Figure 5). Guidance for choosing the number of integration points through the thickness is given in this Figure.
The constitutive model (and its properties) and integration-point layout (number of integration points through the thickness and over the area) must be specified for each plastic element. The
constitutive model and integration-point layout are specified with the structure shell cmodel (or liner/geogrid) command. The stresses and plastic state are listed with the plastic-stress and
plastic-state keywords of the structure shell list (or liner/geogrid) command. Histories of the plastic stress are sampled with the ipstress{xx, yy, xy} keywords of the structure shell history (or
liner/geogrid) command. Contours of stress, extent of yielding and yielded state are provided by the Shell, Liner and Geogrid plot items (described here).
Burgoyne, C. J., and M. A. Crisfield. “Numerical Integration Strategy for Plates and Shells,” Int. J. Num. Meth. Engng., 29, 105–121 (1990).
Cowper, G. R. “Gaussian Quadrature Formulas for Triangles,” Int. J. Num. Meth. Engng., 7, 405–408 (1973).
Potyondy, D. “Nonlinear Structural Elements,” Itasca Consulting Group, Inc., Technical Memorandum 5-8121:22TM36 (September 9, 2022), Minneapolis, Minnesota (2022).
Ugural, A. C. Stresses in Plates and Shells, New York: McGraw-Hill Publishing Company, Inc. (1981).
| {"url":"https://docs.itascacg.com/itasca920/common/sel/doc/manual/sel_manual/shelltypes/shelltypes.html","timestamp":"2024-11-09T10:17:26Z","content_type":"application/xhtml+xml","content_length":"97136","record_id":"<urn:uuid:724d2d0a-de2c-4877-a25c-552166705b21>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00132.warc.gz"}
How not to reduce dimensionality for clustering - Felix Gravila
How not to reduce dimensionality for clustering
In my previous post, I followed up on my k-means tutorial by applying it to cluster MNIST. We’re used to reading digits, so the centroids made perfect sense to us since plotting them just looked like
a smudged number. The dataset even clustered pretty well since all digits are centred and look similar enough that decisions can directly be made on individual pixels. However, if we imagine those
784 pixels to be sensor readings, it would likely be really hard to make sense of them. Additionally, many situations would benefit from some nonlinear transformations to better capture the
relationships between features. What we want, therefore, is to reduce the original dimensionality, so clustering becomes more manageable.
In this post, we’ll try to pass the data through a neural network which outputs in three dimensions. We then use K-means directly as a loss function, trying to get it to cluster points close to their
assigned centroids and far from the others. This method is supposed to fail. However, I hope to use this as an entry point to some more interesting methods. Essentially, I also just thought I’d show
it off since I haven’t seen it presented before.
condensed representations of the digits moving in the feature space while training
Build the model
We do the preliminaries and create our model. I made a model with two CNN layers and MaxPooling feeding into a 3-unit dense output. I chose 3 units so that the embedding can be plotted in 3 dimensions; I also tried 10, and it didn't perform any better.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import random
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.patches as mpatches
import math
colours = ['#00FA9A','#FFFF00','#2F4F4F','#8B0000','#FF4500','#2E8B57','#6A5ACD','#FF00FF','#A9A9A9','#0000FF']
# we want to split into 10 clusters, one for each digit
clusters_n = 10
# load MNIST data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# scale between 0 and 1
X = tf.constant(x_train/255.0)
class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.cnn_1 = tf.keras.layers.Conv2D(filters=32, kernel_size=(3,3), activation="relu")
        self.mp_1 = tf.keras.layers.MaxPool2D(pool_size=2)
        self.cnn_2 = tf.keras.layers.Conv2D(filters=16, kernel_size=(5,5), activation="relu")
        self.mp_2 = tf.keras.layers.MaxPool2D(pool_size=2)
        self.flatten = tf.keras.layers.Flatten()
        self.dense_1 = tf.keras.layers.Dense(12, activation="relu")
        self.dense_2 = tf.keras.layers.Dense(3)

    def call(self, x):
        inner = self.cnn_1(x)
        inner = self.mp_1(inner)
        inner = self.cnn_2(inner)
        inner = self.mp_2(inner)
        inner = self.flatten(inner)
        inner = self.dense_1(inner)
        inner = self.dense_2(inner)
        return inner
We need to pass the centroids as parameters, both because we want to keep updating the same ones and because GradientTape doesn't like random initialisations anyway. The get_assignments function further below is just the first half of the centroid-update code; we'll use it to assign points to the trained centroids once we have the model outputs.
def update_centroids(orig_points, points_expanded, centroids):
    centroids_expanded = tf.expand_dims(centroids, 1)
    distances = tf.subtract(centroids_expanded, points_expanded)
    distances = tf.square(distances)
    distances = tf.reduce_sum(distances, 2)
    assignments = tf.argmin(distances, 0)
    means = []
    for c in range(clusters_n):
        eq_eq = tf.equal(assignments, c)
        where_eq = tf.where(eq_eq)
        ruc = tf.reshape(where_eq, [1,-1])
        ruc = tf.gather(orig_points, ruc)
        ruc = tf.reduce_mean(ruc, axis=[1])
        means.append(ruc)  # collect the new centroid for cluster c
    new_centroids = tf.concat(means, 0)
    return new_centroids, assignments, distances
def do_kmeans(centroids, y_pred):
    points_expanded = tf.expand_dims(y_pred, 0)
    old_centroids = centroids
    i = 0
    while True and i < 50:
        centroids, assignments, distances = update_centroids(y_pred, points_expanded, centroids)
        # stop once the centroids no longer move (or after 50 iterations)
        if tf.reduce_all(tf.equal(old_centroids, centroids)):
            break
        old_centroids = centroids
        i += 1
    return centroids, assignments, distances
# classify points using trained centroids
# same code as for update_centroids, but only returns the argmin
def get_assignments(centroids, y_pred):
    points_expanded = tf.expand_dims(y_pred, 0)
    centroids_expanded = tf.expand_dims(centroids, 1)
    distances = tf.subtract(centroids_expanded, points_expanded)
    distances = tf.square(distances)
    distances = tf.reduce_sum(distances, 2)
    assignments = tf.argmin(distances, 0)
    return assignments
The training loop is fairly standard. I batch the data so it fits in the GPU. I’m counting on the fact that 5000 digits are varied enough to not be detrimental to training, but I might be wrong.
EPOCHS = 1000
BATCH_SIZE = 5000
LR = 0.0001
# patience is the number of epochs until it stops training if no loss improvements
patience = 10
waited = 0
# initialise our model
model = MyModel()
# use adam as optimiser
adam = tf.keras.optimizers.Adam(learning_rate=LR)
# initialise a large initial loss on which to improve
prev_best_loss = 1e10
# variable to save prev best weights in
# we want to load it back after patience runs out
prev_weights = None
# define centroids which we'll keep updating
g_centroids = None
for e in range(EPOCHS):
    # shuffle data each time
    shuffled_data = tf.random.shuffle(X)
    # we'll batch it so it fits in GPU memory
    batched_data = tf.reshape(shuffled_data, (-1, BATCH_SIZE, 28, 28, 1))
    print(f"Epoch {e+1}", end="")
    # variable to keep track of total epoch loss
    tot_epoch_loss = 0
    for idx, batch in enumerate(batched_data):
        with tf.GradientTape() as g:
            # predict
            output = model(batch)
            # take first clusters_n outputs as initialisation for centroids
            if g_centroids is None:
                g_centroids = output[:clusters_n]
            # now we do k-means on the output of the model
            g_centroids, assignments, distances = do_kmeans(g_centroids, output)
            # compute the sum of minimum distances to a centroid
            # in other words, the sum of distances between all points and their assigned centroid
            # we want to minimise this to get them as tight as possible
            dis_to_c = tf.reduce_sum(tf.reduce_min(distances, 0))
            # we then want to also maximise the distance from all points to all other centroids
            # otherwise the model would clump everything together
            # we can get this by summing all distances and then subtracting the smallest ones
            dis_to_all = tf.reduce_sum(distances)
            dis_to_others = dis_to_all - dis_to_c
            loss = dis_to_c/float(BATCH_SIZE) + 1/(dis_to_others/float(BATCH_SIZE))
        # I had the loss go to NaN before
        # pretty sure I fixed it but I'm not risking it
        if math.isnan(loss):
            print("N", end="")
            continue  # skip this batch if the loss is NaN
        tot_epoch_loss += float(loss)  # keep the running loss as a plain float
        gradients = g.gradient(loss, model.trainable_variables)
        print(f".", end="")
        adam.apply_gradients(zip(gradients, model.variables))
    print(f"Epoch loss {tot_epoch_loss:.2f} ", end="")
    # if best loss save the weights and reset patience
    if tot_epoch_loss < prev_best_loss:
        prev_weights = model.get_weights()
        prev_best_loss = tot_epoch_loss
        waited = 0
    waited += 1
    print(f"Patience {waited}/{patience}")
    # if no more patience load best weights and quit
    if waited >= patience:
        model.set_weights(prev_weights)  # restore the best weights seen so far
        break
Training takes a while, depending on your hardware.
We can make a function that plots the numbers in the new feature space. We run a number of digits through the model, scatter the resulting points coloured by their true label, then plot the centroids.
def plot_dataset(X, y, model, centroids):
    # Perform prediction on the dataset to get the intermediate representation
    predict_batch_size = 10000
    predict_count = 10000
    m = []
    for i in range(0, predict_count, predict_batch_size):
        m.append(model(tf.reshape(X[i:i+predict_batch_size], (-1, 28, 28, 1))))
    res = tf.concat(m, 0)
    # scatter the points in the embedded feature space and the centroids
    fig = plt.figure(figsize=(20,20))
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(res[:,0], res[:,1], res[:,2], zorder=1, color=[colours[y] for y in y_train[:predict_count]])
    ax.plot(centroids[:, 0], centroids[:, 1], centroids[:, 2], "kX", markersize=20, zorder=1000)
    # The following just handles printing the colours in the legend
    mpc = []
    for i in range(10):
        mpatch = mpatches.Patch(color=colours[i], label=i)
        mpc.append(mpatch)
    ax.legend(handles=mpc)
    # return the embeddings so we can reuse them for the purity calculation below
    return res

res = plot_dataset(X, y_train, model, g_centroids)
It seems it did something and managed to somehow cluster a few of the digits together. Finally, let’s calculate purity.
def calc_purity(labels, assignments):
    d = np.zeros((clusters_n, clusters_n), dtype="int32")
    for l, a in zip(labels, assignments):
        d[a][l] += 1
    purity_per_class = d.max(1)/d.sum(1)
    # some are NaN
    purity_per_class = purity_per_class[~np.isnan(purity_per_class)]
    return np.mean(purity_per_class)
assignments = get_assignments(g_centroids, res)
calc_purity(y_train, assignments)
It’s much worse than last time. Oh well. I imagine it’s part of the reason why nobody uses this method.
In any case, I hope this at least showed an uncommon way to reduce dimensionality for clustering by using k-means directly as a loss function. Stick around, since I hope to be able to illustrate some
better ways of doing this in the following few weeks. As usual, please let me know if you find any bugs in my code or suggestions for improvements. | {"url":"https://gravila.net/how-not-to-reduce-dimensionality-for-clustering/","timestamp":"2024-11-06T14:38:37Z","content_type":"text/html","content_length":"128103","record_id":"<urn:uuid:e0909e66-ce14-4245-809c-675f59855c20>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00025.warc.gz"} |
Designing Digital Redstone Circuits Automatically in Minecraft with Integer Programming: Preliminary Thoughts and Tests
Redstone has been a core element in the game Minecraft for quite some years. It is presumably the most untrivial one as well: while anyone could master nearly all Minecraft mechanics through
experiences, it takes not only experience but also ingenuity to design a good redstone circuit. Few of us are bold enough to claim “I master redstone”, even after playing Minecraft for a decade.
So here comes the question: Can the design of redstone circuits, the core of Minecraft automations, be automated? and If so, how? (appreciate how meta this is :)
Theoretically, the answer is “yes…but”. Minecraft has a finite world, and each position has a finite number of possible blockstates. We can write a program to enumerate all possible placements of
blocks until we find some placement corresponding to the desired redstone circuit. However, this needs exponential time and we may need to wait for a century before it could give us, for instance, a
decent piston door. Moreover, if a circuit involves manipulation of entities (which could have infinite many states), then we are easily screwed.
Well, perhaps it is difficult to let a program design any redstone circuit. But there is indeed a subset of redstone circuits whose design can very likely be automated—computational redstone circuits
, aka. logic gates, calculators, CPUs etc. Why? Because software that design their real world electronic counterparts are readily available, i.e. EDA software.
As a high school student I, of course, know little about the inner workings of real-world EDA applications (and there don't seem to be many resources out there). I am convinced that this problem is NP-complete (further articulated below), so designing an efficient polytime combinatorial algorithm doesn't seem plausible. That said, what about reducing this problem to some other NP-complete problems
which we can solve relatively quickly with optimized algorithms / heuristics—say, ILP? This is what I am trying to do here.
Formulating the problem
“Designing computational redstone circuit automatically” is a vague idea, so it is necessary that we know what this truly means.
What’s the input?
The input should describe the intended functionality of a circuit. Recall how we usually describe a circuit: we draw a circuit diagram. I here characterize a redstone circuit diagram by the
assumptions and constraints below:
1. A circuit contains two parts: wires and components.
2. Components are the primitives of a circuit. E.g. A torch or a wire junction.
3. A component has interfaces, either in or out, which are where the component receives signals from and sends signals to, respectively.
4. A wire connects an out-interface from a component (“source”) to an in-interface of another component (“target”).
5. Wires are directed.
6. Components are independent. i.e. they do not interfere with other components in any way other than being connected by wires from interfaces.
A circuit diagram can be represented in a directed graph, with components as vertices and wires as edges. Source/target interfaces as extra information stored on edges.
What’s the output?
We want our program to tell us how the circuit we described in the input can be built in the Minecraft world. Therefore, we could define the the output to be a set of position - blockstate pairs,
(which, in implementation, can be stored in a schematic file).
However, we don’t want to jump straight from a circuit diagram to a detailed Minecraft schematic because that means taking interference between components, quasi connectivity, update order—basically
everything that makes redstone engineering complex—into consideration in the first place.
Instead, we could first build our circuit in an ideal world, in which we forget about all those factors above, and then convert the ideal placement into an actual Minecraft schematic.
What’s an idea world?
1. A circuit consists of multiple ideal blocks.
2. A component fully occupies a set of ideal blocks, some of which are its interfaces. How many and which blocks a certain type of component occupies depend on its size in Minecraft and how we plan
to convert the ideal placement to a real schematic.
3. A wire is a chain of ideal blocks, where any adjacent two share a face. The first block is always the source interface and the last is always the target interface.
4. Exclusiveness: All components and all wires (ignoring their first and last block) mustn’t overlap.
5. Mutual Independence: Unless both blocks are occupied by the same component / wire, anything in two adjacent cells does not interfere with each other. To meet this constraint, we could say (rather conservatively) that an ideal block should map to \(3\times 2 \times 3\) Minecraft blocks.
6. Wire junctions are special components and are exceptions to rules 2 and 3. A wire junction always has three interfaces (1 in & 2 outs, or 2 ins & 1 out). Multiple junctions can overlap and they can overlap with an interface of some component.
7. There are times when we want to fix the location of some components in the input. These components are usually just placeholders that mark the position of IO. (We don’t want to produce a circuit
with an unreachable input/output in the center of everything else, right?)
The Objective
1. The circuit represented by the output must have the same functionality as described by the input circuit diagram.
2. The delay of the circuit should be minimized.
Example: the AND gate
Let’s see how we design a simple AND gate.
Suppose the only primitive components we have are NOT gate (torch), wire junction, and IO placeholder. The circuit diagram of AND gate is:
Reduction to Integer Programming
Once the problem statement is clear, we can proceed to convert the problem into an integer programming. Let’s start by defining some notations. First, assume there are \(n\) components and \(m\)
wires in a circuit.
The circuit we are designing obviously must meet some space constraints. Hence, let \(F \subset \mathbb{Z}^3\) be the set of all feasible ideal block coordinates.
Then we have different types of components, which can be placed with different orientations (at most 6). Some component may not be placed in a certain direction due to limitations in Minecraft
mechanisms (such contraptions are often known as “directional”). Let \(D_i\) be the set of feasible orientations of the \(i\) th component.
For each component \(i\) and a feasible orientation \(d\in D_i\), let \(C_{i,d} \subset \mathbb{Z}^3\) be the set of ideal block coordinates this component occupies when placed in this direction. For
convenience, let \((0,0,0)\) be a tight lower bound of the three components in this set. We assume that the shape of a component, after fixing its type and orientation, is translation-invariant.
It’s finally time to introduce some variables. Let binary variables \(x_{i,d,\mathbf{u}} (d\in D_i, \mathbf{u} \in F)\) denote whether the \(i\) th component is placed with orientation \(d\) at
coordinate \(\mathbf{u}\).
Every component should be placed only once. This gives the constraint
\[ \sum_{d\in D_i}\sum_{\mathbf{p} \in F} x_{i,d,\mathbf{p}} = 1, \quad \forall i = 1,\cdots,n. \]
And components should not overlap! Consequently,
\[ \sum_{i=1}^n\sum_{d\in D_i}\sum_{\substack{\mathbf{c}\in C_{i,d} \\ \mathbf{p}-\mathbf{c} \in F}} x_{i,d,\mathbf{p}-\mathbf{c}} \le 1,\quad \forall \mathbf{p} \in F. \]
This constraint applies for each location \(\mathbf{p}\) in the feasible space. We enumerate all components and their possible placements that would occupy \(\mathbf{p}\), making sure that their
corresponding variables sum to at most 1.
Now we consider wires. For convenience, let \(N(\mathbf{u})\) denote \(\mathbf{u}\)’s neighbors in \(F\), \(N(\mathbf{u})\le 6\).
Let binary variables \(y_{i,\mathbf{u}, \mathbf{v}}(\mathbf{v} \in N(\mathbf{u}))\) denote whether the \(i\) th wire has a segment from \(\mathbf{u}\) to \(\mathbf{v}\), and let \(a_i\) and \(b_i\)
be the indices of the source and destination component of wire \(i\).
Furthermore, let \(\mathbf{a}_{i,d}(d\in D_{a_i})\) be the coordinate of the source interface relative to the source component’s (\(a_i\)) location when the source component of the \(i\) th wire is
placed with orientation \(d\). Define \(\mathbf{b}_{i,d}(d\in D_{b_i})\) in a similar manner, denoting the coordinate of the destination interface relative to the destination component’s location.
With these notations, the constraint that enforces the connectivity of wires can be written as
\[ \sum_{\mathbf{v} \in N(\mathbf{u})} \left(y_{i,\mathbf{u}, \mathbf{v}} - y_{i,\mathbf{v}, \mathbf{u}}\right) = \sum_{\substack{d\in D_{a_i}\\ \mathbf{u} - \mathbf{a}_{i,d} \in F}} x_{a_i,d,\mathbf{u} - \mathbf{a}_{i,d}} - \sum_{\substack{d\in D_{b_i}\\ \mathbf{u} - \mathbf{b}_{i,d} \in F}} x_{b_i,d,\mathbf{u} - \mathbf{b}_{i,d}}, \quad \forall \mathbf{u}\in F, i=1,2,\cdots,m. \]
Check: If \(\mathbf{u}\) is the starting location of wire \(i\), both LHS and RHS are 1; if \(\mathbf{u}\) is the ending location of wire \(i\), both LHS and RHS are \(-1\); otherwise both sides
equal 0.
Meanwhile, given that a location is either occupied by a component or a wire but never both, and that any two wires shall not overlap, we have the following constraint
\[ 2\sum_{i=1}^n\sum_{d\in D_i}\sum_{\substack{\mathbf{c}\in C_{i,d} \\ \mathbf{p}-\mathbf{c} \in F}} x_{i,d,\mathbf{p}-\mathbf{c}} + \sum_{i=1}^m\sum_{\mathbf{v} \in N(\mathbf{p})} \left(y_{i,\mathbf{p}, \mathbf{v}} + y_{i,\mathbf{v}, \mathbf{p}}\right) \le 2,\quad \forall \mathbf{p} \in F. \]
The intuition: Usually if a wire (say, \(i\)) passes through \(\mathbf{p}\), \(\sum_{\mathbf{v} \in N(\mathbf{p})} y_{i,\mathbf{p}, \mathbf{v}} + y_{i,\mathbf{v}, \mathbf{p}} = 2\), so the RHS should
be 2. At the same time, we expect that when some component has occupied \(\mathbf{p}\), no wires should occupy \(\mathbf{p}\) again. We then need to use indicator
\[ \sum_{i=1}^n\sum_{d\in D_i}\sum_{\substack{\mathbf{c}\in C_{i,d} \\ \mathbf{p}-\mathbf{c} \in F}} x_{i,d,\mathbf{p}-\mathbf{c}}, \]
which is at most 1 by our second constraint, so we give it a coefficient of 2 to balance it with the wire term.
The above constraint, however, isn't quite right: wires and components do overlap at the interface block. To fix this, we amend the constraint to
\[ 2\sum_{i=1}^n\sum_{d\in D_i}\sum_{\substack{\mathbf{c}\in C_{i,d} \\ \mathbf{p}-\mathbf{c} \in F}} x_{i,d,\mathbf{p}-\mathbf{c}} + \sum_{i=1}^m\sum_{\mathbf{v} \in N(\mathbf{p})} \left(y_{i,\mathbf{p}, \mathbf{v}} + y_{i,\mathbf{v}, \mathbf{p}}\right) \le 2 + \sum_{i=1}^m\left(\sum_{\substack{d\in D_{a_i}\\ \mathbf{p} - \mathbf{a}_{i,d} \in F}} x_{a_i,d,\mathbf{p} - \mathbf{a}_{i,d}} +\sum_{\substack{d\in D_{b_i}\\ \mathbf{p} - \mathbf{b}_{i,d} \in F}} x_{b_i,d,\mathbf{p} - \mathbf{b}_{i,d}} \right),\quad \forall \mathbf{p} \in F. \]
The complicated summation term on the right will be greater than 0 if \(\mathbf{p}\) is an interface block for some component and wire.
These are all the constraints that I had thought of. Together, the integer programming instance is
\[ \require{mathtools} \begin{aligned} \text{minimize}\ &\sum_{i=1}^m\sum_{\mathbf{u}\in F}\sum_{\mathbf{v}\in N(\mathbf{u})} y_{i, \mathbf{u}, \mathbf{v}}, \\
\text{subject to}\ &\sum_{d\in D_i}\sum_{\mathbf{p} \in F} x_{i,d,\mathbf{p}} = 1, &\forall i = 1,\cdots,n;\\
&\sum_{i=1}^n\sum_{d\in D_i}\sum_{\substack{\mathbf{c}\in C_{i,d} \\ \mathbf{p}-\mathbf{c} \in F}} x_{i,d,\mathbf{p}-\mathbf{c}} \le 1,&\forall \mathbf{p} \in F;\\
&\sum_{\mathclap{\mathbf{v} \in N(\mathbf{u})}} \left(y_{i,\mathbf{u}, \mathbf{v}} - y_{i,\mathbf{v}, \mathbf{u}}\right) = \sum_{\mathclap{\substack{d\in D_{a_i}\\ \mathbf{u} - \mathbf{a}_{i,d} \in F}}} x_{a_i,d,\mathbf{u} - \mathbf{a}_{i,d}} - \sum_{\mathclap{\substack{d\in D_{b_i}\\ \mathbf{u} - \mathbf{b}_{i,d} \in F}}} x_{b_i,d,\mathbf{u} - \mathbf{b}_{i,d}}, &\quad \forall \mathbf{u}\in F, i=1,\cdots,m;\\
&2\sum_{i=1}^n\sum_{d\in D_i}\sum_{\substack{\mathbf{c}\in C_{i,d} \\ \mathbf{p}-\mathbf{c} \in F}} x_{i,d,\mathbf{p}-\mathbf{c}} + \sum_{i=1}^m\sum_{\mathbf{v} \in N(\mathbf{p})} \left(y_{i,\mathbf{p}, \mathbf{v}} + y_{i,\mathbf{v}, \mathbf{p}}\right) \\
&\quad \le 2 + \sum_{i=1}^m\,\sum_{\mathclap{\substack{d\in D_{a_i}\\ \mathbf{p} - \mathbf{a}_{i,d} \in F}}} x_{a_i,d,\mathbf{p} - \mathbf{a}_{i,d}} +\sum_{i=1}^m\,\sum_{\mathclap{\substack{d\in D_{b_i}\\ \mathbf{p} - \mathbf{b}_{i,d} \in F}}} x_{b_i,d,\mathbf{p} - \mathbf{b}_{i,d}},& \forall \mathbf{p} \in F. \end{aligned} \]
The objective is simply the sum of all \(y\) variables because it denotes the total length of all wires, which we aim to minimize if we want the circuit to be compact.
Keep in mind, however, that these constraints are merely necessary conditions for a valid circuit. Given the complexity of the constraints, it is difficult to reason about whether all circuit configurations conforming to these constraints are valid circuits. I suppose the best way to check is to write some code and see if it gives what we want.
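Before doing that, here is a small Python sketch (in PuLP, not the Gurobi/Kotlin setup actually used below) that makes the formulation a little more tangible on a toy instance: it encodes only the placement constraint and the wire-connectivity (flow-conservation) constraint for a single wire between two fixed one-block components, and omits the exclusiveness constraints. Every name and value in it is invented for illustration.

import pulp

F = [(x, 0, z) for x in range(3) for z in range(3)]          # feasible ideal blocks

def neighbours(u):
    return [v for v in F if sum(abs(a - b) for a, b in zip(u, v)) == 1]

# two fixed one-block IO placeholders, each with a single orientation "d0"
components = {0: ["d0"], 1: ["d0"]}
fixed = {0: (0, 0, 0), 1: (2, 0, 2)}
wires = [(0, 1)]                                             # one wire from component 0 to component 1

prob = pulp.LpProblem("routing", pulp.LpMinimize)
x = {(i, d, p): pulp.LpVariable(f"x_{i}_{d}_{p[0]}_{p[1]}_{p[2]}", cat="Binary")
     for i in components for d in components[i] for p in F}
y = {(w, u, v): pulp.LpVariable(f"y_{w}_{u[0]}_{u[1]}_{u[2]}__{v[0]}_{v[1]}_{v[2]}", cat="Binary")
     for w in range(len(wires)) for u in F for v in neighbours(u)}

prob += pulp.lpSum(y.values())                               # objective: total wire length

for i in components:                                         # each component placed exactly once
    prob += pulp.lpSum(x[i, d, p] for d in components[i] for p in F) == 1
    prob += x[i, "d0", fixed[i]] == 1                        # pin the IO placeholders

for w, (a, b) in enumerate(wires):                           # flow conservation for each wire
    for u in F:
        prob += (pulp.lpSum(y[w, u, v] for v in neighbours(u))
                 - pulp.lpSum(y[w, v, u] for v in neighbours(u))
                 == x[a, "d0", u] - x[b, "d0", u])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(u, v) for (w, u, v), var in y.items() if var.value() > 0.5])   # the routed wire segments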
Preliminary Experiments
I did some very rudimentary tests of my ILP formulation above with the Gurobi solver and ~1kLoC of Kotlin. The goal was to route the simple circuit described below:
val in1 = ComponentNode(FixedLocation(Vec3(0, 0, 0)))
val in2 = ComponentNode(FixedLocation(Vec3(1, 0, 0)))
val out = ComponentNode(FixedLocation(Vec3(0, 0, 3)))
val a = ComponentNode(NotGate)
val b = ComponentNode(NotGate)
val c = ComponentNode(NotGate)
val d = ComponentNode(NotGate)
val e = ComponentNode(NotGate)
val circuit = MutableCircuitGraph()
// Connect the IO interface of in1 to the IN interface of a
circuit.addWire(in1, FixedLocation.IO, a, NotGate.IN)
circuit.addWire(in2, FixedLocation.IO, a, NotGate.IN)
circuit.addWire(in1, FixedLocation.IO, b, NotGate.IN)
circuit.addWire(in2, FixedLocation.IO, c, NotGate.IN)
circuit.addWire(a, NotGate.OUT, b, NotGate.IN)
circuit.addWire(a, NotGate.OUT, c, NotGate.IN)
circuit.addWire(b, NotGate.OUT, d, NotGate.IN)
circuit.addWire(c, NotGate.OUT, d, NotGate.IN)
circuit.addWire(d, NotGate.OUT, e, NotGate.IN)
circuit.addWire(e, NotGate.OUT, out, FixedLocation.IO)
This describes an XOR gate with NOT gate as the only primitive. It has eight components and 10 wires—decently complex. A FixedLocation component takes up 1 block of space and a NOT gate takes up 2
blocks, with the input interface in one block and the output in another. NOT gates can only be placed horizontally (i.e. 4 possible directions.)
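Although the tool itself is written in Kotlin, the way the footprint sets \(C_{i,d}\) for the four horizontal orientations can be generated is easy to picture: rotate a base footprint about the vertical axis and shift it so that \((0,0,0)\) is again a tight lower bound. The Python sketch below (an illustration, not the author's code; the NOT-gate footprint in it is hypothetical) does exactly that.

def normalise(cells):
    # shift so that (0, 0, 0) is a tight lower bound of the footprint
    mx = min(c[0] for c in cells)
    my = min(c[1] for c in cells)
    mz = min(c[2] for c in cells)
    return sorted((cx - mx, cy - my, cz - mz) for cx, cy, cz in cells)

def horizontal_orientations(cells):
    # the four horizontal orientations, obtained by 90-degree rotations about the y axis
    out, cur = [], cells
    for _ in range(4):
        out.append(normalise(cur))
        cur = [(cz, cy, -cx) for cx, cy, cz in cur]
    return out

# hypothetical two-block NOT-gate footprint: input block at (0,0,0), output block at (0,0,1)
not_gate = [(0, 0, 0), (0, 0, 1)]
for d, cells in enumerate(horizontal_orientations(not_gate)):
    print(d, cells)
# note: orientations 0 and 2 (and 1 and 3) occupy the same cells; they differ only in where the
# in/out interfaces sit, which is tracked separately from the footprint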
Time to solve it:
I gave the solver a lot of slack: the feasibility space is \(3\times 2 \times 10\), considerably larger than what’s actually needed. Anyway, the code ran very fast (thanks to Gurobi). To visualize
the result I wrote some extra code to generate an HTML which internally calls THREE.js:
Complicated, isn’t it? The rightmost magenta block is the output component out. The leftmost magenta block on the bottom level is the input component in1. The cyan block next to in1 is in2. And then
blocks of the same color represent a NOT gate, and arrows represent wires. The visualization is a mess but I couldn't do better without writing significantly more code. If you try hard to follow the arrows then you will find that this routing is actually correct and does the exact thing the Kotlin description above describes—and indeed, the circuit is amazingly compact.
(Note that here a component’s interface is connected to multiple wires. This might not be a good thing because by our constraints, if the fan in/out of any component is greater than 6, the constraint
can not be satisfied. In an attempt to work around this I wrote a preprocessor that automatically introduces OR gates in circuit graphs when necessary. However, it turns out that with this
modification things no longer work and the solver starts giving bogus results for some reason. I didn’t have time to investigate into this issue, but this could be an error in my ILP formulation
Afterwords—Two Years Later
Aha, I am finally here!
I did this project in 2020 but I somehow left this article unfinished for two years. What a shame! I spent an entire evening doing some archaeological work today and dug out my code and notes for
this project (and confirmed they did work). The article is complete—finally—but please tolerate some inconsistencies in the writing. Everything after the AND gate example was written in 2022.
Hopefully this article is helpful.
In retrospect, this small project is by no means complete. The test was too simple and I didn't even make a Minecraft circuit out of the result! Probably too much handwaving. I could of course have gone much deeper if I hadn't been busy with my college applications back then. I wrote the first part of this article on Nov. 25, 2020. A couple of days later I would find myself rejected by Cornell. That was very frustrating then and I had to write more and more essays—I guess that was when this unfinished article got buried somewhere.
Anyway. it had been great fun doing this project and I learned a ton of interesting stuff despite the shallow progress on the surface. I still vividly recollect the moment I found in my inbox an
email from the Gurobi China sales team because I was the first high school student in China to apply for an academic license and they really wanted to confirm I was not joking. I said that I was
serious and several days later I got my license—that reply email literally made my day back then. Things have changed. When I re-installed Gurobi to re-run my code last night, the Gurobi server recognized my MIT IP and gave me a license instantly. That was fast, but felt… different, certainly not as rewarding as before. I no longer play Minecraft as much as I used to, nor do I take on these fun projects as often as I once did. The decline in motivation on these things doesn't seem to be a good sign to me. I can't blame everything on the course load or UROP or anything else. It is a problem I
am actively seeking a solution to. | {"url":"https://danglingpointer.fun/posts/Minecraft%20ILP","timestamp":"2024-11-02T18:35:23Z","content_type":"text/html","content_length":"62340","record_id":"<urn:uuid:82963b89-e91b-46f1-a206-cdeec9bf631b>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00518.warc.gz"} |
Non-Deterministic Finite Automata
In computer science and theoretical computation, automata theory stands as a foundational concept that helps us understand how machines process information. Among the various types of automata,
Non-Deterministic Finite Automata (NFA) hold a special place due to their intriguing properties and applications.
Since NFAs have fewer constraints than DFAs, they can make complex Automata easier to understand and depict in a diagram. We can define the non-deterministic finite automaton as a finite automaton
variant with two characteristics:
• ε-transition: state transition can be made without reading a symbol; and
• Nondeterminism: state transition can have zero or more than one possible value.
However, the features above do not give an NFA any additional power: in terms of the languages they can recognize, NFAs and DFAs are equivalent.
Because of these additional features, an NFA has a different transition function; the rest of the definition is the same as for a DFA.
Let us understand the concept of NFA with an example:
One thing to keep in mind is that in NFA if any path for an input string leads to a final state, the input string is accepted. As shown in the above excerpt, there are different paths for the input
string "00" inside the preceding NFA. Since one of the paths leads to a final state, the above NFA accepts "00."
Formal definition of Non-Deterministic Finite Automata
The formal definition of NFA, like that of DFA, is: (Q, 𝚺, δ, q0, F), where
• Q is a finite set of states.
• 𝚺 is a finite set of all alphabet symbols.
• δ: Q x (𝚺 ∪ {ε}) → 2^Q is the transition function, mapping a state and an input symbol (or ε) to the set of possible next states.
• q0 ∈ Q is the starting state, and the starting state must be in the set Q
• F ⊆ Q is the set of accepted states, all of which must be in the set Q.
In terms of the formal definition, the only difference between an NFA and a DFA is the transition function: an NFA's δ may read the empty string ε in addition to the ordinary symbols, and it returns a set of possible next states (possibly empty, possibly containing several states) rather than a single state.
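To make this concrete, here is a small Python sketch (an illustration only; the machine below is a made-up example with no ε-moves, although the code supports them) of simulating an NFA by carrying a set of current states: take the ε-closure, advance the whole set on each input symbol, and accept if any reachable state is accepting.

def epsilon_closure(states, delta):
    # add every state reachable through ε-moves (keyed by the empty string "")
    stack, closure = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in delta.get((q, ""), set()):
            if r not in closure:
                closure.add(r)
                stack.append(r)
    return closure

def nfa_accepts(string, delta, start, accepting):
    current = epsilon_closure({start}, delta)
    for symbol in string:
        step = set()
        for q in current:
            step |= delta.get((q, symbol), set())
        current = epsilon_closure(step, delta)
    return bool(current & accepting)

# made-up example: accepts binary strings that contain "00" somewhere
delta = {
    ("q0", "0"): {"q0", "q1"},
    ("q0", "1"): {"q0"},
    ("q1", "0"): {"q2"},
    ("q2", "0"): {"q2"},
    ("q2", "1"): {"q2"},
}
print(nfa_accepts("1001", delta, "q0", {"q2"}))   # True
print(nfa_accepts("0101", delta, "q0", {"q2"}))   # False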
| {"url":"https://www.naukri.com/code360/library/non-deterministic-finite-automata","timestamp":"2024-11-13T06:23:14Z","content_type":"text/html","content_length":"408221","record_id":"<urn:uuid:05b224da-8db0-43a0-8834-5608b1da879b>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00055.warc.gz"}
Main functions
The main functions of the package are designed to calculate distance-based measures of spatial structure. These are non-parametric statistics able to summarize and test the spatial distribution (concentration, dispersion) of points.
The classical, topographic functions such as Ripley’s K are provided by the spatstat package and supported by dbmss for convenience.
Relative functions are available in dbmss only. These are the \(M\), \(m\), and \(K_d\) functions.
The bivariate \(M\) function can be calculated for Q. Rosea trees around V. Americana trees: | {"url":"https://www.stats.bris.ac.uk/R/web/packages/dbmss/vignettes/dbmss.html","timestamp":"2024-11-14T07:47:44Z","content_type":"text/html","content_length":"44626","record_id":"<urn:uuid:c6072587-f4de-4f79-b3e0-3282c68c510c>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00147.warc.gz"} |
Euclid's Elements
A Site About Euclid's Elements
Herbert M Sauro: August, 2023
"He was 40 years old before he looked on Geometry; which happened accidentally. Being in a Gentleman’s Library, Euclid’s Elements lay open, and ’twas the 47 El. Libri 1. He read the proposition. By
God, sayd he (he would now and then sweare an emphaticall Oath by way of emphasis) this is impossible! So he reads the Demonstration of it, which referred him back to such a Proposition; which
proposition he read. That referred him back to another, which he also read. Et sic deinceps [and so on] that at last he was demonstratively convinced of that trueth. This made him in love with geometry."
Thomas Hobbes by John Aubrey (1626-1697). Quoted in O L Dick, Brief Lives (Oxford 1960)
Euclid's Elements is probably one of the most famous books in the world. It's certainly one of the most published, with over 1000 different editions. It's a collection of 13 `books' (today, we might
call them chapters) that lay out the foundation for geometry, number theory, and many core concepts of mathematics and logic still important today. Its popularity is highlighted by the fact it was
still used in schools and colleges well into the 20th century as a textbook on geometry, though more modern treatments have now supplanted it. There is, however, continuing interest in using it as an
approach to teaching deductive reasoning.
The entire collection comprises definitions, postulates, and a large number of mathematical proofs, many of which are related to geometric constructions. The 13 books cover plane and solid Euclidean
geometry, elementary number theory, and incommensurable lines.
I recently put together a new color rendering of Book I which might be of interest.
You can find a wide range of links to Euclid related pages at List of Web Links
A list of classical editions at archive and google at the bottom of the page.
Very little is known about Euclid. What we do know is that he (we assume he was a he) lived in Alexandria about 300 BC. This is based on a passage from Proclus' "Commentary on the First Book of Euclid's Elements", copies of which you can still purchase today. Proclus was a philosopher who lived from 412 to 485; note that he was writing roughly 700 years after Euclid. Here is a quote from Proclus who describes Euclid:
"All those who have written histories bring to this point their account of the development of this science. Not long after these men came Euclid, who brought together the Elements, systematizing many
of the theorems of Eudoxus, perfecting many of those of Theatetus, and putting in irrefutable demonstrable form propositions that had been rather loosely established by his predecessors. He lived in
the time of Ptolemy the First, for Archimedes, who lived after the time of the first Ptolemy, mentions Euclid. It is also reported that Ptolemy once asked Euclid if there was not a shorter road to
geometry that through the Elements, and Euclid replied that there was no royal road to geometry. He was therefore later than Plato's group but earlier than Eratosthenes and Archimedes, for these two
men were contemporaries, as Eratosthenes somewhere says. Euclid belonged to the persuasion of Plato and was at home in this philosophy; and this is why he thought the goal of the Elements as a whole
to be the construction of the so-called Platonic figures."
This passage tells us that Euclid collated earlier work as well as contributing his own work. It also tells us roughly when Euclid lived as well as an anecdote about his character. The passage also
tells us that Proclus didn't have a precise date for when he lived. We also know that Euclid wrote other books on mathematics, a number of which we still have and a number which, sadly, have been
lost to history.
By far, the best book to learn more about Euclid is the series of volumes by Heath, which is still readily available today in print and online:
Heath, Sir Thomas Little (1861-1940)
The thirteen books of Euclid's Elements translated from the text of Heiberg with introduction and commentary. Three volumes. University Press, Cambridge, 1908. Second edition: University Press,
Cambridge, 1925. Reprint: Dover Publ., New York, 1956.
The Wikipedia page on Euclid is also an excellent source of information on Euclid.
The main material in Euclid' Elements are the proofs, but each book usually has at the beginning one or more definitions, postulates, and axioms. There are a huge number of editions available online
as well as copies that can be purchased from sites like Amazon, abebooks, Etsy, or eBay. However, the editions by Todhunter and by Hall & Stevens were specifically written for High School students
and the new reader might find these a good entry to Euclid. However, no matter what editions you use, Euclid requires sustained effort to master. However, there is a nice shortcut today.... YouTube.
On YouTube, you'll find numerous videos on Euclid's Elements. I can recommend three channels: one by Sandy Bultena, which covers Book I to Book VII, and part of Book VIII. A second is by Euler's
Academy, which covers Book I and part of Book II; and a third by mathematicsonline which covers Book I. There are lots of other smaller channels that go over some of the material. Euclid is a
substantial piece of work, and it is no surprise that there isn't a complete set of videos for every Book in Euclid.
If you're after a completely fresh copy of Euclid, there are a couple to choose from. The most well-known current copy of Euclid is by Dana Densmore. Other than a short introduction, the book is pure Euclid and comes in at 527 pages. This gives you some idea of the volume of material.
If you're looking for an edition with lots of commentary, then you should get hold of the books by Heath (mentioned above). Because of the amount of commentary provided by Heath, this edition comes
in three separate volumes.
The Proofs
The Greeks had particular restrictions on how to do geometric proofs and constructions. The first is that any proof or construction could only use a straightedge and a compass. This is one reason why Propositions 2 and 3 in Book I appear to be a bit odd when read today. In fact, those coming to Euclid for the first time will find the first proposition easy to digest but might be in for a bit of a shock when they read Propositions 2 and 3, which seem totally unrelated to the kind of geometry one might be used to. After Proposition 3 the book settles down again to what we might think of as geometry. I have more to say about this in the Book I page.
The other interesting point about Euclid is that actual measurement was not allowed even though the Greeks did have standards of measurement. Instead, magnitudes are compared, such that they might
be equal, or that one magnitude is larger than another. You may also notice that the book never talks about measuring angles in terms of degrees even though in astronomy the ancient Greeks used
degrees in their measurements. In Euclid, everything is in terms of right angles which partly accounts for the strangeness of Postulate 4: "That all right angles equal one another.".
The proofs also have a particular structure which was used in other ancient mathematical books although the details can vary. Proclus gave a detailed description as follows:
Enunciation: This states the result with possible reference to a figure.
Setting-Out: A statement on how we will start out on the proof.
Demonstration: This is the proof itself.
Conclusion: A statement of the result with reference to the enunciation.
Let's look at the first proposition of Book I to give you an idea of what Euclid looks like. This describes the construction of an equilateral triangle. That is a triangle with three sides of equal
length. It should be noted that many theorems are not constructions but proofs related to some geometric theorem.
First I will present Proposition 1 using more modern language which might be easier for you to read. This is from my book, A Rendering of Book I. Changing Euclid is, of course, committing sacrilege, but this is the only place I do so.
Hall & Stevens 1898
Let's now look at some genuine Euclid editions to see how Proposition 1 is described. What follows is a series of screenshots from various editions (in no particular order) of Euclid that
highlights some of the variations you'll find. Most of the time the variations are quite minor but sometimes there are significant deviations especially among editions from 1000 years ago when
finding unadulterated copies of Euclid was difficult.
I'll first give a rendering of Proposition 1 from an edition by Hall & Stevens, a book used by High Schools in the last century. One thing you'll notice in the following examples is that in a number
of places you'll find references to a postulate or definition next to a statement. For example, Post. 3 or often in square or round brackets such as [Post 3.] This is actually a new development,
since there is no evidence that this is what Euclid actually did and editions up to the 16th century show no such referencing to definitions, postulates, axioms or earlier propositions in a proof.
Dana Densmore
The second example of Proposition 1 is a rendering from Dana Densmore, an edition of Euclid published in the last 20 years, but is essentially the proof show in Heath, 1920:
Book I: A new rendering
The third is my own, based on Richard Fitzpatrick's edition, which is a re-rendering of Heiberg's Greek edition in Greek and English. but which is very similar to Heath's edition. I have also
colored-coded the proposition to make it easier to see as hunting for the letters in the figure can be a nuisance.
Oliver Byrne: 1847
The fourth is the version presented by Oliver Byrne who wrote the graphical edition of Euclid. The screenshot is from a copy held at archive.org but several people have republished this edition in
the last few years. I won't give links since there are a number of editions but do a search on Amazon. Note the letters that look like 'f' are actually the old way to write the letter 's'.
Adelard of Bath
Here is a version of Proposition 1 from Adelard of Bath, probably written between 1126 and 1130 (The first Latin translation of Euclid's Elements commonly ascribed to Adelard of Bath, Busard, 1983,
pp20). Adelard was an English natural philosopher (likely Anglo-Saxon) who did much to help restart learning in England and beyond in medieval Europe. His edition of Euclid is based on an earlier
Arabic edition, probably by al-Hajjaj, so it should come as no surprise to find that it is very different from current versions. First, I will give the Latin version I obtained from "The first Latin
translation of Euclid's Elements commonly ascribed to Adelard of Bath" by the great late scholar Busard, 1983. Not having a classical education in Latin, I called upon ChatGPT to translate the Latin to English, which you can see below the Latin copy.
What follows is a computer-generated English translation of the Latin above, Book I, Proposition 1, by Adelard:
Now we have to show how to make the surface of a triangle of equal sides over a straight line of assigned size.
Let the line be assigned ab. Let the center be placed above a occupying the space that is between a and b circle, above which gdb. Another circle is placed above the center of b occupying the space
between a and b, above which gah.
Let them proceed from the point g above which the intersection of two circles is made by straight lines to the point a and to the point b Let him/them be called ga and gb. I say that here/behold we
have made/constructed a triangle of equal sides above the assigned line ab.
Reason: Because the point a became the center of the circle gdb, the line ag became equal to the line ab. And since the point b is the center of the circle gah, the line bg is equal to the line
ba. Thus, each of the lines ga and gb is equal to the line ab. But each thing is equal to one thing, and each thing is equal to another. Therefore, the three lines ag and ab and bg are equal to
each other. A triangle of equal sides abg is therefore made above the line assigned to ab. And this is what we intend to demonstrate in this figure.
This is an image of Proposition 1 (and by the looks of it maybe part of Prop 2) from the Harley MS 5266. Image taken from the British Library site. This is an early 14th century copy of Adelard I.
The oldest copy MS 47 in Trinity college is not available online :(
D'Orvile Edition 888 AD
Last but no means least here is a translation provided by the Clay Institute of Proposition 1 from the oldest known edition of Euclid's Elements, the MS D’Orville 301 at the Bodleian in Oxford.
Written in Constantinople in 888AD.
"On a given finite straight line to construct an equilateral triangle. Let AB be the given finite straight line.
Thus it is required to construct an equilateral triangle on the straight line AB.
With centre A and distance AB let the circle BCD be described; [Post. 3] again, with centre B and distance BA let the circle ACE be described; [Post. 3] and from the point C, in which the circles cut
one another, to the points A, B let the straight lines CA, CB be joined. [Post. 1] Now, since the point A is the centre of the circle CDB, AC is equal to AB. [Def. 15] Again, since the point B is the
centre of the circle CAE, BC is equal to BA. [Def. 15] But CA was also proved equal to AB; therefore each of the straight lines CA, CB is equal to AB. And things which are equal to the same thing are
also equal to one another; [C.N. 1] therefore CA is also equal to CB. Therefore the three straight lines CA, AB, BC are equal to one another.
Therefore the triangle ABC is equilateral; and it has been constructed on the given finite straight line AB. "
The Bodleian D'Orville 301 manuscript, written in Constantinople in 888AD. Opened to the page showing Proposition 1 on the right. | {"url":"https://euclid.analogmachine.org/","timestamp":"2024-11-12T00:45:41Z","content_type":"text/html","content_length":"137051","record_id":"<urn:uuid:db85d075-5945-4931-9e19-667c25d91fa0>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00584.warc.gz"} |
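As a purely modern aside (nothing to do with the manuscripts themselves), the construction in Proposition 1 is easy to check numerically: place A and B anywhere, intersect the two circles of radius |AB|, and confirm that the three sides come out equal. A small Python sketch, with coordinates for A and B chosen arbitrarily by me:

import math

ax, ay = 0.0, 0.0                      # point A
bx, by = 4.0, 1.0                      # point B
r = math.dist((ax, ay), (bx, by))      # |AB|, the common radius of both circles

mx, my = (ax + bx) / 2, (ay + by) / 2  # midpoint of AB
ux, uy = (bx - ax) / r, (by - ay) / r  # unit vector along AB
h = r * math.sqrt(3) / 2               # height of an equilateral triangle on AB
cx, cy = mx - uy * h, my + ux * h      # C, one of the two intersection points of the circles

print(math.dist((ax, ay), (bx, by)))   # |AB|
print(math.dist((cx, cy), (ax, ay)))   # |CA|, equal to |AB|
print(math.dist((cx, cy), (bx, by)))   # |CB|, equal to |AB|

All three distances print the same value, so triangle ABC is indeed equilateral, just as the proposition claims.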
Problem 3
Part a. Using the proof of the maximum principle, prove the maximum principle for subharmonic functions, and the minimum principle for superharmonic functions.
Maximum principle for subharmonic functions:
Let $u\left(\mathbf{x}\right)$ be subharmonic, $\Delta u\left(\mathbf{x}\right) \ge 0$, on a bounded domain $\Omega$ with boundary $\partial\Omega = \Sigma$, $\mathbf{x} \in \Omega$. Let $v\left(\mathbf{x}\right) = u\left(\mathbf{x}\right) + \epsilon |\mathbf{x}|^2$, where $|\mathbf{x}|^2 = \sum_{i=1}^{n} x_i^2$ and $\epsilon > 0$. Let $\delta^2$ be the largest value of $|\mathbf{x}|^2$ on $\bar\Omega$ (finite, since $\Omega$ is bounded).
At any interior maximum point of a function $f$, the second derivative test gives $f_{x_i x_i} \le 0$ for each $i$, and hence $\Delta f = \sum_{i=1}^{n} f_{x_i x_i} \le 0$. Notice that for $\mathbf{x} \in \Omega$:
$$ \Delta v\left(\mathbf{x}\right) = \Delta u\left(\mathbf{x}\right) + \epsilon \Delta |\mathbf{x}|^2 \ge 0 + 2 n \epsilon > 0 $$
So $v$ has no interior maximum on $\Omega$. But $\bar\Omega$ is closed and bounded, so the continuous function $v$ attains a maximum somewhere on $\bar\Omega$; since there is no interior maximum, it must be attained on $\partial\Omega = \Sigma$. Let $\mathbf{x_0}$ be a point where $v$ attains this maximum. Then, for $\mathbf{x} \in \bar\Omega$:
$$ u\left(\mathbf{x}\right) \le v\left(\mathbf{x}\right) \le v\left(\mathbf{x_0}\right) = u\left(\mathbf{x_0}\right) + \epsilon |\mathbf{x_0}|^2 \le \max_{\mathbf{x} \in \Sigma} u\left(\mathbf{x}\right) + \epsilon \delta^2 $$
As $\delta$ is a fixed constant and the inequality holds for every $\epsilon > 0$, letting $\epsilon \rightarrow 0$ yields:
$$ u\left(\mathbf{x}\right) \le \max_{\mathbf{x} \in \Sigma} u\left(\mathbf{x}\right) $$
Hence the maximum of $u\left(\mathbf{x}\right)$ over $\bar\Omega$ is attained on $\Sigma = \partial\Omega$, as needed. $\square$
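Remark on the role of $\epsilon$ (my addition, not required by the problem): at an interior maximum the second derivative test only gives $\Delta u \le 0$, which is perfectly compatible with $\Delta u \ge 0$ since both can be zero, so subharmonicity alone does not yield a contradiction. Perturbing to $v = u + \epsilon |\mathbf{x}|^2$ makes the Laplacian strictly positive, which does give the contradiction, and the resulting estimate survives the limit $\epsilon \rightarrow 0$.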
Minimum principle for superharmonic functions:
Let $u\left(\mathbf{x}\right)$ be superharmonic, $\Delta u\left(\mathbf{x}\right) \le 0$, on a bounded domain $\Omega$ with boundary $\partial\Omega = \Sigma$, $\mathbf{x} \in \Omega$. Let $v\left(\mathbf{x}\right) = - u\left(\mathbf{x}\right)$.
Then $v\left(\mathbf{x}\right)$ is subharmonic and attains its maximum on $\Sigma$, by the maximum principle for subharmonic functions. This is equivalent to a minimum of $u$, and so $u\left(\mathbf{x}\right)$ attains its minimum on $\Sigma = \partial\Omega$. We are done. $\blacksquare$
Part b. Show that the minimum principle for subharmonic functions, and maximum principle for superharmonic functions do not hold.
Let: $u\left(\mathbf{x}\right) = |\mathbf{x}|^2 = \sum_{i=1}^{n} x_i^2$ defined on some ball in $\mathbb{R}^n$ about the origin with radius larger than $0$.
Now $\Delta u = \Delta |\mathbf{x}|^2 = 2 n > 0$, so $u$ is subharmonic. But clearly at $\mathbf{x} = \mathbf{0}$, $u\left(\mathbf{x}\right) = |\mathbf{x}|^2 = 0$, while at any $\mathbf{x} \ne \mathbf{0}$, $|\mathbf{x}|^2 > 0$. Then $u$ has an interior minimum and the minimum principle for subharmonic functions does not hold. $\square$
Let: $u\left(\mathbf{x}\right)$ be subharmonic, and $v\left(\mathbf{x}\right) = - u\left(\mathbf{x}\right)$. Then $v$ is superharmonic.
Because the minimum principle does not hold for subharmonic functions, as shown above, $u$ may have an interior minimum; concretely, for $u\left(\mathbf{x}\right) = |\mathbf{x}|^2$ the function $v\left(\mathbf{x}\right) = -|\mathbf{x}|^2$ is superharmonic with an interior maximum at the origin. So the maximum principle does not hold for superharmonic functions. We are done. $\blacksquare$
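As a quick numerical sanity check of the counterexample (not part of the proof; the grid, its spacing, and the five-point discrete Laplacian are just illustrative choices of mine):

import numpy as np

h = 0.01
x = np.arange(-1.0, 1.0 + h, h)
X, Y = np.meshgrid(x, x)
U = X**2 + Y**2    # u(x) = |x|^2 in R^2
# five-point discrete Laplacian at the interior grid points
lap = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2] - 4.0 * U[1:-1, 1:-1]) / h**2
print(lap.min(), lap.max())   # both approximately 4.0 = 2n with n = 2, so u is subharmonic
print(U.min(), U[0, 0])       # minimum approximately 0 at the interior point 0; corner value 2

The interior minimum at the origin is exactly what the minimum principle for subharmonic functions would forbid, so it fails, as argued above.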
Part c. Prove that if $u, v, w$ are respectively harmonic, subharmonic, and superharmonic functions in the bounded domain $\Omega$, coinciding on its boundary, $u \bigr|_\Sigma = v\bigr|_\Sigma = w\bigr|_\Sigma$, then $w \ge u \ge v$ in $\Omega$.
Let $\Delta u = 0$, $\Delta v \ge 0$, $\Delta w \le 0$ on $\Omega$, coinciding on the boundary $\partial\Omega = \Sigma$: $u \bigr|_\Sigma = v\bigr|_\Sigma = w\bigr|_\Sigma$.
Define: $f = v - u$, $g = w - u$.
Then: $\Delta f = \Delta v - \Delta u \ge 0$, $\Delta g = \Delta w - \Delta u \le 0$, and: $ \{u \bigr|_\Sigma= v\bigr|_\Sigma= w\bigr|_\Sigma\}$ so:
$$ \{ f \bigr|_\Sigma= \left(v-u\right)\bigr|_\Sigma = 0, \, g \bigr|_\Sigma= \left(w-u\right)\bigr|_\Sigma = 0 \} $$
$\Delta f \ge 0$, so $f$ is subharmonic and by the maximum principle attains its maximum on $\Sigma$. Then in $\Omega$, $f \le f \bigr|_\Sigma = 0$ so $f = v - u \le 0 \implies v \le u$ in $\Omega$.
Similarly, $\Delta g \le 0$, so $g$ is superharmonic and by the minimum principle attains its minimum on $\Sigma$. So in $\Omega$, $g \ge g \bigr|_\Sigma = 0$, hence $g = w - u \ge 0 \implies w \ge u$ in $\Omega$.
We have in $\Omega$ that $v \le u$ and $w \ge u$. So $w \ge u \ge v$ inside $\Omega$, as needed. $\blacksquare$
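A concrete one-dimensional illustration of part (c), added only as a sanity check: on $\Omega = (0,1)$ take $u(x) = x$ (harmonic, $u'' = 0$), $v(x) = x^2$ (subharmonic, $v'' = 2 \ge 0$) and $w(x) = 2x - x^2$ (superharmonic, $w'' = -2 \le 0$). All three vanish at $x = 0$ and equal $1$ at $x = 1$, so they coincide on the boundary, and on $(0,1)$ we have $w - u = x(1-x) \ge 0$ and $u - v = x(1-x) \ge 0$, i.e. $w \ge u \ge v$, exactly as the result predicts.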
Does python not automatically convert integer types when doing math?
Why Python Doesn't Automatically Convert Integers in Math: A Deep Dive
Imagine you're writing a Python program to calculate the average of two numbers. You might think, "Python is smart, it'll figure out I want a decimal result." But you run your code, and instead of
getting a nice decimal number, you get a whole number. What's going on?
Let's break down this common Python quirk and learn how to get the results you expect.
The Problem:
a = 5
b = 3
average = (a + b) / 2
print(average) # Output: 4 in Python 2 (in Python 3, / gives 4.0)
Why does Python 2 print 4 here instead of 4.0?
The Answer:
Python 2, by default, performs integer division when both operands are integers: the result is rounded down to a whole number and the fractional part is lost. In Python 3, the / operator always performs true division and returns a float; the old behavior lives on in the floor-division operator //.
Diving Deeper:
• Integer Division: In Python 2 (or with the // operator in Python 3), dividing two integers always yields an integer, even if the exact result is a decimal. The result is rounded down and the fractional part is discarded.
• Floating-Point Division: In Python 2, to get a decimal result you need at least one of the operands to be a floating-point number, which you can arrange by adding a decimal point (.) to one of the numbers or by calling float(). In Python 3, / already behaves this way.
To fix our average calculation and get the desired decimal result, we need to ensure at least one of the operands is a float:
a = 5
b = 3
average = (a + b) / 2.0 # Adding .0 makes 2 a float
print(average) # Output: 4.0
Practical Example:
Let's say you're calculating the average grade for a student:
grades = [85, 92, 78, 89]
total_grades = sum(grades)
average_grade = total_grades / len(grades) # This will be a float
Here, both sum(grades) and len(grades) return integers (the grades are all whole numbers). In Python 3 the division still produces a float (86.0) because / is true division; in Python 2 you would write float(total_grades) / len(grades) to avoid integer division.
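If you are running Python 3, a few lines make the distinction concrete (using 5 and 2 here so that the true average, 3.5, is not a whole number):

a, b = 5, 2
print((a + b) / 2)    # 3.5 -> true division: / always returns a float in Python 3
print((a + b) // 2)   # 3   -> floor division: the old Python 2 behavior for two ints
print(float(a) / 2)   # 2.5 -> explicit conversion with float() works in either version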
Key Takeaways:
• Python 2's integer division behavior (and Python 3's // operator) is often unexpected for beginners.
• To guarantee a decimal result, use / in Python 3, or make sure at least one operand in the division is a float in Python 2.
• The float() function can be used to explicitly convert an integer to a float: float(5) == 5.0
By understanding this behavior, you can avoid unexpected results in your Python programs and write code that produces the accurate and expected output. | {"url":"https://laganvalleydup.co.uk/post/does-python-not-automatically-convert-integer-types-when","timestamp":"2024-11-14T23:42:09Z","content_type":"text/html","content_length":"81763","record_id":"<urn:uuid:a9851160-5352-4336-9745-0418846a3b67>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00347.warc.gz"} |
6th Grade Word Problems and Answers
In 6th grade, students learn to solve real-world problems using ratios, proportions, equations and statistics. These skills are used frequently in daily activities, like calculating gas mileage or
measuring ingredients for cooking. Help your child make the connection between the classroom and real life by writing your own word problems at home. Read on to learn how.
How to Create 6th Grade Word Problems
One of the more challenging aspects of 6th grade math is solving word problems. They integrate math concepts with real-world applications and can be tough because students must select the appropriate
operations and write accurate expressions. Word problems also appear on most standardized tests.
Though your child's teacher is a valuable resource for extra practice, you can create your own word problems at home. There are just two things to remember. First, use the topics that the teacher
covers in class. Your child will benefit the most from the extra practice if it supports what he is learning in class. In 6th grade, this can include ratios, expressions and statistics.
Second, use topics that appeal to your 6th grader. This means that the subject of your word problems should incorporate aspects from your son's life that he is passionate about, such as sports
statistics or weather probability. If your daughter likes animals and nature, give her a few problems in which she uses ratios to compare the speeds of different animals.
Problems by Concept
Ratios and Proportions
1. For every hour that Nate played video games, Ted played three hours. What is the ratio of hours that Nate and Ted play games?
Because Nate plays one hour for every three hours that Ted plays, the ratio is 1 to 3. Ratios can also be written as a fraction, 1/3, or using a colon, 1:3.
2. A lion runs 50 miles per hour and a cheetah runs 75 miles per hour. Write the ratio comparing the lion's speed to the cheetah's speed.
The ratio is 50/75, which simplifies to 2/3 (or 2:3).
3. Lindsey's mom had a garage sale. After the first three hours, she made $100. At this rate, how much money will be made in nine hours?
This problem requires a proportion to solve. The proportion should compare the two sets of ratios: 100/3 = x/9. To solve, cross-multiply so that 3x = 900, and then divide both sides by three. After
nine hours, Lindsey's mom should make $300.
Expressions and Equations
1. Anne has two dogs, Rex and Evie. Rex is seven years old, and he's one year older than twice Evie's age. How old is Evie?
An equation is needed to solve this problem. Begin by writing out what we know. In this case, Evie's age is the variable x. We know that Rex is seven years old, so 7 = 1 + 2x. To solve, isolate the
variable by subtracting one from both sides: 6 = 2x. Then, divide both sides by two. As a result, we know that Evie is three years old.
2. Randy ate twice the number of pizza slices that Samantha ate, and Samantha ate one more slice than Madeline. If Madeline ate two slices, how many slices were there in all?
It is helpful to begin word problems by listing out the known facts. We know that Madeline ate two slices and Samantha ate three because she only ate one more than Madeline did. We can multiply 2 x 3
to figure out how many slices Randy ate because he ate twice the amount that Samantha ate. Add up the totals: 2 + 3 + 6 = 11, so there were 11 slices in all.
Probability and Statistics
When practicing probability at home, it can be fun to use hands-on activities. For instance, give your child a bag of colored candy that contains five red candies, three orange candies, one green
candy and one blue candy. Then, ask your child to calculate the probability that she will pick out the blue candy. In this case, it would be 1/10, or 10%. This can similarly be done with other props,
like a coin or a deck of cards.
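For parents who like to double-check the answer key, the examples above can be verified with a few lines of Python (purely optional, and not something a 6th grader needs):

# Garage sale proportion: $100 in 3 hours, so how much in 9 hours?
print(100 * 9 / 3)                  # 300.0

# Dog ages: Rex is 7, which is one year more than twice Evie's age
print((7 - 1) / 2)                  # 3.0, so Evie is three

# Pizza slices: Madeline ate 2, Samantha one more, Randy twice Samantha
madeline = 2
samantha = madeline + 1
randy = 2 * samantha
print(madeline + samantha + randy)  # 11

# Candy probability: 1 blue candy out of 5 + 3 + 1 + 1 = 10 candies
print(1 / (5 + 3 + 1 + 1))          # 0.1, which is 10%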
Stepper Motor Steps per Revolution Calculator
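Before the background below, here are the calculations this page keeps referring to, collected into a short Python sketch. The formulas match the ones worked through in the text; the example numbers (a 200-step motor, 1/4 or 1/16 microstepping, a lead screw that moves 0.5 inch per turn) are illustrative defaults rather than values from any particular datasheet.

def steps_per_revolution(step_angle_degrees, microstepping=1):
    # 360 degrees divided by the step angle, times the driver's microstepping factor
    return 360.0 / step_angle_degrees * microstepping

def steps_per_unit(motor_steps, microstepping, travel_per_revolution):
    # steps per inch (or per mm): steps in one turn divided by how far the axis moves per turn
    return motor_steps * microstepping / travel_per_revolution

def steps_per_second(rpm, steps_per_rev):
    # pulse rate needed for a given shaft speed
    return rpm * steps_per_rev / 60.0

print(steps_per_revolution(1.8))        # 200.0 full steps for a 1.8-degree motor
print(steps_per_revolution(1.8, 16))    # 3200.0 with 1/16 microstepping
print(steps_per_unit(200, 4, 0.5))      # 1600.0 steps per inch for a 0.5 in/rev lead screw
print(steps_per_second(60, 200))        # 200.0 steps per second at 60 RPM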
A stepper motor (also called a step motor or stepping motor) is a brushless DC electric motor that divides a full rotation into a number of equal steps. It is a popular type of synchronous motor because it allows precise, repeatable movements, and it produces a lot of torque for its size, which is why it has found its way into so many industrial and hobby applications: CNC machines, 3D printers, plotters such as the Egg-Bot, and telescope mounts driven from an Arduino or Raspberry Pi. Unlike an ordinary DC motor (picture the motor on an RC airplane, which simply spins very fast in one direction or the other), a stepper moves in discrete steps, or fractions of a revolution. The motor must be sent a separate pulse for each step, and because the steps are discrete the rotation is not perfectly smooth; the slower the rotation, the less smooth it looks, since each individual step is relatively large.

The step angle is the angle the rotor moves from one step to the next, and the number of steps per revolution follows directly from it: steps per revolution = 360° / step angle. A standard hybrid motor, for example a NEMA 17 with its 1.7-inch-square footprint and 5 mm shaft, typically run at around 12 V, has a 1.8° step angle and therefore 200 full steps per revolution. A 0.9° motor gives 360°/0.9° = 400 full steps (useful if you want to draw or position at a higher resolution), and some precision motors manage 1000 steps per revolution with a 0.36° step angle. Permanent-magnet "can-stack" motors are made with far fewer steps, typically 12, 24, 72, 144, 180 or 200 steps per revolution, corresponding to step angles of 30°, 15°, 5°, 2.5°, 2° or 1.8°; a rotor with 12 poles, for instance, gives 24 steps per revolution and a 15° step angle. Working the formula both ways: a motor with a 7.5° step angle makes 360/7.5 = 48 steps per revolution, a 5° motor makes 72, and a motor that takes 90 steps to complete one revolution has a step angle of 360/90 = 4°. One common source of confusion is the small four-phase unipolar permanent-magnet motors often driven from an Arduino (the STEPMOT-1 is one example): the datasheet quotes the step angle for the 8-step half-stepping sequence (5.625°), so in the simpler 4-step full-stepping sequence the stride angle doubles to 11.25° (5.625 × 2).

Full-stepping means the drive turns the motor one full physical step (1/200 of a revolution for a 200-step motor) per indexer pulse, with one phase on at a time. Half-stepping makes each step half as big, so the motor needs twice as many steps to go one revolution. Microstepping drivers go further still: set the microstepping to 1/4 on the driver and a motor with 200 natural steps per revolution (1.8° per step) gives 200 × 4 = 800 steps per revolution, while an A4988 driver set to 1/16 microstepping turns the same motor into 3200 steps per shaft revolution. More steps per revolution means the motor can move in smaller increments and control its position more precisely, but there are trade-offs: you lose a good deal of torque the finer you microstep, and motors with more full steps per revolution generally rotate more slowly and, size for size, with less torque than motors with fewer steps.

Why does the number matter so much? Because your machine controller receives move commands in units of distance and has to convert them into step counts, and "steps per motor revolution", the number of steps it takes the motor to make one full 360-degree turn, is the one variable in that conversion fixed by your motor. GRBL, for example, uses a steps-per-millimetre setting to do the conversion: steps per unit = (motor full steps × microstepping factor) ÷ travel per motor revolution. For a belt drive, the travel per revolution is the belt pitch (e.g. 2 mm) multiplied by the number of teeth on the pulley attached to the motor shaft. For a screw drive it is the screw's lead: a 1/2", 5-start, 10 TPI lead screw advances the axis 0.5 inches per motor revolution, so with 200 steps × 4 microsteps = 800 steps per revolution the setting is 800 / 0.5 = 1600 steps per inch. (A "1204" ball screw, which typically denotes 12 mm diameter and 4 mm lead, moves the axis 4 mm per revolution, so the same arithmetic applies in millimetres.) Get this value wrong and the distances come out wrong: if the value entered is smaller than it should be, meaning fewer steps per unit of distance, the controller will command too few steps and the axis will stop short of the desired position.

Speed converts the same way: steps per second = RPM × steps per revolution / 60. A 200-step motor pulsed at 200 steps per second turns at 60 RPM, provided the mechanical load does not cause it to slip and miss steps. If you require a particular speed, such as 43 RPM, divide 43 by 60 and multiply by 200 to get about 143.3 steps per second. Stepper motors are normally used for positioning and are not known for their speed; the various "stepper motor calculator" tools found online and as mobile apps package these same formulas and typically also estimate the maximum speed a motor can reach, which is limited by the time it takes the coil current to rise to its holding value and then decay again as the polarity flips, along with the minimum time per step and the maximum power dissipation. (Servo motors are a different story: there the positioning resolution is a function of how many counts per revolution the encoder provides.)

Finally, if you drive the motor from an Arduino with the standard Stepper library, the steps-per-revolution figure is the very first thing you set:

#include <Stepper.h>
const int stepsPerRevolution = 200; // change this to fit the number of steps per revolution for your motor
Stepper myStepper(stepsPerRevolution, 8, 9, 10, 11); // initialize the stepper library on pins 8 through 11

The stepper motor can be controlled with or without feedback; most hobby machines run it open-loop, which is exactly why getting the steps-per-revolution and steps-per-unit numbers right is so important.
Convert kilometers to miles ( km to mi )
Last Updated: 2024-11-06 07:39:46 , Total Usage: 877002
Converting kilometers to miles is a common task in understanding the relationship between the metric and imperial systems of measurement.
Historical Background
The kilometer, a unit of length in the metric system, is used worldwide and represents one thousand meters. The mile, part of the imperial system, is more commonly used in the United States and the
United Kingdom. Its origin dates back to Roman times and was defined as a thousand paces.
Calculation Formula
The conversion formula from kilometers to miles is:
\[ \text{Length in miles} = \text{Length in kilometers} \times 0.621371 \]
This factor, 0.621371, comes from the definition that one mile is approximately equal to 1.60934 kilometers.
Example Calculation
To convert, for example, 5 kilometers to miles:
\[ 5 \, \text{km} \times 0.621371 \, \text{mi/km} = 3.106855 \, \text{mi} \]
So, 5 kilometers is approximately 3.11 miles.
Usage and Importance
This conversion is crucial for understanding and communicating distances in regions where different systems are used. It's particularly relevant in international travel, athletics (such as road
races), and global communications.
Common FAQs
Q: Why is there a difference between the metric and imperial systems? A: These systems evolved separately, with the metric system designed for universal use and simplicity, while the imperial system
has historical roots specific to regions like the UK and USA.
Q: How accurate is the conversion factor? A: The conversion factor is quite precise for most practical purposes. For highly precise scientific calculations, additional decimal places may be used.
Q: Are there easy ways to estimate kilometers to miles in my head? A: A simple estimation is to use the factor of 0.6. For example, 10 kilometers can be roughly estimated as 6 miles (10 × 0.6).
In summary, converting kilometers to miles is an essential skill in a world where different measurement systems coexist. It involves a simple multiplication by the conversion factor 0.621371. This
conversion is not just a mathematical exercise but a practical necessity for international travel, sports, and various global activities. | {"url":"https://calculator.fans/en/tool/km-to-mi-convertor.html","timestamp":"2024-11-06T17:16:32Z","content_type":"text/html","content_length":"12136","record_id":"<urn:uuid:cad4ee77-852a-4e32-81f9-68a9e026465a>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00657.warc.gz"} |
Minimum separation between incoming proton and alpha particle
• Thread starter issacnewton
• Start date
In summary: No, because the particles are not at their minimum separation.Think about momentum.Let proton have velocity of ##v_1## and let alpha particle have velocity of ##v_2## when they are near.
Now, initially proton has velocity of ##v##. Now, we can think of an alpha particle as having almost 4 times mass of a proton. So conservation of linear momentum gives us$$mv = m v_1 + 4m v_2 $$Also,
since this can be treated as the perfectly inelastic...No, because the particles are not at their minimum separation.
Homework Statement
A proton of mass ##m## and charge ##e## is projected from a very large distance towards an ##\alpha## particle with velocity ##v##. Initially, ##\alpha## particle is at rest, but it is free to
move. If gravity is neglected, then the minimum separation along the straight line of their motion will be
A. ##e^2/4\pi\varepsilon_0 m v^2##
B. ##5e^2/4\pi\varepsilon_0 m v^2##
C. ##2e^2/4\pi\varepsilon_0 m v^2##
D. ##4e^2/4\pi\varepsilon_0 m v^2##
Relevant Equations
Coulomb's law of interaction between charged particles
Proton is going towards the ##\alpha## particle. So, I am thinking of using the conservation of energy as the initial kinetic energy of the proton is known and initial interaction potential energy is
zero. But, we don't know the kinetic energies of proton and ##\alpha## particle when they are at their minimum separation. So, what could be possible approach here ?
IssacNewton said:
Homework Statement: A proton of mass ##m## and charge ##e## is projected from a very large distance towards an ##\alpha## particle with velocity ##v##. Initially, ##\alpha## particle is at rest,
but it is free to move. If gravity is neglected, then the minimum separation along the straight line of their motion will be
A. ##e^2/4\pi\varepsilon_0 m v^2##
B. ##5e^2/4\pi\varepsilon_0 m v^2##
C. ##2e^2/4\pi\varepsilon_0 m v^2##
D. ##4e^2/4\pi\varepsilon_0 m v^2##
Homework Equations: Coulomb's law of interaction between charged particles
Proton is going towards the ##\alpha## particle. So, I am thinking of using the conservation of energy as the initial kinetic energy of the proton is known and initial interaction potential
energy is zero. But, we don't know the kinetic energies of proton and ##\alpha## particle when they are at their minimum separation. So, what could be possible approach here ?
One approach is to consider a frame of reference in which the kinetic energy at the point of minimal separation is known.
But, we don't know the kinetic energy at the point of minimal separation.
IssacNewton said:
But, we don't know the kinetic energy at the point of minimal separation.
Can you think of a reference frame in which you would know this?
Would that be a center of mass frame ? And would the kinetic energy at the point of minimal separation be zero in this frame ?
IssacNewton said:
Would that be a center of mass frame ? And would the kinetic energy at the point of minimal separation be zero in this frame ?
Yes and yes It's often a useful trick in collision problems.
IssacNewton said:
But, we don't know the kinetic energy at the point of minimal separation.
This process is a perfectly elastic collision*, so both conservation of energy and conservation of momentum apply. So once you pick a frame of reference, you know the
energy for all time, because the total energy is a constant.
The tricky bit (as
alludes to) is choosing a good frame of reference. Kinetic energy is relative (even under Galilean transformations). So you'll first need to pick a frame of reference and stick with it, and only then
calculate kinetic energies.
What can you say about the center of mass of this system?
*(Although the process doesn't happen instantly, and there is always a finite separation between the particles, the process is technically a collision.)
[Edit: already beaten to the punch.]
Well, if the kinetic energy in the CM frame is zero, we must have both particles stationary. So, it doesn't make sense.
IssacNewton said:
Well, if the kinetic energy in the CM frame is zero, we must have both particles stationary. So, it doesn't make sense.
Why does that not make sense?
Since proton is approaching from an infinite distance, I would guess that in CM frame, both the proton and alpha particle have some velocities. So, total kinetic energy can not be zero
IssacNewton said:
Since proton is approaching from an infinite distance, I would guess that in CM frame, both the proton and alpha particle have some velocities. So, total kinetic energy can not be zero
Kinetic energy is not the only form energy can take!
I know that there will be interaction potential energy. But at infinite distances, there is no potential energy, there is only kinetic energy present. So initially, total kinetic energy can not be
IssacNewton said:
I know that there will be interaction potential energy. But at infinite distances, there is no potential energy, there is only kinetic energy present. So initially, total kinetic energy can not
be zero
Of course not initially. The critical moment is when there is minimum separation.
But I am not convinced why total kinetic energy would be zero at the minimum separation ? Both could be moving. Proton will be slowing down and alpha particle will be speeding up. Can we analytically
prove that total kinetic energy is zero at the minimum separation ?
IssacNewton said:
But I am not convinced why total kinetic energy would be zero at the minimum separation ? Both could be moving. Proton will be slowing down and alpha particle will be speeding up. Can we
analytically prove that total kinetic energy is zero at the minimum separation ?
Think about momentum.
Let proton have velocity of ##v_1## and let alpha particle have velocity of ##v_2## when they are near. Now, initially proton has velocity of ##v##. Now, we can think of an alpha particle as having
almost 4 times mass of a proton. So conservation of linear momentum gives us
$$mv = m v_1 + 4m v_2 $$
Also, since this can be treated as the perfectly inelastic collision, the relative velocity is same except for the signs.
$$v - 0 = -(v_1 - v_2)$$
Well, solving this together, we reach the conclusion that ##2 v_1 = -3v_2##. So, how does it follow from here that total kinetic energy is zero when they are near ?
IssacNewton said:
Let proton have velocity of ##v_1## and let alpha particle have velocity of ##v_2## when they are near. Now, initially proton has velocity of ##v##. Now, we can think of an alpha particle as
having almost 4 times mass of a proton. So conservation of linear momentum gives us
$$mv = m v_1 + 4m v_2 $$
Also, since this can be treated as the perfectly inelastic collision, the relative velocity is same except for the signs.
$$v - 0 = -(v_1 - v_2)$$
Well, solving this together, we reach the conclusion that ##2 v_1 = -3v_2##. So, how does it follow from here that total kinetic energy is zero when they are near ?
What about the following argument:
In the CoM frame, total momentum is always zero. The two particles must always have equal and opposite momentum. Before the collision, the two particles are moving towards each other with equal and
opposite momentum. After the collision, therefore, they must be moving away from each other with equal and opposite momentum.
At some instant, therefore, ...
Sorry for the late reply, was busy with work... Ok, so it becomes must easier in CoM frame. So, there is some moment, when both the momenta are zero. Hence both the kinetic energies are zero and the
total initial kinetic energy has been converted into the potential energy of the interaction. Now let ##v_1## be the proton velocity in CoM frame and ##v_2## be the velocity of the alpha particle in
the CoM frame. Now, ##v## is the initial proton velocity in lab frame and alpha particle is at rest initially. So, velocity of the CoM would be
$$ v_{CM} = \frac{mv + 0 }{m+4m} = \frac{v}{5} $$
Now, we can write the equations relating different velocities. ## v = \frac{v}{5} + v_1## and ## 0 = \frac{v}{5} + v_2##. Using this, we get ## v_1 = \frac{4v}{5}## and ##v_2 = \frac{-v}{5}##. Now
the initial kinetic energy would be converted into the electrical potential energy at the minimum separation. So, we can write the equation
$$ \frac{1}{2} m v_1^2 + \frac{1}{2} 4m v_2^2 = \frac{(e)(4e)}{4\pi\varepsilon_0 x} $$
where ##x ## is the minimum separation of two particles when they both are at rest in CoM frame. Now, plugging the values of ##v_1## and ##v_2## and solving for ##x##, we get
$$ x = \frac{5 e^2}{2\pi\varepsilon_0 m v^2} $$
So, this does not match with any of the options given. So, what am I doing wrong ?
Science Advisor
Homework Helper
What is the charge of the alpha??
Thanks hutch. Alpha has charge ##2e##, my bad. So, that makes it
$$ x = \frac{5e^2}{\pi\varepsilon_0 m v^2} $$
But the problem assumes that the mass of an alpha particle is exactly 4 times the mass of the proton. Which is not true since proton and neutron do not have the same mass. So, person stating the
problem is not careful here. We need to make some approximations here.
Thanks !
Science Advisor
Homework Helper
Its past my "good brain" hours but didn't you do that backwards?
I just evaluated for ##x## again after changing the charge of the alpha particle
IssacNewton said:
So, we can write the equation
$$ \frac{1}{2} m v_1^2 + \frac{1}{2} 4m v_2^2 = \frac{(e)(4e)}{4\pi\varepsilon_0 x} $$
where ##x ## is the minimum separation of two particles when they both are at rest in CoM frame. Now, plugging the values of ##v_1## and ##v_2## and solving for ##x##, we get
$$ x = \frac{5 e^2}{2\pi\varepsilon_0 m v^2} $$
So, this does not match with any of the options given. So, what am I doing wrong ?
IssacNewton said:
Thanks hutch. Alpha has charge ##2e##, my bad. So, that makes it
$$ x = \frac{5e^2}{\pi\varepsilon_0 m v^2} $$
mentioned, you've bumped the charge on the alpha particle up to ##8e## now.
IssacNewton said:
But the problem assumes that the mass of an alpha particle is exactly 4 times the mass of the proton. Which is not true since proton and neutron do not have the same mass. So, person stating the
problem is not careful here. We need to make some approximations here.
Thanks !
Yes, that was odd. But, it was clear from the possible answers that they must be assuming that. Otherwise, you would get a non-integer factor in there.
IssacNewton said:
So, that makes it
$$ x = \frac{5e^2}{\pi\varepsilon_0 m v^2} $$
IssacNewton said:
I just evaluated for ##x## again after changing the charge of the alpha particle
I believe what
are getting at is all of the given answers (in the original post) have a "4" as part of the denominator. You seem to have accidentally omitted that "4" in your answer.
I did not get any email for rest of the replies,. I just checked all the new messages here. Sorry for late reply. I think I made an algebraic error. I now do get
$$ x = \frac{5e^2}{4 \pi \varepsilon_0 m v^2} $$
which is option B. Thanks all.
FAQ: Minimum separation between incoming proton and alpha particle
1. What is the minimum distance between an incoming proton and alpha particle?
The minimum separation between an incoming proton and alpha particle is determined by the strong nuclear force, which is approximately 10^-15 meters or 1 femtometer.
2. How does the minimum separation between an incoming proton and alpha particle affect particle collisions?
The minimum separation between an incoming proton and alpha particle is crucial in determining the likelihood of a collision between the two particles. If the particles are within the minimum
separation distance, they will experience a strong nuclear force and have a higher chance of colliding.
3. Can the minimum separation between an incoming proton and alpha particle be altered?
The minimum separation between an incoming proton and alpha particle is a fundamental constant determined by the properties of the particles and the strong nuclear force. It cannot be altered or
4. How is the minimum separation between an incoming proton and alpha particle calculated?
The minimum separation between an incoming proton and alpha particle is calculated using the strong nuclear force equation, which takes into account the masses and charges of the particles and the
distance between them.
5. What happens if the minimum separation between an incoming proton and alpha particle is not met?
If the minimum separation between an incoming proton and alpha particle is not met, the particles will not experience a strong nuclear force and will not collide. This can affect the outcome of
particle interactions and experiments in the field of nuclear physics. | {"url":"https://www.physicsforums.com/threads/minimum-separation-between-incoming-proton-and-alpha-particle.977470/","timestamp":"2024-11-08T04:40:35Z","content_type":"text/html","content_length":"208127","record_id":"<urn:uuid:874be494-2cbe-41c3-a4c1-0ff7833c198a>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00009.warc.gz"} |
This Blog is Systematic
Since I was a young lad there has been an ongoing fight in Financial Academia 'n' Industry between two opposing camps:
• In the red corner are the Utilitarians. The people of classical finance, of efficient frontiers, of optimising for maximum return at some level of maximum risk.
• In the blue corner are the Kellyites. Worshipping at the feet of John Kelly and Ed Thorpe they have only one commandment in their holy book: Thou shalt maximise the expectation of log utility of
This post is sort of about that battle, but more generally it's about two different forms of uncertainty for which humans have varying degrees of stomach for, and how they should be accounted for
when deciding how much volatility your trading or investment portfolio should have: "Risk" (which we can think of as known unknowns, or at least the amount of volatility expected from a risk & return
model which is calibrated on past data) and "Uncertainty" (which we can think of as unknown unknowns, or to be more precise the unknowability of our risk & return model).
The Kellyites deny the existence of "risk appetite" (or at least they deny it's importance), whereas the Utilatarians embrace it. More seriously both camps seriously underestimate the importance of
uncertainty; which will be more the focus of this post.
This might seem somewhat esoteric but in laymans term this post is about answering an extremely critical question, which can be phrased in several equivalent ways:
• What risk target should I have?
• How much leverage should I use?
• How much of my capital should I bet on a given position?
This post was inspired by a
twitter thread
on this subject and I am very grateful to
Rob Hillman of Neuron Advisors
for pointing me towards this. I've blogged about this battle before,
(where I essentially address one interesting criticism of Kelly) and I've also talked about Kelly generally
If you're unfamiliar with arithmetic and geometric returns it's probably worth rereading the first part of this post
, otherwise you can ignore these other posts (for now!).
Classic Utilitarian portfolio optimisation
To make live easier I'm going to consider portfolios of a single asset. The main difference I want to highlight here is the level of leverage / risk that comes out of the two alternatives, rather
than the composition of the portfolio.
I'm sure the readers of this blog don't need reminding of this but basically Utilitarians tend to do portfolio optimisation like this: specifying the investors utility function as return minus some
penalty for variance. Which for a single asset with leverage, if we assume the risk free rate is zero or that the return is specified as an excess return (it doesn't matter for the purposes of this
post) becomes this:
Maximise f.E(r) - b*[f.E(s)]^2
Where f is the leverage factor (f=1 incidentally means fully invested, f=2 means 100% leverage, and so on), r is the expected return, E is the expectation operator, risk s is measured as the standard
deviation of returns on the unleveraged asset, b is the coefficient of risk aversion (you'll often see 1/2 here, but like, whatever). This is a quadratic utility function. In this model risk
tolerance is an input, here defined as a coefficient of aversion.
Of course we could also use a different utility function, like one which cares about higher moments, but that will probably make the maths harder and definitely mean we have to somehow define further
coefficients establishing an investors pain tolerance for skew and kurtosis.
This specification isn't so much in fashion in industry; it's hard enough getting your investors to tell you what their risk appetite is (few people intuitively understand what 150% standard
deviation a year feels like, unless they're crypto investors or have money with
Mr C. Odey
). Imagine trying to get them to tell you what their coefficient of risk aversion is. Easier to say "our fund targets 15% a year volatility" which most people will at least pretend to understand. So
we use this version instead:
Maximise portfolio returns = weights.E(asset returns)
Subject to: portfolio risk = function of weights and E(asset covariance)<= maximum risk
Which for one asset, with no risk free rate is:
Maximise f.E(r)
Subject to f.E(s)<=s_max
Where s_max is some exogenous maximum risk tolerance specified by the investor
Importantly (a) maximum risk depends on the individual investors utility function (which is assumed to be monotonically increasing in returns up to some maximum risk at which point it drops to zero -
yeah I know, weird) and (b) the return here is arithmetic return (c) we only care about the first two moments since risk is measured using the standard deviation. Again risk is an input into this
model (as a tolerance limit this time, rather than coefficient), and the optimal leverage comes out.
Under Kelly we choose to find the portfolio which maximises the expectation of the log of final wealth. For Gaussian returns (so again, not caring about the 3rd or higher moments which I won't do
throughout this post) it can be shown that the optimal leverage factor f* is:
f* = (r - r_f) / s^2
(If you aren't in i.i.d. world then you can mess around with variations that account for higher moments, or just do what I do - bootstrap)
Where r is the expected
portfolio mean, r_f is the risk free rate and s is the standard deviation of portfolio returns without any leverage (and with expectations operators implicit - this is important!). f=1 incidentally
means fully invested, f=2 means 100% leverage, and so on. Noting that the risk of a portfolio with leverage will be f*s this means we can solve for the target risk s*:
s* = f*. s = (r - r_f) / s
Notice that this thing on the right is now the Expected
Sharpe Ratio
. This is my favourite financial formula of all time: optimal Kelly risk target = Expected Sharpe Ratio. It has the purity of E=mc^2. But I digress. Let's take out the risk free rate for consistency:
f* = r / s^2
Importantly for the battle in this world we don't specify any risk tolerance, or coefficient of risk aversion, or utility function. Assuming an investor wants to end up with the highest expected log
utility of final wealth (or as I said
, the highest median expectation of final wealth) they should just use Kelly and be done with it.
The battle
Let's recap:
Kelly: f* = r / s^2 s* = r / s Utilitarian: f* = s_max / s s* = s_max
With no risk free rate; f*= optimal leverage, s* = optimal risk, s_max is maximum risk (both standard deviations) and r = expected return.
Importantly the two formulae don't usually give the same answers (unless r/s = s_max; i.e. the Sharpe Ratio is equal to the risk tolerance), and the relative answer depends on the Sharpe Ratio you're
using versus typical risk appetite.
CASE ONE: Kelly leverage< Utilitarian leverage
If you're investing in a long only asset allocation portfolio then a conservative forward looking estimate of Sharpe Ratio (like those in
my second book
) would be about 0.20. If we assume expected return 2% and standard deviation 10% then the optimal Kelly risk target will be 20%, implying a leverage factor of 2. But that only gives 4% return! If
the utilitarian investor is rather gung ho and has a risk appetite of 30% then the optimal leverage for them would be 3.
CASE TWO: Kelly leverage > Utilitarian leverage
If you're running a sophisticated quant fund with a lot of diversification and a relatively short holding period then a Sharpe Ratio of 1.0 may seem reasonable. For example if a stat-arb equity
neutral portfolio has expected return 5% and standard deviation 5% (assuming risk free of zero) then the optimal Kelly risk target will be 100%, which implies a leverage factor of 20 (!). But a
utilitarian investor may only have a risk appetite of 15%, in which case the optimal leverage will be 3. And indeed most equity neutral funds do run at leverage of about 3.
For case one I refer you back to my previous post, in which I said that nobody, no matter what their risk appetite, should invest more than Kelly if you believe my logic that it is the
median portfolio value that matters rather than the mean. Essentially where the utility optimisation gives you a higher leverage than Kelly you should ignore it and go with Kelly.
Case two is a little more complicated, and the solution quoted in a thousand websites and papers is "Most investors find the risk of full Kelly to be too high - we recommend they use half Kelly
instead". Frankly this is a bit of a cop-out by the Kelly people, which admits to the existence of risk appetite.
The compromise
I personally believe in risk appetite. I believe that people don't like lumpy returns, and some are more scared of them than others. Nobody has the self discipline to invest for 40 years and
completely ignore their portfolio value changes in the interim.
But I also believe that using more than full Kelly is dangerous, insane, and wrong.
So this means the solution is easy. Your risk target should be:
s* = Minimum(r / s , s_max)
Where the first term is of course the Kelly optimum without the risk free, and the second is the risk tolerance beloved of utilitarian investors. And your leverage should be:
f* = Minimum(r / s^2, s_max / s)
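To make the two rules and the compromise concrete, here is a minimal Python sketch (my own illustration, not from the original post; the function names are just labels). It assumes Gaussian i.i.d. returns, a zero risk-free rate, and - crucially - that r and s are known, which is exactly the assumption questioned in the next section. The numbers are the ones from the two cases above.

```python
def kelly_leverage(r, s):
    """Full Kelly: f* = r / s^2, so the implied risk target is s* = f* . s = r / s."""
    return r / s ** 2

def utilitarian_leverage(s_max, s):
    """Scale up until portfolio volatility hits the exogenous risk appetite s_max."""
    return s_max / s

def compromise_leverage(r, s, s_max):
    """Never exceed full Kelly, but also respect the investor's risk appetite."""
    return min(kelly_leverage(r, s), utilitarian_leverage(s_max, s))

# Case one: long-only asset allocation, r = 2%, s = 10%, risk appetite 30%
print(kelly_leverage(0.02, 0.10))              # 2.0  (Kelly risk target 20%)
print(utilitarian_leverage(0.30, 0.10))        # 3.0
print(compromise_leverage(0.02, 0.10, 0.30))   # 2.0  -> Kelly is the binding constraint

# Case two: diversified stat-arb fund, r = 5%, s = 5%, risk appetite 15%
print(kelly_leverage(0.05, 0.05))              # 20.0 (!)
print(compromise_leverage(0.05, 0.05, 0.15))   # 3.0  -> risk appetite is the binding constraint
```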
What we know, and what we don't know
In case you haven't noticed I find this battle a little tiresome (hence my pretty superficial attempt at 'solving' it), and mainly because it completely ignores something incredibly important. We
have two guys in the corner of a room arguing about whether Margin Call or The Big Short
is the best film about the 2008 financial crisis (a pointless argument, because both v. good films), whilst a giant elephant is in the corner of the room. Running towards them, about to flatten them.
Because they haven't seen it. They're too busy arguing. Have I made the point sufficiently, do you think?
What is the elephant in this particular metaphorical room? It's this. We don't know r. Or s. Or the Sharpe Ratio, r/s
(ignoring risk free of course). And without knowing these figures, we don't have a hope in hell of finding the right leverage factor.
We have to come up with a model for them, based on historical data, because that's what quant finance people do. Which means there is the risk that:
• It's the wrong model (non Gaussian returns, jumps, autocorrelation...)
• The parameters of the model aren't stable
• The parameters of the model can't be accurately measured
This triumvirate of problems should be recognisable to people familiar with my work, and you already know that I feel it is most productive to focus on the third problem for which we have relatively
straightforward ways of quantifying our difficulties (using the classical statistical workhorse of the sampling distribution).
Parameter uncertainty (and the other issues) isn't such an issue for standard deviation; we are relatively good in finance at predicting risk using past data (R^2 of regressions of monthly standard
deviation on the previous month is around 0.6, compared to about 0.01 for means and Sharpe Ratios). So let's pretend that we know the standard deviation.
However the Sharpe Ratio is the key factor in working out the optimal leverage and risk target for Kelly (which even for Utilitarians should act as a ceiling on your aspirations). The sampling
distribution of the Sharpe Ratio estimate is wide, so the estimate is highly uncertain.
The effect of parameter uncertainty on Sharpe Ratio estimates
There is an
easy closed form formula
for the variance of the Sharpe Ratio estimate under i.i.d returns given N returns:
w = (1+ 0.5SR^2)/N
(If you aren't in i.i.d. world then you can mess around with formulas that account for higher moments, or just do what I do - bootstrap)
We need an example. Let's just pick an annual Sharpe Ratio out of the air: 0.5. And assume the standard deviation is 10%. And 10 years of monthly data. But if you don't like these figures feel free
to play with
the example here in google docs land
(don't ask for edit access - make your own copy).
Here is the distribution of our Sharpe Ratio estimate:
The concept of "Uncertainty appetite"
Now let's take the distribution of Sharpe Ratio estimate, and map it to the appropriate Kelly risk target:
Yeah, of course it's the same plot, since s* = r/s = SR. You should however mentally block off the negative part of the x-axis, since we wouldn't bother running the strategy here and negative
standard deviation is meaningless. And here is the plot for optimal leverage (r/s^2):
So to summarise the mean (and median, as these things are Gaussian regardless of the underlying return series) optimal risk target is 50% and the mean optimal leverage is 5. Negative leverage sort of
makes sense in this plot, since if an asset was expected to lose money we'd short it.
At this point the Kellyites would say "So use leverage of 5 or if you're some kind of wuss use half Kelly which is 2.5" and the Utilitarians might sniff and say "But my maximum risk appetite is 15%
so I'm going to use leverage of 1.5". Since the optimal standard deviation of 50% is relatively high it's very likely that we'd get a conflict between the two approaches here.
But we're going to go beyond that, and note that there is actually a lot of uncertainty about what the optimal leverage and risk target should be. To address this let's introduce the concept of
uncertainty appetite
. This is how comfortable investors are with not knowing exactly what their optimal leverage should be. It is analogous to the more well known
risk appetite
, which is how comfortable investors are with lumpy returns.
Someone who is
uncertainty blind
would happily use the median points from the above distributions - they'd use full Kelly, assuming of course that their risk appetite wasn't constraining them to a lower figure. And someone weird who is
uncertainty loving
might gamble and assume that the true SR lies somewhere to the right of the median, and use a higher leverage and risk target than full Kelly.
But most people will have a
coefficient of uncertainty aversion
(see what I did there?). They'll be uncomfortable with full Kelly, knowing that there is a 50% chance that they will actually be over gearing. We have to specify a confidence interval that we'd use
to derive the optimal leverage, with uncertainty aversion factored in.
So for example suppose you want to be 75% sure that you're not over-geared. Then you'd take the 25th percentile point off the above distributions: which gives you an expected Sharpe Ratio of about
0.29, a risk target of 29% and optimal leverage of 2.9.
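Here is a minimal Python sketch of that calculation (my own illustration, standard library only). It assumes i.i.d. Gaussian returns so the closed-form variance formula above applies, uses the example's 0.5 annual Sharpe Ratio, 10% volatility and 10 years of monthly data, and treats the chosen percentile as the uncertainty-appetite confidence level:

```python
from statistics import NormalDist

def uncertainty_adjusted_kelly(sr_annual=0.5, vol=0.10, years=10, percentile=0.25):
    """Kelly risk target and leverage taken at a chosen percentile of the
    Sharpe Ratio sampling distribution, rather than at its mean."""
    n_months = years * 12
    sr_monthly = sr_annual / 12 ** 0.5
    # standard error of the monthly SR estimate: sqrt((1 + 0.5 SR^2) / N)
    se_monthly = ((1 + 0.5 * sr_monthly ** 2) / n_months) ** 0.5
    se_annual = se_monthly * 12 ** 0.5
    sr_at_pct = sr_annual + NormalDist().inv_cdf(percentile) * se_annual
    risk_target = max(sr_at_pct, 0.0)       # a negative quantile means: don't invest at all
    return risk_target, risk_target / vol   # (optimal risk, optimal leverage)

print(uncertainty_adjusted_kelly(percentile=0.25))   # ~(0.29, 2.9), as in the text
print(uncertainty_adjusted_kelly(percentile=0.10))   # ~(0.09, 0.93)
print(uncertainty_adjusted_kelly(percentile=0.50))   # (0.50, 5.0): uncertainty-blind full Kelly
```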
Here are a few more figures for varying degrees of uncertainty:
Confidence interval   Optimal risk              Optimal leverage
<5.8%                 Don't invest anything     -
10.0%                 9.2%                      0.93
15.0%                 17.1%                     1.71
20.0%                 23.2%                     2.32
30.0%                 33.3%                     3.33
40.0%                 41.9%                     4.19
50.0%                 50.0%                     5.00
Incidentally the famous half Kelly (a leverage of 2.5) corresponds to a confidence interval of about 22%. However this isn't a universal truth, and the result will be different for other Sharpe Ratios and amounts of data.
What this means in practice is that if you're particularly averse to uncertainty then you'll end up with a pretty low optimal Kelly risk target. How does this now interact with risk appetite, and the
Utilitarian idea of maximum risk tolerance? Well the higher someone's aversion to uncertainty, the lower their optimal risk target will be, and the less likely that an exogenous maximum risk appetite
will come into play.
Now someone who is averse to uncertainty will probably also be averse to the classical risk of lumpy returns. You can imagine people who are uncertainty averse but not risk averse (indeed I am such a
person), and others who are risk averse but not uncertainty averse, but generally the two probably go together. Which also raises an interesting philosophical point about the difference between them,
as we'd rarely be able to distinguish between the two kinds of uncertainty except in specific experiments or unusual corner cases.
Both the Kellyites and the Utilitarians have good points to make - you should never bet more than full Kelly no matter how gung ho you are, and risk appetite is actually a thing even if few investors
really have quadratic utility functions.
But both are missing the real point, which is that there is a lot of uncertainty about what the Sharpe Ratio and hence optimal leverage really is. Assuming some conservatism and a degree of
uncertainty appetite this produces a Kelly optimal revised for uncertainty which will be lower than the uncertainty blind full Kelly. This then makes risk appetite less relevant as a constraint, and
the whole battle becomes a moot point. | {"url":"https://qoppac.blogspot.com/2018/06/","timestamp":"2024-11-09T14:07:54Z","content_type":"text/html","content_length":"125523","record_id":"<urn:uuid:31a282e4-48a2-4e45-96a5-f853e0a4d967>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00761.warc.gz"} |
Percentage Calculator
Percentage Calculator for all percentage calculation tasks
Find out percentage of a number, percentage increases, decreases, differences, and more
• 1%=1/100
• 5% of 300=300/100 * 5=15
Calculating a percentage from a specific number may seem redundant these days. After all, unlimited possibilities, technological advances and apps allow us to calculate quickly whether using a
calculator on a smartphone or using the right formulas in Excel. There is also a perfect website that you can save in your favourites, namely the percentage calculator. You can return to the site
whenever you need to, and by entering the required data, the percentage calculator will give you the required answers in seconds. However, what about when you can't use the site? It is useful to know
the basic mathematical operations which, when needed, will answer the needed question on how to calculate the percentage from a given number.
The quickest way to calculate the percentage of a particular number is by first converting the percentage to a fraction and then multiplying the fraction by the given number. For example, if you need
to calculate how much is 5% of the number 70, you go to the following operation:
5% * 70=(5/100) * 70=(350/100)=3.5
The second way to calculate a percentage from a specific number is to use a ratio.
This is an equality of two quotients, which is written as a : b = c : d (equivalently, a/b = c/d).
To determine an unknown, the most common way is the 'cross' method. Once you have arranged the correct ratio, you can multiply the quantities diagonally and then compare them to each other. For
example, if you need to calculate 25% of the number 20 you go to the operation:
• 20: 100% (where the number 20 corresponds to 100%)
• x: 25% (where x is the answer to the question of how much is 25% of 20) Then you multiply the quantities crosswise,
• that is
• 20 * 25%=100% * x
and you go on to the following operation:
x=(20 * 25%) / 100%=5
It is worth remembering that with this method, the number on the diagonal of the unknown x you immediately write in the denominator of the fraction, and the product of the other numbers in the numerator.
What percentage of one number is another number?
To answer the question of what percentage of the first number is the second number, you need to calculate the action of what fraction of the first number is the second number and represent this
fraction as a percentage. At first, calculating such an operation may sound complicated, but once you have worked out the values, it is enough to calculate the operation. To answer the question what
percentage of the number 12 is the number 3, follow these steps:
(3/12)=(3/12) * 100%=(300%/12)=25%
How to calculate percentage change - increase/decrease?
Percentage increase and decrease are two types that qualify as percentage change, which are used to express the ratio of the comparison between the initial value, and the result of the change in
value. In this case, a percentage decrease is a ratio that describes a decrease in the value of something by a certain rate, while a percentage increase is a ratio that describes an increase in the
value of something by a certain rate.
The simplest way to determine whether a percentage change represents an increase or decrease is to calculate the change as the new value minus the original value. Then divide the change by the
original value and multiply the result by 100. If the result is a positive number, the change is a percentage increase. However,
if you get a negative result then the change is a percentage decrease.
Although describing the calculation of percentage changes sounds like a complicated process, it is not. What's more, the ability to calculate percentage changes is useful on a daily basis, especially
in the business world - if you own a store, you can calculate the difference in the number of customers who shop in your store, or visualise how much money you save when you shop at a 20% sale.
Assume that the original price of a bag of tomatoes is 3 PLN and on the next day the price per bag is 1.80 PLN. What is the percentage decrease? Apply the formula below:
percentage decrease=(older - newer) ÷ older
percentage decrease=(3 - 1.80) ÷ 3=0.40=40%
How to increase a number by a percentage?
In order to increase a specific number by a percentage, you need to convert the percentage to a decimal. The easiest way to do this is to move the decimal point two places to the left. For
example, 30% as a decimal is 0.3 and 50% as a decimal is 0.5. With an advanced calculator, you can quickly get the result converted to decimal, but it is worth knowing how to do the operation
without any technology.
For example - you want to increase the number 10 by 50%. By converting 50% you get 0.5. Perform multiplication 10 * 0.5=5.
To increase the number 10 by 50% of its value it is enough to add 10 and 5, which gives us 15.
How to decrease a number by a percentage?
Decreasing a specific number by a percentage is the reverse of the last step of increasing a number by a percentage. The easiest way to get a number to decrease by a percentage will be to use a
percentage to decimal conversion - exactly the same as when increasing a number by a percentage.
Knowing that 50% as a decimal is 0.5, and that to increase the number 10 by 50% it is enough to perform the multiplication 10 ⋅ 0.5=5, getting the result of decreasing 10 by 50% should not be a problem.
It is enough to perform the action negatively, i.e. to subtract 5 from 10, which gives us 5.
Performing the operations connected with increasing and decreasing a number by percentage initially looks exactly the same, which certainly makes things easier. Only the last step differs in adding
or subtracting the result from the given number, but remembering to add numbers when increasing and subtracting numbers when decreasing should make things even easier.
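If you would rather script these operations than reach for a calculator, the rules above boil down to a few one-line Python functions (a minimal sketch; the function names are purely illustrative):

```python
def percent_of(percent, number):
    """What is `percent`% of `number`? e.g. percent_of(5, 70) -> 3.5"""
    return (percent / 100) * number

def percent_change(old, new):
    """Positive result = percentage increase, negative = percentage decrease."""
    return (new - old) / old * 100

def increase_by_percent(number, percent):
    return number * (1 + percent / 100)

def decrease_by_percent(number, percent):
    return number * (1 - percent / 100)

print(percent_of(5, 70))            # 3.5
print(percent_change(3, 1.80))      # -40.0, i.e. a 40% decrease
print(increase_by_percent(10, 50))  # 15.0
print(decrease_by_percent(10, 50))  # 5.0
```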
When We Might Need a Percentage Calculator
A percentage calculator is a mathematical tool that helps calculate various percentage values based on specified parameters. It is an incredibly useful tool in many situations, both in everyday life
and in business or academia. Below are various situations where we might need a percentage calculator:
1. Shopping and Discount Calculations: When shopping, we often encounter various discounts and promotions. A percentage calculator allows us to quickly calculate the discount value and the final
price of the product.
2. Tax and Tip Calculations: When dining out or paying bills, a percentage calculator can be helpful in quickly calculating the tip amount or tax that needs to be added to the bill.
3. Personal Finance Percentages: Planning a household budget or savings, a percentage calculator helps calculate interest on deposits, loans, or other financial parameters, allowing for better
financial management.
4. Investment Analysis: In the world of investment finance, a percentage calculator is an invaluable tool. It allows for calculating both the percentage increase in investment value, potential
losses, or investment returns.
5. Loan-related Calculations: When planning loan repayments, a percentage calculator can be very helpful. It allows for quickly calculating the installment amount, total loan cost, or annual
interest rate.
6. School and Academic Mathematics: In education, a percentage calculator is often used to explain and solve mathematical problems related to percentages, facilitating understanding of the material
and knowledge acquisition.
7. Business Data Analysis: In business, especially in the finance department, a percentage calculator is essential for analyzing data related to profits, losses, growth, or decline in stock values,
or other financial indicators.
8. Percentage Calculations in Medicine and Natural Sciences: In medical and natural sciences, a percentage calculator is useful for calculating various indicators and statistical data, such as the
percentage of disease occurrence in the population.
9. Real Estate Valuation: A percentage calculator can also be used to calculate the percentage increase or decrease in the value of real estate, which is essential for making decisions regarding
real estate investments.
10. Marketing Planning: In the field of marketing, a percentage calculator can be used to calculate changes in the reach of advertising campaigns, conversions, click-through rates, or other marketing
performance metrics.
In summary, a percentage calculator is an indispensable tool in many areas of life, helping to quickly and accurately calculate various percentage values and analyze data. Its versatile application
makes it an essential tool for individuals, businesses, or institutions. | {"url":"https://percentage-calculator.app/","timestamp":"2024-11-10T21:04:57Z","content_type":"text/html","content_length":"38465","record_id":"<urn:uuid:026058eb-8253-42f4-96d3-a3473635c037>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00823.warc.gz"} |
Physics Nobel Goes to 3 Who Studied Matter's Odd States
(Image credit: Nobel Foundation)
The Nobel Prize in physics went to three physicists who studied matter at the smallest scales and the coldest temperatures, which could lead to new materials and insights into phenomena such as superconductivity.
The three Nobel laureates are David J. Thouless of the University of Washington, F. Duncan M. Haldane of Princeton University and J. Michael Kosterlitz of Brown University.
All three worked on unusual states of matter; Kosterlitz and Thouless studied the theoretical properties of very thin films, essentially 2D materials. Haldane looked at chains of atom-size magnets. [
Nobel Prize 2016: Here Are the Winners (and What They Achieved)]
They used the mathematics of topology to explain why superconductivity appears and disappears when it does. Topology is the mathematical study of processes that occur in discrete steps. More
formally, it's the study of shapes that can be transformed without breaking them — like the transformation of a doughnut into a straw. The steps in topology come from the fact that a doughnut can
have one hole, or two (like a straw), but not one and a half.
Kosterlitz and Thouless were interested in what happens when you cool a 2D film of matter to near absolute zero. Their calculations showed that it was possible for such a material to conduct
electricity without resistance, turning into a superconductor, something that scientists thought impossible. Paul Coxon, a research associate in the Materials Chemistry Group at the University of
Cambridge, said that even at near absolute zero, "there's always some minor fluctuation that disturbs the order." That disruption should prevent superconductivity from happening, he added.
Or that's what scientists thought. But calculations by Kosterlitz and Thouless showed that it did not prevent superconductivity, and later experiments confirmed they were correct. The reason was
related to the mathematics of topology. In 2D material, little whirlpools called vortices form pairs as the temperature drops, and the material becomes superconducting, Coxon said.
When you raise the temperature, the vortices separate and go their separate ways. The separation creates shapes that are one-holed as opposed to two-holed (vortices have two openings), like breaking
up a two-holed doughnut into two one-holed doughnuts, and the material loses its superconductivity. The transition from superconducting to non-superconducting in such films is known as the KT
threshold, for its discoverers, according to a release from the Nobel committee.
Later, Thouless studied the Quantum Hall Effect. Ordinarily, if you put a magnet perpendicular to an electric current, the voltage will change. The Quantum Hall Effect is similar, except that the
voltage change can happen only in certain increments. Thouless found that the mathematics of topology explained the phenomenon. Haldane, meanwhile, showed that chains of atomic magnets can behave in
a similar fashion.
Their discoveries could lead to new materials, though that is still in the future. "This has implications for superconducting materials," Coxon said, "but that's still some way off."
Coxon added that the choice of work for the Nobel Prize was a surprise, as, like many in the physics community, he thought the prize would go to the scientists who observed gravitational waves using
the Laser Interferometer Gravitational-Wave Observatory (LIGO). "Everyone had half-written stories on LIGO, and then this comes out of the blue."
Original article on Live Science.
Jesse Emspak is a contributing writer for Live Science, Space.com and Toms Guide. He focuses on physics, human health and general science. Jesse has a Master of Arts from the University of
California, Berkeley School of Journalism, and a Bachelor of Arts from the University of Rochester. Jesse spent years covering finance and cut his teeth at local newspapers, working local politics
and police beats. Jesse likes to stay active and holds a third degree black belt in Karate, which just means he now knows how much he has to learn. | {"url":"https://www.livescience.com/56377-physics-nobel-2016-for-strange-matter.html","timestamp":"2024-11-04T15:24:13Z","content_type":"text/html","content_length":"693480","record_id":"<urn:uuid:e98c99d4-6932-4ddd-9ff5-830fe8984d6c>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00274.warc.gz"} |
Stephen Wolfram: Discovering a New Science - History of Data Science
Stephen Wolfram’s thoughts on computing, nature, and everything in between have often drawn controversy. What is not up for debate, however, is the tremendous impact his ideas and inventions have had
on technology.
A dropout-turned-Ph.D.
“This is actually the first high school graduation I’ve ever been to,” said Wolfram in a 2014 commencement address to the graduates of Stanford Online High School, where one of his children was a
Wolfram was born and raised in London. Both of his parents were German Jews who fled Nazism as children in the 1930s. Wolfram began devouring science books at a young age and was submitting physics
papers to academic journals by age 15. He didn’t have much use for formal education, however. At 17, he dropped out of Eton College, the famous prep school. Even though he managed to get into Oxford,
he later described the lectures as insufferable and eventually left to go to Cal Tech, where he had a Ph.D. at age 20.
“Could it be that some place out there in the computational universe, we might find our physical universe?”
The wunderkind does it all
Wolfram has spent the last 40 years on an innovation warpath.
He kicked things off in 1979 by developing a computer algebra program, Symbolic Manipulation Program. Such programs served as precursors to the development of deep learning algorithms that are now
transforming technology. Wolfram, who was still at Cal Tech at the time, abandoned the project and resigned from the university over who would hold intellectual copyright for the software.
Then he got really into cellular automata: a grid of cells, each of which has a certain number of states (e.g. on and off), with cells evolving based on a set of rules. Like the founder of CA, the
mathematician John von Neumann, Wolfram believed that the system could be a way to explain the very basis of the universe. He ultimately published a paper on the subject that has been cited over
10,000 times.
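For readers who have never seen one, here is a minimal Python sketch of an elementary (one-dimensional, two-state) cellular automaton of the kind Wolfram studied; rule 30 is one of his best-known examples. Wrapping the grid around at the edges is a simplification made for the sketch.

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton.
    Each new cell depends on its left neighbour, itself and its right neighbour;
    the 8 possible neighbourhoods index into the binary expansion of `rule`."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right
        out.append((rule >> pattern) & 1)
    return out

row = [0] * 31
row[15] = 1                      # start from a single "on" cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```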
In 1986 he developed a software, Wolfram Mathematica, that remains popular for those seeking a high-level technical computing experience in a variety of fields.
In 2009, he developed WolframAlpha, an answer machine that was integrated into Microsoft’s Bing search engine.
A New Kind of Science
Wolfram's most notable and most fiercely debated contribution to science is his 2002 opus, A New Kind of Science. The scientist presents a variety of arguments and data over 1,200 pages, but his
central contention is that there is a single algorithm that is the foundation of everything in existence.
“I have little doubt that within a matter of a few decades what I have done will have led to some dramatic changes in the foundations of technology — and in our basic ability to take what the
universe provides and apply it for our own human purposes,” Wolfram wrote at the time.
Only time will tell if he’s right. | {"url":"https://www.historyofdatascience.com/stephen-wolfram-discovering-a-new-science/","timestamp":"2024-11-12T03:30:06Z","content_type":"text/html","content_length":"152508","record_id":"<urn:uuid:f886d290-e6be-4fc0-8a7d-818253c3c825>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00279.warc.gz"} |
Summary Measures and Graphs
Description of Proposed Provision:
B3.16: For retired worker and disabled worker beneficiaries becoming initially eligible in January 2031 or later, phase in a new benefit formula (from 2031 to 2040). Replace the existing two primary
insurance amount (PIA) bend points with three new bend points as follows: (1) 25% AWI/12 from 2 years prior to initial eligibility; (2) 100% AWI/12 from 2 years prior to initial eligibility; and (3)
125% AWI/12 from 2 years prior to initial eligibility. The new PIA factors are 95%, 27.5%, 5% and 2%. During the phase in, those becoming newly eligible for benefits will receive an increasing
portion of their benefits based on the new formula, reaching 100% of the new formula in 2040.
Estimates based on the intermediate assumptions of the 2024 Trustees Report
Summary Measures
The six figures below are, in order: under current law, the long-range actuarial balance and the annual balance in the 75th year (both as a percent of payroll); the change from current law in the long-range actuarial balance and in the annual balance in the 75th year (percent of payroll); and the percentage of the long-range shortfall and of the 75th-year shortfall eliminated:
-3.50 -4.64 1.00 1.75 28% 38% | {"url":"https://www.ssa.gov/oact/solvency/provisions/charts/chart_run216.html","timestamp":"2024-11-05T00:22:38Z","content_type":"text/html","content_length":"19582","record_id":"<urn:uuid:1f290e09-6530-4918-b658-42ed544dc473>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00676.warc.gz"} |
ELO Ranking
# ELO Ranking
# Basics
When Snails enter the world of racing, they will not be part of any league. They will be born with a default score and it will change entirely by win/loss stats in relation to other players. A
Sorting Hat will determine the league of the Snail after its 5th race. Once the league is defined, the journey begins.
It is expected that, on average, Snails that have higher ELO will perform better than lower ones. After every match, the actual outcome will be compared with the expected outcome, and consequently,
the Snail rating will be adjusted. Don't forget that your Snail can also shine in Bronze League and might bring a treasure back home.
The ELO system that is being used in this game might differ from the classic chess approach. If Player A has a rating R_{a} and Player B a rating R_{b} , the formula will be similar to the following
(except the coefficient might change) for the expected score of Player A:
E_{a} = 1 / (1 + 10^{ ( R_{b} - R_{a}) /400 } )
and Player B will be;
E_{b} = 1 / (1 + 10^{ ( R_{a} - R_{b}) /400 } )
Considering that the maximum possible adjustment per game is the K-factor, the formula for updating that player's rating is:
R'_{A} = R_{A} + K(S_{A} - E_{A})
# Multiplayer
The above approach is a simple calculation of the ELO for a two-player match-up. In Snail Trail, this formula is extended to calculate the ELOs after a 10-player race,
in which each player's score function will be:
S_{A}(p_A) = \dfrac {N-p_A }{N(N-1)/2}
The formula for updating a player's rating is almost identical to the standard two-player ELO version;
R'_{A} = R_{A} + K(N-1)(S_{A} - E_{A})
The update will happen after any suitable rating period.
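A short Python sketch of how such an update could work is below. The pairwise expectation and the placement-based score S_A follow the formulas above; the multiplayer expected score E_A is not spelled out on this page, so the sketch assumes the natural extension of summing the pairwise expectations and normalising by N(N-1)/2, so that expected scores, like actual scores, sum to 1 across the race. The K-factor of 32 is only a placeholder.

```python
def pairwise_expected(r_a, r_b):
    """Expected score of a player rated r_a against one rated r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def multiplayer_elo_update(ratings, placements, k=32):
    """ratings: current ELO of each player; placements: finishing position of each
    player (1 = winner ... N = last). Returns the updated ratings."""
    n = len(ratings)
    norm = n * (n - 1) / 2
    updated = []
    for a in range(n):
        s_a = (n - placements[a]) / norm                      # actual score
        e_a = sum(pairwise_expected(ratings[a], ratings[b])   # assumed expected score
                  for b in range(n) if b != a) / norm
        updated.append(ratings[a] + k * (n - 1) * (s_a - e_a))
    return updated

# Four equally rated Snails: the winner gains 24 points, the last loses 24.
print(multiplayer_elo_update([1500, 1500, 1500, 1500], [1, 2, 3, 4]))
```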
# Placement
• A Snail's ELO and league will be determined after its first 5 races.
• The first 5 races will be the most important races to determine the base ELO of each Snail. | {"url":"https://docs.snailtrail.art/archive/rank_system/elo_ranking/","timestamp":"2024-11-11T06:57:40Z","content_type":"text/html","content_length":"30982","record_id":"<urn:uuid:53cc898a-8275-40b3-9579-e40b3df237ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00668.warc.gz"} |
Is quantum computing hard to study?
As you might have guessed, quantum computing is a complex field that's difficult for non-experts to understand. However, it is possible to grasp some of the fundamental concepts, giving you a basic
understanding of how quantum computers work.
Is quantum computing easy?
Quantum Computer vs. Classical Computer
A classical processor uses bits to operate various programs. Their power increases linearly as more bits are added. Classical computers have much less computing power. Quantum computers are more
expensive and difficult to build than classical computers.
How long does it take to learn quantum computing?
It's 6 weeks long and will take you about 2-3 hours per week.
Is it worth it to study quantum computing?
Because of the enormous potential of quantum computing, as the technology rapidly grows and the workforce grows, there could be high demand for quantum talent very soon. And you could be just what
this field needs!
Why is quantum computing difficult to understand?
The problem, in a word, is decoherence, which means unwanted interaction between a quantum computer and its environment — nearby electric fields, warm objects, and other things that can record
information about the qubits.
Do I need to know physics for quantum computing?
Therefore to study quantum computing, you will require a background in physics, mathematics, and computer science. This includes knowledge of exponents, vectors, sine waves, linear algebra, as well
as probability and stochastic processes.
What is the weakness of quantum computing?
However, the disadvantages of quantum computing include breaking current encryption systems, which could leave doors open for data theft if organizations are not prepared to transition to
cryptography to post-quantum algorithms. Without proper security, many of the promised benefits of quantum computing will fail.
Do you need math for quantum computing?
The basic maths that allows quantum computing to perform its magic is Linear Algebra. Everything in quantum computing, from the representation of qubits and gates to circuits' functionality, can be
described using various forms of Linear Algebra.
What skills do you need for quantum computing?
In quantum tech, you need expertise in any of the following areas:
• Quantum error correction.
• Fault tolerance.
• Quantum algorithms.
• Quantum computer architectures.
• Superconducting circuits, Quantum optics, and Ion traps.
• Foundational mathematics, e.g., Real/complex analysis, Linear algebra, Statistics, Calculus.
• Lab skills.
Can quantum computing be self taught?
Self-learning quantum computing is not simple, but it is possible. What helps is that you do not need to understand every detail in order to get started.
Is quantum computing a math or physics?
Learning Quantum Computing. General background: Quantum computing (theory) is at the intersection of math, physics and computer science. (Experiment also can involve electrical engineering.)
Which IIT is best for quantum computing?
Let's look at a few of the best places for research in quantum computing.
• Indian Institute of Science – Initiative on Quantum Technologies (IQT@IISc) ...
• Indian Institute of Technology Jodhpur. ...
• Indian Institute of Technology Madras. ...
• Tata Institute of Fundamental Research (TIFR) – Quantum Measurement and Control Laboratory.
Is Python used in quantum computing?
Cirq is an open-source framework for quantum computing. It is a Python software library used to write, manipulate, and optimize quantum circuits. The circuits are then run on quantum computers and
How much physics do you need for quantum computing?
A Physics major with theoretical Computer Science focus can help one in designing algorithms for a quantum computer. If one is interested in Quantum Mechanics, then a major in computer science and a
minor in Maths with a focus on abstract linear algebra is required to build a foundation in quantum computing.
Is a career in quantum computing worth it?
Should You Get a Job in Quantum Computing in 2022? Yes, you definitely should get a job in this field if you're passionate about it. There's so much career growth potential for professionals in this
Does quantum computing make money?
With the advent of quantum computing, there are now more opportunities than ever to make money with this technology.
What is the salary of quantum computing jobs in India?
Salaries in India
• 4.0★ IBM. Quantum Computing Developer. ₹9L -₹9L. ...
• XYZ. Quantum Computing Developer - Intern. ₹8L -₹8L. 1 salaries. ...
• Outgive. Quantum Computing Developer. ₹14L -₹16L. 1 salaries. ...
• Sustainable Living Lab. Quantum Computing Developer - Monthly. ₹72T -₹78T. ...
• BosonQ Psi. Quantum Computing Developer - Monthly. ₹58T -₹63T.
Which language is required for quantum computing?
Silq. Silq is a high-level programming language for quantum computing with a strong static type system, developed at ETH Zürich.
How do I start learning quantum computing?
If you're looking for an excellent place to start learning everything quantum, look no further than the Qiskit YouTube channel and textbook. The Qiskit channel covers the fundamentals of quantum
computing and details how you can implement these fundamentals using code.
What are the subjects in quantum computing?
People who have quantum computer careers make use of a diverse skill set that includes quantum physics, data analysis, engineering, modeling, math, and coding.
What education do you need for quantum computing?
Getting a job as Quantum Machine Learning Scientist, however, almost always requires a Ph. D. in Quantum Physics or Computer Science.
Is quantum computing the future?
Quantum computing is a relatively new and upcoming technology that uses the principles of quantum physics to solve complex problems. Whilst it is stil in the early stages of development, the
possibilities and results so far indicate that quantum computing has a promising future in real-world applications.
Is quantum computing safe?
While quantum computing will unlock powerful analytics and artificial intelligence (AI) processing capabilities, it also opens the door to serious security vulnerabilities, due to the ability of
these computers to decrypt public-key algorithms.
What is the biggest problem with quantum computing?
What are the associated challenges?
• First, quantum computers are highly prone to interference that leads to errors in quantum algorithms running on it. ...
• Second, most quantum computers cannot function without being super-cooled to a little above absolute zero since heat generates error or noise in qubits.
Is quantum computing harmful?
The dangers lie in the machine's ability to make decisions autonomously, with flaws in the computer code resulting in unanticipated, often detrimental, outcomes. In 2021, the quantum community issued
a call for action to urgently address these concerns. | {"url":"https://www.calendar-uk.co.uk/frequently-asked-questions/is-quantum-computing-hard-to-study","timestamp":"2024-11-09T03:26:02Z","content_type":"text/html","content_length":"72127","record_id":"<urn:uuid:b1183633-e065-4075-9170-8af4b3ed8c8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00609.warc.gz"} |
RFM Analysis in Tableau | Optimus SBR
RFM (Recency, Frequency, Monetary) analysis is commonly used for customer segmentation, to split our users into specific groups. For example, people who visit a website regularly but don’t buy much
would be a high “frequency” but a low “monetary” value visitor. With customer purchase information readily available, it is especially common to perform RFM analysis in retail where we want to create
a view of our customer types without drowning in all the transactional data.
This page contains a comprehensive explanation and description.
When constructing RFM scores, one challenge is that the RFM metrics for customer segmentation are on completely different scales:
• Recency: time since last order (usually in days)
• Frequency: count of transactions or items purchased
• Monetary: total customer spend or average transaction value (dollars)
To compare them, we divide each metric into quintiles. If you are above the 80th percentile, your score is 5; if you are in the 60th to 80th percentile, your score is a 4 and so on.
What are the three components of the RFM formula?
Using the included Superstore dataset of customer purchase information, we’ll define RFM as follows:
• Frequency – number of orders
• Monetary – total sales
• Recency – last transaction
□ Field: use the order date, calculate the number of days from a selected date, and take the minimum to get the most recent order
Four Steps to Achieve Our Goal:
1. Calculate the percentile values for each customer (e.g. customer X is in the 93rd percentile of frequency)
2. Compare these to the overall percentiles (since customer X is above the 80th percentile of frequency, they receive an F score of 5)
3. Combine the fields
4. Visualize/report the results
Notice that with 5×5×5 combinations, we have 125 possible combinations. However, in practice, our data won’t necessarily contain every combination. Certain data combinations like 4-1-5 or 2-5-1 are
very uncommon because, generally, you don’t have loyal customers who are frequent visitors who don’t spend much and haven’t ordered regularly.
In situations like these with many possible combinations, it’s often helpful to provide Tableau with a scaffold. A scaffold is a separate table that lists all the possible values (125 combinations).
When we join or blend this into our results, it guarantees that every possible value appears in the results, even if there are zero customers in that bin. Scaffolds are frequently helpful with dates
where not every day exists in a dataset, but we would like to see a chart for activity every day. To start, I’ve created a sheet called “RFM all codes.” All it contains is a list of the 125 possible
Calculate Each Customer’s Percentile Value
With that setup, we need to calculate each person’s percentile. This can be easily done with the RANK_PERCENTILE table calculation:
We’ll then calculate the percentile with a simple if-then statement:
Completing this for each metric and combining them, we can now see the customers assigned to each RFM combination:
Great! Now let’s remove the customer names, so we can just see counts by RFM category.
Wait a minute… how did everyone get in 555?
Unfortunately, this won’t work with table calculations like RANK_PERCENTILE. The issue is the view granularity. Since we haven’t broken it down by customer name, the calculations for percentiles all
break. There’s only a single value for frequency now – the overall frequency – which is equal to the maximum, making it the 100^th percentile. What to do?
Generally, in Tableau, many formulas that can be done as a table calculation can be re-done as a level of detail calculation. Table calculations are limited to the granularity of the view – there is
no way to override that or change an option. Since the view is broken down by customer, I can’t roll it back up to the level of the whole dataset. However, with a level of detail expression, we can
bypass the level of the view and define an overall 80^th percentile, 60^th percentile, and so on.
Table calculations are at the core of Tableau, and they solve many common calculation problems. The most common calculations like running total, moving average, change from the prior period, and
Year-to-Date are available as “quick” table calculations, meaning Tableau will write the formula for you. But occasionally we run into situations like this where the table calculation doesn’t work.
Enter the Level of Detail formula.
Level of Detail Formulas
Level of detail calculations were introduced in Tableau 9. They allow us to ignore filters or create formulas that aren’t dependent on dimensions in the view. The key to resolving this problem is the
FIXED level of detail formula. In my experience, this is the most commonly used level of detail formula.
I’ll re-do my approach with Level of Detail expressions. First, to get the number of items for each customer, I’ll use the FIXED expression with the Customer Name.
This guarantees that the count of items purchased is at the customer level. Regardless of my view – a summary by state, zip code, or product subcategory – the Frequency is still calculated for each
Comparing Customer Percentile to Overall Percentiles
How do we get the overall 80^th percentile, so I can compare each customer’s score to that value? With another FIXED level of detail:
{ FIXED : PERCENTILE([Frequency LOD],0.8) }
Note this is a bit confusing in Tableau. PERCENTILE is an aggregation, like SUM(), MEDIAN(), COUNT() and so on. But RANK_PERCENTILE is a table calculation that is used on top of an aggregation. Table
calculation and levels of detail can sometimes be combined, but in this case, we’ll keep them separate.
After creating these for each of recency, frequency, and monetary value, I need percentiles. Each person must be assigned to group 1-5 for each value. Time for another calculated field.
This ensures everyone is assigned to a “bin” for frequency. Notice I’ve made these text fields, so it will be easier to combine them, but they could just be numbers as well.
Combining the Fields
We’ll do the same for recency and monetary. Now I’ll combine everything into an RFM score with one more formula:
We could also create a group with this data, collapsing the 125 RFM combinations into 4 categories:
RFM values can be further grouped and boiled down – for example if you are an R of 1 or 2, and also an F of 1 or 2, you may be called a “hibernating” customer who is no longer engaged.
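The same segmentation can be prototyped outside Tableau. Below is a rough pandas sketch that computes per-customer R, F and M quintile scores from a transaction table; the column names are hypothetical, and pd.qcut's quintiles only approximate the fixed 80th/60th/40th/20th percentile cut-offs used above.

```python
import pandas as pd

def rfm_scores(orders, as_of=None):
    """orders: one row per order line, with hypothetical columns
    'Customer Name', 'Order ID', 'Order Date' (datetime) and 'Sales'."""
    as_of = as_of or orders["Order Date"].max()
    per_cust = orders.groupby("Customer Name").agg(
        recency=("Order Date", lambda d: (as_of - d.max()).days),
        frequency=("Order ID", "nunique"),
        monetary=("Sales", "sum"),
    )
    # 5 = best; recency labels are reversed because fewer days since the last order is better.
    # rank(method="first") breaks ties so qcut always finds five distinct bin edges.
    per_cust["R"] = pd.qcut(per_cust["recency"].rank(method="first"), 5, labels=[5, 4, 3, 2, 1])
    per_cust["F"] = pd.qcut(per_cust["frequency"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5])
    per_cust["M"] = pd.qcut(per_cust["monetary"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5])
    per_cust["RFM"] = (per_cust["R"].astype(str) + per_cust["F"].astype(str)
                       + per_cust["M"].astype(str))
    return per_cust
```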
Visualizing the Results in Tableau and Summing up
Data visualization is a challenge for RFM. First, we have a lot of different categories – 110 of the 125 possible categories are present in our data. Secondly, there are three dimensions (R/F/M) and
viewing anything in 3 dimensions is challenging.
To sum up, we’ve resolved the calculation of RFM scores by leveraging Level of Detail expressions. When you find yourself in a situation where table calculations are causing you problems, level of
detail expressions or “LODs” are frequently the answer.
Often this comes down to understanding how Tableau thinks. It’s not necessarily immediately obvious that PERCENTILE is an aggregation, but RANK_PERCENTILE is a table calculation. However this is the
key to resolving the RFM calculation; combining PERCENTILE with a FIXED level of detail expression.
Essentially, we need data at two levels at once: the value for a single customer, and the value across all customers. Level of detail formulas allow us to work on both levels simultaneously, while
table calculations do not.
Thank you for following along. Please feel free to contact us to discuss RFM analysis in Tableau, or any other data question that may arise.
Optimus SBR provides data and analytics advisory services customized to support the needs of public and private sector organizations. We offer end-to-end solutions, from data strategy and governance
to data management, data engineering, data architecture, data science, and data analytics.
Contact Us to learn more about our Data practice and how we can help you on your data journey.
Doug Wilson, Senior Vice President and Technology & Data Practice Lead
Eric Tobias, Principal, Data Practice | {"url":"https://www.optimussbr.com/insights/topic/data-analytics/rfm-analysis-in-tableau/","timestamp":"2024-11-02T11:51:34Z","content_type":"text/html","content_length":"86246","record_id":"<urn:uuid:f6b201bd-a6f6-43f6-94be-0c5e121c3f44>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00343.warc.gz"} |
Composite gate
quantum.gate.CompositeGate Class
Namespace: quantum.gate
Composite gate for quantum computing
Since R2023a
A CompositeGate object contains a set of inner gates acting on a small set of qubits, and a mapping from this small set of qubits to the qubits of the circuit that contains the composite gate. The
CompositeGate object fulfills the purpose of a subfunction in classical programming, where a set of inner gates can be packaged as a subcircuit to be used to construct an outer quantum circuit.
You can assign CompositeGate objects to the Gates property of a quantumCircuit object (as a vector of gates).
Use the compositeGate creation function to construct a CompositeGate object.
You can also use the qftGate and mcxGate functions to construct CompositeGate objects. These functions construct specialized gates that applies the quantum Fourier transform and multi-controlled X
gates, respectively.
Name — Name of composite gate
string scalar
Name of the composite gate, specified as a string scalar. If you do not specify the name of the composite gate, the default value of this property is an empty string, "". Otherwise, the Name property
value must start with a letter, followed by letters, digits, or underscores (with no white space).
When you construct a composite gate from an existing quantum circuit using the compositeGate function, the Name property of the circuit is copied to the Name property of the composite gate (unless
you specify a new name when using compositeGate). This name is used in the plot of the composite gate and the function name in the generated QASM code.
Example: "qft", "bell", "multi_controlled_Z"
GetAccess public
SetAccess public
ControlQubits — Control qubits of composite gate
Control qubits of the composite gate, returned as empty. Creating a controlled composite gate is not supported and this property value is always empty.
GetAccess public
SetAccess private
TargetQubits — Target qubits of outer circuit
numeric scalar | numeric vector
Target qubits of the outer circuit containing the composite gate, returned as a numeric scalar or numeric vector of qubit indices. Each qubit of the inner gates in the Gates property is mapped to a
qubit of an outer circuit containing the composite gate through the TargetQubits vector.
Example: [3 4 7 8]
GetAccess public
SetAccess private
Gates — Inner gates
column vector of gates
Inner gates, returned as a column vector containing all the inner gates of the composite gate. The elements of this vector are of type SimpleGate or CompositeGate.
GetAccess public
SetAccess private
Public Methods
plot Plot quantum circuit or composite gate
getMatrix Matrix representation of quantum circuit or gate
inv Inverse of quantum circuit or gate
Create and Plot Quantum Circuit That Contains Composite Gates
Create a quantum circuit that consists of Hadamard and controlled NOT gates to entangle two qubits. Name the circuit as "bell".
innerGates = [hGate(1); cxGate(1,2)];
innerCircuit = quantumCircuit(innerGates,Name="bell")
innerCircuit =
quantumCircuit with properties:
NumQubits: 2
Gates: [2×1 quantum.gate.SimpleGate]
Name: "bell"
Create an outer circuit that contains two composite gates constructed from this inner "bell" circuit. The first composite gate acts on qubits 1 and 3 of the outer circuit containing this gate. The
second composite gate acts on qubits 2 and 4 of the outer circuit containing this gate.
outerGates = [compositeGate(innerCircuit,[1 3])
compositeGate(innerCircuit,[2 4])];
outerCircuit = quantumCircuit(outerGates)
outerCircuit =
quantumCircuit with properties:
NumQubits: 4
Gates: [2×1 quantum.gate.CompositeGate]
Name: ""
Plot the outer circuit.
In a circuit diagram, each solid horizontal line represents a qubit. The top line is a qubit with index 1 and the remaining lines from top to bottom are labeled sequentially. In this example, the
plotted outer circuit consists of four qubits with indices 1, 2, 3, and 4. The plot shows that qubits 1 and 3 of the outer circuit are mapped to qubits 1 and 2 of the inner circuit of the first
composite gate, and qubits 2 and 4 of the outer circuit are mapped to qubits 1 and 2 of the inner circuit of the second composite gate.
Click one of the composite gate blocks in the plot. A new figure showing the internal gates of the composite gate appears.
Version History
Introduced in R2023a | {"url":"https://it.mathworks.com/help/matlab/ref/quantum.gate.compositegate-class.html","timestamp":"2024-11-10T09:06:44Z","content_type":"text/html","content_length":"86252","record_id":"<urn:uuid:44e4e6b1-41c4-4214-a867-259c10551ea0>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00151.warc.gz"} |
Video4:Graph Lines y= -9/10x
Here on this particular example, you see that you do have an X that's after this fraction. Which means that this number is a slope. If there were no X here, that means that this would be the number
that you put a dot at on your Y axis. Because there's an X, that means that this number is your slope. The number behind it is your Y intercept. We don't see a number here. So it's a zero. There's an
imaginary plus zero here. So we're going to put our dot on the positive zero. So I'm going to write this down. So if you see. Y equals, and you have a negative. You have a negative 9 over ten, and
there's an X beside it. That means that this is your slope. Remember that the Y intercept is behind it. This is your Y intercept. So you're going to put a dot at zero. So let's go over here to the
graph. And we're going to put our dot at the Y intercept, which is zero. So I'm going to put a dot right here. Sorry. Right here at zero. That is my Y intercept. And then I'm going to count the
slope. Remember slope always tells me to go up 9 and to the left ten since it's negative. So let me count up 9: one, two, three, four, five, six, seven, eight, nine, and I need to go to the left ten. One, two, three, four, 5, 6, 7, 8, 9, ten.
And we know that this is correct because a negative slope means that the graph is always pointing to the top left corner of the page. So now let's hit enter, and we can see that our line went through the
origin. At Y equals zero. And that is correct. | {"url":"http://helpyourautisticchildblog.com/video4graph-lines-y-910x-508038.html","timestamp":"2024-11-06T05:05:48Z","content_type":"text/html","content_length":"48434","record_id":"<urn:uuid:c9922ca5-77ca-419e-a1b0-7b9ce57c7e64>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00093.warc.gz"} |
2016-17 Woburn Challenge Finals Round - Junior Division
Problem 1: Fencing
Bo Vine, beloved leader of the peace-loving cows of Scarberia, has gotten wind that his land may be in danger. It seems that his long-time nemesis, the Head Monkey, may be planning to end their
year-long armistice and lead her troops to invade Scarberia! To confirm this intelligence, Bo Vine will need to send in a spy to infiltrate the monkeys' ranks.
There are N (2 ≤ N ≤ 100) trained cow spies, numbered from 1 to N, any of whom would surely be able to complete the mission successfully. As such, Bo Vine will have them engage in a round robin
fencing tournament, with the victor earning the honour of being sent on the mission. As it turns out, the cows are quite lazy, and none of them actually want to be chosen. As such, they'll all try
their best to lose (without being too obvious about it), but at the end of the day, one of them is sure to win the largest number of fencing matches and be forced to go on the mission.
Over the course of the tournament, each of the N cows will partake in one match against each of the remaining N − 1 cows. During the match between each pair of distinct cows i and j, cow i will score
S[i, j] points, while cow j will score S[j, i] points (0 ≤ S[i, j], S[j, i] ≤ 10, S[i, j] ≠ S[j, i]). Whichever of them scores more points than the other will be declared the winner of that match.
Note that there are no ties. Also note that cows don't play against themselves, so S[i, i] is given to be 0 for each i.
At the conclusion of the tournament, the cow who has won the largest number of their N − 1 matches will be crowned the champion. It's guaranteed that there will be a unique cow with strictly the
largest number of wins. Given the results of all of the matches, can you help Bo Vine determine the winner?
Input Format
The first line of input consists of a single integer N.
N lines follow, the i-th of which consists of N space-separated integers S[i, 1], …, S[i, N] (for i = 1..N).
Output Format
Output one line consisting of a single integer – the number of the cow who will be sent on the spy mission.
Sample Input
Sample Output
Sample Explanation
The 5 cows won 2, 2, 1, 3, and 2 matches, respectively. As such, cow 4 won the largest number of matches.
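Since every pair of cows plays exactly once and ties are impossible, a direct approach works: count each cow's wins and print the index of the (guaranteed unique) maximum. The Python sketch below is an unofficial illustration of that idea, not a reference solution from the contest.

```python
import sys

def main():
    data = sys.stdin.read().split()
    n = int(data[0])
    # s[i][j] = points cow i scored in its match against cow j
    s = [[int(data[1 + i * n + j]) for j in range(n)] for i in range(n)]
    wins = [sum(1 for j in range(n) if j != i and s[i][j] > s[j][i]) for i in range(n)]
    print(wins.index(max(wins)) + 1)   # cows are numbered from 1

main()
```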
All Submissions
Best Solutions
Point Value: 5 (partial)
Time Limit: 2.00s
Memory Limit: 16M
Added: May 07, 2017
Author: SourSpinach
Languages Allowed:
C++03, PAS, C, HASK, ASM, RUBY, PYTH2, JAVA, PHP, SCM, CAML, PERL, C#, C++11, PYTH3 | {"url":"https://wcipeg.com/problem/wc16fj1","timestamp":"2024-11-13T22:35:04Z","content_type":"text/html","content_length":"11886","record_id":"<urn:uuid:7064fa9c-d7ab-4c53-a468-ad2c9c65c425>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00785.warc.gz"} |
Given (2,1),(0,-2),(-2,-3), what is the coordinate of the y intercept, the slope and equation of the line?
| HIX Tutor
Given (2,1),(0,-2),(-2,-3), what is the coordinate of the y intercept, the slope and equation of the line?
Answer 1
The three given points are not collinear (the slope from (0,-2) to (2,1) is 3/2, but from (-2,-3) to (0,-2) it is only 1/2), so no straight line passes through all of them. Assuming instead that the curve is a standard (non-rotated) parabola, it has the form: #y=ax^2+bx+c#
Using the given points #(2,1), (0,-2), (-2,-3)# we can write three equations in three unknowns (#a,b,c#) and solve for the parabola's coefficients.
From equation 2 #c=-2# and by inspection or standard operations it follows that #b=1# and #a=1/4#
The equation of the parabola is therefore #y = 1/4x^2+x-2#
We were told the y-intercept (it's the point where #x=0#) The y-intercept is #-2#
We are not given a point at which to evaluate the slope (I'm assuming it is the slope of the tangent that is being asked for) so the best we can do is give the general formula for the slope at a
point #x# namely the derivative of #y# with respect to #x# #1/2x+1#
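A quick numerical cross-check of those coefficients (an added snippet, not part of the tutor's answer; it assumes numpy is available) solves the 3-by-3 linear system a*x^2 + b*x + c = y for the three given points:

import numpy as np

pts = [(2, 1), (0, -2), (-2, -3)]
A = np.array([[x**2, x, 1] for x, _ in pts], dtype=float)
y = np.array([y for _, y in pts], dtype=float)
a, b, c = np.linalg.solve(A, y)
print(a, b, c)  # expected: 0.25 1.0 -2.0, i.e. a = 1/4, b = 1, c = -2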
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/given-2-1-0-2-2-3-what-is-the-coordinate-of-the-y-intercept-the-slope-and-equati-8f9af91df1","timestamp":"2024-11-06T02:20:39Z","content_type":"text/html","content_length":"571758","record_id":"<urn:uuid:ebf6ca61-1ad4-4596-a7e9-a2f0bae1185c>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00864.warc.gz"} |
Subtract Integers Worksheet (printable, examples, answers, videos, activities, pdf)
Printable “Integer” Worksheets:
Add Integers using the Number Line
Add Integers using the Rules
Adding Integers
Subtract Integers using Addition
Subtracting Integers
Multiplying Integers
Dividing Integers
Order of Operations with Integers
Free printable and online worksheets to help Grade 7 students review how to subtract an integer by adding its opposite.
Subtracting Integers Using Rules
Subtracting integers can be simplified by converting the problem into an addition problem.
Here are the steps and rules to follow:
Rule for Subtracting Integers
1. Convert the Subtraction to Addition:
Change the subtraction problem into an addition problem by adding the opposite of the integer being subtracted.
2. Apply Addition Rules:
Use the rules for adding integers to find the result.
Same Sign: Add absolute values and keep the common sign.
Different Signs: Subtract the smaller absolute value from the larger absolute value, and assign the sign of the larger absolute value.
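For example (an added illustration of the rule, not one of the printed worksheet problems): to find −5 − (−8), add the opposite of −8, giving −5 + 8. The signs are different, so subtract 5 from 8 and keep the sign of the 8, which gives 3. Likewise, −5 − 8 becomes −5 + (−8) = −13, since both addends are now negative.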
Have a look at this video if you need to review how to subtract integers using the rules of adding the opposite.
Click on the following worksheet to get a printable pdf document.
Scroll down the page for more Subtract Integers Worksheets.
More Subtract Integers Worksheets
(Answers on the second page.)
Subtract Integers Worksheet #1
Subtract Integers Worksheet #2
Subtract Integers Worksheet #3
Subtract Integers Worksheet #4
We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page. | {"url":"https://www.onlinemathlearning.com/subtract-integer-worksheet.html","timestamp":"2024-11-05T03:59:57Z","content_type":"text/html","content_length":"38285","record_id":"<urn:uuid:4443ce7d-e0bd-4c00-ad5f-ce8d356b5cb2>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00879.warc.gz"} |
PPSC Test Mathematics MCQs 2019 With Full Book Download in PDF
• The closure property of real numbers under addition states that the sum of two real numbers is also a real number.
• The commutative property of real numbers under addition states that the sum of two real numbers is independent of the order in which they are added.
• The associative property of real numbers under addition states that
a + (b + c) = (a + b) + c for all a, b, c ∈ R.
• For each a ∈ R there exists an element −a ∈ R such that a + (−a) = 0 = −a + a. The
element −a is called the additive inverse of the real number a.
• The closure property of real numbers under multiplication states that the product of two real numbers is also a real number.
• The commutative property of real numbers under multiplication states that the product of two real numbers is independent of the order in which they are multiplied.
• The associative property of real numbers under multiplication states that
a(bc) = (ab)c for all a, b, c ∈ R.
• The left distributive law for multiplication over addition states that
a(b + c) = ab + ac for all a, b, c ∈ R.
• The right distributive law for multiplication over addition states that
(a + b)c = ac + bc for all a, b, c ∈ R.
• The trichotomy property of real numbers states that for real numbers a and b, exactly one of the following holds: a < b, a = b, or a > b.
• If a < b and c < d, then a + c < b + d. This is known as additivity.
Multiple Choice Questions
1. 0 is
(a) an odd integer (b) an irrational number
(c) a natural number (d) an even integer
2. 3 is
(a) an odd integer (b) an irrational number
(c) a rational number (d) a negative integer
3. √3 is
(a) an irrational number (b) a rational number
(c) a natural number (d) a negative Integer
8. Every odd Integer is also
(a) rational number (b) negative integer
(c) positive integer (d) irrational number
9. Every even integer is also
(a) natural number (b) Irrational number
(c) rational number (d) whole number
10. If n is a prime, then √n is
(a) rational number (b) whole number
(c) natural number (d) irrational number
11. If n is a perfect square, then √n is
(a) an Irrational number (b) a rational number
(c) always an even integer (d) always an odd integer
12. π is
(a) a whole number (b) a natural number
(c) a rational number (d) an irrational number
13. Every recurring decimal or terminating decimal represents the
(a) rational number (b) irrational number
(c) natural number (d) integer
14. Every non-repeating non-terminating decimal is
(a) rational number (b) irrational number
(c) integer (d) none of these
15. The additive identity of real numbers is
(a) 0 (b) 1
(c) 2 (d) 3
31. The set {0} has
(a) closure property with respect to multiplication
(b) not closure property with respect to addition
(c) not closure property with respect to multiplication
(d) closure property with respect to division
32. Which of the following sets has closure property with respect to addition?
(a) {−1, 1} (b) {−1}
(c) {l} (d) {0}
69 Z is a group under
(a) subtraction (b) division
(c) multiplication (d) addition
70 The actions of wearing socks and shoes
(a) do not commute (b) commute
(c) does not exit (d) is associative
71 The set of all non-singular matrices of order 2 forms a non-abelian group under
(a) addition (b) subtraction
(c) multiplication (d) division
72 A closed set with respect to some binary operation is called the
(a) group
(b) commutative group
(c) groupoid
(d) non-abelian group
73 A non-empty set which is closed with respect to some binary operation is called the semi-group if
(a) the binary operation is associative
(b) the binary operation is commutative
(c) the binary operation is anti-commutative
(d) Identity element exists
• The composition f ∘ g of two functions f and g is defined as (f ∘ g)(x) = f(g(x)).
• If g is differentiable at the point x and f is differentiable at the point g(x), then the composition f ∘ g of these functions is differentiable at x and (f ∘ g)'(x) = f'(g(x)) · g'(x).
• If a0, a1, a2, a3, a4, …, an, … are constants and x is a variable, then a series of the form a0 + a1x + a2x^2 + a3x^3 + a4x^4 + … + anx^n + … is called a power series.
• If a function f is defined at 0 and all its derivatives at 0 exist, then the Maclaurin's series expansion of the function f is
f(x) = f(0) + x f'(0) + (x^2/2!) f''(0) + (x^3/3!) f'''(0) + … + (x^n/n!) f^(n)(0) + …
• If a function f is defined at a point a and all its derivatives at a exist, then the Taylor's series expansion of the function f is
f(x) = f(a) + (x − a) f'(a) + ((x − a)^2/2!) f''(a) + … + ((x − a)^n/n!) f^(n)(a) + …
• Geometrically the derivative of y at x represents the slope of the tangent at the point P(x, y).
• A function f defined on an interval [a, b] is said to be an increasing function on [a, b] if
x1 < x2 ⇒ f(x1) < f(x2), where x1 and x2 are any numbers in the interval [a, b].
• A function f defined on an interval [a, b] is said to be a decreasing function on [a, b] if
x1 < x2 ⇒ f(x1) > f(x2), where x1 and x2 are any numbers in the interval [a, b].
• Let the function f be continuous on the closed interval [a, b] and differentiable on the open interval (a, b); then f is increasing on [a, b] if f'(x) > 0 for all x in (a, b).
• Let the function f be continuous on the closed interval [a, b] and differentiable on the open interval (a, b); then f is decreasing on [a, b] if f'(x) < 0 for all x in (a, b).
• The function f is said to have a relative maximum value at c if there exists an open interval containing c, on which f is defined, such that f(c) ≥ f(x) for all x ≠ c in this interval.
• The function f is said to have a relative minimum value at c if there exists an open interval containing c, on which f is defined, such that f(c) ≤ f(x) for all x ≠ c in this interval.
• If c is a number in the domain of the function f and if either f'(c) = 0 or f'(c) does not exist, then c is called a critical number and the point (c, f(c)) is called the critical point.
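• As a quick added illustration (not part of the original notes): for f(x) = e^x every derivative at 0 equals 1, so the Maclaurin formula above gives e^x = 1 + x + x^2/2! + x^3/3! + … . Truncating after the x^2 term at x = 0.1 gives 1.105, close to the true value e^0.1 ≈ 1.10517.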
• The medians of a triangle are concurrent, and the point of concurrency divides each one of them in the ratio 2:1.
• The point of concurrency of the medians of a triangle is called its centroid.
• The angle bisectors of a triangle are concurrent.
• The point of concurrency of the angle bisectors of a triangle is called its in-centre.
• The inclination of a non-horizontal line l is the angle α, 0° < α < 180°, measured from the positive x-axis to l.
• If α is the inclination of a non-vertical line l, then its slope or gradient is tan α.
• If a line intersects the x-axis at the point (a, 0), then a is called the x-intercept of the line.
• If a line intersects the y-axis at the point (0, b), then b is called the y-intercept of the line.
• The equation of a non-vertical line with slope m and y-intercept c is y = mx + c.
• If p is the length of the perpendicular from the origin to a non-vertical line l and α is the inclination of that perpendicular, then the equation of the line is x cos α + y sin α = p.
• The general equation of a straight line is ax + by + c = 0, where either a ≠ 0 or b ≠ 0.
• The lines a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0 are perpendicular if a1a2 + b1b2 = 0.
• The lines a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0 are parallel if a1b2 − a2b1 = 0.
• Parallel lines never intersect each other.
• A scalar is a quantity having magnitude but no direction, e.g. mass, length, time, temperature, density, distance, area, volume, and any real number.
• A vector is a quantity having both magnitude and direction, such as displacement, force, velocity, acceleration, and weight.
• The magnitude of a vector is always a non-negative number.
• The vector whose magnitude is one is called the unit vector.
• The vector of zero magnitude is called the null vector or zero vector.
• Two vectors A and B are said to be equal vectors if they have the same magnitude and same direction regardless of the position of their initial points.
• The vectors intersecting at a single point are called concurrent vectors, and the point where the vectors intersect is called the point of concurrency.
• The vectors lying in the same plane are called coplanar vectors.
• The resultant of two vectors acting along the adjacent sides of a parallelogram is the diagonal of the parallelogram.
• A vector whose initial point is at the origin and whose terminal point is P is called the position vector of the point P.
• The diagonals of a parallelogram bisect each other.
• If a vector r makes angles α, β, γ with the x-axis, y-axis, and z-axis respectively, then these angles are called the direction angles of the vector r.
• If a vector r makes angles α, β, γ with the x-axis, y-axis, and z-axis respectively, then cos α, cos β, cos γ are called the direction cosines of the vector r.
• The scalar product is commutative.
• The projection of a vector a along a vector b is (a · b)/|b|.
• The scalar triple product is zero if any two of its vectors are equal.
• If the vectors a and b act along the adjacent sides of the parallelogram, then tJ:le area of the parallelogram Is Ia X bI· | {"url":"https://worldstudypoint.com/ppsc-test-mathmatics-mcqs-2019-with-full-books/","timestamp":"2024-11-10T09:44:06Z","content_type":"text/html","content_length":"100663","record_id":"<urn:uuid:764e1c6c-9d65-47e4-8100-64cd35c096c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00385.warc.gz"} |
The approximate determinantal assignment problem
Petroulakis, G. (2015). The approximate determinantal assignment problem. (Unpublished Doctoral thesis, City University London)
The Determinantal Assignment Problem (DAP) is one of the central problems of Algebraic Control Theory and refers to solving a system of non-linear algebraic equations to place the critical
frequencies of the system to specified locations. This problem is decomposed into a linear and a multi-linear subproblem and the solvability of the problem is reduced to an intersection of a linear
variety with the Grassmann variety. The linear subproblem can be solved with standard methods of linear algebra, whereas the intersection problem is a problem within the area of algebraic geometry.
One of the methods to deal with this problem is to solve the linear problem and then find which element of this linear space is closest - in terms of a metric - to the Grassmann variety. If the
distance is zero then a solution for the intersection problem is found, otherwise we get an approximate solution for the problem, which is referred to as the approximate DAP. In this thesis we
examine the second case by introducing a number of new tools for the calculation of the minimum distance of a given parametrized multi-vector that describes the linear variety implied by the linear
subproblem, from the Grassmann variety as well as the decomposable vector that realizes this least distance, using constrained optimization techniques and other alternative methods, such as the SVD
properties of the so-called Grassmann matrix, polar decompositions and other tools. Furthermore, we give a number of new conditions for the appropriate nature of the approximate polynomials which
are implied by the approximate solutions based on stability radius results. The approximate DAP problem is completely solved in the 2-dimensional case by examining uniqueness and non-uniqueness
(degeneracy) issues of the decompositions, expansions to constrained minimization over more general varieties than the original ones (Generalized Grassmann varieties), derivation of new inequalities
that provide closed-form non-algorithmic results and new stability radii criteria that test if the polynomial implied by the approximate solution lies within the stability domain of the initial
polynomial. All results are compared with the ones that already exist in the respective literature, as well as with the results obtained by Algebraic Geometry Toolboxes, e.g., Macaulay 2. For
numerical implementations, we examine under which conditions certain manifold constrained algorithms, such as Newton's method for optimization on manifolds, could be adopted to DAP and we present a
new algorithm which is ideal for DAP approximations. For higher dimensions, the approximate solution is obtained via a new algorithm that decomposes the parametric tensor which is derived by the
system of linear equations we mentioned before.
Downloads per month over past year | {"url":"https://openaccess.city.ac.uk/id/eprint/11894/","timestamp":"2024-11-01T20:58:00Z","content_type":"application/xhtml+xml","content_length":"41445","record_id":"<urn:uuid:ede49077-297f-4f29-ba10-50676edfd8b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00371.warc.gz"} |
Troubleshoot Online Parameter Estimation
To troubleshoot online parameter estimation, check the following:
Model Structure
Check that you are using the simplest model structure that adequately captures the system dynamics.
AR and ARX model structures are good first candidates for estimating linear models. The underlying estimation algorithms for these model structures are simpler than those for ARMA, ARMAX,
Output-Error, and Box-Jenkins model structures. In addition, these simpler AR and ARX algorithms are less sensitive to initial parameter guesses.
The more generic recursive least squares (RLS) estimation also has the advantage of algorithmic simplicity like AR and ARX model estimation. RLS lets you estimate parameters for a wider class of
models than ARX and AR and can include nonlinearities. However, configuring an AR or ARX structure is simpler.
Consider the following when choosing a model structure:
• AR and ARX model structures — If you are estimating a time-series model (no inputs), try the AR model structure. If you are estimating an input-output model, try the ARX model structure. Also try
different model orders with these model structures. These models estimate the system output based on time-shifted versions of the output and inputs signals. For example, the a and b parameters of
the system y(t) = b[1]u(t)+b[2]u(t-1)-a[1]y(t-1) can be estimated using ARX models.
For more information regarding AR and ARX models, see What Are Polynomial Models?.
• RLS estimation— If you are estimating a system that is linear in the estimated parameters, but does not fit into AR or ARX model structures, try RLS estimation. You can estimate the system output
based on the time-shifted versions of input-outputs signals like the AR and ARX, and can also add nonlinear functions. For example, you can estimate the parameters p[1], p[2], and p[3] of the
system y(t) = p[1]y(t-1) + p[2]u(t-1) + p[3]u(t-1)^2 . You can also estimate static models, such as the line-fit problem of estimating parameters a and b in y(t) = au(t)+b.
• ARMA, ARMAX, Output-Error, Box-Jenkins model structures — These model structures provide more flexibility compared to AR and ARX model structures to capture the dynamics of linear systems. For
instance, an ARMAX model has more dynamic elements (C polynomial parameters) compared to ARX for estimating noise models. This flexibility can help when AR and ARX are not sufficient to capture
the system dynamics of interest.
Specifying initial parameter and parameter covariance values are especially recommended for these model structures. This is because the estimation method used for these model structures can get
stuck at a local optima. For more information about these models, see What Are Polynomial Models?.
Model Order
Check the order of your specified model structure. You can under-fit (model order is too low) or over-fit (model order is too high) data by choosing an incorrect model order.
Ideally, you want the lowest-order model that adequately captures your system dynamics. Under-fitting prevents algorithms from finding a good fit to the model, even if all other estimation settings
are good, and there is good excitation of system dynamics. Over-fitting typically leads to high sensitivity of parameters to the measurement noise or the choice of input signals.
Estimation Data
Use inputs that excite the system dynamics adequately. Simple inputs, such as a step input, typically does not provide sufficient excitation and are good for estimating only a very limited number of
parameters. One solution is to inject extra input perturbations.
Estimation data that contains deficiencies can lead to poor estimation results. Data deficiencies include drift, offset, missing samples, equilibrium behavior, seasonalities, and outliers. It is
recommended that you preprocess the estimation data as needed.
For information on how to preprocess estimation data in Simulink^®, see Preprocess Online Parameter Estimation Data in Simulink.
For online parameter estimation at the command line, you cannot use preprocessing tools in System Identification Toolbox™. These tools support only data specified as iddata objects. Implement
preprocessing code as required by your application. To be able to generate C and C++ code, use commands supported by MATLAB^® Coder™. For a list of these commands, see Functions and Objects Supported
for C/C++ Code Generation (MATLAB Coder).
Initial Guess for Parameter Values
Check the initial guesses you specify for the parameter values and initial parameter covariance matrix. Specifying initial parameter guesses and initial parameter covariance matrix is recommended.
These initial guesses could be based on your knowledge of the system or be obtained via offline estimation.
Initial parameter covariance represents the uncertainty in your guess for the initial values. When you are confident about your initial parameter guesses, and if the initial parameter guesses are
much smaller than the default initial parameter covariance value, 10000, specify a smaller initial parameter covariance. Typically, the default initial parameter covariance is too large relative to
the initial parameter values. The result is that initial guesses are given less importance during estimation.
Initial parameter and parameter covariance guesses are especially important for ARMA, ARMAX, Output-Error, and Box-Jenkins models. Poor or no guesses can result in the algorithm finding a local
minima of the objective function in the parameter space, which can lead to a poor fit.
Estimation Settings
Check that you have specified appropriate settings for the estimation algorithm. For example, for the forgetting factor algorithm, choose the forgetting factor, λ, carefully. If λ is too small, the
estimation algorithm assumes that the parameter value is varying quickly with time. Conversely, if λ is too large, the estimation algorithm assumes that the parameter value does not vary much with
time. For more information regarding the estimation algorithms, see Recursive Algorithms for Online Parameter Estimation.
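To make the role of λ concrete, the following is a minimal recursive least-squares update with a forgetting factor, written as a plain Python sketch rather than with the toolbox's recursive estimator objects (all variable names are illustrative):

import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    # theta: current parameter estimate (n,), P: parameter covariance (n, n)
    # phi: regressor vector for this sample (n,), y: measured output (scalar)
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain: smaller lam -> larger gain
    err = y - float(phi.T @ theta.reshape(-1, 1))  # one-step prediction error
    theta = theta + K.flatten() * err              # parameter update
    P = (P - K @ phi.T @ P) / lam                  # covariance update with forgetting
    return theta, P

With λ close to 1, old samples keep most of their weight and the estimates change slowly; lowering λ discounts old data geometrically, which is exactly the trade-off described above.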
Related Topics | {"url":"https://se.mathworks.com/help/ident/ug/troubleshoot-online-parameter-estimation.html","timestamp":"2024-11-09T20:21:21Z","content_type":"text/html","content_length":"75787","record_id":"<urn:uuid:78ca5cfd-fb42-48fd-bbc5-09fcabcfc1f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00607.warc.gz"} |
How to Change Cell Values to List Format In Pandas Dataframe?
To change cell values to list format in a pandas dataframe, you can use the apply method along with a lambda function. You can create a lambda function that converts the cell value to a list and then
use the apply method to apply this lambda function to each cell in the dataframe. This will transform the cell values into list format.
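For instance (an illustrative snippet with made-up column names), applying a lambda to every column and then to every cell wraps each value in a single-element list:

import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': ['x', 'y', 'z']})

# Wrap every cell value in a one-element list
df_as_lists = df.apply(lambda col: col.apply(lambda v: [v]))
print(df_as_lists)  # each cell is now a list, e.g. [1] or ['x']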
How to transform pandas dataframe columns to list?
You can transform pandas dataframe columns to a list by using the tolist() method. Here is an example:
import pandas as pd

# Create a sample dataframe
data = {'A': [1, 2, 3, 4],
        'B': ['apple', 'banana', 'cherry', 'date']}
df = pd.DataFrame(data)

# Transform column A to a list
column_A_list = df['A'].tolist()
print('Column A as list:', column_A_list)

# Transform column B to a list
column_B_list = df['B'].tolist()
print('Column B as list:', column_B_list)
This will output:
Column A as list: [1, 2, 3, 4]
Column B as list: ['apple', 'banana', 'cherry', 'date']
How to convert pandas dataframe to list of arrays with specific shape?
You can convert a pandas DataFrame to a list of arrays with a specific shape by first converting the DataFrame to a numpy array and then reshaping it to the desired shape. Here's an example code
snippet to demonstrate how to do this:
import pandas as pd
import numpy as np

# Create a sample DataFrame
df = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': [6, 7, 8, 9, 10]})

# Convert the DataFrame to a numpy array
arr = df.to_numpy()

# Reshape the array to the desired shape
desired_shape = (2, -1)  # 2 rows and the appropriate number of columns
arr_reshaped = np.reshape(arr, desired_shape)

# Convert the reshaped array to a list of arrays
list_of_arrays = arr_reshaped.tolist()

print(list_of_arrays)
In this example, the DataFrame df is first converted to a numpy array arr, which is then reshaped to have 2 rows and the appropriate number of columns (determined by the -1 in the desired_shape).
Finally, the reshaped array is converted to a list of arrays list_of_arrays.
What is the fastest way to convert pandas dataframe to list in Python?
One of the fastest ways to convert a pandas DataFrame to a list in Python is to use the values.tolist() method. This method converts the DataFrame to a list of lists containing the data from the DataFrame.
Here is an example:
import pandas as pd

# Create a sample DataFrame
df = pd.DataFrame({'A': [1, 2, 3], 'B': ['a', 'b', 'c']})

# Convert DataFrame to a list of lists
list_of_lists = df.values.tolist()

print(list_of_lists)
This will output:
[[1, 'a'], [2, 'b'], [3, 'c']]
Using the values.tolist() method is a fast and efficient way to convert a pandas DataFrame to a list in Python.
How to convert a pandas dataframe to list?
You can convert a pandas DataFrame to a list by using the values attribute of the DataFrame. Here's how you can do it:
import pandas as pd

# Create a sample DataFrame
data = {'A': [1, 2, 3], 'B': ['a', 'b', 'c']}
df = pd.DataFrame(data)

# Convert the DataFrame to a list
df_list = df.values.tolist()

print(df_list)
This will output:
[[1, 'a'], [2, 'b'], [3, 'c']]
Each row of the DataFrame will be converted to a list, and all these lists will be stored in another list.
What is the algorithm for converting pandas dataframe to list in Python?
To convert a pandas DataFrame into a list in Python, you can use the values attribute of the DataFrame to get a numpy array representation, and then convert the numpy array to a list.
Here is an example algorithm:
1. Import the pandas library
2. Create a pandas DataFrame
3. Use the values attribute of the DataFrame to get a numpy array representation
4. Convert the numpy array to a list using the tolist() method
Here is a sample code snippet:
import pandas as pd

# Create a sample DataFrame
data = {'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']}
df = pd.DataFrame(data)

# Convert the DataFrame to a list
df_list = df.values.tolist()

print(df_list)
This will output:
[[1, 'foo'], [2, 'bar'], [3, 'baz']]
Now, df_list is a list representation of the pandas DataFrame df. | {"url":"https://ubuntuask.com/blog/how-to-change-cell-values-to-list-format-in-pandas","timestamp":"2024-11-05T17:10:59Z","content_type":"text/html","content_length":"348275","record_id":"<urn:uuid:b8650183-3712-41b6-b07b-0ba45d07bfe1>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00439.warc.gz"} |
The Longevity of Rankings
A phase transition controlled by noise determines how volatile rankings are.
Figure 1: Phase diagram of ranking stability in the $A$–$B$ plane ($A$: fitness, $B$: noise). For a given ranking system, $A$ is a vector of constants (${A}_{i}$) representing the “fitness” of all
items of the ranked list, $B$ is a parameter measuring the ranking’s noise. Every real ranking system is represented by a line corresponding to the experimentally determined value of $B$. In analogy
to the classical phases of statistical mechanics, three phases are identified based on the stability of the top-ranked items: rank stable (solid), score stable (liquid), and unstable or volatile
(gas). $B$ is the control parameter of the phase transition. The lower panel shows the rank evolution for the top-ranked items of a stable system (diseases diagnosis in Medicare) and a volatile one
(page views in Wikipedia).
Whenever we use Google’s search engine, shop for bargains on Amazon, or evaluate a colleague through citation measures such as the $h$-index, we are relying on rankings to bring order into large and
complex datasets. We would be much better at making decisions if we could thoroughly understand the mechanisms that drive these rankings. Can we trust a ranking system to point out the items of
highest quality? Can lousy items occasionally reach the top of a ranking? Will valuable ones always emerge? Certain rankings, like those measuring the number of times scientists are cited, show
remarkable stability: it would take some effort to replace Einstein or Darwin as the most talked about scientists. Others, like bestseller lists, have a very volatile nature and fluctuate on a daily
basis. Why such a different behavior? In Physical Review Letters, Nicholas Blumm at Northeastern University and the Dana-Farber Cancer Institute, both in Boston, Massachusetts, and colleagues report
on a study of the volatility of several prominent ranking systems [1]. From their analysis, a unified theory of ranking stability emerges.
Researchers apply theories rooted in statistical mechanics to explain the properties of particularly important rankings. A ranking is typically described by distribution functions, relating the
probability that an item is ranked at a certain position to key parameters of the system [2]. For example, the American linguist George Kingsley Zipf [3] observed that the usage rank of a word is, to
a good approximation, inversely proportional to its frequency: the most frequent word will occur twice as often as the second most frequent word, three times as often as the third most frequent word,
etc. This scaling applies to all languages and has been interpreted by Zipf [3] and more recent studies [4] in terms of a least-effort principle: minimization of the efforts of both hearer and
speaker in a conversation leads to a Zipf-like distribution law, a hallmark of the efficient mechanisms by which human languages are generated. Similar scaling laws are observed in other rankings
unrelated to language, such as the distribution of incomes described by the Italian economist Vilfredo Pareto [5], who noticed that a small proportion of a population owns a large part of the wealth.
The coefficient of the Pareto’s power law is often taken as an indicator of a society’s inequalities. These examples illustrate how statistical analysis can reveal profound and sometimes hidden
mechanisms that govern the system being ranked.
Blumm et al. go beyond the description of ranking distribution functions and focus instead on what determines their stability in time. The authors search for a common law regulating ranking dynamics
by analyzing six prominent ranking systems: the use of individual words in published literature, the hourly page views in Wikipedia, the frequency of certain keywords used in Twitter, the daily
market capitalization of companies, the number of diagnosis of a specific disease recorded by Medicare, and the number of article citations in the Physical Review corpus. Each ranking system is based
on a different mechanism for assigning scores to different items of a list. The rank of a specific item is obtained by comparing its score to those of other items. Rank is thus a collective measure,
depending both on an item’s score and on what happens to the rest of the ranked system.
The authors observe that the stability of an item’s rank depends on the fluctuations of the score around its mean value. An item ranked at a certain position ( $r$) is rank-stable if the score
fluctuates less than the gap to the consecutively ranked items ( $r±1$). To describe the score dynamics, Blumm et al. apply a universal stochastic equation (a Langevin equation) that can describe the
evolution of systems under the simultaneous action of deterministic and stochastic forces. The authors assume that the deterministic and stochastic terms can be represented by power-law functions of
the item’s score, weighted by a series of constants $Ai$ (for every item) and $B$. The constant $A$ captures the “fitness” of each item, describing the aptitude to increase its score. For example, in
social media, $A$ measures the ability to acquire new friends or followers, or in publishing, the capacity of an article to get new citations. $B$, instead, models a Gaussian random noise that
determines stochastic score fluctuations. For the six investigated rankings, the authors derive empirical values of $A$ and $B$ by fitting historical data.
The interplay of these two weights determines the ranking within the system and, more importantly, its stability. The authors calculate the probability that a certain item with fitness $A$ has a
certain score $x$ at a given time. Under the assumption that the system reaches a steady-state solution, they find that the most likely score depends on the relative value of fitness compared to
other items’ fitness. The effect of the noise is to make the score fluctuate by a certain amount. The outcome depends critically on the value of the noise parameter $B$. If the noise is lower than a
certain critical value $Bc$, the score remains localized around the original value. If the noise is larger than $Bc$, the solution is no longer stable. Since the stability of the score does not
necessarily imply rank stability, two distinct regimes can be found below $Bc$. For noise between $Bc$ and a certain value $Br$, each item has a stable score, but the fluctuations are sufficient for
items with comparable score to swap their rank. Below $Br$, both ranks and scores are stable. Blumm et al. demonstrate that the volatility of ranking can be captured by a phase diagram in the $A$–
$B$ plane (shown in Fig. 1), where ranking stability properties are plotted as a function of the two parameters $A$ and $B$. Three phases are identified in analogy to the classical phases of
statistical mechanics: ranking and score stable (solid), score-only stable (liquid), and volatile (gas). Transitions between different regimes of ranking volatility can be described as phase
transitions in which the random noise ( $B$) is the control parameter.
The authors test the validity of this approach by considering the ranking dynamics for the top five items of the six investigated examples. In the $A$– $B$ diagram, one can represent every real
system with a line corresponding to the experimental value measured for $B$ (see Fig. 1). Medicare, word usage, and market cap are in the rank-stable regime, in which highly ranked items should
display rank stability, a prediction that agrees with empirical results. Conversely, Twitter keywords usage and Wikipedia page views are in the unstable phase, with high volatility of both score and
ranking. Finally, Physical Review citations fall in the score-stable, liquidlike phase: the scores fluctuate around a well-defined average, but this is not sufficient to maintain rank stability.
The work of Blumm et al. delivers a fresh contribution to the study of ranking in social and economic systems, formulating a universal, scale-invariant theory that captures the dynamics of a variety
of rankings with wildly different volatility properties. Most of the differences can be attributed to a phase transition controlled by the stochastic noise strength. It is tempting to conclude that
the ephemeral nature of modern social media like Twitter or Wikipedia explains the larger noise (hence volatility) compared to established rankings such as that of word usage in English literature.
Further studies should explore in more detail the origin of noise in ranking. Another important direction for future research is the extension to correlated noise (in real-life systems, ranking
fluctuations of different items may be mutually dependent).
It is reassuring to know that Darwin and Einstein will continue to top scientific rankings for the foreseeable future. However, as a statistical physicist, I am also intrigued by the fact that, in
our ranking-obsessed world, a small fluctuation (or a bit of luck) may be all it takes to turn today’s also-ran into tomorrow’s number one.
1. N. Blumm, G. Ghoshal, Z. Forró, M. Schich, G. Bianconi, J-P. Bouchaud, and A-L. Barabási, ”Dynamics of Ranking Processes in Complex Systems,” Phys. Rev. Lett. 109, 128701 (2012)
2. M. Mitzenmacher, “A Brief History of Generative Models for Power Law and Lognormal Distributions,” Internet Math. 1, 226 (2004)
3. G. K. Zipf, Human Behavior and the Principle of Least Effort (Addison-Wesley, Cambridge, 1949)
4. R. Ferrer i Cancho and R. V. Solé, “Least effort and the origins of scaling in human language,” Proc. Natl. Acad. Sci. U.S.A. 100, 788 (2003)
5. V. Pareto, Cours d’Économie Politique (Librairie Droz, Geneva, 1896) | {"url":"https://physics.aps.org/articles/v5/105","timestamp":"2024-11-13T07:49:26Z","content_type":"text/html","content_length":"37051","record_id":"<urn:uuid:e46ee60e-7458-42f3-a8b4-da5aee0a86a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00512.warc.gz"} |
A review and appraisal of arrival-time picking methods for downhole microseismic data
We have evaluated arrival-time picking algorithms for downhole microseismic data. The picking algorithms that we considered may be classified as window-based single-level methods (e.g., energy-ratio
[ER] methods), nonwindow-based single-level methods (e.g., Akaike information criterion), multilevel- or array-based methods (e.g., crosscorrelation approaches), and hybrid methods that combine a
number of single-level methods (e.g., Akazawa’s method). We have determined the key parameters for each algorithm and developed recommendations for optimal parameter selection based on our analysis
and experience. We evaluated the performance of these algorithms with the use of field examples from a downhole microseismic data set recorded in western Canada as well as with pseudo-synthetic
microseismic data generated by adding 100 realizations of Gaussian noise to high signal-to-noise ratio microseismic waveforms. ER-based algorithms were found to be more efficient in terms of
computational speed and were therefore recommended for real-time microseismic data processing. Based on the performance on pseudo-synthetic and field data sets, we found statistical, hybrid, and
multilevel crosscorrelation methods to be more efficient in terms of accuracy and precision. Pick errors for S-waves are reduced significantly when data are preconditioned by applying a
transformation into ray-centered coordinates.
downhole methods
arrival-time picking
signal processing
polarization filter
wavelet transform
Arrival-time picking is a fundamental step in the processing of downhole microseismic data for phase identification and later processing, such as the accurate determination of a microseismic
hypocenter (e.g., Maxwell et al., 2010). Any errors and/or misidentifications of these arrival times may have significant effects on the hypocenters and thus the interpretation of such results,
particularly if a systematic bias is present. With the ever-increasing size of microseismic data volumes, the task of manually picking arrival times in a timely fashion is impossible and automated
methods must be used. Numerous automated algorithms have been proposed for arrival-time picking, operating in either the time or frequency domain and for single- or multicomponent data. Some examples
of commonly used algorithms include the short- and long-time average ratio (STA/LTA) (Allen, 1978; Baer and Kradolfer, 1987; Earle and Shearer, 1994; Withers et al., 1998), modified energy ratio
(MER) (Han et al., 2009; Gaci, 2014), modified Coppens’ method (MCM) (Sabbione and Velis, 2010), Akaike information criterion (AIC) (Takanami and Kitagawa, 1991; Sleeman and Van Eck, 1999; Leonard,
2000; Zhang et al., 2003; Diehl et al., 2009), algorithms based on fractals (Boschetti et al., 1996; Jiao and Moon, 2000), crosscorrelation (Molyneux and Schmitt, 1999; Raymer et al., 2008; De
Meersman et al., 2009), neural networks (McCormack et al., 1993; Dai and MacBeth, 1995; Gentili and Michelini, 2006), digital image segmentation (Mousa et al., 2011), and higher order statistics such
as skewness and kurtosis (Yung and Ikelle, 1997; Saragiotis et al., 2002, 2004; Küperkoch et al., 2010; Tselentis et al., 2012; Lois et al., 2013, 2014). These algorithms have been extensively used
in earthquake and exploration seismology to pick first arrivals and other seismic phases; with some modifications, they can also be useful for picking P- and S-wave arrivals on microseismic data. For
example, Diehl et al. (2009) use the AIC algorithm in combination with STA/LTA and polarization detector to pick S-wave on local earthquakes, whereas Tan et al. (2014) pick P- and S-wave arrival
times on microseismic data using an STA/LTA-polarization-AIC hybrid approach. Küperkoch et al. (2010) use higher order statistics (skewness and kurtosis) to pick P-wave arrival times on local and
regional earthquakes, whereas Lois et al. (2014) use kurtosis to estimate S-wave arrival time on microseismic data. Similarly, VanDecar and Crosson (1990) use a crosscorrelation-based method to pick
relative phase arrival times for teleseismic events, and De Meersman et al. (2009) use an iterative algorithm based on crosscorrelation to refine initially picked arrival times for microseismic data.
Despite the plethora of algorithms from which to choose, accurate arrival-time picking remains a challenge. Microseismic data are often characterized by low signal-to-noise ratios (S/Ns) and complex
waveforms. We define S/N as the ratio of root-mean-square (rms) amplitude in an inferred signal window to rms amplitude in a noise window. Typically, higher signal frequencies and weaker amplitudes
are associated with smaller events, which makes automated arrival-time picking difficult in low-S/N environments. For microseismic data, the presence of a strong coda after the P-wave arrival as well
as mode-converted arrivals can make the picking of direct S-wave arrivals a challenging task (Chen et al., 2005; Lois et al., 2013). Picking errors will affect precision and accuracy of computed
hypocenters, particularly if a systematic bias exists.
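To make that S/N definition concrete, a small helper of the following form could be used (an illustrative sketch only; the window positions are assumptions that would come from a detection or preliminary pick):

import numpy as np

def snr_rms(trace, signal_window, noise_window):
    # S/N = rms amplitude in the signal window / rms amplitude in the noise window
    rms = lambda seg: np.sqrt(np.mean(np.asarray(seg, dtype=float) ** 2))
    return rms(trace[signal_window]) / rms(trace[noise_window])

# e.g. snr_rms(x, slice(1000, 1400), slice(0, 400)) for a 0.25 ms sampled trace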
Sharma et al. (2010) point out that no time-picking algorithm is optimal under all conditions and that algorithms tend to become unstable under noisy conditions. Thus, understanding the parameters
and the limitations of these algorithms can help to improve data processing outcomes. Similarly, knowledge of any algorithm’s speed is important, especially if the objective is to perform real-time
data analysis to provide timely feedback during well completions.
In this paper, we provide a review of existing arrival-time picking algorithms: STA/LTA, MER, MCM, AIC, phase arrival identification — kurtosis (PAI-K), hybrid techniques, and crosscorrelation-based
methods. In addition, we evaluate their performance using pseudo-synthetic and field data examples. The term “pseudo-synthetic” is applied because we add 100 realizations of Gaussian noise to a
high-S/N microseismic event recorded during hydraulic fracture monitoring. We also use downhole microseismic data (112 events) from the Hoadley Flowback Microseismic Experiment (HFME) in western
Canada (Eaton et al., 2014). We discuss key parameters for each algorithm and provide recommendations for optimal parameter selection based on our analysis and experience. Examples are presented to
explain the preconditioning of input data for arrival-time picking and also to discuss frameworks for picking P- and S-arrivals using unrotated ($x$, $y$, and $z$) and ray-centered coordinate
rotated (p, s1, and s2) waveforms. Finally, we evaluate the algorithms in terms of speed, pick error, and provide quantitative and qualitative comparisons. Analysis of algorithm performance in the
presence of complex waveforms containing confused arrivals or coda due to reflection and refraction are not considered, as these topics are beyond the scope of the present study.
Downhole microseismic data for the HFME project were acquired during and after an open-hole multistage hydraulic fracture treatment in two horizontal wells. The wells were completed in a lower
Cretaceous Glauconitic tight sand reservoir in the Hoadley gas field, Alberta, a giant gas-condensate field discovered in 1977. Vertical wellbores were initially used to produce from the most
permeable sand bodies, but the introduction of multiple horizontal well technology has shifted the focus to the immense unconventional resource potential in this play (Eaton et al., 2014).
Real-time monitoring of the microseismicity during the hydraulic-fracture treatment of two horizontal wells was conducted on 18–19 September 2012. A 12-level receiver array of 15-Hz triaxial
geophones was used for downhole recording with a sampling rate of 0.25 ms in a vertical well that was located between the two horizontal treatment wells. The total array length was 229 m with
variable receiver spacing: 15.25 m for the bottom eight receivers and 30.5 m for the top four receivers (Eaton et al., 2014).
Initial processing of the recorded data was performed by Engineering Seismology Group (ESG), and a total of 1660 events including 259 postpumping events were detected using the STA/LTA algorithm (
Eaton et al., 2014). In this paper, we select 112 representative microseismic events from the event database. Figure 1 shows an example of raw (unfiltered) three-component (3C) waveforms for a
detected microseismic event. The direct P-wave arrival is more distinct on the $z$-component, whereas the S-wave arrival is stronger on the $x$- and $y$-components.
Arrival-time picking algorithms can be applied to individual components of 3C data, or a combination of components. The use of a combination of components offers the capability to reduce data
dimensionality by providing a single attribute, which can be useful in obtaining a unique estimate of P- and S-arrival times from the 3C data and also decreases the computational cost, thus making it
more suitable for real-time monitoring applications. To increase the S/N by damping random noise, Saari (1991) uses the absolute value of the product of the amplitudes of 3C data as input to STA/LTA
algorithms. Similarly, Oye and Roth (2003) use the stack of absolute values of 3C data as input to STA/LTA and AIC algorithms. Figure 2 shows the results from both approaches for the 3C example data.
The quality of the product and stack of absolute values of 3C data is affected by the presence of strong noise in any of the components. The product suppresses the background noise efficiently but
also significantly reduces the signal if it is present only on one of the components. For most of our analyses, we use the absolute amplitude stack because it is more effective for data with a low S/
The P- and S-wave signals can also be enhanced by transforming observed data into ray-centered coordinates. Transformation into ray-centered coordinates requires time windows that define the start
and end of P- and S-arrivals. In the absence of picked initial arrival times, the start and end of these windows can be approximated using a polarization attribute-based criterion or the STA/LTA
algorithm. Figure 3 shows unrotated and ray-centered-coordinate rotated waveforms for the example data (receiver 5 from the data shown in Figure 1). The accompanying spectrograms indicate an
approximately 50 Hz dominant frequency for the P- and S-wave arrivals. The transformation into ray-centered coordinates maximizes the P- and S-wave amplitudes on the corresponding rotated waveforms
(p and s1), respectively, and also improves the S/N and waveform quality (Figure 3).
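For readers unfamiliar with the transform, the sketch below illustrates the basic idea of projecting three-component data onto an estimated P-polarization direction and two transverse directions; it is a simplified illustration, not the exact procedure used here (the choice of the transverse pair, in particular, is arbitrary in this sketch):

import numpy as np

def to_ray_centered(xyz, p_hat):
    # xyz: (n_samples, 3) three-component record; p_hat: assumed P-polarization direction
    p_hat = np.asarray(p_hat, dtype=float)
    p_hat = p_hat / np.linalg.norm(p_hat)
    # Build an orthonormal pair spanning the plane perpendicular to p_hat
    tmp = np.array([1.0, 0.0, 0.0]) if abs(p_hat[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    s1 = np.cross(p_hat, tmp)
    s1 = s1 / np.linalg.norm(s1)
    s2 = np.cross(p_hat, s1)
    R = np.vstack([p_hat, s1, s2])   # rows are the new basis vectors
    return xyz @ R.T                 # columns: p, s1, s2 components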
Prior to the application of any time-picking method, we recommend S/N enhancement using a noise-filtering technique such as polarization filtering (Vidale, 1986; Reading et al., 2001), $f-k$ analysis
(Maxwell et al., 2005), or wavelet-transform based denoising (Gaci, 2014). Here, we apply a minimum-phase bandpass filter (10–150 Hz) and a polarization filter (Vidale, 1986; Reading et al., 2001).
Although band-pass filtering is useful for suppressing unwanted frequencies, it is less effective when signal and noise bandwidths overlap. On the other hand, polarization filtering focuses on
temporal changes in the degree of polarization and suppresses the background noise, while preserving the character of linearly polarized signals.
Arrival-time picking algorithms can be classified into two main categories: (1) single-level and (2) multilevel (or array processing)-based algorithms. Single-level algorithms operate on
single-component or multicomponent recordings from an individual receiver level, and thus they do not make use of data from other locations within the recorded receiver array. In contrast, multilevel
algorithms, such as crosscorrelation-based algorithms, make simultaneous use of information on multiple receiver levels within the array. The single-level algorithms considered in this study can be
further classified into window-based and nonwindow-based methods. The window-based algorithms require the specification of window size and location to compute microseismic data attributes for
defining criteria for arrival-time picking. Many hybrid algorithms also exist in the literature, which combine information from different individual algorithms to achieve more accurate and precise
arrival-time picks because none of the individual algorithms are optimal in all conditions (Sharma et al., 2010).
In this section, we review the theoretical background of some algorithms, discuss their parameters, and explain the workflow used to pick P- and S-wave arrival times for each algorithm. A summary of
the methods reviewed and key references are given in Table 1.
Window-based single-level algorithms
STA/LTA ratio
The STA/LTA ratio is a measure similar to the S/N whereby the STA is sensitive to rapid fluctuations in the amplitude of the time series, whereas the LTA provides information about the background
noise (
Trnkoczy, 2002
). A configuration of the method to avoid overlap between STA and LTA windows is important to ensure statistical independence between two values (
Taylor et al., 2010
). Based on the principle of causality, the STA window should always lead the LTA window. The STA and LTA windows can be chosen as presample windows (i.e., preceding the time sample for which STA/LTA
is being computed), in which, the arrival times are picked on the maximum value of the derivative function of the STA/LTA curve. Choosing a postsample STA window (i.e., following the time sample for
which STA/LTA is being computed) and a presample LTA window allows picking of arrival times based on the maximum of STA/LTA curve. The generalized expressions for STA and LTA at the
th time sample are
represent the number of samples in the short- and long-time windows, respectively. CF is a characteristic function that can represent the waveform energy (
Wong et al., 2009
), absolute amplitude (
Trnkoczy, 2002
), or any other mathematical amalgam of microseismic data and its derivatives (e.g.,
Allen, 1978
Baer and Kradolfer, 1987
Saari, 1991
Earle and Shearer, 1994
). For simplicity, we use the energy function, which only enhances the amplitude changes.
Figure 4b shows the STA/LTA response for the unrotated components for the example data (Figure 4a). The absolute amplitude stack was computed from the unrotated ($x$, $y$, and $z$) components and
used as input to generate STA/LTA response curves. The manually picked P- and S-arrivals (best estimates) are also shown with blue and red vertical lines, respectively. In this case, the onsets of
STA/LTA for the local maxima are closer to the arrival times as compared with the time sample associated with the local maxima. We therefore pick on the maximum value of the derivative function of
STA/LTA curve around the local maxima.
Window size plays a key role in the performance of STA/LTA algorithms. An STA window that is too short will result in improper averaging of the microseismic signal and meaningless noise fluctuations
on the STA/LTA curve. On the other hand, if the LTA window is too long, it will obscure events that are closely spaced in time. The STA and LTA window sizes depend on the frequency characteristics of
the microseismic waveform. Generally, the choice of STA window should be longer than a few periods of a typical microseismic signal, whereas an optimal LTA window should be longer than a few periods
of the irregular noise fluctuations. A longer LTA window also makes the STA/LTA more sensitive to P-waves and is useful in events with a weaker P-wave compared to the S-wave signal (Earle and
Shearer, 1994; Trnkoczy, 2002).
Figure 5a shows STA/LTA curves obtained using different window sizes. The apparent dominant period of the signal ($τdom=1/50Hz=0.02 s$, which is equivalent to 80 time samples with 0.25 ms sampling
interval) estimated from the spectrograms (Figure 3) is used to compute STA and LTA windows. The STA/LTA responses are shown for STA window sizes ({1, 2, 3}$τdom$) and for LTA window sizes ({3, 5,
10, 14, 15, 21}$τdom$). The STA/LTA shows spurious fluctuations in the case of STA and LTA windows that are too short ($τdom$ and $3τdom$, respectively). This occurs because the LTA window is not
sufficiently long to give an average value of the local noise and is sensitive to noise fluctuations. A significant improvement is observed when a slightly longer LTA ($5τdom$) is used. This may
nevertheless produce false picks in the case of data with low S/N. Increasing the STA window size to $(2–3)τdom$ reduces the sensitivity to noise, whereas increasing the LTA window size improves the
averaging of local noise. Based on these considerations, we recommend an STA window size that is equivalent to $(2–3)τdom$ and an LTA window size that is 5–10 times the STA size. Similar values for
the STA and LTA windows were suggested by Han (2010).
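To make the recipe explicit, a bare-bones picker of this type might look as follows (an illustrative sketch, not the implementation used in this study; it uses the energy characteristic function with a post-sample STA and a pre-sample LTA, so the pick is simply the maximum of the ratio, and the derivative-based refinement described above is omitted):

import numpy as np

def sta_lta_pick(x, dt, f_dom=50.0, sta_periods=2.0, lta_periods=10.0):
    cf = np.asarray(x, dtype=float) ** 2            # energy characteristic function
    n_sta = int(sta_periods / f_dom / dt)           # ~2 dominant periods
    n_lta = int(lta_periods / f_dom / dt)           # ~10 dominant periods (5x STA)
    ratio = np.zeros_like(cf)
    for i in range(n_lta, len(cf) - n_sta):
        sta = cf[i:i + n_sta].mean()                # post-sample short window
        lta = cf[i - n_lta:i].mean()                # pre-sample long window
        ratio[i] = sta / (lta + 1e-12)
    return int(np.argmax(ratio))

With the 0.25 ms sampling used here and τdom ≈ 0.02 s, these defaults correspond to the (2–3)τdom STA and 5–10 × STA LTA recommended above.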
MER algorithm
The MER algorithm, proposed by
Han et al. (2009)
, is an extension of STA/LTA in which the pre- and postsample windows are of equal size (
Mikesell et al., 2012
Gaci, 2014
). The energy ratio (ER) at the
th time sample is given by
is the window length and
is the input series. The MER is, then, given by
Because the ER function is computed using post- and presample windows, the time index associated with the maximum of the MER represents the arrival-time pick. Figure 4c shows the MER curve for the
example data. The P- and S-wave arrivals occur near the local maxima in the corresponding intervals.
Careful selection of window size is also important to achieve better results using the MER algorithm. Like STA, window size should be longer than a few periods of the microseismic signal to avoid
false picks from noise fluctuations and to pick the signal changes properly. Figure 5b shows the MER curves for different window sizes ({1, 1.5, 2, 2.5, 3, 5}$τdom$). These curves exhibit clear
local maxima for P- and S-arrivals for window sizes ($τdom-3τdom$), but the response deteriorates for a longer window size ($5τdom$). In the case of a low S/N, we recommend the use of longer
windows ($2τdom−3τdom$) for greater stability because smaller windows will be more sensitive to noise fluctuations.
Sabbione and Velis (2010)
present a modified version of
Coppens’ (1985)
method, known as MCM. Like other ER algorithms, energy is computed in two windows, but the window-size selection process differs from STA/LTA and MER. The energy windows at the
th time sample are
is the length of the leading window. The second window is an increasing window, which can be useful in providing a robust response at the onset of the first arrival. The ER is then computed as
is a stabilization constant, which is introduced to minimize the number of false picks by reducing the rapid fluctuations of the MCM curve. However,
Sabbione and Velis (2010)
find that the selection of
is not critical and fix it at 0.2 for the input data that were normalized to (
, 1). An edge-preserving smoothing filter (EPS) for the MCM curve is also recommended to enhance the transition between noise and noise plus signal for arrival-time picking (
Sabbione and Velis, 2010
). A running-average filter was chosen here for the EPS filtering. It should be noted that inclusion of this step is computationally expensive and slows the MCM algorithm. A five-point EPS operator
requires the analysis of data in five windows; considering the recommended length (
Sabbione and Velis, 2010
) for the EPS operator, in our case (
time samples), an analysis of 80 windows is required for each time sample. More details on EPS filtering are given by
Luo et al. (2002)
and by
Sabbione and Velis (2010)
Figure 4d shows the MCM curve for the example data. Like STA/LTA and MER algorithms, the purpose of the leading window is to identify the signal changes. We therefore recommend a similar window size
($τdom-3τdom$) for accurate and stable arrival-time picks.
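For orientation only, an MCM-style characteristic function might be sketched as below. We assume the ER is the leading-window energy divided by the cumulative (increasing-window) energy plus the stabilization constant, with the trace normalized to (−1, 1) so that the constant 0.2 is meaningful; the EPS filtering step is omitted, and the exact definitions should be taken from Sabbione and Velis (2010).

```python
import numpy as np

def mcm(x, leading_window, beta=0.2):
    """Assumed MCM-style energy ratio (EPS smoothing omitted).

    e1_i = energy in a leading window of `leading_window` samples ending at i
    e2_i = cumulative energy from the first sample up to i (increasing window)
    er_i = e1_i / (e2_i + beta), with the input normalized to (-1, 1).
    """
    x = np.asarray(x, float)
    x = x / (np.max(np.abs(x)) + 1e-12)          # normalize to (-1, 1)
    csum = np.concatenate(([0.0], np.cumsum(x ** 2)))
    out = np.zeros(len(x))
    for i in range(len(x)):
        a = max(0, i - leading_window + 1)
        out[i] = (csum[i + 1] - csum[a]) / (csum[i + 1] + beta)
    return out
```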
Saragiotis et al. (2002) propose an arrival-time picking algorithm based on higher order statistics (PAI-K), where the characteristic curve is formed from the kurtosis values on a sliding window for the entire input waveform length.
The kurtosis, for a finite-length sequence of input data $x$, is defined as (Küperkoch et al., 2010) $\mathrm{kurt}(x)=E[(x-\mu_x)^{4}]\big/\big(E[(x-\mu_x)^{2}]\big)^{2}$, where $E[\,\cdot\,]$ is the expected value of its argument, $\mu_x$ is the sample mean, and the numerator and denominator are the central statistical moments of order four and two, respectively. For Gaussian distributions, the kurtosis becomes 3. The arrival time is picked on the maximum slope of the corresponding local maxima for the P- and S-wave arrivals on the kurtosis characteristic curve
. Because kurtosis can be regarded as a measure of the heaviness of the tails in the input data distribution, it can be very effective in the signal identification process, assuming that the noise in
the input data is close to a Gaussian distribution and the signal is non-Gaussian (
Saragiotis et al., 2002
Nippress et al., 2010
Lois et al., 2013
). Figure
shows the PAI-K curve for the example data. For this case, the P-wave arrival is clearly visible whereas a smaller PAI-K response indicates the presence of S-wave arrival. The S-wave pick appears
beyond the manually picked best estimate because a longer window length is selected.
Küperkoch et al. (2010) recommend adaptation of the window length based on the frequency characteristics of the data. A window that is too short will yield a biased kurtosis estimate, resulting in
false and early picks, especially for a low S/N. A longer window may provide picks that are beyond the real S-wave arrival, and a window that is too long will result in missing the significant
amplitude variations (Lois et al., 2013). Figure 6a shows the PAI-K curves for different window sizes ({2, 4, 6, 8, 10, 12}$τdom$). The P-wave arrival is clearly visible in all cases, but the S-wave
arrival is picked accurately only for the window size ($2τdom$). A delayed S-wave arrival pick results for other cases. The PAI-K curve for a window size of $2τdom$ tends to be unstable and may
provide false picks for data with a low S/N because the background noise becomes non-Gaussian in many windows. For this reason, we recommend use of a window length $≥10τdom$ to ensure higher values
at the signal interval while suppressing the background noise.
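The PAI-K characteristic curve is essentially a sliding-window kurtosis; a minimal assumed implementation (Pearson's kurtosis, which is 3 for Gaussian noise) is sketched below with our own function names.

```python
import numpy as np

def paik(x, window):
    """Sliding-window (Pearson) kurtosis characteristic function.

    `window` is in samples; a long window (>= 10 dominant periods) keeps the
    background-noise estimate close to Gaussian. The arrival is picked on the
    maximum slope just before the local maximum of the curve.
    """
    x = np.asarray(x, float)
    out = np.zeros(len(x))
    for i in range(window, len(x)):
        seg = x[i - window:i]
        m = seg.mean()
        m2 = np.mean((seg - m) ** 2)        # central moment of order 2
        m4 = np.mean((seg - m) ** 4)        # central moment of order 4
        out[i] = m4 / (m2 ** 2 + 1e-20)     # ~3 for Gaussian noise
    return out
```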
Li et al. (2014)
propose a short-term kurtosis and long-term kurtosis ratio (S/L-Kurt)-based approach, which was inspired by the STA/LTA method. The S/L-Kurt method is effective for P- and S-arrivals because it
reduces any bias in the short-term kurtosis (STK) and long-term kurtosis (LTK) windows. The STK and LTK preceding the $i$th time index are the kurtosis values computed in a short-term and a long-term window, respectively, each using its own sample mean. The S/L-Kurt is then given as the ratio of STK to LTK, with a small number added in the denominator to avoid division by zero. The arrival times are picked on the maximum slope of the local maxima in the corresponding P- and S-wave intervals. Figure
shows the S/L-Kurt curve for the example data. In comparison with PAI-K, S/L-Kurt shows the P- and S-signals clearly. The arrival time is picked on the maximum slope of the corresponding local maxima
for the P- and S-wave arrivals. Figure
shows the S/L-Kurt curves for different short ({1, 2, 3}
) and long ({3, 9, 10, 14, 15, 21}
) window sizes. In all cases, the P- and S-wave arrivals are clearly visible. A shorter window size (
) produces rapid fluctuations that are suppressed when the window size is increased (
). The size of the longer window equivalent to 3–7 times the shorter window is recommended for the S/L-Kurt because a much longer window will affect the response for later arrivals.
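By analogy, the S/L-Kurt curve can be sketched as the ratio of kurtosis values in a short and a long window preceding each sample. The normalization details in Li et al. (2014) may differ from this assumed STK/(LTK + ε) form, so the snippet is illustrative only.

```python
import numpy as np

def _kurt(seg):
    m = seg.mean()
    m2 = np.mean((seg - m) ** 2)
    m4 = np.mean((seg - m) ** 4)
    return m4 / (m2 ** 2 + 1e-20)

def sl_kurt(x, short_win, long_win, eps=1e-6):
    """Assumed short-/long-term kurtosis ratio: STK_i / (LTK_i + eps).

    Recommended sizes: short window of 2-3 dominant periods; long window
    3-7 times the short window.
    """
    x = np.asarray(x, float)
    out = np.zeros(len(x))
    for i in range(long_win, len(x)):
        out[i] = _kurt(x[i - short_win:i]) / (_kurt(x[i - long_win:i]) + eps)
    return out
```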
Nonwindow-based single-level algorithms
AIC algorithm
The AIC algorithm is based on the concept that microseismic signals are nonstationary and can be approximated by dividing an observed waveform into locally stationary segments, where each segment is
treated as an autoregressive process (
Sleeman and Van Eck, 1999
Leonard, 2000
). For the
th data sample of a microseismic waveform of length
, the AIC value is represented as
is the order of the autoregressive model,
are the variances of microseismic waveforms in the two intervals not explained by the autoregressive process, and
is a constant (
Zhang et al., 2003
). The autoregressive model order is estimated by trial and error on the data window containing noise. The AIC function computed using the estimated model order provides a measure of the model fit,
and optimal separation of the two stationary time series (noise and signal) is indicated by the time index associated with the minimum value of AIC (
Ahmed et al., 2007
Tronicke, 2007
Maeda (1985)
computes the AIC directly from the time series without using the autoregressive model coefficients. In this case, AIC is represented as
$\mathrm{AIC}(k)=k\log\big(\mathrm{var}\{x(1,k)\}\big)+(N-k-1)\log\big(\mathrm{var}\{x(k+1,N)\}\big),$
where $k$ ranges through all samples of the input microseismic waveform and $\mathrm{var}\{\cdot\}$ is the variance function. Figure
shows the AIC response curve for the example data. The P-wave arrival is clearly visible, whereas the presence of the S-wave arrival is not clear on AIC. This is because AIC defines the onset point
as a global minimum and it is necessary to provide an estimate of arrival-time windows in the case of multiple arrivals for better accuracy (
Zhang et al., 2003).
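Maeda's variance-based AIC above is simple to evaluate directly; a minimal sketch follows. The names are ours, and in practice the function is applied to a short window believed to contain a single onset rather than to the whole trace.

```python
import numpy as np

def maeda_aic(x):
    """AIC(k) = k*log(var(x[0:k])) + (N-k-1)*log(var(x[k:N])).

    Returns the AIC curve; the arrival is at the global minimum, so the
    input should be a window containing one onset only.
    """
    x = np.asarray(x, float)
    N = len(x)
    aic = np.full(N, np.inf)
    for k in range(1, N - 1):
        v1 = np.var(x[:k])          # var{x(1, k)}
        v2 = np.var(x[k:])          # var{x(k+1, N)}
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    return aic

# pick = int(np.argmin(maeda_aic(window_around_preliminary_pick)))
```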
Hybrid approaches
Numerous hybrid approaches exist that merge single-level algorithms in different combinations. Anant and Dowla (1997) combine a wavelet-transform method with polarization attributes to pick P- and
S-wave arrivals. Zhang et al. (2003) use the wavelet transform and AIC to confirm the occurrence of an arrival in the data interval and to pick the arrival time. Galiana-Merino et al. (2008) pick
P-wave arrivals using a higher order statistics (kurtosis)-based criterion in the stationary wavelet domain. Akazawa (2004) uses STA/LTA, STA-LTA, and AIC to pick the P- and S-wave arrivals. Akram
et al. (2013) combine STA/LTA with peak eigenvalue ratio from the post- and presample windows to pick P- and S-wave arrivals. Recently, Maity et al. (2014) presented a neural network-based approach
with a number of attributes computed from microseismic waveforms to pick the P- and S-wave arrivals. In this paper, we discuss the wavelet-transform-based hybrid approaches (Anant and Dowla, 1997;
Zhang et al., 2003), joint energy ratio (JER; Akram et al., 2013), joint STA/LTA-polarization-AIC method (Tan et al., 2014), and Akazawa’s hybrid picking workflow (Akazawa, 2004).
Wavelet-transform-based approaches
The wavelet transform has been used in combination with other algorithms such as polarization analysis (
Anant and Dowla, 1997
), AIC (
Zhang et al., 2003
), and higher order statistics (
Galiana-Merino et al., 2008
) for improved arrival-time picking. The wavelet-transform approach is useful in the analysis of nonstationary microseismic signals because of its ability to resolve features at various scales (
Zhang et al., 2003
Ahmed et al., 2007
). The wavelet transform enables the analysis of variable window sizes for different frequency components within a signal, whereas the short-time Fourier transform uses a fixed window size, and
therefore it has a constant resolution at all times and frequencies (
Mallat, 1989
). The wavelet transform is based on projection of the signal onto a set of template functions obtained from the scaling and shift of a base wavelet to search for similarities (
Gao and Yan, 2011
). The continuous wavelet transform of a function
is defined as follows (
Anant and Dowla, 1997
Zhang et al., 2003
is a scale factor,
is a translation factor, and
is known as the analyzing wavelet that decays rapidly to zero with increasing
and has zero mean. The scale factor is used to dilate or compress the wavelet. High-frequency features are resolved at low scales, whereas high scales provide better resolution of low-frequency
features (
Zhang et al., 2003
The discrete wavelet transform (DWT) is used in practice, and it can be implemented using Mallat’s (1989) algorithm. Two quadrature mirror filters, a low-frequency filter and a high-frequency filter,
are applied to compute the DWT of level one of the input signal $x(t)$, with a subsequent reduction in the number of samples by two (Mallat, 1989; Zhang et al., 2003; Baranov, 2007). The wavelet
coefficients for the high-frequency filter characterize the details of the signal, at different scales, whereas the approximations of the signal at different scales are represented by the wavelet
coefficients for the low-frequency filter.
The Daubechies-5 (db5) wavelet is used for the wavelet transform. Our choice is based on the expected shape of a microseismic signal (Figure 7) because the wavelet basis functions that are similar to
P- or S-wave arrivals will result in strong correlations in the wavelet scale, thus leading to quality enhancement of the picked arrivals. The P- (on $x$- and $z$-components) and S-wave arrivals (
$y$-component) are compared with various wavelets (db2–db7) in the Daubechies family. The db5 wavelet shows a good match for the P- and S-wave arrivals.
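As an assumed illustration of the decomposition step, the PyWavelets package (if available) computes a multilevel DWT with the db5 wavelet as sketched below; the choice of four levels is arbitrary here, and the handling of the detail and approximation coefficients follows the AD and ZTR workflows described next.

```python
import numpy as np
import pywt  # PyWavelets, assumed to be installed

def dwt_db5(x, levels=4):
    """Multilevel DWT of a 1-D trace with the Daubechies-5 wavelet.

    Returns (approximation, details), where `details` lists the detail
    coefficients from the coarsest to the finest scale.
    """
    coeffs = pywt.wavedec(np.asarray(x, float), "db5", level=levels)
    return coeffs[0], coeffs[1:]

# approx, details = dwt_db5(trace)
# Each details[j] can then feed the per-scale rectilinearity (AD's method) or
# per-scale AIC (ZTR's method) computations.
```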
AD’s method: We denote the arrival-time picking methodology presented in Anant and Dowla (1997) as AD’s method. For P-wave arrival picking, the workflow is described as follows:
1. Compute the wavelet decompositions for each of three data components.
2. Compute the eigenvalues from the covariance matrix generated from the three decomposed components on a sliding window, for each scale. Using the largest and intermediate eigenvalues (
, respectively), a rectilinearity function can be estimated:
3. Compute the composite rectilinearity function by combining the rectilinearity functions for each scale:
represents the number of wavelet decomposition levels. The position at which the composite rectilinearity function is maximum represents the P-wave arrival time. Figure
shows the composite rectilinearity function for the example data. Manually picked P-wave arrival time appears closer to the time index associated with the maximum value of composite
rectilinearity function. However, there is not enough contrast between composite rectilinearity function responses for P- and S-waves, which can result in erroneous picking for data with low S/N.
For S-waves, the arrival-time picking workflow is described as follows:
1. Using the P-wave arrival information, rotate the microseismic data into radial and transverse components.
2. Compute the wavelet decomposition for each of the rotated components ($dt$ and $dr$ denote the decomposed transverse and radial components, respectively).
3. Compute the ratio for each scale:
where env is the envelope function, which represents the positive outline of input data, in this case, the decomposed radial and transverse components.
4. Compute the composite envelope function:
represents the number of wavelet decomposition levels. The S-wave arrival time is determined from the position of the first point after the P arrival time that has a value that is at least
one-half of the maximum of
. Figure
shows the composite envelope function for the example data. The manually picked S-wave arrival time appears closer to the time index associated with the point that has a value that is at least
one-half of the maximum of
ZTR’s method: We denote the arrival-time picking methodology presented in Zhang et al. (2003) as ZTR’s method. The workflow for arrival picking is described as follows:
1. Compute the wavelet decomposition for each of the three data components.
2. Apply AIC on each scale. If the AIC values for all scales are close to each other and are inside a user-specified interval (in this case, we use 100 time samples), an arrival (P- or S-wave) is
identified in the interval. The AIC value on scale two provides the preliminary arrival time.
3. Reapply AIC on the data components in a window surrounding the preliminary pick. The minimum of AIC determines the arrival time.
Zhang et al. (2003) apply this workflow to pick P-wave arrivals only, but the method can easily be extended to pick P- and S-wave arrivals if initial estimates of the corresponding windows containing
the arrivals are known using other approaches. We propose the following steps to pick P- and S-wave arrivals directly from ZTR’s method:
1. Divide the data segment containing the microseismic event into short overlapping windows (in our case, 800 time samples, with 10% overlap). The size of the window is chosen considering the
expected maximum S-P distance as well as the minimum expected distance between two closely spaced microseismic events.
2. Perform steps 1–2 from ZTR’s method on all windows and compute the preliminary arrival times.
3. Compute the energy in a small window ($τdom–2τdom$) surrounding the preliminary arrival times.
4. Pick two windows with the highest energy. The preliminary time of the earlier window is considered as the P-wave time and the later window as the S-wave time.
5. Reapply the AIC on the data components in a window surrounding the preliminary P- and S-wave picks. The minimum AIC determines the arrival time in the corresponding windows.
Selecting a single wavelet that performs well for the entire data set is a significant challenge for wavelet-transform-based algorithms, especially in the case of low-S/N data. It is recommended to
perform a visual inspection and to compare the wavelet shape from the input data with a class of standard wavelets and then choose the best matching wavelet for wavelet-transform analysis. The length
of the sliding windows for the eigenvalue computations in AD’s method is recommended to be $(1-1.5)τdom$.
The JER algorithm by
Akram et al. (2013)
combines two ERs to enhance signal coherency and improve confidence in arrival-time picking in low-S/N microseismic data. We compute the ratio of the peak eigenvalues (PER), that is, the ratio of the peak eigenvalues computed in windows after and before the $i$th sample, respectively. This is followed by the computation of STA/LTA. Next, we compute the JER at the $i$th time sample from the PER and STA/LTA curves, each normalized by its respective maximum value. This algorithm can also be used to predetermine the P- and S-wave arrival windows for other algorithms, such as AIC, to work more
efficiently (
Akram, 2014
). Such an implementation of the AIC algorithm, in which the initial arrival windows are estimated through the application of JER, constitutes another hybrid approach (JER-AIC). Figure
shows the JER curve and the AIC curve on the JER estimated arrival intervals. The P- and S-wave arrivals are indicated by the time indices associated with corresponding local maxima on a JER curve,
whereas indices associated with the corresponding minimum values indicate the arrivals on an AIC curve. The window size parameters for STA/LTA are the same as recommended earlier in the paper, and
the recommended window size for peak eigenvalue ratio is
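A sketch of the eigenvalue-ratio ingredient is shown below: the largest eigenvalue of the 3C covariance matrix in a post-sample window divided by that in a pre-sample window. Combining it with a normalized STA/LTA by simple multiplication is our assumed reading of the joint ratio; the exact formulation and window size are those of Akram et al. (2013).

```python
import numpy as np

def peak_eigenvalue_ratio(data3c, window, eps=1e-12):
    """data3c: array of shape (n_samples, 3). Returns the PER curve: the
    largest eigenvalue of the covariance of the post-sample window divided
    by that of the pre-sample window (windows of `window` samples each)."""
    d = np.asarray(data3c, float)
    n = d.shape[0]
    per = np.zeros(n)
    for i in range(window, n - window):
        lam_post = np.linalg.eigvalsh(np.cov(d[i:i + window].T))[-1]
        lam_pre = np.linalg.eigvalsh(np.cov(d[i - window:i].T))[-1]
        per[i] = lam_post / (lam_pre + eps)
    return per

def jer(per_curve, sta_lta_curve):
    """Assumed combination: product of the two curves, each normalized by
    its maximum value."""
    a = per_curve / (np.max(per_curve) + 1e-12)
    b = sta_lta_curve / (np.max(sta_lta_curve) + 1e-12)
    return a * b
```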
Joint STA/LTA-polarization-AIC (TYFH’s method)
A similar approach to the JER method was presented recently (Tan et al., 2014). We denote this method as TYFH’s method. This workflow can be summarized as follows:
1. Compute STA/LTA for the input data.
2. Compute the degree of polarization for the input data in a sliding time window (
, and
are the three eigenvalues of the 3C data. Another function (
) is computed using
to estimate the polarization state correctly (
Moriya, 2008
denotes the length of an averaging time window. The arrival time is indicated by the local maxima of
in the corresponding time interval.
3. The STA/LTA and the polarization function are normalized and then multiplied to obtain a picking function $Q$. Initial estimates of P- and S-arrival-time windows are obtained from the local
maxima of $Q$ in the corresponding intervals.
4. Likelihood functions are computed using
are the autoregressive model coefficients and
, and
have the same meaning as in equation
5. Finally, the normalized likelihood function is multiplied by $Q$, and the maximum of this curve indicates the respective arrival time.
Figure 8e shows the P- and S-wave arrival picking using TYFH’s method. The P- and S-wave arrivals are clearly represented on the response curve. The window size for the degree of polarization is
recommended to be $(1–1.5)τdom$ because a longer window will include other arrivals and affect the degree of polarization at the P-wave interval. On the other hand, a smaller window may result in
meaningless fluctuations.
Akazawa’s method
Akazawa (2004)
presents a hybrid workflow to pick P- and S-wave arrival times based on STA/LTA, the difference of STA and LTA (STA-LTA), and AIC algorithms. An advantage of STA-LTA over STA/LTA is that it is less
sensitive to pulselike noise, which can produce false picks. For P-wave picking, a cumulative envelope (CE) function of the input waveform (in this case, the p-component) is computed as follows:
is the amplitude of the input waveform at time sample
and max represents the maximum value of the corresponding attribute.
The CE is further modified to produce a sharp response at the onset of signal from the background noise
P-wave arrivals are picked using the following workflow:
1. First, STA/LTA is computed on the CE function shown in equation 25. The sample index ($m1$) associated with the maximum value is determined.
2. AIC is applied on the envelope function on the interval [1, $m1$]. The sample index ($m2$) associated with the minimum value is determined.
3. Another iteration of AIC is applied on the shorter interval [$m2$, $m1$].
4. The time index associated with the minimum value of AIC (step 3) is considered as the P-arrival time.
S-wave arrivals are then computed using the following workflow:
1. Forward and reverse STA-LTA differences are computed for the post P-wave arrival onset data. The sample index ($m3$) associated with the minimum value of the reverse STA-LTA, and the sample
index ($m4$) associated with the maximum value of the forward STA-LTA is determined.
2. AIC is calculated for the interval [$m3$, $m4$].
3. The time index associated with the minimum value of AIC (step 2) is considered as the S-wave arrival time.
We use the same window-length parameters for STA/LTA as discussed earlier. The CE is effective for a data set with high S/N, such as strong-motion earthquake records, but it is less effective for low
S/N data, such as microseismic data. To improve the performance of this algorithm on microseismic data, we replace the CE function with the absolute stack of waveforms from 3C data. Figure 8f and 8g
shows the arrival-time picking using Akazawa’s algorithm. The P- and S-wave arrivals are represented clearly on the response curves and can be picked using the specified workflow.
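The P-wave portion of this workflow can be sketched end to end as below. The STA/LTA and AIC helpers are minimal assumed stand-ins (as in the earlier sketches), and the envelope is the simplified absolute stack of the 3C data described above; the function names are ours.

```python
import numpy as np

def _sta_lta(x, n_sta, n_lta, eps=1e-12):
    c = np.concatenate(([0.0], np.cumsum(np.asarray(x, float) ** 2)))
    out = np.zeros(len(x))
    for i in range(n_lta, len(x)):
        sta = (c[min(len(x), i + n_sta)] - c[i]) / n_sta
        lta = (c[i] - c[i - n_lta]) / n_lta
        out[i] = sta / (lta + eps)
    return out

def _aic(x):
    x = np.asarray(x, float)
    N = len(x)
    a = np.full(N, np.inf)
    for k in range(1, N - 1):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            a[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
    return a

def akazawa_p_pick(data3c, n_sta, n_lta):
    """P pick following the workflow above: STA/LTA on the absolute stack
    gives m1; AIC on [0, m1] gives m2; AIC on [m2, m1] gives the pick."""
    env = np.sum(np.abs(np.asarray(data3c, float)), axis=1)  # absolute stack of 3C data
    m1 = int(np.argmax(_sta_lta(env, n_sta, n_lta)))
    m2 = int(np.argmin(_aic(env[:m1 + 1])))
    return m2 + int(np.argmin(_aic(env[m2:m1 + 1])))
```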
Multilevel- or array processing-based algorithms
Multilevel- or array processing-based algorithms take advantage of similar characteristics of waveforms across an array of receivers or by analysis of multiple events using a single receiver. Common
examples of multilevel algorithms for arrival-time picking include image-processing techniques (Criss et al., 2003; Mousa et al., 2011), global-optimization-based techniques (Chevrot, 2002),
beamforming (delay and stack) of waveforms (Schweitzer et al., 2002; Rawlinson and Kennett, 2004), and crosscorrelation-based techniques (VanDecar and Crosson, 1990; De Meersman et al., 2009; Liu
et al., 2009; Kapetanidis and Papadimitriou, 2011; Lou et al., 2013). In this paper, we discuss several waveform crosscorrelation-based techniques.
Waveform crosscorrelation methods
Crosscorrelation is an example of a multilevel algorithm that is widely used for time-delay estimation in electrical engineering (
Tamim and Ghani, 2009
), in the estimation of static corrections for surface seismic and microseismic data (
Bagaini, 2005
Diao et al., 2015
), and in the processing of microseismic and earthquake data for event identification and phase arrival picking (
VanDecar and Crosson, 1990
Eisner et al., 2008
Raymer et al., 2008
De Meersman et al., 2009
Song et al., 2010
). The normalized crosscorrelation of two digital waveforms $x_1$ and $x_2$ can be expressed as $c_{12}(\tau)=\phi_{12}(\tau)\big/\sqrt{\phi_{11}(0)\,\phi_{22}(0)}$, where $\phi_{12}(\tau)$ is the crosscorrelation of the two digital waveforms at lag $\tau$, and $\phi_{11}(0)$ and $\phi_{22}(0)$ are their zero-lag autocorrelation values, respectively. A normalized correlation value of $+1$ indicates a perfect match, and a value of $-1$ indicates that waveforms have opposite polarity (
Keary et al., 2002
Telford et al., 2004
). The following form can be assumed for microseismic data recorded at two different receivers (
Yung and Ikelle, 1997
Bagaini, 2005
): $x_1(t)=s(t)+n_1(t)$ and $x_2(t)=A\,s(t-\tau_d)+n_2(t)$, where $s(t)$ is the signal, $\tau_d$ denotes time delay, $n_1(t)$ and $n_2(t)$ are the noise in the recorded data, and $A$ denotes the amplitude ratio of waveforms 2:1. The time delay is estimated from the lag at the peak value of the crosscorrelation between $x_1(t)$ and $x_2(t)$.
In the arrival-time picking process, a reference (pilot) waveform is obtained from the data recorded within the array, and then it is crosscorrelated with waveforms from all receiver levels in the
array to determine the time delay. The selection of an appropriate pilot waveform is important because it defines the type of crosscorrelation algorithm (iterative or noniterative). The pilot
waveform can be estimated and used in the workflow in the following ways (Bagaini, 2005):
1. A high-S/N waveform from a receiver level within the array is chosen as the pilot waveform.
2. A pilot waveform is obtained from the stacking of waveforms from all receiver levels within the array. These waveforms are aligned using initial estimates of arrival times prior to stacking.
3. The pilot waveform obtained using stacking is updated iteratively after time delay estimation and lag adjustments of waveforms within the receiver array.
The first two techniques are noniterative, and, as suggested in Bagaini (2005), these are less efficient than the iterative approach. Figure 9 explains the iterative workflow based on
crosscorrelation (De Meersman et al., 2009), and we denote this workflow as DKV’s method. In this workflow, initial arrival times (either manually picked or using any of the previously described
automatic algorithms) are used to align the microseismic waveforms. Before the computation of pilot waveform, all waveforms are rescaled to equalize to the pre-event noise level. A pilot waveform is
then computed and correlated with all waveforms within the receiver array to update the time-shift. This process is repeated until the time delay converges to a value that is less than a user-defined
threshold value ($ζ$), which represents the optimal re-alignment of the input data.
An important parameter in this algorithm is the size of correlation window. A long window may affect the accuracy of crosscorrelations and may result in cycle skipping. If the initial estimates have
large deviations from the true time, then it is reasonable to pick large windows covering the arrivals. Based on our experience, a window size $≤10$ times the dominant period of the signal is
recommended unless initial estimates have small deviations from true times, in which case a window size $≤3$ times the dominant period of the signal is recommended to minimize pick errors.
Crosscorrelation-based algorithms also perform better if all arrivals have similar polarities.
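The core time-delay estimation step in these workflows can be sketched generically as below: normalized crosscorrelation of a pilot waveform with a trace window, returning the lag of the correlation peak restricted to a maximum lag. This is an assumed, simplified building block, not the full iterative DKV workflow; names are illustrative.

```python
import numpy as np

def delay_from_xcorr(pilot, trace, max_lag):
    """Lag (in samples) at the peak of the normalized crosscorrelation
    between `trace` and `pilot`, restricted to |lag| <= max_lag."""
    p = np.asarray(pilot, float)
    t = np.asarray(trace, float)
    full = np.correlate(t, p, mode="full")            # all lags
    lags = np.arange(-len(p) + 1, len(t))
    norm = np.sqrt(np.dot(p, p) * np.dot(t, t)) + 1e-12
    keep = np.abs(lags) <= max_lag
    best = int(np.argmax(full[keep] / norm))
    return int(lags[keep][best])

# Iterative refinement sketch: align traces with the current picks, stack them
# to form a pilot, update each pick by delay_from_xcorr(), and repeat until
# the largest update drops below the threshold (zeta).
```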
Instead of refining the initially picked arrival-time picks, crosscorrelation can also be used to pick the arrival times on the microseismic waveforms. Irving et al. (2007) present a crosscorrelation
algorithm (here denoted as IKK’s method) that is based on the crosscorrelation of waveforms from all receiver levels in an array with a reference waveform with a known arrival time (manually picked).
First, the reference waveform is crosscorrelated with other waveforms, and the waveforms are aligned using the lag value. A new reference waveform is then formed from the stacking of aligned
waveforms and is again crosscorrelated with all waveforms. The process is repeated until the waveform alignment matches user’s specifications (Giroux et al., 2009). The method is semiautomatic
because it requires picking on the reference waveform. This process can be cumbersome for large data sets.
A template-based approach (Plenkers et al., 2013), in which a high-S/N master event template is crosscorrelated with the continuous data stream to detect weaker similar events, can also be used to
pick arrival times on similar events. However, the effectiveness of the arrival-time picks depends on the separation between the master event and the target event (Arrowsmith and Eisner, 2006; Song
et al., 2010).
In this paper, we describe the results using DKV’s method for the refinement of initially picked arrival times and IKK’s method for picking arrival times by crosscorrelating with a reference waveform
for which the arrival time is known (manually picked).
P- and S-wave arrival picking framework
Depending upon the input data format (unrotated or ray-centered coordinate rotated data), numerous strategies can be formulated for the identification of P- and S-wave arrival times. Typically
arrival-time picking is conducted on the extracted data around a provisionally detected microseismic event containing one or both of P- and S-wave arrivals. Therefore, simple assumptions that P- and
S-wave arrivals occur only once in the extracted event data, and that the P-wave is the first arrival, can be quite effective in the case of raw input data. The orthogonal particle motions of P- and
S-waves can also be used in validating the arrival picks. Figure 10 shows an example of arrival-time picking on unrotated input data using JER algorithm. However, a similar approach can be adopted
for other algorithms. The main difference is the pick location (maxima, minima, or the onset before the extreme value). In a general sense, our picking strategy can be summarized as follows:
1. Compute the response curve from the 3C microseismic data using the methods outlined previously. If necessary, apply an EPS (Luo et al., 2002) filter on the response curve to enhance the
transition between noise and noise plus signal. Find the time index ($ti$) associated with the global maximum of the response curve. In the case of EPS filtered response curve, the global
maxima will be a string of values, and therefore the first value is picked as the global maximum.
2. Compute the polarization angles on sliding windows for the entire length of response curve using the 3C microseismic data. Find the time indices characterized by polarization angles orthogonal to
the polarization angle ($±20°$) observed at $ti$, in two equal presample and postsample windows (each smaller than the minimum expected event separation).
3. Select the index ($tj$) from the previous step representing the local maximum of response curve.
4. If the characteristic curve requires arrival picking on the onset value prior to the local maximum, compute the derivate of the function in a small window surrounding the local maximum. The
positive maximum of the derivative function in the corresponding window then indicates the arrival time ($tj$). In the case of $tj>ti$, assign $tp=ti$ and $ts=tj$. Alternatively, if $tj<ti$,
assign $ts=ti$ and $tp=tj$.
This framework can provide accurate and precise arrival-time picks on data with high S/N. However, the precision of picking results using this framework may be deteriorated in low-S/N data because
the polarization angles and the attribute-response curve obtained from the unrotated input data are affected by the level of noise. Because we assume that each data segment for arrival picking
contains only one event (a P-wave arrival and an S-wave arrival), our current picking framework will provide erroneous picks in the case of missing arrivals or multiple arrivals in the data segment.
Another approach described by Oye and Roth (2003) uses data rotated into ray-centered coordinates. The fact that the P- and S-wave amplitudes are maximized on p, s1, and s2 components allows easier
identification of P- and S-wave arrivals on the respective component. In this framework, any of the previously described algorithms can be used to pick a P-wave arrival on p-component, and then an
S-wave can be picked on a post P-wave arrival window on the s1-component. We provide a comparison of both approaches for the field microseismic data examples in the next section.
To demonstrate the performance of arrival-time picking algorithms, we use pseudosynthetic and field microseismic data examples. The pseudosynthetic data (1200 waveforms) are generated by adding 100
Monte Carlo realizations of white Gaussian noise to a high S/N microseismic event (12 waveforms from all levels for the p-component) from the HFME experiment. Figure 11 shows the waveform data
examples after the addition of synthetic noise. The best estimates of arrival times for P- and S-waves were obtained through manual picking on the actual high S/N microseismic event used for a
pseudo-synthetic data set and on the field microseismic data (112 microseismic events). Arrival-time picking algorithms were then used, and their results are compared with manual picks for
benchmarking and performance evaluation. The pick errors were obtained by subtracting the manual picks (best estimates) from the automatic picks. In the following sections, we present performance
statistics (pie charts, mean, standard deviation, and skewness of pick errors) of arrival-time picking algorithms for the entire data as well as in specific intervals to provide more detailed
analysis in terms of precision and accuracy.
Pseudo-synthetic data
Table 2 shows the performance statistics (mean, standard deviation, and skewness of pick errors) of arrival-time picking algorithms for the entire pseudo-synthetic data and for the intervals of [$−2
ms$, 2 ms] and [$−10 ms$, 10 ms] pick error. For the entire data, higher skewness ($γ$) values associated with STA/LTA (3.56), MCM ($−8.01$), JER (4.81), Akazawa’s method ($−3.85$), and DKV’s
method (4.10) suggest that the majority of arrivals are picked either earlier or later than the manually picked best estimates. Higher standard deviations ($σ$) associated with STA/LTA (0.15), MER (0.22), S
/L-Kurt (0.12), AIC (0.10), AD’s method (0.12), ZTR’s method (0.12), and Akazawa’s method (0.13) show the imprecision of these algorithms, whereas higher mean ($μ$) values of pick errors for MER (
$−0.12$), ZTR’s method (0.04), STA/LTA (0.03), Akazawa’s method ($−0.03$), AD’s method (0.02), and AIC (0.02) show their poor accuracy. IKK’s method ($μ=−0.0004$, $σ=0.0052$, and $γ=−0.328$)
is found to be the most accurate and precise relative to the other algorithms that were tested. Figure 12a, 12b, and 12c shows pick error versus S/N plots for each single-level, hybrid, and
multilevel algorithms for the all of the pseudo-synthetic data. The majority of pick errors from each algorithm lies in the range of $±10 ms$. As expected, the performance of these algorithms
deteriorated with decreasing S/N. This is especially true for picks obtained using S/L-Kurt, AIC, AD’s method, and ZTR’s method, which show large deviations from the general trend. In comparison,
pick errors are more concentrated at either earlier or later times for STA/LTA, MER, MCM, and Akazawa’s methods. This occurs because the addition of synthetic noise produced high amplitudes in some
traces near the edges of the pseudo-synthetic data file (Figure 11); because the ER algorithms are sensitive to such noise fluctuations, picks are made near the edges of the data file, resulting in
0.4–0.6 s delayed picks. Hybrid algorithms, such as JER, JER-AIC, and TYFH’s method provided better accuracy and precision than the single-level algorithms, whereas other hybrid algorithms could not
yield similar results. The reasons for the relatively poor performance of these algorithms are the use of degree of polarization and/or the wavelet transform. The degree of polarization is very
sensitive in the presence of noise and may provide unstable results. On the other hand, it is difficult to choose a single wavelet function that is suitable for the entire data set, especially in the
presence of complex waveforms, low S/N, and polarity fluctuations. Choosing a single number for decomposition levels is also difficult for the entire data set. In Figure 12c, DKV’s method uses JER
picks as the initial guess and seeks to improve relative pick times, whereas IKK’s method uses a manual pick for a reference trace and computes the absolute arrival times. IKK’s method yields picks
that are mainly concentrated within $±2 ms$; large deviations from main trends are due to cycle skipping ($±10 ms$, which is half the dominant period of signal in this data set). Despite that
IKK’s method provides better results than DKV’s method, this may change with a different choice of initial guess (more accurate than the JER algorithm).
The pick errors were grouped into intervals ([$−2 ms$, 2 ms], [$−5 ms$, 5 ms], [$−10 ms$, 10 ms], and ($−∞$, $−10 ms$] ∪ [10 ms, $∞$)) for assessing the precision of these algorithms. Figure 13
shows the pick error pie charts for all algorithms for these intervals. Table 2 shows the $μ$, $σ$, and $γ$ values of pick errors in the intervals [$−2 ms$, 2 ms] and [$−10 ms$, 10 ms]. In the [
$−2 ms$, 2 ms] pick error interval, PAI-K, AIC, JER-AIC, Akazawa’s method, and IKK’s method outperform other algorithms, which means that these algorithms were able to provide more picks within $±2
ms$ error. Other algorithms, however, provide pick errors with slightly higher standard deviations in the [$−2 ms$, 2 ms] pick error interval. PAI-K, MCM, AIC, JER, JER-AIC, Akazawa’s method, and
IKK’s method also perform better than other algorithms in the [$−5 ms$, 5 ms] pick error interval. PAI-K, TYFH’s method, and IKK’s method are found to be most accurate, whereas PAI-K and AIC yielded
the highest precision in the [$−10 ms$, 10 ms] pick error interval. In total, 68.83% (11,563/16,800) of the arrivals were picked within $±10 ms$ error using all algorithms. Because the dominant
period of the signal for this data was 0.02 s, $±10 ms$ error means that these algorithms were able to pick 68.83% of the arrivals within half of a dominant period of the true time.
For illustrative purposes, Figure 14a shows the computational cost for each single-level, hybrid, and multilevel algorithm. These values are based on a machine with i7-4720HQ CPU @ 2.60 GHz and 16 GB
RAM size. Naturally, these values are expected to vary with different CPU architectures and with different implementations of these algorithms. The ER-based algorithms (STA/LTA and MER) are more
computationally efficient than other algorithms. The large computation time associated with MCM arises from the application of EPS filter, which is a computationally expensive method. The
computational cost of MCM without EPS filter, however, reduces to a level that is similar to STA/LTA. The algorithms based on wavelet decompositions are also computationally expensive because the
computation time depends on the number of decomposition levels, as well as the window size, in the case of ZTR’s method. IKK’s method shows lower computation time, but this is a semiautomatic
algorithm that requires picking on a reference trace, in which case, the total time can be high depending on the number of traces to be picked manually. The comparison of pick errors for all
algorithms for this data set suggests that IKK’s method, Akazawa’s method, PAI-K, AIC, and JER-AIC provide more picks as compared with other algorithms in the [–2 ms, 2 ms] and [–5 ms, 5 ms]
intervals, whereas other algorithms perform with a lower but similar precision (Figure 14b and 14c).
Field data
Figure 15 shows the pick-error pie charts for all algorithms for field microseismic data examples (112 events) for [$−2 ms$, 2 ms] and [$−5 ms$, 5 ms] intervals. The pick errors are shown for cases
with 3C data and rotated data as inputs. The use of rotated data provides significant refinement of the picks in these intervals, especially for S-waves. For the [$−2 ms$, 2 ms] interval, the total
number of picks for S-waves improves from 2963 to 4976, whereas for the case of the [$−5 ms$, 5 ms] pick error interval, the number of picks improves from 4612 to 7015. In terms of precision,
JER-AIC and IKK’s methods are more precise and show consistency with the synthetic data results. JER-AIC, as expected, outperforms AIC for picking P- and S-wave arrivals. Akazawa’s method picks the
P-wave arrival with high precision as seen in the pseudo-synthetic data examples, but it provides poor-quality picks for S-wave arrivals in the two error intervals. S/L-Kurt outperforms the PAI-K
algorithm, which is opposite of the results from pseudo-synthetic data. This may be due to very weak and complex waveform for P-wave on the field microseismic data and complex S-waveforms, which
causes cycle skipping for the P-wave and misidentification of the S-wave as a P-wave arrival for PAI-K. The MER algorithm outperforms other ER-based algorithm (STA/LTA and MCM), especially in the
case of rotated data, but performs similarly to STA/LTA and MCM when 3C data are used as input. ZTR’s method also performs well for the field data examples.
These results support a suggestion by Sharma et al. (2010) that none of these algorithms perform optimally on all data sets, but there are some consistent algorithms, which can provide more accurate
and precise picks, for example, crosscorrelation-based approaches such as IKK’s and DKV’s methods; hybrid approaches such as JER-AIC and ZTR’s and Akazawa’s methods; and single-level algorithms such
as PAI-K, S/L-Kurt, and AIC. The ER algorithms are very efficient algorithms in terms of speed and can thus provide reasonable pick errors. This characteristic of ER-based algorithms makes them more
desirable for real-time microseismic data processing.
Applications to real-time monitoring
Real-time microseismic monitoring requires algorithms that are fast and are accurate and precise enough to obtain meaningful event locations. The ER-based algorithms, such as STA/LTA and MER are,
therefore, more popular for use in real-time monitoring. The simple formulation and parameter settings of these algorithms give a useful advantage. Another advantage of an STA/LTA algorithm is that
it can be simultaneously used for event detection and arrival picking. The MER algorithm can also be used for event detection if a threshold-based criterion is used in place of local maxima. The MCM
algorithm without an EPS filter has the same computational cost as observed for the STA/LTA and MER algorithms, but it cannot be used as an event-detection algorithm because it requires a continuous
increasing window. This algorithm also becomes computationally expensive with the inclusion of an EPS filter. Kurtosis-based algorithms (PAI-K and S/L-Kurt) are not as fast as STA/LTA but can improve
the pick accuracy. Choosing the model order in an AIC algorithm is difficult and may not be optimal for a real-time monitoring scenario, but an AIC algorithm without an autoregressive model order (
Maeda, 1985) could be considered. This algorithm, however, requires picking on rotated data or requires initial estimates of the arrival windows to perform optimally. The algorithms that are based on
wavelet decomposition are computationally expensive because the computation time depends on the number of decomposition levels, as well as the window size in the case of ZTR’s method. These
algorithms require selection of a single wavelet function and a number of decomposition levels that is suitable for the entire data set. These complexities in the parameter selection render these
algorithms suboptimal for real-time monitoring scenarios. In comparison, other hybrid approaches, such as JER, JER-AIC, Akazawa’s method, and TYFH’s method, require relatively less computational
effort and also reduce the errors in pick results. IKK’s method shows lower computation time, but this is a semiautomatic algorithm that requires picking on a reference waveform, in which case, the
total time can be high depending on the number of traces to be picked manually.
Considerations of computational speed may be mitigated with the increasing availability of powerful computers. The STA/LTA is an obvious choice if only limited computational resources are available,
but if powerful machines are available, other techniques, such as S/L-Kurt, PAI-K, JER, JER-AIC, Akazawa’s method, or TYFH’s method can be used. IKK’s and DKV’s methods can also be used on picks
obtained from an STA/LTA algorithm.
Applications to postacquisition processing
The focus of postacquisition processing is to achieve optimal results. We therefore recommend the use of algorithms that provide the most accurate and precise arrival-time picks, such as hybrid
approaches and crosscorrelation-based approaches. An important aspect of postacquisition processing is manual quality control. For example, IKK’s method can be used for cases in which a reference
waveform can be picked using a single-level or hybrid algorithm, and the picked arrivals are later checked and corrected manually. Quality control can be performed on the picks obtained on other
waveforms after crosscorrelation with the reference waveform. The quality of picked arrival times from an algorithm can also be improved by using data that are S/N enhanced through a noise-filtering
approach or data rotation into ray-centered coordinate reference frame. Picking of P- and S-wave arrivals becomes much easier when rotated data components are used.
Best practices
In this paper, we have presented guidelines for selection of parameters for numerous arrival-time picking algorithms (Table 1), together with best practices for data preconditioning. In the case of
real-time monitoring, it is important to achieve reasonably accurate and precise picks in an efficient manner. We therefore recommend reducing the data dimension by using the stack or product of
absolute amplitudes of the three data components. The STA/LTA algorithm is recommended for real-time monitoring because it is simple, fast, can provide pick results of reasonable quality, and can
work simultaneously for event detection and arrival-time picking. However, if more computational resources are available, hybrid approaches or combining crosscorrelation-based approaches with STA/LTA
may be applicable to real time monitoring. In the case of postacquisition processing, we recommend S/N enhancement of the data through various noise reduction techniques, such as polarization
filtering, $f-k$ analysis, or wavelet-based denoising approaches. We also recommend rotating the data into ray-centered coordinates to further improve the P- and S-wave arrivals. The parameters for
the selected algorithm should be carefully tested on a representative subset of recorded data before being applied to the entire data. A multilevel algorithm can be used to achieve better accuracy
and precision in picked arrival times. We strongly recommend that each process be followed by a manual quality control step. A key consideration is that data quality plays a significant role on the
effectiveness of any picking algorithm.
Using pseudo-synthetic and field microseismic data examples, this paper provides a review of various single-level based, hybrid, and multilevel-based picking algorithms. The key parameters for each
algorithm are discussed, including our recommendations for optimal parameter selections based on our analysis and experience (Table 1). The window size is a key parameter for the majority of
algorithms and should be chosen based on the dominant period of the signal. We found that a window size equal to $(2–3)τdom$ is optimal for STA, MER, and STK computations, whereas $(1–3)τdom$
performs better for MCM. The LTA window should be between 5 and 10 times the STA window sizes, whereas the LTK window length should also be 3–7 times the STK window, and the PAI-K window length
should be $≥10τdom$ for efficient arrival picking. A window length of $(1–1.5)τdom$ is recommended for estimating the degree of polarization. Wavelet-transform-based approaches require careful
selection of wavelet and the number of decomposition levels for the entire data set. It is often challenging to find a single wavelet that works well for an entire data set, especially in the case of
low-S/N and complex waveforms. Autoregressive modeling methods, such as AIC, require the estimation of model order, which is obtained by trial and error. Because AIC picks the arrival time based on
the global minimum for a data window, it should be applied on small time intervals containing the arrivals. Other algorithms, such as MER, JER, or PAI-K, can be used to find an initial estimate of P-
and S-arrival windows.
We show that ER-based algorithms are more efficient in terms of computational speed when compared with other algorithms and can provide reasonably accurate and precise arrival picks. Therefore, these
algorithms should work well in real-time microseismic data processing scenarios. On the other hand, wavelet-transform-based approaches are computationally expensive. Similarly, the use of an EPS
filter in the case of the MCM algorithm significantly increases the computational time of the algorithm. Among single-level algorithms, PAI-K and MCM were found to be more precise for the entire
pseudo-synthetic data considered here, whereas the PAI-K and AIC were more precise in the [$−10 ms$, 10 ms] pick-error interval (Table 2). Other single-level algorithms perform with slightly lower,
but similar, precision. TYFH’s method, JER, and JER-AIC provided more accurate picks among hybrid algorithms on pseudo-synthetic data. Among the wavelet-transform based approaches, ZTR’s method
performed better than did AD’s method in terms of precision in the [$−10 ms$, 10 ms] pick interval. Crosscorrelation-based approaches (DKV’s and IKK’s methods) were also found to be highly accurate
and precise. For the field microseismic data set considered here, IKK’s method, ZTR’s method, JER-AIC, Akazawa’s method, and S/L-Kurt were found to be more precise. Akazawa’s method picks the P-wave
arrival with high precision; however, S-wave picking was poor for the field data examples. We also found that the pick error for the S-wave reduces significantly when rotated data are used as input.
We therefore recommend rotation of microseismic data into ray-centered coordinates as a preconditioning procedure.
Finally, our results support the view that none of these algorithms are optimal for all conditions. However, some of these algorithms, such as IKK’s method, JER-AIC, Akazawa’s method, PAI-K, and S/
L-Kurt, are more accurate and precise in the majority of cases. Regardless of the accuracy and precision of an algorithm, an interactive quality-control process should always follow the automatic
picking workflow to ensure the quality of arrival-time picks.
We would like to thank the sponsors of the Microseismic Industry Consortium (www.microseismic-research.com) for their support. E. Caffagni and ESG are thanked for providing the processed (detected)
events for use in this paper. We also thank the anonymous reviewers for their valuable comments. | {"url":"https://pubs.geoscienceworld.org/seg/geophysics/article/81/2/KS71/293715/A-review-and-appraisal-of-arrival-time-picking","timestamp":"2024-11-08T21:10:24Z","content_type":"text/html","content_length":"451871","record_id":"<urn:uuid:61999212-26cc-4756-8496-94b5f4c215e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00880.warc.gz"} |
slarrk: computes one eigenvalue of a symmetric tridiagonal matrix T to suitable accuracy - Linux Manuals (l)
SLARRK - computes one eigenvalue of a symmetric tridiagonal matrix T to suitable accuracy
SUBROUTINE SLARRK( N, IW, GL, GU, D, E2, PIVMIN, RELTOL, W, WERR, INFO )
IMPLICIT NONE
INTEGER INFO, IW, N
REAL PIVMIN, RELTOL, GL, GU, W, WERR
REAL D( * ), E2( * )
SLARRK computes one eigenvalue of a symmetric tridiagonal matrix T to suitable accuracy. This is an auxiliary code to be called from SSTEMR.
To avoid overflow, the matrix must be scaled so that its
largest element is no greater than overflow**(1/2) *
underflow**(1/4) in absolute value, and for greatest
accuracy, it should not be much smaller than that.
See W. Kahan "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford
University, July 21, 1966.
N (input) INTEGER
The order of the tridiagonal matrix T. N >= 0.
IW (input) INTEGER
The index of the eigenvalues to be returned.
GL (input) REAL
GU (input) REAL An upper and a lower bound on the eigenvalue.
D (input) REAL array, dimension (N)
The n diagonal elements of the tridiagonal matrix T.
E2 (input) REAL array, dimension (N-1)
The (n-1) squared off-diagonal elements of the tridiagonal matrix T.
PIVMIN (input) REAL
The minimum pivot allowed in the Sturm sequence for T.
RELTOL (input) REAL
The minimum relative width of an interval. When an interval is narrower than RELTOL times the larger (in magnitude) endpoint, then it is considered to be sufficiently small, i.e., converged.
Note: this should always be at least radix*machine epsilon.
W (output) REAL
WERR (output) REAL
The error bound on the corresponding eigenvalue approximation in W.
INFO (output) INTEGER
= 0: Eigenvalue converged
= -1: Eigenvalue did NOT converge
FUDGE REAL , default = 2
A "fudge factor" to widen the Gershgorin intervals. | {"url":"https://www.systutorials.com/docs/linux/man/l-slarrk/","timestamp":"2024-11-02T11:12:53Z","content_type":"text/html","content_length":"10380","record_id":"<urn:uuid:8dcfe9bb-e255-49c3-bb9a-4f14f85126c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00024.warc.gz"} |
Mathematics Colloquium - On the resolution of the Gibbs phenomenon
Since Fourier introduced the Fourier series to solve the heat equation, the Fourier or polynomial approximation has served as a useful tool in solving various problems arising in industrial
applications. If the function to approximate with the finite Fourier series is smooth enough, the error between the function and the approximation decays uniformly. If, however, the function is
nonperiodic or has a jump discontinuity, the approximation becomes oscillatory near the jump discontinuity and the error does not decay uniformly anymore. This is known as the Gibbs-Wilbraham
phenomenon. The Gibbs phenomenon is a theoretically well-understood simple phenomenon, but its resolution is not and thus has continuously inspired researchers to develop theories on its resolution.
Resolving the Gibbs phenomenon involves recovering the uniform convergence of the error while the Gibbs oscillations are well suppressed. This talk explains recent progresses on the resolution of the
Gibbs phenomenon focusing on the discussion of how to recover the uniform convergence from the Fourier partial sum and its numerical implementation. There is no best methodology on the resolution of
the Gibbs phenomenon and each methodology has its own merits with differences demonstrated when implemented. This talk also explains possible issues when the methodology is implemented numerically.
The talk is intended for a general audience. | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=colloquia&sort_index=speaker&order_type=desc&page=3&document_srl=765295","timestamp":"2024-11-05T00:17:45Z","content_type":"text/html","content_length":"44350","record_id":"<urn:uuid:3d6636db-f484-4bb3-a1ec-88faf2fe107e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00875.warc.gz"} |
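For readers who want to see the phenomenon numerically, the short Python snippet below (ours, not part of the talk) evaluates partial Fourier sums of a square wave; the maximum overshoot near the jump settles at roughly 9% of the jump size no matter how many terms are added, which is precisely the failure of uniform convergence described above.

```python
import numpy as np

def square_wave_partial_sum(t, n_terms):
    """Partial Fourier sum of the odd square wave of amplitude 1:
    f(t) ~ (4/pi) * sum over odd k of sin(k*t)/k."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_terms, 2):
        s += np.sin(k * t) / k
    return 4.0 / np.pi * s

t = np.linspace(-np.pi, np.pi, 20001)
for n in (10, 100, 1000):
    overshoot = square_wave_partial_sum(t, n).max() - 1.0
    print(n, round(overshoot, 4))   # stays near 0.179, i.e. ~9% of the jump of size 2
```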
IBPS CLERK Previous Year Question Papers PDF
IBPS CLERK Previous Year Question Papers PDF: Here we are providing the IBPS Clerk previous year papers for the past few years in PDF format, which will help you understand the type of questions along with their level of difficulty. In order to achieve the best possible results, an aspirant has to be at par with the exam standards, and you can work towards that by solving these IBPS Clerk previous year questions. We have arranged the previous year question papers in year-wise packages, which comprise the Prelims as well as the Mains phases of the exam. You can download them by clicking the download links at the bottom of this page.
We have arranged the papers in year-wise packages given below. Just follow the instructions below to download the PDFs and start practicing IBPS Clerk question papers while aiming to maximize your exam score.
How To Solve IBPS CLERK Previous Year Question Papers PDF:-
As an IBPS Clerk aspirant, you must consider the following steps while solving IBPS Clerk previous year papers:
1. First, go through the entire IBPS Clerk previous year paper once to get a clear idea of its difficulty level.
2. Set a time limit while solving the paper. Consider this as your IBPS Clerk exam; it has to be done in one sitting.
3. Students should not look at the Answer Key or the solution while attempting the question.
4. Begin with the section and questions which you find easy and are confident about.
5. You also need to keep the sectional cut-offs in mind, if they apply in the final exam.
6. If a question is taking time, you need to leave it and move on to the next one.
7. Once you have attempted all the questions, go back and try to solve the ones you left.
8. Once you are done with the test, evaluate it and identify the weak areas you need to work on.
Benefits of solving IBPS CLERK Previous Year Question Papers: If you solve IBPS Clerk previous year papers, it can help you in many ways. Some of these are listed below:
1. The pattern of the IBPS Clerk exam changes from time to time. Going through these previous year papers will help you understand the trend of the exam.
2. Solving IBPS Clerk previous year papers will give you a brief idea about the questions being asked and also their difficulty level.
3. You can also come up with the best strategies by solving IBPS Clerk previous year papers.
IBPS Clerk 2016 Previous Year Papers
IBPS Clerk 2015 Previous Year Papers
IBPS Clerk 2014 Previous Year Papers
IBPS Clerk 2013 Previous Year Papers
IBPS Clerk 2012 Previous Year Papers
IBPS Clerk 2011 Memory Based Papers | {"url":"https://gktrending.in/ibps-clerk-previous-year-question-papers-pdf-download/","timestamp":"2024-11-09T23:38:52Z","content_type":"text/html","content_length":"127759","record_id":"<urn:uuid:737369a3-4b76-4c6f-88e7-965e35ce5317>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00527.warc.gz"} |
Use number bonds to break down components of subtraction equations; use two equations to solve
Curriculum>Grade 1> Module 2>Topic B: Counting On or Taking from Ten to Solve Result Unknown and Total Unknown Problems
Students will break down the number being subtracted using a number bond. Models involve breaking the original problem into one equation that makes 10, then a second equation subtracting from 10 to solve for the total difference.
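A sample problem of this type (an illustration, not taken from the lesson itself): to solve 14 − 6, students use a number bond to break 6 into 4 and 2. The first equation takes from the ones to make ten, 14 − 4 = 10, and the second subtracts the rest from ten, 10 − 2 = 8, so 14 − 6 = 8.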
Category: Seminars and Conferences
Status: Archived
6 October 2021
5:30 PM on Zoom
Inverse problems are about the reconstruction of an unknown physical quantity from indirect measurements. Most inverse problems of interest are ill-posed and require appropriate mathematical
treatment for recovering meaningful solutions. Regularization is one of the main mechanisms to turn inverse problems into well-posed ones by adding prior information about the unknown quantity to the
problem, often in the form of assumed regularity of solutions. Classically, such regularization approaches are handcrafted. Examples include Tikhonov regularization, the total variation and several
sparsity-promoting regularizers such as the L1 norm of Wavelet coefficients of the solution. While such handcrafted approaches deliver mathematically and computationally robust solutions to inverse
problems, providing a universal approach to their solution, they are also limited by our ability to model solution properties and to realise these regularization approaches computationally.
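As a toy illustration of the handcrafted approach (ours, not taken from the talk), Tikhonov regularization replaces the ill-posed least-squares problem min ||Ax − y||² by min ||Ax − y||² + α||x||², which has the closed-form solution sketched below for a small simulated deblurring problem; the operator, noise level, and α are arbitrary choices.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Solve min_x ||A x - y||^2 + alpha * ||x||^2 via the normal equations
    (A^T A + alpha I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Toy 1-D deblurring: a Gaussian blurring operator is badly conditioned, so
# the naive inverse amplifies the measurement noise, while a small alpha
# stabilizes the reconstruction.
rng = np.random.default_rng(0)
n = 60
A = np.array([[np.exp(-0.5 * ((i - j) / 1.5) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 1e-3 * rng.standard_normal(n)
x_naive = np.linalg.solve(A, y)          # unregularized: noise blown up
x_reg = tikhonov(A, y, alpha=1e-4)       # regularized: stable
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```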
Recently, a new paradigm has been introduced to the regularization of inverse problems, which derives regularization approaches for inverse problems in a data driven way. Here, regularization is not
mathematically modelled in the classical sense, but modelled by highly over-parametrised models, typically deep neural networks, that are adapted to the inverse problems at hand by appropriately
selected (and usually plenty of) training data.
In this talk, I will review some machine learning based regularization techniques, present some work on unsupervised and deeply learned convex regularisers and their application to image
reconstruction from tomographic and blurred measurements, and finish by discussing some open mathematical problems. | {"url":"https://www.disma.polito.it/news/(idnews)/17362/(cal_mese)/00-09-2021","timestamp":"2024-11-09T13:51:14Z","content_type":"text/html","content_length":"18317","record_id":"<urn:uuid:fc0f56c1-8717-48cd-9a84-e6b32d3a5b08>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00755.warc.gz"} |
Losing count: the mathematical magic of counting curves
Losing count: the mathematical magic of counting curves
How can you figure out which points lie on a certain curve? And how many curves can you count through a given number of points? These are the kinds of questions Pim Spelier of the Mathematical
Institute studied during his PhD research. Spelier received his doctorate with distinction on June 12.
Counting curves, what does that mean on an average day? ‘A lot of sitting and gazing,’ Pim Spelier replies laughingly. ‘When I'm asked what exactly I do, I can't always answer that easily. Usually I
give the example about the particle traveling through time.’
All possible curves
Imagine a particle moving through space and you follow the path the particle makes through time. That path is a curve, a geometric object. How many possible paths can the particle follow, if we
assume certain properties? For example, a straight line can only pass through two points in one way. But how many paths are possible for the particle if we look at more difficult curves? And how do
you study that? By looking at all possible curves at the same time. For example, all possible directions from a given point together form a circle, and that is called a moduli space. And that
circle is itself a geometric object.
The mathematical magic can happen because this set of all curves itself has geometrical properties, Spelier says, to which you can apply geometrical tricks. Next, you can make that far more
complicated with even more complex curves and spaces. So not counting in three but, for example, in eleven dimensions.
Spelier tries to find patterns that always apply to the curves he studies. His approach? Breaking up complicated spaces into small, easy spaces. You can also break curves into partial curves. That
way, the spaces in which you're counting are easier. But the curves sometimes get complicated properties, because you have to be able to glue them back together. Spelier: ‘The goal is to find enough
principles to determine the number of curves exactly.’
‘The counting becomes easier when you break up complicated spaces into small spaces’
Seeking proof for points on curves
In addition to curves, Spelier also counted points on curves. He studied the question: how many solutions does a given mathematical equation have? These are equations that are a bit more complicated
than the a^2 + b^2 = c^2 of the Pythagorean theorem. That equation is about the lengths of the sides of a right triangle. If you replace the squares with higher powers, it is more difficult to
investigate solutions. Spelier studied solutions in whole numbers, for example, 3^2 + 4^2 = 5^2.
Meanwhile, there is a method to find those solutions. Professor of Mathematics Bas Edixhoven, who died in 2022, and his PhD student Guido Lido developed an alternative approach to the same problem.
But to what extent the two methods match and differ was still unclear. During his PhD research, Spelier developed an algorithm to investigate this.
The first person with an answer
Developing that algorithm is necessary to implement the method. If you want to do it by hand, you get pages and pages of equations. Edixhoven's method uses algebraic geometry. Through clever
geometric tricks, you can calculate exactly the whole number points of a given curve. Spelier proved that the Edixhoven-Lido method is better than the old one.
‘Pim cleared up an issue that really kept mathematicians busy’
David Holmes, professor of Pure Mathematics and supervisor of Spelier, praises the proof provided. ‘When you're the first person to answer a question that everyone in our community wants an answer
to, that's very impressive. Pim proves that these two methods for finding rational points are similar, an issue that really kept mathematicians busy.’
Pim Spelier with his opposition committee. From left to right: Hendrik Lenstra (opposition), Adrien Sauvaget (opposition), Ronald van Luijk (supervisor), Jonas Carinhas (paranymph), Sergey Shadrin
(opposition), Pim Spelier, David Holmes (supervisor), Sacha Spelier (paranymph), Mohamed Daha (rector magnificus), Gianne Derks (opposition), David Lilienfeldt (opposition), Leo Herr (opposition),
Dhruv Ranganathan (opposition).
Doing math together
The best part of his PhD? The meetings with his supervisor. After the first year, it was more collaboration than supervision, both for Spelier and Holmes. Spelier: ‘ Doing math together is still more
fun than doing it alone.’ Spelier starts in September as a postdoc in Utrecht and is apparently not yet done with counting. After counting points and curves, he will soon start counting surfaces.
Pim Spelier defended his thesis titled ‘Counting curves and their rational points’. He received his doctorate with distinction. | {"url":"https://www.universiteitleiden.nl/en/news/2024/07/losing-count-the-mathematical-magic-of-counting-curves","timestamp":"2024-11-11T13:54:36Z","content_type":"text/html","content_length":"26102","record_id":"<urn:uuid:76ccbdc0-5681-4cbe-851c-6620c2b10858>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00179.warc.gz"} |
A Payment Calculator Helps You Calculate the Amount of Monthly Payment
The Payment Calculator can compute the interest rate, the monthly repayment, or the loan term of a loan. Use the “Auto Refinance Loan” tab to compute the monthly repayments of a loan with a fixed interest rate, and use the “fixed payments” tab to find the time taken to pay off a loan with a fixed monthly repayment.
When you are comparing the interest rates and the monthly repayments, you can use the Payment Calculator to compute the amount of monthly payment that you are able to afford. The Payment Calculator
also helps you calculate how much you are likely to owe on your mortgage, the term of the mortgage, the loan amount, and the interest rate. For the mortgage loan, the calculator determines the
minimum payments required for a period of one year. For a home equity loan, it calculates the principal amount, which will include the interest paid on the loan, as well as any closing costs that
will be included in the closing cost of the loan. For a refinancing of an existing mortgage loan, the payment calculator helps you decide whether to take on additional debt or to refinance the
existing mortgage loan using the lower mortgage interest rate.
The Payment Calculator will help you determine the minimum payment required for your mortgage loan and any prepayment penalties and fees. In order to avoid these fees, it is advised to make the
payments at least six months ahead. The payment calculator calculates the principal amount, which is the amount of money that you are paying off on your loan each month. The loan payment is the total
amount of the principal amount plus the interest due and any closing costs, if any. The closing costs include expenses that you must incur before the loan can be closed, such as inspections.
When you enter in the amount of money that you have borrowed and the interest rates, the calculator determines the amount of interest that you are paying. You can then input the time duration that
you want the loan to be outstanding. The calculator then calculates the payment schedule to ensure that you pay back the loan on a regular basis. It is important to note that different types of loans
may require different lengths of time for the loan to be paid. Once you have determined the length of time that you need to pay back your loan, you can choose from among the different types of
payment options to ensure that you will be able to make your payments on time.
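For reference, the standard formula behind such fixed-rate monthly-payment calculations (a general formula, not quoted from this article) is M = P·r·(1+r)^n / ((1+r)^n - 1), where P is the principal, r the monthly interest rate and n the number of monthly payments. A small sketch:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization formula (illustrative)."""
    r = annual_rate / 12.0          # monthly interest rate
    n = years * 12                  # number of monthly payments
    if r == 0:
        return principal / n
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical loan: 200,000 borrowed at 6% annual interest for 30 years.
print(round(monthly_payment(200_000, 0.06, 30), 2))   # about 1199.10
```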
The calculator can help you calculate your loan payment with a “fixed” option. If the loan is for one year, you will not be required to make adjustments to the loan amount every 12 months. If you
have chosen the “variable” option, the calculator can allow you to enter in the loan amount, the interest rate, the number of years that the loan will remain outstanding, the loan amount, and the
term of the loan.
The payment calculator also helps you determine the amount of principal that you are paying each month. This amount is different than the principal amount because it includes the principal amount,
plus the interest, any closing costs, and any pre-payment penalties, if any. This amount is referred to as the amortization period. | {"url":"https://www.mystructuredsettlementcash.com/blog/a-payment-calculator-helps-you-calculate-the-amount-of-monthly-payment/","timestamp":"2024-11-01T22:29:09Z","content_type":"text/html","content_length":"20020","record_id":"<urn:uuid:e450f672-9571-44a9-92d0-a27a2e8419a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00589.warc.gz"} |
Analytical solution of the pollution transport equation with variable coefficients in river using the Laplace Transform
Document Type : Research Paper
^1 Master Student, Department of Water Engineering and Management, Faculty of Agriculture, Tarbiat Modares University, Tehran, Iran.
^2 Associate Professor, Department of Water Engineering and Management, Faculty of Agriculture, Tarbiat Modares University, Tehran, Iran.
^3 Professor, Department of Water Engineering and Management, Faculty of Agriculture, Tarbiat Modares University, Tehran, Iran.
Rivers are one of the most important natural water resources in the world. Pollution transport modeling in rivers is performed by the partial advection-dispersion-reaction equation (ADRE). In the
present study, using the Laplace transform, which is a powerful and useful tool in solving differential equations, the analytical solution of the ADRE equation was obtained in a finite domain with
variable coefficients for the upstream and downstream Dirichlet boundary conditions and the initial zero condition in the river. To illustrate the analytical solution, three examples are presented in which the river is divided into two, four, and eight segments; while the flow, pollution, and river-geometry parameters vary in all three examples, for each example the accuracy of the analytical solution is examined, relative to the numerical solution, as the number of segments increases. By specifying the matrices of velocity, dispersion coefficient, cross-section, etc. as input to the problem, the diffusion matrix is calculated and, consequently, a complex system of equations is created that doubles the complexity of the work. The pollutant concentration is obtained by solving this system of equations. The numerical solution is used to validate the analytical solution; the results showed that the greater the number of river divisions, the higher the accuracy of the solution, and the analytical and numerical solutions agree well with each other. Given the capability and performance of the analytical solution, it can be used as a tool to validate and verify numerical solutions and other analytical solutions for the coefficients of the equation.
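For reference, a common one-dimensional form of the ADRE with spatially variable coefficients is shown below; this particular form is supplied for illustration, since the abstract itself does not display the equation.

$$\frac{\partial (A C)}{\partial t} \;=\; \frac{\partial}{\partial x}\!\left(A\,D_x\,\frac{\partial C}{\partial x}\right) \;-\; \frac{\partial (A\,u\,C)}{\partial x} \;-\; k\,A\,C$$

Here $C(x,t)$ is the pollutant concentration, $u(x)$ the flow velocity, $D_x(x)$ the dispersion coefficient, $A(x)$ the cross-sectional area, and $k$ a first-order reaction (decay) rate.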
Main Subjects | {"url":"https://jwim.ut.ac.ir/article_84098.html?lang=en","timestamp":"2024-11-03T13:30:17Z","content_type":"text/html","content_length":"64224","record_id":"<urn:uuid:9520df65-59c4-4b0d-b4df-f79c6e4f73c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00712.warc.gz"} |
eqscplot: Plots with Geometrically Equal Scales
Version of a scatterplot with scales chosen to be equal on both axes, that is 1cm represents the same units on each
eqscplot(x, y, ratio = 1, tol = 0.04, uin, …)
x: vector of x values, or a 2-column matrix, or a list with components x and y
ratio: desired ratio of units on the axes. Units on the y axis are drawn at ratio times the size of units on the x axis. Ignored if uin is specified and of length 2.
tol: proportion of white space at the margins of plot
uin: desired values for the units-per-inch parameter. If of length 1, the desired units per inch on the x axis.
…: further arguments for plot and graphical parameters. Note that par(xaxs="i", yaxs="i") is enforced, and xlim and ylim will be adjusted accordingly.
invisibly, the values of uin used for the plot.
Side Effects
performs the plot.
Limits for the x and y axes are chosen so that they include the data. One of the sets of limits is then stretched from the midpoint to make the units in the ratio given by ratio. Finally both are
stretched by 1 + tol to move points away from the axes, and the points plotted.
Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth edition. Springer. | {"url":"https://www.rdocumentation.org/packages/MASS/versions/7.3-51.5/topics/eqscplot","timestamp":"2024-11-04T12:03:00Z","content_type":"text/html","content_length":"60056","record_id":"<urn:uuid:c703c027-fb7c-40cd-9ae9-365eaf3d87a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00646.warc.gz"} |
Alternation (formal language theory) explained
In formal language theory and pattern matching, alternation is the union of two sets of strings, or equivalently the logical disjunction of two patterns describing sets of strings.
Regular languages are closed under alternation, meaning that the alternation of two regular languages is again regular.^[1] In implementations of regular expressions, alternation is often expressed
with a vertical bar connecting the expressions for the two languages whose union is to be matched,^[2] ^[3] while in more theoretical studies the plus sign may instead be used for this purpose.^[1]
The ability to construct finite automata for unions of two regular languages that are themselves defined by finite automata is central to the equivalence between regular languages defined by automata
and by regular expressions.^[4]
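As a small illustration (using Python's re module; the example is not part of the original article), the pattern cat|dog matches the union of the two one-string languages {"cat"} and {"dog"}:

```python
import re

pattern = re.compile(r"cat|dog")          # alternation of two regular expressions
print(bool(pattern.fullmatch("dog")))     # True: "dog" is in the union
print(bool(pattern.fullmatch("cow")))     # False: "cow" is in neither language
```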
Other classes of languages that are closed under alternation include context-free languages and recursive languages. The vertical bar notation for alternation is used in the SNOBOL language and some
other languages. In formal language theory, alternation is commutative and associative. This is not in general true of the form of alternation used in pattern-matching languages, because of the
side-effects of performing a match in those languages.
• John E. Hopcroft and Jeffrey D. Ullman, Introduction to Automata Theory, Languages and Computation, Addison-Wesley Publishing, Reading Massachusetts, 1979. . | {"url":"https://everything.explained.today/Alternation_(formal_language_theory)/","timestamp":"2024-11-09T01:05:20Z","content_type":"text/html","content_length":"8684","record_id":"<urn:uuid:eae3e25d-1915-4940-898a-414425ea4edb>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00275.warc.gz"} |
Universe and Relativity of Energy
Attention! This text contains translation errors.
'' Theory of a Complete Time or Simultaneous Universe and Relativity of Energy ''
1st PUBLICATION
│ 1986 - 1998 | 2000 - 2008 | 2008 - 2012 │
It is a newer cosmological interpretation based on the general idea that the Universe, in its entirety of time, from our past up to our future, is always the same within the limits of a longest time interval (maximum period Tuni), and that nothing changes. On a first analysis, this idea of a Universe that is complete over a longest time period, and always the same, introduces a decisive trait that scientists have so far not included in physics, or for which they blindly accepted the opposite view, thereby leaving intact the scientific doctrine of the non-existence of limits in nature and downplaying the enormous body of observations of periodic phenomena, despite the knowledge of the universal physical constants. That trait is a limit and restriction in space (or length), in time, and in the (number of) things. Introducing these limits, even in an arbitrary way, imposes consequences and corrections on certain relations of physics, since otherwise length, time, energy, and other dependent quantities in physics could take infinite values.
At first, this idea reminds us some ancient philosophical theories. The formation of this new theory did not begins with discoveries from the modern Physics and Astrophysics, but we can to proceed
and rational interpret many important observations of these sciences. Particularly, the most decisive trait of this idea imports the concept of a cyclic time (period) and limits of a longest and
minimal period Tmax - Tmin as main principle for the existence of all things. In more generally, it imports relation of a necessary connection and coexistence between the upper limit with a
minimal limit. This general beginning contains as a sperm the relation that be sought in modern physics with other terms, between the theory of general relativity and quantum physics. In physics,
these limits are determined at least in the quantity of mass and energy, in length, time and rate, in gravitational force and without conscience, these limits are summarized in few universal
constants (mostly c,G,h).
Therefore, according to the Theory of a Complete Time, time is finished in regard to the full Universe within certain time limits, in only one longest Total Time. While all material things are exist
in smaller time limits. Concerning to smaller time limits (on the existence of individual things), the Universe continues becoming and developing yet! Past and future - as we relatively know them -
constitute a wider “now” of the 100% Universe and with such rational thought the relativity of time is explained. The material (structural) elements are the initial ways (carriers) by which the
Universe begins to re-created as external (and indirectly) in its minimal moment of time. While the full Universe is relatively absentee with presence of a free, global and finite space.
The complete Universe does not have its beginning (of existence and quality) in certain separate substances or particles and it's not a result of a composition that preceded. The longest Period
cannot be divided in infinite shorter moments - as a mathematician could will theoretically claims. The longest Period (presently) is not constituted by infinite smaller moments, otherwise the
Universe would not be the same always within the limits of one biggest time interval. Consequently, a minimal limit of time exists that constitutes the minimal time of interaction and a relative
beginning of time for the existence of limited material things.
The limit in division of time (tmin) is the first amazing conclusion that results, when we consider the Universe as stabilized (and simultaneous) within constant limits of a total time (maximum
period). One of a more astonishing consequences of this concept (on a limit for the division of time) is the congruency (co-identity) of the material elements with lesser time periods. In the
minimal time of interaction and change corresponds some “minimal” things, the minimal quantities of energy. When changes are maintained such as undiminished periodical oscillations of energy with
a highest frequency and such as stationary waves, then these periodical fluctuations of energy called "matter" and "particles". They are elementary quantities of energy that are maintained by
periodical changes and exchanges of h·f energy, where the free space conveys as waves.
In the Theory of a Complete Universe, we infer that a minimal and a longest time interval are exist (Tmin, Tmax, Tuni, fmax, fmin) and correspondingly a minimal and longest distance/length. This
thought enforces theoretical limits for all phenomena, between which also a limit in the increase of speed Vmax, which is a combination by length and time, that is presented as motion. In physics,
they accepted the limit in the superior speed of motion Vmax as a postulate, without any explanation. The limit of the longest distance enforces the curvature of the free space with increase of
distance and speed. It does not exist unlimited straight line of motion in to free space (nor for light), because an unlimited Universe in space would mean inter alia, unlimited time of interaction
and in final analysis, unlimited quantity of energy.
In basis of this idea of a longest and correspondingly a minimal time, many laws are interpreted and a multitude of different phenomena are connected. Such as are the minimum time tmin, minimal
length λmin, the minimal energy in transfer Emin=h·1Hz, the marginal speed Vmax, the weakest and strongest force F, the minimum (amin) and fastest rate (amax) of change of speed, the limit to
increase of inertia, the limit to the longest length and curvature of space, the dynamic relation between finite space, matter and gravity, the isotropic space, the stability in structure of
matter, the stagnant situations in dynamic structure of matter, the universal physical constants, the maximum quantity of wave energy that can be transmitted (Emax = hfmax = Fmax λmin) and a
multitude of cases.
According to the Theory of a Complete Universe, the free space is finite and corresponds in the energy of the full Universe that has not been materialized. The Universe in its entirety of time is
not absentee and exists relatively as a finite space and common inception for the transfer of energy by waves and for what they can happen by the carriers of material interaction (in increased time
intervals). The complete Universe from the minimal radius (≈λ[min]) is presented (in reverse) with the strongest gravitational force and we call it "nuclear".
What exists simultaneously, immediately and in null time, that is to say such as a free space, it is oscillated and contains isotropic wave energy. In points that the energy is accumulated or
decreased matter is presented (as an interruption of simultaneous mediator of space (λ=h/c·M) and always with heat. The beginning for the material existence happens to conditions that cause
oscillation, stationary waves and retroaction at the highest frequencies (between 3 ×10^20 Hz - 0,4524 ×10^42 Hz), where the limit up is fmax =Vmax/λmin (when λmin ≈ h). The total energy within the
limits of a longest time (dimensions of power P) also remains a constant quantity.
Relatively, the total energy is the free space and compensates with the fastest way the lacks of itself, that are constitute the material world. But the flow with waves to counterbalance the lacks
creates again lacks very fast and so maintains the material elements, that re-transmit between them respective quantum of energy. (Balance in instability and stationary waves). Particles are sums of
energy where are exchanged and transmitted as waves in the smaller time intervals of interaction. The “Big Explosion” takes place permanently and the material Universe is created permanently
everywhere, from its smallest dimensions (≈λ[min]), with microscopic "explosions". The Universe not only never is created, but on the contrary it was always complete. The world that is absent is
immediately useful so as light, heat, radio waves and structure of matter to exist, matter through which we are presented as separate bodies (in to space and time)! The free space is the full
Universe that they seek before the moment of Big Bang and participates immediately in the structure of matter with wave phenomena!
In this theory, unlimited straight line is impossible in free space. Such a case would means an unlimited Universe and time of interaction and in final analysis, unlimited quantity of energy,
absence of a meter in change of length and infinite rates of fluctuation of energy. The free space exists with a minimal limit and simultaneously longest length. The distance in the free space is
also direction, in other words a “homocentric” multiple distance, that is to say radius (a spherical and isotropic free space with possibility for displacement and approach simultaneously). The
radius is not an accidental phenomenon and as irrelevant of the structure of matter.
The simultaneous presence of the Universe to individual things and the limit of a biggest distance for motion (that also is minimum curvature of free space) are appeared externally as the force
which physicists call it in other terms "gravity". The erstwhile philosophers called this force in the term of "unity" of things. The same attractive force from a minimal distance in microscopic
structure of matter is presented contrarily as nuclear. The same force out of a maximum distance in the external space is a gravitational field.
Matter exists as extreme fast oscillation of a stable energy and stable energy appears absentee as form of a natural space, because this total energy is in balanced situation. The free space is the
beginning and ending in the structure of matter and by materials expressed phenomena of decrease of highest speed and frequency, contrary to that we observe in the visible world (change at low
speeds and slow time developments). No theory that describes the creation of things by fundamental material elements or microscopic particles cannot gives a serious, reasonable and empirically
founded interpretation for the presence of same forces and limits everywhere in the Universe.
As it is known, gravity influences on all material things (bodies) in same proportion, independent of their chemical composition. It is not an accidental phenomenon and be owed in the manner that
material elements begin by the energy of a same and shared space. The isotropic transfer of energy where is centralized as waves -in opposition to the decentralizing behavior of light- acts as
gravity. It is the isotropic energy of the free space where consumed and carried for the maintenance of the structure of matter. All differences of material things and all physical attributes of
matter emanate by time interval in which the energy is altered and exchanged and by the quantity (h·f) that is transmitted, exchanged, increased and decreased.
The perturbation in the energy of free space cause fluctuations in lower frequency (under than 10^20 Hz), which we call them electromagnetic, and they arrive in the longest length of wave, a
department of which causes the oscillations of particles in very low frequency, that the live organisms detect as sound and vibration.
According to the Theory of a Complete Universe and Time, the form of total space, the largest distance where is extended and finally the laws of nature are not determined by the separate material
things. On the contrary, a complete (100%) Universe in its entirety time, exists relatively as a finite space and just as limited quantity of energy for what they can happen and exist relatively and
indirectly through the elements of minimal interactions (namely particles). And the total energy have been predetermined the limits in their interactions (limits of length Imin - Imax, time Tmin -
Tmax and in quantity for the transfer of energy h1Hz - h fmax).
In order for the Universe is same and constant forever, it does not has unlimited time margin… This is the general reason for the existence of probabilities in behavior and developments of things.
The answers in the queries about the con-servation and the creation of matter cannot be given without we understand, how the free space like a balanced energy participates in this material process
(in microscopic lengths).
The model of a constant 100% Universe in a Maximum Period explains and enlightens what should we understand with the famous expression " relativity of time " and why a superior marginal speed for
motion exists. Time obligatorily is relative because an important reason exists and reason is that the Universe in its entirety Time (also with total energy) always should be the same and constant!
Relative time (between limited things) does not exclude a common time and implies a relativity of energy and change. Change of energy can not happen more fast than a minimal time interval, neither
later than a maximum and force is applied dependent of time moment in pe-riodical processes. This is the reason for (disconti-nuous) transfer of energy with limit of a minimal quantity (Law of the
Conservation of Power).
If the Universe were not stabilized in a total time so as is with full energy in constant quantity, then change of energy in smaller time moments would happen by unlimited way (in any quantity
independently of a unit of time) and the energy would be deficient forever. The conservation of the energy would be an accidental phenomenon. It would not exist the minimal quantity of time tmin =
λmin / Vmax neither a highest frequency fmax.
Energy cannot be transmitted unlimitedly and independent since the total energy of the Universe, as it would happen if were not truth the law of the conservation of energy. The complete Universe be
presented no simultaneous because the total energy is decreased in a relative way, restrained and time-consuming. The discontinuity and the limits in flow of energy are imposed from the conservation
of the energy and serves this same principle. The law of the conservation of energy, again is an inexplicable expression of the law for stability of the complete Universe and its simultaneous
presence. Without this law, decrease and transfer of energy into material world would not be regulated. With the abstract law of the conservation of energy we dissemble and conceal the relation
between the conservation of energy with the passing of time and we dissemble the contradiction that exists between the concept of conservation with the endless time and change.
Finally it is easier and more logical when we try to describe how matter is maintained and renewed on the general thought of a Simultaneous Universe that always was complete 100%. It is impasse to
we shift this problem in something that existed before the Universe and in other fantasies and in the absurdity about a creation by the absolute zero, in order to accommodates few fragmentary
The rational theory of a Complete Universe and Time goes beyond than physics and soon will find the complete mathematic expression. It has an important advantage against of other cosmological
theories. The rational interpretation does not depends from the precision of a mathematic result, we do not need to think about hypothetical things and it can be comprehended through phenomena of
usual experience, something that does not accidentally happens and this same theory explains how.
In the cosmological theory of a full Time, the Universe is self-existent, because is present, immutable and “compact” within the limits of a total time - and is “immediately” real, we can simply
tell. In other words, the 100% of the Universe exists simultaneously and the "directness" coincides with "inwardness". The process of this ostensibly absurd and contradictory idea leads to
incredibly reasonable consequences and OPENS A PATH FOR THE CONNECTION BETWEEN PHYSICS AND COSMOLOGY WITH OTHER SCIENCES!
│What phenomena are described unified, interpreted and forecasted in the theory of a Completed Time - Universe >>>►│
│ THE COSMOLOGICAL THEORY OF A COMPLETE UNIVERSE │
│ │
│ THE PHYSICAL EXPLANATION AND MATHEMATICAL INVESTIGATION │
│©2010 - ISBN978-960-93-2431-1, ©2012 - ISBN978-960-93-4040-3 │
│ │
│A COMPLETE UNIVERSE AS DYNAMIC FREE SPACE. How the natural laws and forces are applied. A translation in English is ready for publication in two digital books without graphical environment >>>│
Go to Top | {"url":"https://cosmonomy.eu/eng/entry.htm","timestamp":"2024-11-12T23:34:26Z","content_type":"text/html","content_length":"45058","record_id":"<urn:uuid:71771cb5-5572-40d5-9791-473ea5cfd30a>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00145.warc.gz"} |
Using Integer Programming to Convert Image Files
Taylor Watkins
Albion Mathematics Major
Department of Mathematics and Computer Science
Albion College
My Colloquium talk involves using binary programming to convert image files to pixel art. I created a model for choosing what values should be used in creating a smaller image based on the larger
image. In order to get data for the image I used a program called Gimp to save it in a format that I could use and create a binary value matrix to base my function on. I used the program MPL to
minimize the function that I created. Unfortunately I needed to split the problem into 4 problems because when I made the model I needed more variables to convert the image than I had. I took the
result matrices and combined them and used the resulting matrix in Mathematica to create an image. | {"url":"http://mathcs.albion.edu/scripts/flyer.php?year=2011&month=16&day=28&item=f","timestamp":"2024-11-02T17:37:15Z","content_type":"text/html","content_length":"1893","record_id":"<urn:uuid:6d5ac5e5-2b06-4dd1-b614-bdd54527402f>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00032.warc.gz"} |
Calculating the Molarity of a Solution
What is the solution's molarity?
The molarity of the solution is determined as 5.12 M.
The molarity of a solution is calculated as follows:
Molarity is defined as the ratio of the number of moles of solute to the volume of the solution in liters.
The molarity (M) is calculated as follows:
Molarity = moles of solute / volume of solution (in liters)
Molarity = 4.28 moles / 0.836 liters = 5.12 M
Thus, the molarity of the solution is determined as 5.12 M.
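The same arithmetic as a quick check (illustrative, not part of the original answer):

```python
moles = 4.28        # mol of K3PO4 dissolved
volume_l = 0.836    # litres of solution
print(round(moles / volume_l, 2))   # 5.12 (mol/L)
```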
Understanding Molarity in Solutions
Molarity is a key concept in chemistry that measures the concentration of a solution. It is expressed as the number of moles of solute per liter of solution. In this case, the student dissolved 4.28
moles of K3PO4 in 0.836 liters of water to produce a solution.
Molarity Calculation:
To calculate the molarity of a solution, you need to divide the number of moles of solute by the volume of the solution in liters. This calculation gives you the molarity value in moles per liter.
Formula for Molarity:
Molarity = moles of solute / volume of solution (in liters)
By plugging in the given values:
Molarity = 4.28 moles / 0.836 liters = 5.12 M
Therefore, the molarity of the solution in this case is 5.12 M, indicating a relatively high concentration of K3PO4 in the solution. | {"url":"https://www.brundtlandnet.com/chemistry/calculating-the-molarity-of-a-solution.html","timestamp":"2024-11-12T14:12:54Z","content_type":"text/html","content_length":"21518","record_id":"<urn:uuid:acb9d89d-ff8e-4207-a8d6-9df9afde0fe2>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00897.warc.gz"} |
How to calculate intrinsic value of indian stocks
Buffett and Charlie Munger book online at best prices in India on Amazon.in. Read Valuations - 30 Intrinsic Value Estimations in the Style of Warren Buffett and In Stock. Sold by Repro
Books-On-Demand (4.6 out of 5 stars | 2,585 ratings) and are word for word copies of the first chapter with one calculation changed.
28 Aug 2019 Value Investing is a kind of investment strategy which involves buying those stocks that are trading for less than their intrinsic values. 9 Sep 2019 From this we can infer that Graham
never intended for intrinsic value to be thought of as a single point estimate of value. Rather, he thought of it 24 Sep 2016 calculate intrinsic value of share before you buy stocks The Indian
stock markets have rallied in the recent past and pushed up the prices of 9 Oct 2018 Fundamental analysis of a stock does not only help in determining the health In order to define a numerical
intrinsic value for the security of a If the price of the underlying stock is held constant, the intrinsic value portion of an option Benjamin Graham, also known as the father of value investing,
was known for picking cheap stocks. The graham calculator is a good tool to find a rough estimate of the intrinsic value.
4.4 = Interest Rate of AAA Corporate Bonds in USA in Year 1962.; Y = Interest Rate of AAA Corporate Bonds in USA as on today.; Ben Graham’s Formula Updated for India. The above formula has many
limitations. Experts of fundamental analysis of stocks prefer going into more detailed calculation to estimate intrinsic value.
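The Graham formula that the 4.4 and Y figures above refer to is commonly written as V = EPS × (8.5 + 2g) × 4.4 / Y; this standard form is supplied here for reference, since the page itself does not display it. A small sketch with made-up inputs:

```python
def graham_intrinsic_value(eps: float, growth_pct: float, y_pct: float) -> float:
    """Benjamin Graham's revised formula: V = EPS * (8.5 + 2g) * 4.4 / Y."""
    return eps * (8.5 + 2 * growth_pct) * 4.4 / y_pct

# Hypothetical inputs: EPS = 50, expected growth g = 10%, AAA bond yield Y = 7.5%.
print(graham_intrinsic_value(50, 10, 7.5))   # 836.0
```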
9 Sep 2019 From this we can infer that Graham never intended for intrinsic value to be thought of as a single point estimate of value. Rather, he thought of it 24 Sep 2016 calculate intrinsic value
of share before you buy stocks The Indian stock markets have rallied in the recent past and pushed up the prices of 9 Oct 2018 Fundamental analysis of a stock does not only help in determining the
health In order to define a numerical intrinsic value for the security of a If the price of the underlying stock is held constant, the intrinsic value portion of an option
10 Sep 2019 Intrinsic value is the actual value of a company's stock which is determined calculated forecast to predict movement of future stock price and profit automobile companies, i.e., Maruti
Suzuki India and Tata. Motors Ltd and
PRICE CALCULATOR: Is a Unique Tool which helps you identify the MRP (right price) of a stock – its intrinsic value. Click here to know Home > Indian Stocks > How to Invest > Know MRP Price of Stock
Intrinsic Value. Sensex 34103.48. 30 Aug 2014 Intrinsic value of a stock can be calculated by estimating the company's future cash flows, which are then discounted at an appropriate rate. Since, it
is impossible 18 Nov 2018 Real life example of valuing stocks from Indian stock market using graham formula. Closing thoughts. Overall, this post is going to be really 29 Oct 2018 There are
multiple intrinsic value calculators available for valuing stocks. Moreover, it doesn't matter much whether the calculator is offered by 30 Aug 2016 Get Knowledge about how to calculate intrinsic
value. Also get ideas of advantages stock prising, price v/s value and the graph analysis.
Real life example of valuing stocks from Indian stock market using graham formula. Closing thoughts. Overall, this post is going to be really helpful for all the beginners who are stuck with the
valuation of stocks and want to learn the easiest approach to find the true intrinsic value of companies.
Buffett and Charlie Munger book online at best prices in India on Amazon.in. Read Valuations - 30 Intrinsic Value Estimations in the Style of Warren Buffett and In Stock. Sold by Repro
Books-On-Demand (4.6 out of 5 stars | 2,585 ratings) and are word for word copies of the first chapter with one calculation changed. Finding Value With the P/E Ratio. The most popular method used to
estimate the intrinsic value of a stock is the price to earnings ratio. It's simple to use, and the Stock valuation, DCF valuation of Nestle India Limited with intrinsic value for to figure out if
the current market price of the stock is overvalued or undervalued. 10 Sep 2019 Intrinsic value is the actual value of a company's stock which is determined calculated forecast to predict movement of
future stock price and profit automobile companies, i.e., Maruti Suzuki India and Tata. Motors Ltd and 28 Aug 2019 Value Investing is a kind of investment strategy which involves buying those stocks
that are trading for less than their intrinsic values. 9 Sep 2019 From this we can infer that Graham never intended for intrinsic value to be thought of as a single point estimate of value. Rather,
he thought of it
Data are drawn from a sample of 3756 Bombay Stock Exchange (BSE) listed Calculation of intrinsic values using FCFE, RIM, PE_M, PB_M and PS_M
9 Sep 2019 From this we can infer that Graham never intended for intrinsic value to be thought of as a single point estimate of value. Rather, he thought of it 24 Sep 2016 calculate intrinsic value
of share before you buy stocks The Indian stock markets have rallied in the recent past and pushed up the prices of 9 Oct 2018 Fundamental analysis of a stock does not only help in determining the
health In order to define a numerical intrinsic value for the security of a If the price of the underlying stock is held constant, the intrinsic value portion of an option Benjamin Graham, also
known as the father of value investing, was known for picking cheap stocks. The graham calculator is a good tool to find a rough estimate of the intrinsic value. Real life example of valuing stocks
from Indian stock market using graham formula. Closing thoughts. Overall, this post is going to be really helpful for all the beginners who are stuck with the valuation of stocks and want to learn
the easiest approach to find the true intrinsic value of companies.
29 Oct 2018 There are multiple intrinsic value calculators available for valuing stocks. Moreover, it doesn't matter much whether the calculator is offered by 30 Aug 2016 Get Knowledge about how to
calculate intrinsic value. Also get ideas of advantages stock prising, price v/s value and the graph analysis. 18 May 2015 Buying stocks that are trading below their intrinsic value can prove very We
put the 200 stocks in the S&P BSE 200 index through 10 filters. | {"url":"https://bestbtcxifcdd.netlify.app/cothran79088kyw/how-to-calculate-intrinsic-value-of-indian-stocks-xady.html","timestamp":"2024-11-11T00:58:47Z","content_type":"text/html","content_length":"32407","record_id":"<urn:uuid:355a7d4e-865e-40b0-869c-c3fcd4c5c4d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00456.warc.gz"} |
A classical unified field theory of gravitation, electromagnetism and spin - Foundations
A classical unified field theory (UFT) of gravitation and electromagnetism, based on the SU(2) bundle for the kinematic background and the generalized Einstein equation of M for the dynamic content,
is derived and discussed. The principal fiber-bundle structure of the classical universe is explored, and the fundamental field equations of the UFT are presented. The space-time structure obtained
is shown to imply the existence of electromagnetism and intrinsic spin, and generalized Maxwell equations are derived to account for formal discrepancies between the UFT and EM theory. The possible
existence of magnetic charge or current as products of the interaction of gravitation and electromagnetism is considered.
Scientia Sinica Series Mathematical Physical Technical Sciences
Pub Date:
January 1986
□ Electromagnetism;
□ Gravitation Theory;
□ Relativistic Theory;
□ Unified Field Theory;
□ Einstein Equations;
□ Manifolds (Mathematics);
□ Maxwell Equation;
□ Space-Time Functions;
□ Spin Dynamics;
□ Physics (General) | {"url":"https://ui.adsabs.harvard.edu/abs/1986SSSMP..29...51Y/abstract","timestamp":"2024-11-14T21:54:56Z","content_type":"text/html","content_length":"35169","record_id":"<urn:uuid:18aacb68-380d-4b2e-9d29-7702f0535ca9>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00631.warc.gz"} |
Desi Ghee 15kg Tin Price - Shahjighee - ShahJi Ghee
Desi Ghee 15kg Tin Price – Shahjighee
A2 desi ghee is the purest type of ghee on the market, because A2 ghee is made from superior desi cow's milk in India using the traditional hand-churned bilona method. It is a traditional Vedic
method of producing ghee.
There are two kinds of ghee available:
• A2 Desi Ghee or Organic Ghee
• Plain Ghee or Market Ghee
Most people prefer A2 Desi Ghee because it is the purest form of ghee and has numerous health benefits. As a result, most doctors, nutritionists, and health experts recommend A2 ghee.
So, if you're looking to buy or learn more about the best desi ghee 15 liters price in India, you've come to the right place. Today, we'll show you a price list for pure A2 desi ghee in India.
Desi Ghee Price 15 liters Calculation
When it comes to ghee, 1 kg is not equal to 1 liter. Because
At 30°C, ghee has a specific density of 0.91.
As a result, the mass of 1 liter of Ghee = 0.91✖1000ml = 910 grams.
Price of production of 15 Liter of Pure Desi Ghee
The bilona method uses 25-28 liters of desi cow's milk to make 1 liter of Pure Desi Ghee (ghee made from curd). 1 liter of Desi Milk costs between 70 and 120 rupees, depending on the Desi Cow Breed,
such as Gir or Sahiwal.
1 liter of Pure Desi Ghee required 25-28 liters of milk
• If we take the cheapest option, the price of a liter of Desi Ghee is equal to 25 liters multiplied by 70 rupees equaling ₹1,750.
• Similarly, 15 liters of pure desi cow ghee costs Rs. 1,750 ✖15 = Rs. 26,250/-.
2) If we take the most expensive option, the price of 1 liter of Desi Ghee is = 28 liters x 120 = ₹3,360/-
1. Similarly, the cost of 15 liters of pure desi cow ghee is equal to ₹3,360 ✖ 15 = ₹50,400/-
So, the average price of one liter of pure desi cow ghee = (₹1,750 + ₹3,360)/2 = ₹2,555/-
• Similarly, the average price of 15 liters of pure desi cow ghee = (₹26,250 + ₹50,400)/2 = ₹38,325/-
Ghee jars cost between ₹50 and ₹150.
So the total cost of producing 15 liters of pure Desi ghee is approximate = ₹26,250 to ₹50,400 + Ghee Jar Cost + Delivery Charge + 18% GST tax.
Ghee prices vary according to the desi cow breed, so if someone is selling ghee for less than this price, the ghee is not original or the company is offering a discount to gain a new customer.
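The arithmetic above can be summarised in a small sketch (illustrative; jar cost, delivery charge and GST are left out because the article only lists them as add-ons):

```python
def a2_ghee_production_cost(litres: float, milk_per_litre_of_ghee: float, milk_price: float) -> float:
    """Base production cost of bilona A2 ghee, following the article's arithmetic."""
    return litres * milk_per_litre_of_ghee * milk_price

print(a2_ghee_production_cost(15, 25, 70))    # 26250.0 rupees (cheapest scenario)
print(a2_ghee_production_cost(15, 28, 120))   # 50400.0 rupees (costliest scenario)
```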
To keep your and your family's health in check, we recommend that you consume A2 Desi Ghee. You can purchase A2 desi ghee directly from our farm.
So you can buy mother's hand made pure desi ghee from our farm here.
Normal Ghee Price 15 liters Calculation
Price of production of 15 Liters of Normal Ghee
Because normal ghee is made from milk cream, the production cost is low, and it is referred to as low-quality ghee. The market price for 1 kg of milk cream is approximately 200–250 kg.
1 kg of milk cream can yield approximately 1/2 kg of ghee.
• If we take the cheapest option, the price of a liter of normal Ghee is equal to the cost of 2 kg of milk cream i.e. ₹200 x 2 = ₹400.
• Similarly, 15 liters of normal cow ghee costs = ₹400 ✖15 = ₹6, 000/-.
2) If we take the most expensive option, the price of 1 liter of normal ghee is = ₹250 x 2 = ₹500.
1. Similarly, the cost of 15 liters of normal cow ghee is equal to ₹500 ✖ 15 = ₹7,500/-
So, the average price of 1 liter of normal cow ghee = (₹400 + ₹500)/2 = ₹450/-
• Similarly, the average price of 15 liters of normal cow ghee = (₹6, 000 + ₹7,500)/2 = ₹6,750/-
So the total cost of producing 15 liters of normal ghee is approximate = ₹6,000 to ₹7,500 + Ghee Jar Cost + Delivery Charge + 18% GST tax.
So, if you see ghee priced around ₹400 to ₹500 per liter in the market, it is made from cream and has a low nutritional value. We do not recommend regular ghee because it does more harm than good to
your body.
We recommend that you consume A2 Desi Ghee to maintain your and your family's health. You can purchase A2 desi ghee directly from our farm.
Industrial Ghee Price 1kg Calculation
Price of production of 15 Liter of Industrial Ghee
Because industrial ghee is made from butter using machines, the production cost is very low.
Butter is purchased at a low cost by industries.
Machines were used to extract ghee (clarified butter) from butter.
So, the market price for 1 liter of industrial ghee is approximately = ₹200 – ₹450 kg. And for 15 liters, it would be ₹3,000 – ₹6,750/-.
This ghee is known as "market ghee" and is of very low quality. It causes more harm than good. As a result, we do not recommend market ghee to our customers.
We recommend that you consume A2 Desi Ghee to maintain your and your family's health. You can purchase A2 desi ghee directly from our farm.
Also Read;
Buy mother\’s hand made pure desi ghee from our farm here. | {"url":"https://shahjighee.com/desi-ghee-15kg-tin-price-shahjighee/","timestamp":"2024-11-05T19:51:19Z","content_type":"text/html","content_length":"206689","record_id":"<urn:uuid:ec67d61a-6927-40f7-b0b3-2b0f752753bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00860.warc.gz"} |
How to make a table in MATLAB? | Candid.Technology
Matlab is a programming platform used to analyse data, create algorithms and also create models. Tables in Matlab are used to make storing and reading of data more efficient and understandable. They
consist of rows and column-oriented variables.
All the variables in the table can be of different sizes and data types, ensuring that all the variables have an equal number of rows.
The table is a data type used for tabular data. It is one of the most effective and efficient ways to summarise any given information in columns, which also helps find specific information easily.
Also Read: How to clear the command window in MATLAB?
Tables in MATLAB
Tables store the column-oriented data in a variable. Often table, table array, and matrix are confused in Matlab. They are similar with just a slight difference in their characteristics.
Table and Table Array
As mentioned above, they consist of rows and column-oriented variables and store column-oriented data like columns from any text file or spreadsheet directly. The table variables can be of different
data types and sizes, though a specific column must contain all the same data type variables. The only constraint in this is that the number of rows needs to be the same throughout. Table and Table
array are the same thing.
Matrix and Array
Matlab stands for Matrix Laboratory, and hence it is primarily designed to perform matrix operations. In a matrix or array, every element must have the same data type, even across different columns. This means that the entire matrix or array is of only one data type. Matrices aren't as memory efficient as tables.
Also Read: Bash functions explained
How to create a table in MATLAB?
We already know that working with tables improves efficiency and the capability to understand the data. They are vital for better readability and increased efficiency in understanding the data.
The methods for creating tables discussed below use the following functions: table, array2table, cell2table, and struct2table.
With these simple functions, the most complex array, cell or structure can be converted into a table, with and without the variable names depending on the syntax. Listed below are the different ways
of creating a table in Matlab.
Using keyword ‘table’
The keyword ‘table’ creates a table array with named variables that can contain different types. The syntax for the table is explained in the following example.
Here, all the variables with their specific data are initialised. A table named T is then declared and assigned the values of all the variables initialised previously. When this code is run, the
output obtained is a 6×4 table.
Syntax to create a table
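The article's original screenshot is not reproduced here; the following is an equivalent MATLAB sketch in which the variable names and values are assumed, with only the 6-row, 4-variable shape and the mix of data types following the article:

```matlab
% Illustrative sketch; names and values are assumed, not the article's original data.
Name   = ["Asha"; "Ravi"; "Meera"; "John"; "Lin"; "Sara"];
Age    = [24; 31; 28; 45; 39; 22];
Member = logical([1; 0; 1; 1; 0; 0]);
Score  = [88.5; 73.2; 91.0; 66.4; 79.9; 84.1];
T = table(Name, Age, Member, Score)   % a 6x4 table with a mix of data types
```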
The output table has a mix of data types standing true to its property.
Output Table
Using Function array2table
The function array2table is used to convert a homogeneous array to a table. There are two possible conversions using this method.
Without Variable Names
A matrix A has been declared in the syntax below, and a table named T. Table T is assigned to the function array2table to convert matrix A into a table.
Syntax for array to table conversion
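A minimal MATLAB sketch of this conversion (the matrix values are assumed; only the 4×4 shape and the default A1–A4 names follow the article):

```matlab
A = magic(4);        % a 4x4 numeric matrix
T = array2table(A)   % variables are auto-named A1, A2, A3, A4
```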
The output for this code is observed as a 4×4 table without any specific variable names given. Since the code did not specify any variable names, the table takes the variable names to be A1, A2, A3,
and A4.
Output for array to table conversion without variable name
To assign variable names to this, follow the method below.
With Variable Names
The syntax for printing the table from an array with variable names is very similar to the syntax without variable names. The only key difference is in using the function array2table assigned to
Table T.
Syntax for array to table conversion with variable names
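A sketch of equivalent MATLAB code with explicit variable names (the fruit names and the numeric values other than 8.7 are assumed):

```matlab
% Fruit names and the values other than 8.7 are assumed.
A = [12 8.7 5 9; 7 14 11 3];
T = array2table(A, 'VariableNames', {'Apples','Oranges','Mangoes','Grapes'})
```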
Here, since the variable names were given, the output does not have the names A1, A2, A3, and A4 like it did previously. Instead, it has the variable names as that of fruits.
Output for array to table conversion with variable name
Also, note that in this code, one of the numbers is in the float data type (8.7), but this has caused no error since the upper-class data type is still numeric.
Using Function cell2table
The function cell2table is used to convert a cell array to a table. Just like the array2table function, this can also be done in two ways.
Without Variable Names
The following syntax for converting a cell A predefined with values to a table T using the function cell2table.
Syntax for cell to table conversion
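A sketch of equivalent MATLAB code (the cell contents are assumed):

```matlab
A = {'Pen', 10, 2.5; 'Book', 4, 60; 'Bag', 1, 350};   % mixed-type cell array (contents assumed)
T = cell2table(A)   % default variable names (A1, A2, A3) are derived from the input name
```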
Similar to the functioning of array2table, the table can get misleading without variable names, hence defying the table’s purpose completely.
Output for cell to table conversion without variable name
To assign variable names, the method below can be followed.
With Variable Names
The following syntax is used to convert cell A into a table T with the variable names for each column.
Syntax for cell to table conversion with Variable Names
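The same assumed cell array converted with explicit variable names:

```matlab
A = {'Pen', 10, 2.5; 'Book', 4, 60; 'Bag', 1, 350};
T = cell2table(A, 'VariableNames', {'Item', 'Quantity', 'Price'})
```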
It can be clearly observed that the table’s readability is maximised.
Output for cell to table conversion with variable name
Using Function struct2table
The function struct2table is used to convert a structure array to a table. There can be two types of structure arrays while converting it to a table. The syntax for the function struct2table for both
types of structure arrays is below.
Scalar Structure
Syntax for structure to table conversion for Scalar Structure
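A sketch of equivalent MATLAB code for the scalar-structure case (field names and values are assumed):

```matlab
% Field names and values are assumed for illustration.
S.Name = ["Asha"; "Ravi"; "Meera"];
S.Age  = [24; 31; 28];
S.City = ["Pune"; "Delhi"; "Mumbai"];
T = struct2table(S)   % the field names become the table's variable names
```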
The above code shows the initialisation of the Scalar Structure, 'S'. T is the table's declaration, which uses the function struct2table to convert the structure to a tabular form, as shown below.
Output for structure to table conversion for Scalar Structure
Due to the initialisation of the variables, while giving the data, the table is not unnamed as A1, A2 and A3 as seen in our previous case of array2table and cell2table.
Non-Scalar Structure
The following code represents the syntax for the conversion of a non-scalar structure to a table using the struct2table function.
Syntax for structure to table conversion for Non-Scalar Structure
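A sketch of the non-scalar case, using the same assumed data as a 1×3 struct array so that the resulting table matches the scalar example:

```matlab
% The same assumed data as a 1x3 (non-scalar) struct array.
S = struct('Name', {"Asha", "Ravi", "Meera"}, ...
           'Age',  {24, 31, 28}, ...
           'City', {"Pune", "Delhi", "Mumbai"});
T = struct2table(S)   % one row per struct element; the same 3x3 table as the scalar case
```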
Following is the output table for a non-scalar structure. The output obtained in both cases is the same.
Output for structure to table conversion for Non-Scalar Structure
Also Read: How to concatenate strings in Python? | {"url":"https://candid.technology/matlab-table/","timestamp":"2024-11-11T10:38:59Z","content_type":"text/html","content_length":"227636","record_id":"<urn:uuid:392359a2-9735-49c9-a0c9-aac4c267a657>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00241.warc.gz"} |
USACO 2015 December Contest, Gold
Problem 3. Bessie's Dream
Contest has ended.
Log in to allow submissions in analysis mode
After eating too much fruit in Farmer John's kitchen, Bessie the cow is getting some very strange dreams! In her most recent dream, she is trapped in a maze in the shape of an $N \times M$ grid of
tiles ($1 \le N, M \le 1,000$). She starts on the top-left tile and wants to get to the bottom-right tile. When she is standing on a tile, she can potentially move to the adjacent tiles in any of the
four cardinal directions.
But wait! Each tile has a color, and each color has a different property! Bessie's head hurts just thinking about it:
• If a tile is red, then it is impassable.
• If a tile is pink, then it can be walked on normally.
• If a tile is orange, then it can be walked on normally, but will make Bessie smell like oranges.
• If a tile is blue, then it contains piranhas that will only let Bessie pass if she smells like oranges.
• If a tile is purple, then Bessie will slide to the next tile in that direction (unless she is unable to cross it). If this tile is also a purple tile, then Bessie will continue to slide until she
lands on a non-purple tile or hits an impassable tile. Sliding through a tile counts as a move. Purple tiles will also remove Bessie's smell.
(If you're confused about purple tiles, the example will illustrate their use.)
Please help Bessie get from the top-left to the bottom-right in as few moves as possible.
INPUT FORMAT (file dream.in):
The first line has two integers $N$ and $M$, representing the number of rows and columns of the maze.
The next $N$ lines have $M$ integers each, representing the maze:
• The integer '0' is a red tile
• The integer '1' is a pink tile
• The integer '2' is an orange tile
• The integer '3' is a blue tile
• The integer '4' is a purple tile
The top-left and bottom-right integers will always be '1'.
OUTPUT FORMAT (file dream.out):
A single integer, representing the minimum number of moves Bessie must use to cross the maze, or -1 if it is impossible to do so.
In this example, Bessie walks one square down and two squares to the right (and then slides one more square to the right). She walks one square up, one square left, and one square down (sliding two
more squares down) and finishes by walking one more square right. This is a total of 10 moves (DRRRULDDDR).
Problem credits: Nathan Pinsker, inspired by the game "Undertale".
Fathoms to Feet (US survey) Converter
Enter Fathoms
Feet (US survey)
Switch to Feet (US survey) to Fathoms Converter
How to use this Fathoms to Feet (US survey) Converter
Follow these steps to convert a given length from Fathoms to Feet (US survey).
1. Enter the input Fathoms value in the text field.
2. The calculator converts the given Fathoms into Feet (US survey) in real time using the conversion formula, and displays the result under the Feet (US survey) label. You do not need to click any button.
If the input changes, the Feet (US survey) value is re-calculated automatically.
3. You may copy the resulting Feet (US survey) value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the button present below the input field.
What is the Formula to convert Fathoms to Feet (US survey)?
The formula to convert given length from Fathoms to Feet (US survey) is:
Length[(Feet (US survey))] = Length[(Fathoms)] / 0.16666700001185336
Substitute the given value of length in fathoms, i.e., Length[(Fathoms)] in the above formula and simplify the right-hand side value. The resulting value is the length in feet (us survey), i.e.,
Length[(Feet (US survey))].
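The same conversion is easy to script; for example, a small Python sketch using the constant above:

```python
# Conversion constant used on this page (fathoms per US survey foot).
FATHOMS_PER_US_SURVEY_FOOT = 0.16666700001185336

def fathoms_to_us_survey_feet(length_fathoms: float) -> float:
    """Convert a length in fathoms to feet (US survey)."""
    return length_fathoms / FATHOMS_PER_US_SURVEY_FOOT

print(round(fathoms_to_us_survey_feet(30), 4))  # 179.9996, matching the ship-anchor example below
```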
Consider that a ship anchors in water that is 30 fathoms deep.
Convert this depth from fathoms to Feet (US survey).
The length in fathoms is:
Length[(Fathoms)] = 30
The formula to convert length from fathoms to feet (us survey) is:
Length[(Feet (US survey))] = Length[(Fathoms)] / 0.16666700001185336
Substitute given weight Length[(Fathoms)] = 30 in the above formula.
Length[(Feet (US survey))] = 30 / 0.16666700001185336
Length[(Feet (US survey))] = 179.9996
Final Answer:
Therefore, 30 fath is equal to 179.9996 ft.
The length is 179.9996 ft, in feet (us survey).
Consider that a diver descends to a depth of 10 fathoms.
Convert this depth from fathoms to Feet (US survey).
The length in fathoms is:
Length[(Fathoms)] = 10
The formula to convert length from fathoms to feet (us survey) is:
Length[(Feet (US survey))] = Length[(Fathoms)] / 0.16666700001185336
Substitute given weight Length[(Fathoms)] = 10 in the above formula.
Length[(Feet (US survey))] = 10 / 0.16666700001185336
Length[(Feet (US survey))] = 59.9999
Final Answer:
Therefore, 10 fath is equal to 59.9999 ft.
The length is 59.9999 ft, in feet (us survey).
Fathoms to Feet (US survey) Conversion Table
The following table gives some of the most used conversions from Fathoms to Feet (US survey).
Fathoms (fath) Feet (US survey) (ft)
0 fath 0 ft
1 fath 6 ft
2 fath 12 ft
3 fath 18 ft
4 fath 24 ft
5 fath 29.9999 ft
6 fath 35.9999 ft
7 fath 41.9999 ft
8 fath 47.9999 ft
9 fath 53.9999 ft
10 fath 59.9999 ft
20 fath 119.9998 ft
50 fath 299.9994 ft
100 fath 599.9988 ft
1000 fath 5999.988 ft
10000 fath 59999.88 ft
100000 fath 599998.8 ft
A fathom is a unit of length used primarily in maritime contexts to measure water depth. One fathom is equivalent to 6 feet or approximately 1.8288 meters.
The fathom is defined as 6 feet, making it a convenient measurement for nautical and maritime applications, particularly for depth soundings and underwater measurements.
Fathoms are commonly used in navigation, fishing, and marine activities to describe the depth of water. The unit provides a practical measurement for underwater distances and has historical
significance in maritime practices.
Feet (US survey)
A foot (US survey) is a unit of length used in land surveying and mapping in the United States. One foot (US survey) is defined as exactly 1200/3937 meters, which is approximately 0.3048006096 meters
or about 0.3048 meters.
The US survey foot is slightly different from the international foot, which is defined as exactly 0.3048 meters. The difference is due to historical measurement standards and is used in specific
contexts such as land surveying and engineering in the United States.
US survey feet are used primarily in the United States for property measurement, land surveying, and mapping, ensuring consistency in measurements within these fields.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Fathoms to Feet (US survey) in Length?
The formula to convert Fathoms to Feet (US survey) in Length is:
Fathoms / 0.16666700001185336
2. Is this tool free or paid?
This Length conversion tool, which converts Fathoms to Feet (US survey), is completely free to use.
3. How do I convert Length from Fathoms to Feet (US survey)?
To convert Length from Fathoms to Feet (US survey), you can use the following formula:
Fathoms / 0.16666700001185336
For example, if you have a value in Fathoms, you substitute that value in place of Fathoms in the above formula, and solve the mathematical expression to get the equivalent value in Feet (US survey). | {"url":"https://convertonline.org/unit/?convert=fathoms-foot_us_survey","timestamp":"2024-11-06T04:40:01Z","content_type":"text/html","content_length":"91392","record_id":"<urn:uuid:f0b4470f-65f8-49ee-bf41-f2f16016d707>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00038.warc.gz"} |
Fraction Exponents Calculator - Algebra Calculator Online
Fraction Exponents Calculator
Fraction Exponents Calculator Online:
use our Fraction Exponents Calculator Online.
Fraction Exponents Calculator formula:
x^(m/n) = a
Fraction Exponents formula
Fraction Exponents Definition:
Definition of Fraction Exponents:
A radical can be expressed as a value with a fractional exponent by following the convention x^(m/n) = a. Rewriting radicals as fractional exponents can be useful in simplifying some radical
expressions. When working with fractional exponents, remember that fractional exponents are subject to all of the same rules as other exponents when they appear in algebraic expressions.
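For example, a fractional exponent combines a root and a power: 8^(2/3) = (8^(1/3))^2 = 2^2 = 4, since the cube root of 8 is 2.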
Transactions Online
Behrooz SAFARINEJADIAN, Mohammad B. MENHAJ, Mehdi KARRARI, "A Distributed Variational Bayesian Algorithm for Density Estimation in Sensor Networks" in IEICE TRANSACTIONS on Information, vol. E92-D,
no. 5, pp. 1037-1048, May 2009, doi: 10.1587/transinf.E92.D.1037.
Abstract: In this paper, the problem of density estimation and clustering in sensor networks is considered. It is assumed that measurements of the sensors can be statistically modeled by a common
Gaussian mixture model. This paper develops a distributed variational Bayesian algorithm (DVBA) to estimate the parameters of this model. This algorithm produces an estimate of the density of the
sensor data without requiring the data to be transmitted to and processed at a central location. Alternatively, DVBA can be viewed as a distributed processing approach for clustering the sensor data
into components corresponding to predominant environmental features sensed by the network. The convergence of the proposed DVBA is then investigated. Finally, to verify the performance of DVBA, we
perform several simulations of sensor networks. Simulation results are very promising.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E92.D.1037/_p
@article{Safarinejadian2009distributed,
author={Behrooz SAFARINEJADIAN and Mohammad B. MENHAJ and Mehdi KARRARI},
journal={IEICE TRANSACTIONS on Information},
title={A Distributed Variational Bayesian Algorithm for Density Estimation in Sensor Networks},
year={2009},
month={May},
volume={E92-D},
number={5},
pages={1037-1048},
doi={10.1587/transinf.E92.D.1037},
abstract={In this paper, the problem of density estimation and clustering in sensor networks is considered. It is assumed that measurements of the sensors can be statistically modeled by a common Gaussian mixture model. This paper develops a distributed variational Bayesian algorithm (DVBA) to estimate the parameters of this model. This algorithm produces an estimate of the density of the sensor data without requiring the data to be transmitted to and processed at a central location. Alternatively, DVBA can be viewed as a distributed processing approach for clustering the sensor data into components corresponding to predominant environmental features sensed by the network. The convergence of the proposed DVBA is then investigated. Finally, to verify the performance of DVBA, we perform several simulations of sensor networks. Simulation results are very promising.},
}
TY - JOUR
TI - A Distributed Variational Bayesian Algorithm for Density Estimation in Sensor Networks
T2 - IEICE TRANSACTIONS on Information
SP - 1037
EP - 1048
AU - Behrooz SAFARINEJADIAN
AU - Mohammad B. MENHAJ
AU - Mehdi KARRARI
PY - 2009
DO - 10.1587/transinf.E92.D.1037
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E92-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2009
AB - In this paper, the problem of density estimation and clustering in sensor networks is considered. It is assumed that measurements of the sensors can be statistically modeled by a common Gaussian
mixture model. This paper develops a distributed variational Bayesian algorithm (DVBA) to estimate the parameters of this model. This algorithm produces an estimate of the density of the sensor data
without requiring the data to be transmitted to and processed at a central location. Alternatively, DVBA can be viewed as a distributed processing approach for clustering the sensor data into
components corresponding to predominant environmental features sensed by the network. The convergence of the proposed DVBA is then investigated. Finally, to verify the performance of DVBA, we perform
several simulations of sensor networks. Simulation results are very promising.
ER - | {"url":"https://global.ieice.org/en_transactions/information/10.1587/transinf.E92.D.1037/_p","timestamp":"2024-11-12T00:54:40Z","content_type":"text/html","content_length":"61394","record_id":"<urn:uuid:f858a106-9c24-4ac9-991d-8881789456d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00827.warc.gz"} |
Engineering Hydrology Questions and Answers – Flood Empirical Formulas – Set 2
This set of Engineering Hydrology Multiple Choice Questions & Answers (MCQs) focuses on “Flood Empirical Formulas – Set 2”.
1. Estimate the peak flood flow (in m^3/s) for a 32 km^2 catchment area in the Gangetic plains region?
a) 80.7
b) 91.5
c) 112.1
d) 147.9
View Answer
Answer: a
Explanation: Since the area lies in the north Indian plains, Dickens formula should be used with a constant of 6. The peak flood is given as,
\(Q_p=C_D*A^{\frac{3}{4}}=6*32^{0.75}=80.7\) m^3/s
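The empirical formulas used in this question set are easy to check with a short script; the following Python sketch (constants taken from the solutions) reproduces the first answer:

```python
# Peak flood Q in m^3/s for catchment area A in km^2.
def dickens(A, C_D):
    return C_D * A ** 0.75               # Dickens formula (north Indian plains)

def ryves(A, C_R):
    return C_R * A ** (2 / 3)            # Ryves formula (near the east coast)

def inglis(A):
    return 124 * A / (A + 10.4) ** 0.5   # Inglis formula (Western Ghats)

print(round(dickens(32, 6), 1))          # 80.7, as in Question 1
```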
2. Estimate the peak flood flow (in m^3/s) for a 2500 hectare catchment area in the Chola Nadu region Tamil Nadu?
a) 51.3
b) 58.1
c) 72.7
d) 87.2
View Answer
Answer: b
Explanation: Since the area lies within 80 km of the east coast, Ryves formula should be used with a constant of 6.8. The peak flood is given as,
\(Q_p=C_R*A^{\frac{2}{3}}=6.8*(25)^{\frac{2}{3}}=58.1\) m^3/s
3. Find the peak flow of a flood with 25-year return period in a 5030 ha area of the western ghats?
a) 500 m^3/s
b) 600 m^3/s
c) 700 m^3/s
d) 800 m^3/s
View Answer
Answer: d
Explanation: Using Inglis formula,
\(Q=\frac{124A}{\sqrt{A+10.4}}=\frac{124*50.3}{\sqrt{50.3+10.4}}\)=800.56 m^3/s≅800 m^3/s
4. The peak flood in a 10 km^2 catchment in Karnataka was mistakenly found using Dickens formula with a constant of 11. Find the percentage error in the calculated value. Take Ryves constant as 10.2.
a) 12%
b) 23%
c) 31%
d) 62%
View Answer
Answer: c
Explanation: Incorrect value using Dickens formula is,
\(Q_{pD}=C_D*A^{\frac{3}{4}}=11*10^{0.75}=61.86\) m^3/s
Correct value using Ryves formula,
\(Q_{pR}=C_R*A^{\frac{2}{3}}=10.2*10^{\frac{2}{3}}=47.34\) m^3/s
∴Percentage error=\(\frac{Q_{pD}-Q_{pR}}{Q_{pR}} *100=\frac{61.86-47.34}{47.34}\)*100=30.67%≈31%
5. The peak flood of an area as estimated by Inglis formula was found to be 200 m^3/s. Find the area of the region (in hectares)?
a) 340
b) 666
c) 533
d) 178
View Answer
Answer: b
Explanation: As per Inglis formula,
\(200=\frac{124A}{\sqrt{A+10.4}}\); squaring both sides,
\(40000(A+10.4)=15376A^2 ⇒ 15376A^2-40000A-416000=0\)
Solving for A by using quadratic formula,
\(A=\frac{40000±\sqrt{40000^2+(4*15376*416000)}}{2*15376}=\frac{40000±164880.76}{30752}\)
\(A=\frac{204880.76}{30752}\) or \(-\frac{124880.76}{30752}\) = 6.66 or −4.06 (rejected)
Therefore, the area of the catchment is 6.66 km^2 or 666 hectares.
6. Estimate the maximum 24 hour flood (in m^3/s) with a return period of 50 years for an area of 27 km^2. Take Fuller’s constant as 0.8.
a) 26.36
b) 46.14
c) 59.31
d) 103.82
View Answer
Answer: a
Explanation: The maximum flood as per Fuller’s formula is,
\(Q_p=C_f A^{0.8}(1+0.8\log T) = 0.8*27^{0.8}*(1+0.8*\log(50))\)
\(⇒ Q_p=26.36\) m^3/s
7. Envelope curve method of peak flood estimation involves plotting graph between which two data?
a) Peak discharge and rainfall intensity
b) Rainfall duration and catchment area
c) Rainfall intensity and catchment area
d) Peak discharge and catchment area
View Answer
Answer: d
Explanation: In envelope curve method, the available flood data from suitable catchments is collected and plotted on a log-log graph in the form of a graph between the peak flood discharges and the
catchment area.
8. Estimate the maximum flood discharge (in m^3/s) with a return period of 100 years for an area of 4760 ha. Take Fuller’s constant as 1.4.
a) 80
b) 46
c) 144
d) 112
View Answer
Answer: a
Explanation: The maximum flood as per Fuller’s formula is,
\(Q_p=C_f A^{0.8}(1+0.8\log T) = 1.4*47.6^{0.8}*(1+0.8*\log(100))=80\) m^3/s
9. What is the peak flood discharge (in m^3/s) for a 45 km^2 area as per maximum world flood experience?
a) 893.2
b) 1502.3
c) 1786.7
d) 2836.4
View Answer
Answer: b
Explanation: The peak discharge as per maximum world flood experience is given as,
\(Q_p=\frac{3025A}{(278+A)^{0.78}} = \frac{3025*45}{(278+45)^{0.78}}\) = 1502.3 m^3/s
10. What is the area for which Ryves and Dickens formula will give same peak flood? Assume the constants as equal.
a) 1 ha
b) 10 ha
c) 100 ha
d) 1000 ha
View Answer
Answer: c
Explanation: Equating the Ryves and Dickens formulae,
\(C_D*A^{\frac{3}{4}}=C_R*A^{\frac{2}{3}} ⇒ A^{\frac{3}{4}}=A^{\frac{2}{3}}\) ⇒A=1 km^2=100 ha
Visualizing a bivariate discrete distribution and other distributions derived from it - information for practice
A single discrete random variable is depicted by a stick diagram, a 2D picture. Naturally, to visualize a bivariate discrete distribution, one can use a bivariate stick diagram, a 3D picture.
Unfortunately, many students have difficulty understanding and processing 3D pictures. Therefore, we construct an alternative 2D disc plot to depict the bivariate distribution of (X, Y), from which we
obtain graphically the conditional distributions of Y given X = a, and X given Y = b; the marginal distributions of X, Y, X+Y, and X−Y. Furthermore, we depict the mean and the standard deviation of
each distribution using a single-headed arrow. We hope these visualizations will help students better comprehend these concepts and avoid some misconceptions.
QCQP tuner difficulties
Gurobi is unable to solve my convex quadratically-constrained quadratic program (QCQP) for certain values of a tuning parameter.
This is my explanation.
Call the tuning parameter epsilon. When epsilon is taken to be small, the program becomes essentially non-convex.
The issue in 2 dimensions is that ||x|| <= c is a convex set, but ||x|| = c is not.
So if we know ||x|| >= c, and we constrain ||x|| <= c + epsilon for small epsilon, it starts looking like ||x|| = c.
Any thoughts?
• Hi Jake,
When epsilon is taken to be small, the program becomes essentially non-convex.
How small is your epsilon in the nonconvex case? Your approach can lead to numerical difficulties if epsilon is very small, i.e., close to or smaller than Gurobi's tolerances.
It might be best to solve the model as a nonconvex one instead of trying to "trick" it into a convex one. If there is no hope for solving the nonconvex model, you could try relaxing the PSDTol.
Please note that usually playing with tolerance is not a good idea as this may lead to all sorts of numerical trouble.
Best regards,
• Hi Jaromił,
It might be best to solve the model as a nonconvex one instead of trying to "trick" it into a convex one.
Two questions:
1) How do I know at what value of epsilon the model goes from convex to nonconvex?
2) How should I let Gurobi know this (I previously set Method=2)?
• Hi Jake,
1) How do I know at what value of epsilon the model goes from convex to nonconvex?
I am not sure whether this can even be calculated rigorously. If you have a tight upper bound \(U\) on \(||x||\) then you could use it to compute \(\epsilon = U - c\). But I guess that computing
such an upper bound is not practical and the bound probably would be very weak.
2) How should I let Gurobi know this (I previously set Method=2)?
You can set the NonConvex parameter to 2 even if your model is convex. This way you will prevent running into an error when your model becomes nonconvex.
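For example, in gurobipy this is just a parameter setting (a minimal sketch; model building is omitted):

```python
import gurobipy as gp

m = gp.Model()
# ... add variables and quadratic constraints here ...
m.Params.NonConvex = 2  # accept the model even if the quadratic constraints are non-convex
m.optimize()
```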
Best regards,
version 1.1.0
A number of interesting packages are available to perform Correspondence Analysis in R. To the best of my knowledge, however, they lack some tools to help users eyeball some critical CA aspects
(e.g., contribution of row/column categories to the principal axes, quality of the display, correlation of row/column categories with dimensions, etc.). Besides providing those facilities, this package
allows calculating the significance of the CA dimensions by means of the 'Average Rule', the Malinvaud test, and a permutation test. Further, it also allows calculating the permuted significance of
the CA total inertia.
The package comes with some datasets drawn from literature:
brand_coffee: after Kennedy R et al, Practical Applications of Correspondence Analysis to Categorical Data in Market Research, in Journal of Targeting Measurement and Analysis for Marketing, 1996
breakfast: after Bendixen M, A Practical Guide to the Use of Correspondence Analysis in Marketing Research, in Research on-line 1, 1996, 16-38 (table 5)
diseases: after Velleman P F, Hoaglin D C, Applications, Basics, and Computing of Exploratory Data Analysis, Wadsworth Pub Co 1984 (Exhibit 8-1)
fire_loss: after Li et al, Influences of Time, Location, and Cause Factors on the Probability of Fire Loss in China: A Correspondence Analysis, in Fire Technology 50(5), 2014, 1181-1200 (table 5)
greenacre_data: after Greenacre M, Correspondence Analysis in Practice, Boca Raton-London-New York, Chapman&Hall/CRC 2007 (exhibit 12.1)
List of implemented functions
• aver.rule(): average rule chart.
• caCluster(): clustering row/column categories on the basis of Correspondence Analysis coordinates from a space of user-defined dimensionality.
• caCorr(): chart of correlation between rows and columns categories.
• caPercept(): perceptual map-like Correspondence Analysis scatterplot.
• caPlot(): intepretation-oriented Correspondence Analysis scatterplots, with informative and flexible (non-overlapping) labels.
• caPlus(): facility for interpretation-oriented CA scatterplot.
• caScatter(): basic scatterplot visualization facility.
• cols.cntr(): columns contribution chart.
• cols.cntr.scatter(): scatterplot for column categories contribution to dimensions.
• cols.qlt(): chart of columns quality of the display.
• groupBycoord(): define groups of categories on the basis of a selected partition into k groups employing the Jenks’ natural break method on the selected dimension’s coordinates.
• malinvaud(): Malinvaud’s test for significance of the CA dimensions.
• rescale(): rescale row/column categories coordinates between a minimum and maximum value.
• rows.cntr(): rows contribution chart.
• rows.cntr.scatter(): scatterplot for row categories contribution to dimensions.
• rows.qlt(): chart of rows quality of the display.
• sig.dim.perm(): permuted significance of CA dimensions.
• sig.dim.perm.scree(): scree plot to test the significance of CA dimensions by means of a randomized procedure.
• sig.tot.inertia.perm(): permuted significance of the CA total inertia.
• table.collapse(): collapse rows and columns of a table on the basis of hierarchical clustering.
Description of implemented functions
aver.rule(): allows you to locate the number of dimensions which are important for CA interpretation, according to the so-called average rule. The reference line showing up in the returned histogram
indicates the threshold of an optimal dimensionality of the solution according to the average rule.
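A minimal call might look like the sketch below (using one of the bundled datasets; only the input table is passed, other arguments are left at their defaults):

```r
library(CAinterprTools)

# Average-rule bar chart for the Greenacre example dataset bundled with the package.
data(greenacre_data)
aver.rule(greenacre_data)
```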
caCluster(): plots the result of cluster analysis performed on the results of Correspondence Analysis, and plots a dendrogram, a silouette plot depicting the “quality” of the clustering solution, and
a scatterplot with points coded according to the cluster membership. The function provides the facility to perform hierarchical cluster analysis of row and/or column categories on the basis of
Correspondence Analysis result. The clustering is based on the row and/or colum categories’ coordinates from:
1. a high-dimensional space corresponding to the whole dimensionality of the input contingency table;
2. a high-dimensional space of dimensionality smaller than the full dimensionality of the input dataset;
3. a bi-dimensional space defined by a pair of user-defined dimensions.
To obtain (1), the dim parameter must be left in its default value (NULL); to obtain (2), the dim parameter must be given an integer (needless to say, smaller than the full dimensionality of the
input data); to obtain (3), the dim parameter must be given a vector (e.g., c(1,3)) specifying the dimensions the user is interested in.
The method by which the distance is calculated is specified using the dist.meth parameter, while the agglomerative method is specified using the aggl.meth parameter. By default, they are set to
euclidean and ward.D2 respectively.
The user may want to specify beforehand the desired number of clusters (i.e., the cluster solution). This is accomplished feeding an integer into the ‘part’ parameter. A dendrogram (with rectangles
indicating the clustering solution), a silhouette plot (indicating the “quality” of the cluster solution), and a CA scatterplot (with points given colours on the basis of their cluster membership)
are returned. Please note that, when a high-dimensional space is selected, the scatterplot will use the first 2 CA dimensions; the user must keep in mind that the clustering based on a
higher-dimensional space may not be well reflected on the subspace defined by the first two dimensions only.
Also note:
• if both row and column categories are subject to the clustering, the column categories will be flagged by an asterisk (*) in the dendrogram (and in the silhouette plot) just to make it easier to
identify rows and columns;
• the silhouette plot displays the average silhouette width as a dashed vertical line; the dimensionality of the CA space used is reported in the plot’s title; if a pair of dimensions has been
used, the individual dimensions are reported in the plot’s title;
• the silhouette plot’s labels end with a number indicating the cluster to which each category is closer.
An optimal clustering solution can be obtained setting the opt.part parameter to TRUE. The optimal partition is selected by means of an iterative routine which locates at which cluster solution the
highest average silhouette width is achieved. If the opt.part parameter is set to TRUE, an additional plot is returned along with the silhouette plot. It displays a scatterplot in which the cluster
solution (x-axis) is plotted against the average silhouette width (y-axis). A vertical reference line indicate the cluster solution which maximize the silhouette width, corresponding to the suggested
optimal partition.
The function returns a list storing information about the cluster membership (i.e., which categories belong to which cluster).
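As a sketch, a call might look like this (argument names follow the parameter descriptions above; anything not described there is left at its default):

```r
library(CAinterprTools)
data(brand_coffee)

# Cluster the categories in the full CA space and let the function pick the
# optimal partition via the average silhouette width.
res <- caCluster(brand_coffee,
                 dim = NULL,
                 dist.meth = "euclidean",
                 aggl.meth = "ward.D2",
                 opt.part = TRUE)

# The returned list stores which categories belong to which cluster.
str(res)
```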
Further info and Disclaimer about the caCluster() function:
The silhouette plot is obtained from the silhouette() function out from the cluster package. For a detailed description of the silhouette plot, its rationale, and its interpretation, see:
• Rousseeuw P J. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis, Journal of Computational and Applied Mathematics 20, 53-65
For the idea of clustering categories on the basis of the CA coordinates from a full high-dimensional space (or from a subset thereof), see:
• Ciampi et al. 2005. Correspondence analysis and two-way clustering, SORT 29 (1), 27-4
• Beh et al. 2011. A European perception of food using two methods of correspondence analysis, Food Quality and Preference 22(2), 226-231
Please note that the interpretation of the clustering when both row AND column categories are used must procede with caution due to the issue of inter-class points’ distance interpretation. For a
full description of the issue (also with further references), see:
• Greenacre M. 2007. Correspondence Analysis in Practice, Boca Raton-London-New York, Chapman&Hall/CRC, 267-268.
caCorr(): allows you to calculate the strenght of the correlation between rows and columns of the contingency table. A reference line indicates the threshold above which the correlation can be
considered important.
caPercept(): plots a variant of the traditional Correspondence Analysis scatterplots that allows facilitating the interpretation of the results. It aims at producing what in marketing research is
called perceptual map, a visual representation of the CA results that seeks to avoid the problem of interpreting inter-spatial distance. It represents only one type of points (say, column points),
and “gives names to the axes” corresponding to the major row category contributors to the two selected dimensions.
caPlot(): plots different types of CA scatterplots, adding information that are relevant to the CA interpretation. Thanks to the ggrepel package, the labels tends to not overlap so producing a nicely
readable chart. The function provides the facility to produce:
1. a regular (symmetric) scatterplot, in which points’ labels only report the categories’ names;
2. a scatterplot with advanced labels. If the user’s interest lies (for instance) in interpreting the rows in the space defined by the column categories, by setting the parameter ‘cntr’ to
“columns” the columns’ labels will be coupled with two asterisks within round brackets; each asterisk (if present) will indicate if the category is a major contributor to the definition of
the first selected dimension (if the first asterisk to the left is present) and/or if the same category is also a major contributor to the definition of the second selected dimension (if the
asterisk to the right is present). The rows’ labels will report the correlation (i.e., sqrt(COS2)) with the selected dimensions; the correlation values are reported between square brackets;
the left-hand side value refers to the correlation with the first selected dimensions, while the right-hand side value refers to the correlation with the second selected dimension. If the
parameter ‘cntr’ is set to “rows”, the row categories’ labels will indicate the contribution, and the column categories’ labels will report the correlation values.
3. a perceptual map, in which axes’ poles are given names according to the categories (either rows or columns, as specified by the user) having a major contribution to the definition of the
selected dimensions; rows’ (or columns’) labels will report the correlation with the selected dimensions.
The function returns a dataframe containing data about row and column points:
a. coordinates on the first selected dimension
b. coordinates on the second selected dimension
c. contribution to the first selected dimension
d. contribution to the second selected dimension
e. quality on the first selected dimension
f. quality on the second selected dimension
g. correlation with the first selected dimension
h. correlation with the second selected dimension
k. asterisks indicating whether the corresponding category is a major contribution to the first and/or second selected dimension.
caPlus(): plots Correspondence Analysis scatterplots modified to help interpret the analysis' results. In particular, the function aims at making it easier to understand, in the same visual context:
• (a) which (say, column) categories are actually contributing to the definition of given pairs of dimensions;
• (b) which (say, row) categories are more correlated to which dimension.
caScatter(): allows to get different types of CA scatterplots. It is just a wrapper for functions from the ca and FactoMineR packages.
cols.cntr(): column equivalent of rows.cntr() (see below).
cols.cntr.scatter(): column equivalent of rows.cntr.scatter() (see below).
cols.corr(): column equivalent of rows.corr() (see below).
cols.corr.scatter(): column equivalent of rows.corr.scatter() (see below).
cols.qlt(): column equivalent of rows.qlt() (see below).
groupBycoord(): allows to group the row/column categories into k user-defined partitions. K groups are created employing the Jenks’ natural break method applied on the selected dimension’s
coordinates. A dotchart is returned representing the categories grouped into the selected partitions. At the bottom of the chart, the Goodness of Fit statistic is also reported. The function also
returns a dataframe storing the categories’ coordinates on the selected dimension and the group each category belongs to.
malinvaud(): performs the Malinvaud test, which assesses the significance of the CA dimensions. The function returns both a table and a plot. The former lists relevant information, among which the
significance of each CA dimension. The dotchart graphically represents the p-value of each dimension; dimensions are grouped by level of significance; a red reference line indicates the 0.05 threshold.
rescale(): allows to rescale the coordinates of a selected dimension to be constrained between a minimum and a maximum user-defined value. The rationale of the function is that users may wish to use
the coordinates on a given dimension to devise a scale, along the lines of what is accomplished in: Greenacre M 2002, The Use of Correspondence Analysis in the Exploration of Health Survey Data,
Documentos de Trabajo 5, Fundacion BBVA, pp. 7-39. The function returns a chart representing the row/column categories against the rescaled coordinates from the selected dimension. A dataframe is
also returned containing the original values (i.e., the coordinates) and the corresponding rescaled values.
rows.cntr(): calculates the contribution of the row categories to a selected dimension. It displays the contribution of the categories as a dotplot. A reference line indicates the threshold above
which a contribution can be considered important for the determination of the selected dimension. The parameter sort=TRUE sorts the categories in descending order of contribution to the inertia of
the selected dimension. At the left-hand side of the plot, the categories' labels are given a symbol (+ or -) according to whether each category is actually contributing to the definition of the
positive or negative side of the dimension, respectively. The categories are grouped into two groups: ‘major’ and ‘minor’ contributors to the inertia of the selected dimension. At the right-hand
side, a legend (which is enabled/disabled using the leg parameter) reports the correlation (sqrt(COS2)) of the column categories with the selected dimension. A symbol (+ or -) indicates with which
side of the selected dimension each column category is correlated.
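For example (a sketch; the dimension is assumed to be passed as the second argument, which may differ from the actual signature):

```r
library(CAinterprTools)
data(greenacre_data)

# Contribution of row categories to dimension 1, sorted in descending order,
# with the correlation legend enabled.
rows.cntr(greenacre_data, 1, sort = TRUE, leg = TRUE)
```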
rows.cntr.scatter(): plots a scatterplot of the contribution of row categories to two selected dimensions. Two references lines (in RED) indicate the threshold above which the contribution can be
considered important for the determination of the dimensions. A diagonal line (in BLACK) is a visual aid to eyeball whether a category is actually contributing more (in relative terms) to either of
the two dimensions. The row categories’ labels are coupled with + or - symbols within round brackets indicating to which side of the two selected dimensions the contribution values that can be read
off from the chart are actually referring. The first symbol (i.e., the one to the left), either + or -, refers to the first of the selected dimensions (i.e., the one reported on the x-axis). The
second symbol (i.e., the one to the right) refers to the second of the selected dimensions (i.e., the one reported on the y-axis).
rows.corr(): calculates and graphically displays the correlation (sqrt(COS2)) of the row categories with the selected dimension. The parameter sort=TRUE arranges the categories in decreasing order of
correlation. In the returned chart, at the left-hand side, the categories’ labels show a symbol (+ or -) according to which side of the selected dimension they are correlated, either positive or
negative. The categories are grouped into two groups: categories correlated with the positive (‘pole +’) or negative (‘pole -’) pole of the selected dimension. At the right-hand side, a legend
indicates the column categories’ contribution (in permils) to the selected dimension (value enclosed within round brackets), and a symbol (+ or -) indicating whether they are actually contributing to
the definition of the positive or negative side of the dimension, respectively. Further, an asterisk (*) flags the categories which can be considered major contributors to the definition of the selected dimension.
rows.corr.scatter(): plots a scatterplot of the correlation (sqrt(COS2)) of row categories with two selected dimensions. A diagonal line (in BLACK) is a visual aid to eyeball whether a category is
actually more correlated (in relative terms) to either of the two dimensions. The row categories’ labels are coupled with two + or - symbols within round brackets indicating to which side of the two
selected dimensions the correlation values that can be read off from the chart are actually referring. The first symbol (i.e., the one to the left), either + or -, refers to the first of the selected
dimensions (i.e., the one reported on the x-axis). The second symbol (i.e., the one to the right) refers to the second of the selected dimensions (i.e., the one reported on the y-axis).
rows.qlt(): plots the quality of row categories display on the sub-space determined by a pair of selected dimensions.
sig.dim.perm(): calculates the significance of a pair of selected dimensions via a permutation test, and displays the results as a scatterplot; a large RED dot indicates the observed inertia.
Permuted p-values are reported in the axes’ labels.
sig.dim.perm.scree(): tests the significance of the CA dimensions by means of permutation of the input contingency table. A scree-plot displays for each dimension the observed eigenvalue and the 95th
percentile of the permuted distribution of the corresponding eigenvalue. Observed eigenvalues that are larger than the corresponding 95th percentile are significant at least at alpha 0.05. P-values
are displayed into the chart.
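A sketch of a typical call (the default number of permutations is assumed):

```r
library(CAinterprTools)
data(greenacre_data)

# Permutation-based scree plot; observed eigenvalues above the permuted 95th
# percentile are significant at alpha 0.05.
sig.dim.perm.scree(greenacre_data)
```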
sig.tot.inertia.perm(): calculates the significance of the CA total inertia via permutation test; a histogram of the permuted total inertia is displayed along with the observed total inertia and the
95th percentile of the permuted total inertia. The latter can be regarded as a 0.05 alpha threshold for the observed total inertia’s significance.
table.collapse(): allows to collapse the rows and columns of the input contingency table on the basis of the results of a hierarchical clustering. The function returns a list containing the input
table, the rows-collapsed table, the columns-collapsed table, and a table with both rows and columns collapsed. It optionally returns two dendrograms (one for the row profiles, one for the column
profiles) representing the clusters. The hierarchical clustering is obtained using FactoMineR's HCPC() function.
Rationale: clustering rows and/or columns of a table could interest the users who want to know where a significant association is concentrated by collecting together similar rows (or columns) in
discrete groups (Greenacre M, Correspondence Analysis in Practice, Boca Raton-London-New York, Chapman&Hall/CRC 2007, pp. 116, 120). Rows and/or columns are progressively aggregated in a way in which
every successive merging produces the smallest change in the table’s inertia. The underlying logic lies in the fact that rows (or columns) whose merging produces a small change in table’s inertia
have similar profiles. This procedure can be thought of as maximizing the between-group inertia and minimizing the within-group inertia. A method essentially similar is that provided by the
FactoMineR package (Husson F, Le S, Pages J, Exploratory Multivariate Analysis by Example Using R, Boca Raton-London-New York, CRC Press, pp. 177-185). The cluster solution is based on the following
rationale: a division into Q (i.e., a given number of) clusters is suggested when the increase in between-group inertia attained when passing from a Q-1 to a Q partition is greater than that from a Q
to a Q+1 clusters partition. In other words, during the process of merging rows (or columns), if the next aggregation sharply raises the within-group inertia, it means that at that further step
very different profiles are being aggregated.
History
version 1.1.0:
• minor changes to optimize the calculation of permuted p-values returned by the functions sig.dim.perm(), sig.dim.perm.scree(), and sig.tot.inertia.perm().
• sig.dim.perm.scree() and sig.dim.perm() now return permuted p-values in a dataframe (besides reporting them in the output plots).
• minor improvements and typo fixes to the package’s help documentation.
version 1.0.0: first release to CRAN.
Companion website | {"url":"https://cran.case.edu/web/packages/CAinterprTools/readme/README.html","timestamp":"2024-11-06T12:16:14Z","content_type":"application/xhtml+xml","content_length":"25379","record_id":"<urn:uuid:a70f43c9-fdd6-42ee-b80e-67f54ce85982>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00152.warc.gz"} |
17.7.1.2 Interpreting Results of Principal Component Analysis
Principal Component Analysis Report Sheet
Descriptive Statistics
The descriptive statistics table can indicate whether variables have missing values, and reveals how many cases are actually used in the principal components.
If there are only a few missing values for a single variable, it often makes sense to delete an entire row of data. This is known as listwise exclusion. If there are missing values for two and more
variables, it is typically best to employ pairwise exclusion.
Inspection of means and standard deviations (SDs) can reveal univariate/variance differences between the groups. We should take notice when the means and SDs are very different, as this may indicate
that the variables are measured on different scales. In this case, we may use correlation matrix for analysis.
Correlation Matrix
This table reveals relationships between variables. PCA aims to produce a small set of independent principal components from a larger set of related original variables. In general, higher values are
more useful, and you should consider excluding low values from the analysis.
Eigenvalues of the Correlation/Covariance Matrix
Eigenvalue: Eigenvalues of the correlation/covariance matrix. These represent a partitioning of the total variation accounted for by each principal component.
Proportion: The proportion of variance explained by each eigenvalue.
Cumulative: The cumulative proportion of the variance accounted for by the current and all preceding principal components. If the first i components retain over 90% of the original information, it is
usually recommended to retain i components.
Note: If we select Covariance Matrix from the Analyze radio box in dialog, the result of Bartlett's Test, which is used to test whether the eigenvalues along each principal component are equal, will
be shown in the additional 3 columns of the table.
Extracted Eigenvectors
The principal component variables are defined as linear combinations of the original variables $X_1, ...,X_k,...,X_m$. The Extracted Eigenvectors table provides coefficients for equations below.
$Y_k = C_{k1}X_1 + C_{k2}X_2 + ... +C_{km}X_m$ (1)
• $Y_k$ is the $k$-th principal component
• the $C$'s are the coefficients given in the table above
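Outside Origin, the same quantities can be reproduced with a short script. The following numpy sketch (illustrative random data, not Origin's implementation) computes the eigenvalues, the Proportion column, and the principal component scores of equation (1):

```python
import numpy as np

# Illustrative data matrix: 50 observations of 4 variables.
X = np.random.default_rng(0).normal(size=(50, 4))

# Standardize and eigen-decompose the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
R = np.corrcoef(Z, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(R)

# Sort from largest to smallest eigenvalue.
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

proportion = eigenvalues / eigenvalues.sum()  # "Proportion" column
scores = Z @ eigenvectors                     # principal component scores, equation (1)
```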
Scree Plot
The scree plot is a useful visual aid for determining an appropriate number of principal components. The scree plot graphs the eigenvalue against the component number. To determine the appropriate
number of components, we look for an "elbow" in the scree plot. The component number is taken to be the point at which the remaining eigenvalues are relatively small and all about the same size.
Loading Plot
The Loading Plot is a plot of the relationship between original variables and subspace dimensions. It is used for interpreting relationships among variables.
Scores Plot
The score plot is a projection of data onto subspace. It is used for interpreting relations among observations.
The bi-plot shows both the loadings and the scores for two selected components in parallel.
Score Data
The worksheet provides the principal component scores for each variable. | {"url":"https://cloud.originlab.com/doc/en/Origin-Help/PCA-Result","timestamp":"2024-11-03T23:12:20Z","content_type":"text/html","content_length":"137854","record_id":"<urn:uuid:45b422e3-e9a9-4d07-8a62-1c61e4b0efbb>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00236.warc.gz"} |
The Empty, Universal, and Identity Relations on a Set
Recall from the Relations on Sets page that if $X$ is a set then a relation $R$ on $X$ is a subset of the Cartesian product $X \times X$ where if $(x, y) \in R$ then we write $x \: R \: y$ and say
"$x$ relates $y$", and if $(x, y) \not \in R$ then we write $x \: \not R \: y$ and say "$x$ does not relate $y$".
We will now look at three rather basic relations on a set $X$.
Definition: Let $X$ be a set. The Empty Relation $\emptyset$ on $X$ is defined to be the relation where for all $x, y \in X$ we have that $x \: \not R \: y$.
For example, consider the set of integers $\mathbb{Z}$ and let $R$ be the relation such that for $x, y \in \mathbb{Z}$ we have that $x \: R \: y$ if both $x + y$ is even and $x + y$ is odd. Clearly
the sum $x + y$ cannot both be even and odd, and so $R$ is the empty relation since for all $x, y \in X$ we have that $x \: \not R \: y$, i.e., $R = \emptyset$.
Definition: Let $X$ be a set. The Universal Relation or Full Relation $\mathcal U$ on $X$ is defined to be the relation where for all $x, y \in X$ we have that $x \: \mathcal U \: y$.
For example, consider the set of integers $\mathbb{Z}$ again. Define $R$ to be the relation such that for $x, y \in \mathbb{Z}$ we have that $x \: R \: y$ if $x + y \in \mathbb{Z}$. The sum of any
two integers is always going to be an integer, and so for all $x, y \in \mathbb{Z}$ we have that $x \: R \: y$ so $R$ is the universal relation on $X$ so $R = \mathcal U$
Definition: Let $X$ be a set. The Identity Relation $\mathcal I$ on $X$ is defined to be the relation where for all $x, y \in X$ we have that $x \: \mathcal I \: y$ if and only if $x = y$.
For example, consider the set of integers $\mathbb{Z} \setminus \{ 0 \}$. Define $R$ to be the relation such that for $x, y \in \mathbb{Z}$ we have that $x \: R \: y$ if $\frac{x}{y} = 1$.
If $x \: R \: y$ then $\frac{x}{y} = 1$ so $x = y$. Conversely, if $x, y \in \mathbb{Z} \setminus \{0 \}$ and $x = y$ then $\frac{x}{y} = 1$ so $x \: R \: y$. Therefore $R$ is the identity relation
on $\mathbb{Z} \setminus \{ 0 \}$ so $R = \mathcal I$. | {"url":"http://mathonline.wikidot.com/the-empty-universal-and-identity-relations-on-a-set","timestamp":"2024-11-13T19:22:06Z","content_type":"application/xhtml+xml","content_length":"17592","record_id":"<urn:uuid:69b81753-92b9-4a59-80da-a2fdec976809>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00624.warc.gz"} |
The geometry of shooting
A key principle in my approach to football analytics is to make sure that all data is related back to the game itself. We should never introduce a number, a statistic or a metric unless we can say
what it means in terms of a player’s actions and coaching decisions. We have already seen this principle at play in Guardiola’s organization of resting defence, the way we evaluated Traore in terms
of high-speed dribbles and space creation by Manchester United’s left-back, Luke Shaw.
We now look for the same principles in the way we use expected goals. Without these principles the numbers don’t tell us anything.
The first thing to consider when it comes to evaluating a shot is the view the player has of the goal: the more he or she can see, the better your chance of scoring. Players learn this early. They
notice that if they overrun the ball in the box, they end up hitting the side netting. It also underlies the most basic advice for defenders: showing the attacking player the way out to the goal line, to
narrow down his angle.
The goal angle idea is illustrated below. In (a) the angle at the point of shooting between the two lines drawn to the posts is 38 degrees. For (b) and (c) it is 17 degrees.
Note that moving out to the side is equivalent to moving further away from the goal. The same principle applies: the wider the angle between the goal posts the better the chance of scoring.
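As a rough sketch (not the model used in this article), the goal angle can be computed directly from pitch coordinates, taking the goal to be 7.32 m wide:

```python
import math

GOAL_WIDTH = 7.32  # metres between the posts

def goal_angle_degrees(x, y):
    """Angle subtended by the two posts for a shot at (x, y).

    x is the distance from the goal line (assumed positive) and y the lateral
    offset from the centre of the goal, both in metres; this coordinate
    convention is an assumption for the sketch.
    """
    a = math.atan2(y + GOAL_WIDTH / 2, x)
    b = math.atan2(y - GOAL_WIDTH / 2, x)
    return math.degrees(a - b)

print(round(goal_angle_degrees(11, 0), 1))  # about 36.8 degrees from the penalty spot
```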
Another principle behind evaluating a shot is goalkeeper reaction time. Ajax sports scientist, Vosse de Boode, conducted experiments on goalkeeper reaction times and the time it takes to complete a
dive to show that "Goalkeepers are chanceless within 16m of the goal, if [and this is a big if] a shot is placed in the top corner with maximum shot speed."
De Boode's findings mean that close to the goal, it is in the attacking player's own hands (or feet) whether or not they score. Combined with the goal angle, a non-linear effect is created whereby wider-angle shots are
more valuable if they are closer. It is this effect which creates the squashed ring effect in the probability of scoring at different distances shown below:
It is this picture (and de Boode’s advice about the top corner) that every attacking player should have in their head when making shot decisions.
One important discussion I have had with several players is about the 7% ring. One interpretation might be that a player should not shoot unless they are within this ring. This is wrong! Instead, the
7% ring tells us about how much a few steps closer to goal can increase the chance of scoring. For example, shots from the top corner of the penalty box are 2% chances. A few steps centrally can
triple the chance of scoring.
Up to now, we have ignored a very important factor which determines if a shot is a goal or not: the defending players and the goalkeeper! We can account for these in an expected goals model, using
tracking data. We can also, using a machine learning model, measure how these factors combine with distance and goal angle in determining the probability of a shot’s success.
In the example below, Sadio Mane has a very large angle and a short distance, but has several opponent players between the goal and his position.
Twelve data scientist Jernej Flisar has developed a method, based on what are known as Shapley values, to calculate how much different factors contribute to the quality of a chance. In this example,
distance and angle (in red) are positive contributions. The number of defenders between Mane and the goal (in blue) are negative contributions.
Without the opponents in the way, this would have been a 0.25xG chance, but with the opponents it drops to 0.18xG. Mane scored this chance with a cheeky flick of the side of his foot.
Further examples below illustrate the principles behind this approach. This longer distance effort has higher xG because of a favourable goal angle and although the opponents are nearby, they are not
significantly in the way of the shot.
In the final example, below, the angle and distance both make the shot less likely to result in a goal than average, although lack of opponents in path of the shot improves the opportunity somewhat.
Our approach to expected goals, based on angles, distances and the positioning of the players, can be used to talk to players about their decision-making: how important it is to have a clear sight on
goal? what is the value of beating one more player before shooting? why is the top corner so important? Different scenarios can be presented and discussions can be left to evolve around how to create
better shooting locations. Expected goals ensures that these discussions are based in facts and data, not in speculation.
The approach is also useful in scouting and opposition analysis. Some players over-perform (score more goals than xG predicts) in certain types of shooting situations: maybe a player is better at
scoring even in a crowded box or can score from narrower angles. In the latter case, defenders should be aware that allowing that player to run wide might not be the advantage it usually is.
Used in this way, expected goals is so much more than just a count of chances. It is a deep understanding of the geometry of shooting. | {"url":"https://soccermatics.medium.com/the-geometry-of-shooting-7cbdd0d5da3b?responsesOpen=true&sortBy=REVERSE_CHRON&source=user_profile---------0----------------------------","timestamp":"2024-11-04T12:33:34Z","content_type":"text/html","content_length":"117022","record_id":"<urn:uuid:ce55ae24-eccb-4df2-bd04-4615d00e202b>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00022.warc.gz"} |
Sudoku level 3
Submitted by Atanu Chaudhuri on Thu, 21/07/2016 - 00:55
Step by step easy and quick solution to medium Sudoku level 3 game 8
The solution to the medium Sudoku level 3 game 8 is explained step by step in an easy-to-understand manner, with each breakthrough by a Sudoku technique explained.
We'll solve the following medium hard Sudoku level 3 puzzle. First try to solve the puzzle before going through the solution.
Strategy of solving a Sudoku hard puzzle
As a strategy, we'll always try to get a valid cell by row column scan first, by possible digit analysis or DSA second and by reduction caused by Cycles third. Cycles are valuable resources and we'll
form a Cycle whenever we get the chance, even if it doesn't result in an immediate valid cell hit.
In addition we'll also use other advanced Sudoku techniques of single digit lock, parallel digit scan or X wing whenever we get the chance.
Evaluation of possible digit subsets for all empty cells is often prescribed as a must-do activity. We strongly recommend avoiding this time-consuming activity as far as possible and instead going in
for filling up the empty cells with unique valid digits using the Sudoku techniques needed.
The Sudoku techniques for solving this eighth hard Sudoku level 3 puzzle game are briefly but clearly explained along with the step by step solution.
Let's go through the solution of the game.
Solution to medium hard Sudoku level 3 game 8 Stage 1: Breakthrough by possible digit subset analysis or DSA technique
First valid cell by row column scan for digit 2: R5C9 2 by scan R4, R6, C7 -- R2C8 2 by scan in R1, R3 -- R8C6 2 by scan in R7, C4, C5 -- R9C2 2 by scan in R7, R8, C1, C3. Digit 2 fully filled.
Next success by row column scan is for 6: R4C8 6 by scan in R5, R6 -- R9C1 6 by scan for 6 in R7, C2, C3 -- and lastly R8C9 6 by scan in R7, R9, C7. All 6s are filled.
For next larger digit 7: R5C7 7 by scan in R6.
Cycle (4,5,9) in R6C7, R6C8, R6C9 and Cycle (1,8) by reduction in R6C2, R6C4.
By going back to scanning for lower incomplete digits, success for 3: R7C1 3 by scan C2, C3 -- R9C9 3 by scan in R7, R8 -- R7C9 8 by scan for 8 in R8, C8.
R9C5 8 by DSA reduction of [5,9] from DS [5,8,9] in C5 -- R1C5 5 by DS reduction -- R5C5 9 as leftover digit in C5. R7C3 9 by scan in R9, C2 -- R4C1 9 by scan in R5, C2, C3.
Breakthrough Cycle (1,4) formed in R5C3, R9C3 by DSA. We'll see its effect in next stage.
Game status at this first stage shown below.
Solution to the Sudoku level 3 game 8 Stage 2: Breakthroughs By DSA and Cycles
Breakthrough R2C3 8 by reduction of 1 because of the Cycle (1,4) in C3 -- R4C3 7 as leftover digit in C3.
R2C6 4 by reduction of [5,9] from DS [4,5,9] in three empty cells of R2 -- R2C1 5 by reduction -- R2C4 9 as leftover digit in R2.
Cycle (1,4,7) in cells R3C2, R7C2 and R8C2 by reduction of [5,8] from DS [1,4,5,7,8] in the column C2 -- R6C2 8 by reduction of 1 because of this Cycle -- R4C2 5 by reduction -- R4C6 8 by reduction
-- R6C4 1 by reduction -- R9C4 4 by reduction -- R8C4 7 by reduction -- R7C4 5 by reduction of 1 from DS [1,5] in R7C4, R7C6 -- R7C6 1 as leftover digit in bottom middle major square.
Result of actions taken at this stage shown below.
Finding a valid cell by Sudoku technique of possible digit subset analysis or DSA and characteristics of a Cycle
We'll understand two things:
• How we have got the Cycle (1,4,7) in cells of C2 by DSA reduction, and,
• What are the characteristics of a Cycle such as Cycle (1,4,7).
The DSA reduction technique is short for possible Digit Subset Analysis applied to the empty cells of a zone, where a zone may be a row, a column or even a major square. Our objective would always be to get a
unique digit in a valid cell, but if we don't get it we'll at least get short length possible digit subsets in the cells where we expect a valid cell.
In this game we have got such possible digit subsets [1,4,7] in all three cells R3C2, R7C2 and R8C2 of column C2 by reduction of [5,8] from DS [1,4,5,7,8] in the column C2. Observe how it happened.
First we have formed the possible digit subset of [1,4,5,7,8] in five empty cells of column C2. These are the digits that are still missing in C2.
At the second phase, we have checked the other two parent zones (other than C2) or each of these three cells to see how many of these five digits [1,4,5,7,8] are present in the two zones combined for
each cell.
For R3C2, the two parent zones other than C2 are R3 and top left major square. Observe that combined effect of all digits present in these two zones is to reduce the DS [1,4,5,7,8] by [5,8]. Row R3
doesn't have any of these two, but the top left major square has both. As a result, the possible digit subset for R3C2 reduces to [1,4,7].
This is the mechanism of finding a valid cell or a short length DS by possible digit subset analysis or DSA technique.
Similarly for R7C2 with DS [1,4,5,7,8], [5,8] exists in both the parent zones R7 and bottom left major square. Thus the possible digit subset for R7C2 reduces to [1,4,7].
For R8C2 with possible digit subset [1,4,5,7,8], [5,8] again exists in both R8 and parent bottom left major square, reducing the possible digit subset DS in R8C2 also to [1,4,7].
Characteristics of the Cycle (1,4,7)
Instead a direct valid cell hits of unique digits we have got same DS [1,4,7] in three cells of the same column C2. What does it signify?
We call this digit pattern as the Cycle of (1,4,7) in column C2. Effectively, these three digits can be placed only in these three cells and in no other cell. If one of these cells, say R3C2 gets
finally 1 and another cell R7C2 gets finally 7, the third cell in the Cycle must have the the third digit 4 of the Cycle. Similarly for other possibilities in the final solution for these cells.
This as if these three digits Cycle through the three cells. That's why the name Cycle.
Okay, but what is the effect of a Cycle on the other digits in the empty cells of C2?
There we get finally the breakthrough: R6C2 8 by reduction of 1 from DS [1,8] because of this Cycle of [1,4,7].
This is the basic result-bearing effect of a Cycle - the Cycled digits are eliminated from the possible digit subsets of all other empty cells in the zones parent to the whole Cycle, in this case,
column C2.
What are the conditions a Cycle must satisfy?
A proper Cycle of possible digit subsets or DSs in empty cells of a zone have the following characteristics:
• Number of digits in the Cycle must be same as number of cells involved in the Cycle. There may be two, three, four or higher digit long Cycles, two digit Cycles being more common and easier to
• Each digit in a Cycle must appear in at least two cells involved in a Cycle.
• A Cycle may be formed in a column, a row or even in the cells of a major square without being in a row or column. In this last case, the Cycle will reduce the digits only in the DSs of the major
square of the Cycle.
• The digits involved in a Cycle cannot appear in any cell outside the Cycle, thus reducing the DSs of empty cells in the Cycled zone. For example, in this case of Cycle (1,4,7), digit 1 is reduced
from the DS [1,8] in R6C2 giving a valid cell hit R6C2 8.
• A proper Cycle cannot contain a second smaller length Cycle. For example, if DSs of R7C2 and R8C2 were just [4,7] these two cells would have formed a smaller length Cycle reducing the DS of R3C2
to a direct valid cell hit of R3C2 1.
• A valid cell hit by a Cycle formation always would have an alternate way to get the valid cell hit by a Parallel scan. For example in this case we have got the valid cell hit R6C2 8 because of
the formation of the Cycle (1,4,7) in C2. Alternately we could have got the same hit of R6C2 8 by parallel scan for 8 on empty cells of C2. 8 in top left major square and in bottom left major
square eliminate three of the five empty cells in C2 for 8. Also 8 in R4 eliminates the fourth empty cell R4C2 for 8 leaving the single cell R6C2 for 8. Often we don't form the Cycle and get a
valid cell hit just by a quick bit of parallel scan.
Cycles are valuable assets to have for reducing the overall uncertainty in a Sudoku game.
Solution to Sudoku level 3 game 8 Final Stage 3: Easy to find valid cells
With 4 in R9C4, R9C3 1 by reduction. With 5 in R7C4, R5C4 3 by reduction -- R5C6 5 by reduction. With 1 in R9C3, R5C3 4 by reduction -- R5C1 1 by reduction -- R3C2 1 by scan for 1 in C1.
With 7 in R8C4, R8C2 4 -- R7C2 7 -- R8C7 1 as leftover digit -- R7C8 4 as leftover digit -- R6C8 5 by reduction.
Possible digit subset DS [1,3] in R1C8, R3C8 in C8 and with 1 in C3, R3C8 3 by reduction -- R1C8 1 as leftover digit -- R3C6 7 by reduction -- R1C6 3 by reduction.
R3C4 8 as leftover digit in top middle major square.
R1C7 8 by scan R3, C9. R1C9 4 by DSA reduction of [5,9] in R1 from DS [4,5,9] in C9 and hence in R1C9 -- R3C1 4 by scan for 4 in R1 -- R1C1 7 as leftover digit in R1, C1 and top left major square.
With 4 in R1C9, R6C9 9 by reduction -- R6C7 4 by reduction -- R3C9 5 as leftover digit in C9 -- R3C7 9 as leftover digit in whole game.
Final solution shown below.
A new game for you to solve at Sudoku level 3
We leave you here with a new Sudoku level 3 game to solve.
Other Sudoku game plays you may like to go through at leisure
If you start at beginner level Sudoku solutions and go through higher levels step by step, you should become an Expert Sudoku solver.
Expert Sudoku: Play and learn how to solve harder Expert level Sudoku puzzles. Many of these are extremely difficult to solve with easy to understand detailed solutions.
Hard Sudoku level 4 games: Selected main category of hard Sudoku puzzles with very detailed solutions. Learn hard Sudoku Strategies and Techniques.
New York Times Hard Sudoku games with compact one-step solutions and a few NYT hard Sudoku games with a little more detailed solutions.
Medium Sudoku level 3: With these games, you really grow up from the beginner level. Learning to solve this level of games will give you confidence to solve really hard Sudoku games of higher
difficulty level. Don't skip this level of games while still learning Sudoku. These games are with detailed step by step solutions as well.
Next series of game to play are the main category of hard Sudoku level 4 games with detailed solutions.
Beginner level Sudoku: For beginners, Sudoku beginner level 1 and level 2 game solutions are ideal to start with. Learn to solve easiest Sudoku games and the most basic techniques of finding a hidden
single by row column scan and a naked single by possible digit subset analysis (DSA).
Enjoy playing Sudoku. | {"url":"https://suresolv.com/sudoku/sudoku-third-level-game-play-8","timestamp":"2024-11-06T04:30:18Z","content_type":"text/html","content_length":"41134","record_id":"<urn:uuid:95f42497-4674-41a2-880f-c5628e208f9c>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00084.warc.gz"} |
In how many different ways can the letters of the word MISSISSIPPI be
Question Stats:
69% 31% (02:16) based on 438 sessions
In how many different ways can the letters of the word MISSISSIPPI be arranged if the vowels must always be together?
A. 48
B. 144
C. 210
D. 420
E. 840
Ooh, a combinatorics problem with some fairly large numbers. The best technique for these is to mentally break them down into simpler problems. Otherwise, the solutions tend to sort of look like
magic tricks - sure, you can use the formula, but how are you supposed to know to use that formula, and not one of the many similar-looking ones?
There are four vowels, and they're all the same. Let's set those vowels aside: (IIII)
Now we have the letters MSSSSPP. How many ways can just those letters be arranged? Well, if they were all different, we could arrange them in 7x6x5x4x3x2x1 = 7! ways. However, they aren't all
different. For instance, four of the letters are (S). I'm going to color them to demonstrate why that matters:
one arrangement: M S1 S2 S3 S4 P P
a 'different' arrangement: M S2 S1 S3 S4 P P
We counted both of those arrangements, but we actually don't want to. All of the Ss look the same - they aren't really numbered - so we want those two arrangements to count as one. Because
there are 4x3x2x1 = 24 ways to order the different Ss, we want every set of 24 arrangements where the Ss are in the same place, to just count as 1 arrangement, instead. So, we can divide out the
extra possibilities by dividing our total by 24 (or 4!):
7! / 4!
Do the same thing with the two Ps. We still have twice as many arrangements as we need, since we've counted as if the two Ps were different, but they're actually the same. So, divide the total by 2:
7!/(4! x 2)
Now we have to put the (IIII) letters back in. They all have to go together. Start by looking at one of the arrangements of the other letters: PMSSPSS. Where can the four Is go? There are 8 places
where we can put them:
_ P _ M _ S _ S _ P _ S _ S _
So, for each arrangement, we have to multiply by 8, to account for the eight possible ways to put the vowels back in. Here's the final answer:
(8 x 7!) / (4! x 2) = (8 x 7 x 6 x 5 x 4 x 3 x 2) / (4 x 3 x 2 x 2) = 4 x 7 x 6 x 5 = 4 x 210 = 840. | {"url":"https://gmatclub.com/forum/in-how-many-different-ways-can-the-letters-of-the-word-mississippi-be-216328.html","timestamp":"2024-11-13T14:00:24Z","content_type":"application/xhtml+xml","content_length":"929139","record_id":"<urn:uuid:ce8d5611-5a51-4e8c-be07-d987d2423724>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00260.warc.gz"} |
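As a quick sanity check (my addition, not part of the original explanation), the same count can be verified with a few lines of Python; the glued-block trick below mirrors the reasoning above.

from itertools import permutations
from math import factorial

# Treat the four Is as one glued block, then count distinct orderings of the 8 items.
items = ["IIII", "M", "S", "S", "S", "S", "P", "P"]
brute_force = len(set(permutations(items)))  # 8! orderings with duplicates removed

# Same count from the formula above: 8 x 7! / (4! x 2!)
formula = 8 * factorial(7) // (factorial(4) * factorial(2))

print(brute_force, formula)  # 840 840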
Eureka Math Grade 3 Module 7 Lesson 8 Answer Key
Engage NY Eureka Math 3rd Grade Module 7 Lesson 8 Answer Key
Eureka Math Grade 3 Module 7 Lesson 8 Pattern Sheet Answer Key
multiply by 6 (1─5)
6 × 1 = 6
6 × 2 = 12
6 × 3 = 18
6 × 4 = 24
6 × 5 = 30.
Eureka Math Grade 3 Module 7 Lesson 8 Problem Set Answer Key
Question 1.
Fold and cut the square on the diagonal. Draw and label your 2 new shapes below.
ABCD is a Square.
Triangle ABC is one half of the square after it is folded and cut along the diagonal.
Triangle DEF is the other half.
Question 2.
Fold and cut one of the triangles in half. Draw and label your 2 new shapes below.
ABC is an Triangle.
Triangle ABD is one half of the triangle after it is folded and cut.
Triangle CEF is the other half.
Question 3.
Fold twice, and cut your large triangle. Draw and label your 2 new shapes below.
Triangle ABC is folded twice and then cut into two pieces.
Quadrilateral ADEC is the first piece of the twice-folded triangle.
Triangle CFG is the second piece.
Question 4.
Fold and cut your trapezoid in half. Draw and label your 2 new shapes below.
Trapezoid ABCD is folded and cut in half.
ABFE and CDHG are the two parts formed.
Both ABFE and CDHG are quadrilaterals.
Question 5.
Fold and cut one of your trapezoids. Draw and label your 2 new shapes below.
GDCH is one half of the trapezoid.
When this half is folded and cut, a square GDIJ and a triangle CHK are formed.
Question 6.
Fold and cut your second trapezoid. Draw and label your 2 new shapes below.
AEFB is the second trapezoid.
When trapezoid AEFB is folded and cut, a parallelogram AEOP and a triangle BFN are formed.
Question 7.
Reconstruct the original square using the seven shapes.
a. Draw lines inside the square below to show how the shapes go together to form the square. The first one has been done for you.
ABCD is the Square.
AC is the first line used to divide the Square.
EB is the second line drawn, forming triangles AEB and BEC.
GF is the third line drawn, forming triangle GFC.
GI is the fourth line drawn, forming triangle DGI.
IJ is the fifth line drawn, forming triangle IJA.
HE is the sixth line drawn, forming rectangles IJEH and HEFG.
We can put these seven shapes back together to form the original square.
b. Describe the process of forming the square. What was easy, and what was challenging?
I first put the two big triangles together to form the square, which was quite easy. Then I joined the remaining shapes to the square one by one, and that process was challenging.
Dividing the square into the two large triangles was the easy part. Forming the other, smaller shapes was more difficult, because reconstructing them back into the original shape got messy and challenging. Overall, it made me go back and check the shapes in the list one by one, which finally let me rejoin all seven different shapes into the original square.
Eureka Math Grade 3 Module 7 Lesson 8 Exit Ticket Answer Key
Choose three shapes from your tangram puzzle. Trace them below. Label the name of each shape, and describe at least one attribute that they have in common.
Triangles ABJ, BJC, CIF, FED, and KHG are the triangles in square ABCD.
AEKG is a parallelogram in square ABCD.
KFIH is a rectangle in square ABCD.
One attribute that the shapes KHG, KFIH, and AEKG have in common is that each contains a right angle.
Eureka Math Grade 3 Module 7 Lesson 8 Homework Answer Key
Question 1.
Draw a line to divide the square below into 2 equal triangles.
EFGH is a square.
EG is the line drawn in the square, dividing it into two equal triangles, EFG and GHE.
Question 2.
Draw a line to divide the triangle below into 2 equal, smaller triangles.
ABC is the Triangle.
CD is the line drawn, which divides the triangle into two equal triangles, ADC and BDC.
Question 3.
Draw a line to divide the trapezoid below into 2 equal trapezoids.
ABCD is the trapezoid.
EF is the line drawn, dividing it into two equal trapezoids, AEFD and BEFC.
Question 4.
Draw 2 lines to divide the quadrilateral below into 4 equal triangles.
EFGH is the quadrilateral.
Lines EG and HF are drawn in the quadrilateral, meeting at the center O and making four equal triangles: EOF, FOG, GOH, and HOE.
Question 5.
Draw 4 lines to divide the square below into 8 equal triangles.
ABCD is a Square.
AC and BD are joined by drawing two lines.
EF and GH are two more lines drawn through the center O.
Eight equal triangles are formed: AOH, HOB, BOF, FOC, COG, GOD, DOE, and EOA.
Question 6.
Describe the steps you took to divide the square in Problem 5 into 8 equal triangles.
First, I drew two lines (the diagonals), dividing the square into four equal triangles: ABC, ABD, BCD, and CDA.
Then I drew two more lines in between the triangles already formed.
This gives eight equal triangles meeting at the center O: AOH, HOB, BOF, FOC, COG, GOD, DOE, and EOA.
Return on Investment (ROI) Calculator
ROI stands for Return on Investment. It is a performance measure used to evaluate the efficiency or profitability of an investment. ROI is calculated by dividing the net profit (or benefit) of an
investment by its initial cost, and it is usually expressed as a percentage. The formula is:
ROI Formula
The formula to calculate ROI is:
\[ \text{ROI} (\%) = \left( \frac{\text{Net Profit}}{\text{Cost of Investment}} \right) \times 100 \]
• Net Profit is the total return from the investment minus the initial cost.
• Cost of Investment is the initial amount of money invested.
Example Calculation
Let’s consider an example where Tony W. Batholet invests in cleaning equipment for Tony Cleaning Services. The initial cost of the equipment is $5,000. After one year, the additional revenue
generated due to this investment is $7,500. The net profit would be calculated as follows:
\[ \text{Net Profit} = \text{Revenue} - \text{Cost of Investment} \]
\[ \text{Net Profit} = \$7,500 - \$5,000 = \$2,500 \]
Using the ROI formula:
\[ \text{ROI} (\%) = \left( \frac{\$2,500}{\$5,000} \right) \times 100 = 50\% \]
An ROI of 50% means that the investment in the cleaning equipment has returned 50% more than the initial cost of the investment. This indicates a profitable investment.
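For readers who want to script the same calculation, here is a minimal Python sketch; the function name and the example figures (taken from the Tony Cleaning Services example above) are purely illustrative.

def roi_percent(net_profit: float, cost_of_investment: float) -> float:
    """Return on Investment as a percentage: (net profit / cost of investment) * 100."""
    return (net_profit / cost_of_investment) * 100.0

revenue = 7_500.0
cost_of_investment = 5_000.0
net_profit = revenue - cost_of_investment           # $2,500
print(roi_percent(net_profit, cost_of_investment))  # 50.0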
• Net Profit: The profit earned from the investment after subtracting the initial cost.
• Cost of Investment: The initial amount of money spent on the investment.
Return on Investment (ROI) is a critical metric for assessing the profitability of an investment. By using the ROI formula, businesses like Tony Cleaning Services can evaluate the financial
returns of their investments and make informed decisions to maximize profitability. | {"url":"https://mathcrave.com/return-on-investment-roi/","timestamp":"2024-11-05T22:35:00Z","content_type":"text/html","content_length":"299617","record_id":"<urn:uuid:c5e037b6-8579-45fb-92ce-8e7a16142a8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00377.warc.gz"} |
Which Orbitals Are Not Allowed?
Therefore, the 1p orbital doesn’t exist. In the second shell, both 2s and 2p orbitals exist, as it can have a maximum of 8 electrons. In the third shell, only the 3s, 3p and 3d orbitals exist, as it
can hold a maximum of 18 electrons. Therefore, the 3f orbitals do not exist.
Which of following orbital is not possible?
1p, 2s, 3f and 4d. (i) The first shell has only one sub-shell, i.e., 1s, which has only one orbital, i.e., the 1s orbital. Therefore, the 1p orbital is not possible.
What orbitals are allowed?
There are four types of orbitals that you should be familiar with: s, p, d and f (sharp, principal, diffuse and fundamental). Within each shell of an atom there are certain combinations of these orbitals.
2d orbital can’t exist in an atom. We can explain it from its subsidiary quantum number and principal quantum number (n). The value ℓ gives the sub-shell or sub-level in a given principal energy
shell to which an electron belongs. … So, 2d orbital can’t exist.
Is 4f possible?
For any atom, there are seven 4f orbitals. The f-orbitals are unusual in that there are two sets of orbitals in common use.
Is 4s orbital possible?
Yes. In all the chemistry of the transition elements, the 4s orbital behaves as the outermost, highest-energy orbital.
Which sublevel is not allowed?
In the 1st energy level, electrons occupy only the s sublevel, so there is no d sublevel. In the 3rd energy level, electrons occupy only the s, p, and d sublevels, so there is no f sublevel.
Why does 2 F Subshell not exist?
In terms of quantum numbers, the 2f subshell does not exist because l cannot be greater than n − 1. An f subshell corresponds to l = 3, which requires n ≥ 4, so it cannot occur in the n = 2 shell.
Why 1p and 2d is not possible?
In the first shell, there is only the 1s orbital, as this shell can have a maximum of only 2 electrons. Therefore, the 1p orbital doesn’t exist. In the second shell, both 2s and 2p orbitals exist, as it can have a maximum of 8 electrons. … Therefore, the 3f orbitals do not exist.
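The rule behind these answers - l runs from 0 to n − 1 - is easy to check programmatically. The short Python sketch below is illustrative only (not from the original article) and simply enumerates the allowed subshell labels for each shell.

# Subshell letters indexed by the azimuthal quantum number l (j is skipped by convention).
SUBSHELL_LETTERS = "spdfghik"

def allowed_subshells(n):
    """Return the subshells allowed in shell n, i.e. l = 0 .. n - 1."""
    return [f"{n}{SUBSHELL_LETTERS[l]}" for l in range(n)]

for n in range(1, 5):
    print(n, allowed_subshells(n))
# 1 ['1s']
# 2 ['2s', '2p']        -> no 2d or 2f
# 3 ['3s', '3p', '3d']  -> no 3f
# 4 ['4s', '4p', '4d', '4f']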
Is 1s orbital possible?
At the first energy level, the only orbital available to electrons is the 1s orbital, but at the second level, as well as a 2s orbital, there are also orbitals called 2p orbitals.
What is the value of N and L for 4f orbitals?
n   l   Orbital Name
4   0   4s
4   1   4p
4   2   4d
4   3   4f
Is 5g Orbital possible?
For any atom, there are nine 5g orbitals. The higher g-orbitals (6g and 7g) are more complex since they have spherical nodes.
Why do we fill 3d before 4s?
Why is the 3d orbital filled before the 4s orbital when we consider transition metal complexes? According to the aufbau principle
Why is 3d bigger than 4s?
The 4s electrons are lost first followed by one of the 3d electrons. … The electrons lost first will come from the highest energy level, furthest from the influence of the nucleus. So the 4s orbital
must have a higher energy than the 3d orbitals.
Is 4s higher than 3d?
The 4s electrons are lost first followed by one of the 3d electrons. … The electrons lost first will come from the highest energy level, furthest from the influence of the nucleus. So the 4s orbital
must have a higher energy than the 3d orbitals. | {"url":"https://bizzieme.com/which-orbitals-are-not-allowed/","timestamp":"2024-11-02T07:48:57Z","content_type":"text/html","content_length":"62795","record_id":"<urn:uuid:39a229b0-5c87-4cf3-b030-d54bdccc43ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00632.warc.gz"} |
XGboost Remove Outliers With Isolation Forest
Outliers in training data can negatively impact the performance and generalization of XGBoost models.
These anomalous data points, which significantly deviate from the majority of the data, can skew the model’s learned parameters and lead to suboptimal results.
The Isolation Forest algorithm is an unsupervised method for identifying outliers that works by isolating anomalies in the data.
This example demonstrates how to use the Isolation Forest algorithm to detect and remove outliers from a dataset, followed by training two XGBoost models—one on the original data (with outliers) and
another on the cleaned data (outliers removed).
By comparing the performance of these models, we can observe the impact of outliers on the model’s accuracy and generalization.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import IsolationForest
import numpy as np
import xgboost as xgb
# Generate synthetic dataset with outliers
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=2, random_state=42)
np.random.seed(42)  # make the injected outliers reproducible
outlier_indices = np.random.choice(len(X), size=50, replace=False)
X[outlier_indices] += np.random.normal(loc=0, scale=5, size=(50, 10))
# Use Isolation Forest to identify and remove outliers
iso_forest = IsolationForest(n_estimators=100, contamination=0.05, random_state=42)
outlier_labels = iso_forest.fit_predict(X)
outlier_mask = outlier_labels != -1
X_cleaned, y_cleaned = X[outlier_mask], y[outlier_mask]
# Split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train_cleaned, X_test_cleaned, y_train_cleaned, y_test_cleaned = train_test_split(X_cleaned, y_cleaned, test_size=0.2, random_state=42)
# Train XGBoost models on original and cleaned data
model_original = xgb.XGBClassifier(n_estimators=100, learning_rate=0.1, random_state=42)
model_original.fit(X_train, y_train)
model_cleaned = xgb.XGBClassifier(n_estimators=100, learning_rate=0.1, random_state=42)
model_cleaned.fit(X_train_cleaned, y_train_cleaned)
# Evaluate models on test sets
y_pred_original = model_original.predict(X_test)
y_pred_cleaned = model_cleaned.predict(X_test_cleaned)
accuracy_original = accuracy_score(y_test, y_pred_original)
accuracy_cleaned = accuracy_score(y_test_cleaned, y_pred_cleaned)
print(f"Test accuracy (with outliers): {accuracy_original:.4f}")
print(f"Test accuracy (outliers removed): {accuracy_cleaned:.4f}")
The code snippet first generates a synthetic dataset using scikit-learn’s make_classification function and adds outliers to the dataset by sampling from a normal distribution with a larger scale
parameter. An Isolation Forest model is then instantiated and fitted to the dataset. The model identifies outliers by assigning them a label of -1, while inliers are labeled 1. The outliers are
removed from the dataset using a boolean mask.
Next, the original dataset (with outliers) and the cleaned dataset (outliers removed) are split into train and test sets. Two XGBoost classifiers are instantiated and trained on the respective
training sets. Finally, the models’ performance is evaluated on the corresponding test sets using the accuracy metric, and the results are printed for comparison.
By removing outliers from the training data using the Isolation Forest algorithm, the XGBoost model can learn more robust and generalizable patterns, potentially leading to improved performance on
unseen data. This unsupervised approach to outlier detection and removal offers an alternative to methods like Z-score, which relies on statistical assumptions about the data distribution. However,
the impact of outliers on model performance may vary depending on the dataset and the problem at hand. It is essential to carefully consider the nature of the outliers and the specific requirements
of the application before deciding on an outlier removal strategy. | {"url":"https://xgboosting.com/xgboost-remove-outliers-with-isolation-forest/","timestamp":"2024-11-02T17:33:23Z","content_type":"text/html","content_length":"13175","record_id":"<urn:uuid:082ea7a6-49fe-410f-aede-3dba70785b16>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00536.warc.gz"} |
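If a strictly like-for-like comparison is wanted, one option (not shown in the original snippet) is to score both models on the same cleaned test set, so that the injected outliers themselves do not distort the evaluation:

# Optional: evaluate both models on the same cleaned test set for a fair comparison.
y_pred_original_on_clean = model_original.predict(X_test_cleaned)
print(f"Original model on cleaned test set: "
      f"{accuracy_score(y_test_cleaned, y_pred_original_on_clean):.4f}")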
Introduction to Data Analysis
Anyone involved in operations, project management, business analysis, or management who needs an introduction to Data Analysis, would benefit from this class.
Course Objectives
• Learn the terms, jargon, and impact of business intelligence and data analytics.
• Gain knowledge of the scope and application of data analysis.
• Explore ways to measure the performance of and improvement opportunities for business processes.
• Be able to describe the need for tracking and identifying the root causes of deviation or failure.
• Review the basic principles, properties, and application of Probability Theory.
• Discuss data distribution including Central Tendency, Variance, Normal Distribution, and non-normal distributions.
• Learn about Statistical Inference and drawing conclusions about a Data Population.
• Learn about Forecasting, including introduction to simple Linear Regression analysis.
• Learn about Sample Sizes and Confidence Intervals and Limits, and how they influence the accuracy of your analysis.
• Explore different methods and easy algorithms for forecasting future results and to reduce current and future risk.
Who should attend
• Business Analyst, Business Systems Analyst, Staff Analyst
• Those interested in CBAP®, CCBA®, or other business analysis certifications
• Systems, Operations Research, Marketing, and other Analysts
• Project Manager, Team Leads, Project Leads, Project Assistants, Project Coordinators
• Those interested in PMP®, CAPM®, or other project management certifications
• Program Managers, Portfolio Managers, Project Management Office (PMO) staff
• Data Modelers and Administrators, DBAs
• Technical & other Subject Matter Experts (SMEs)
• IT Staff, Manager, VPs
• Finance Staff, Manager
• Operations Analyst, Supervisor
• External and Internal Consultants
• Risk Managers, Operations Risk Professionals
• Operations Managers, Line Managers, Operations Staff
• Process Improvement, Compliance, Audit, & other Governance Staff
• Thought Leaders, Transformation & Change Champions, Change Manager
• Executives, Directors, & other senior staff exploring cost reduction and process improvement options
• Executive and Administrative Assistants and Coordinators
• Job seekers and those who want to show dedication to data analysis and process improvement
• Leaders at all levels who wish to increase their Data Analysis capabilities
Although it is not mandatory, students who have completed the self-paced Introduction to R eLearning course have found it very helpful when completing this course.
Part 1: What are BI and DA?
• Definitions of BI
• History of BI
• How is BI used to help Businesses
• Definition of DA
• The relationship between BI and DA
Part 2: Data Here, There, and Everywhere!
• Oracle study on business data preparedness
• Overview of Study Findings - overwhelmed by volume of data and inability to utilize data effectively
• Possible solutions to data overflow problems
Part 3: Got Data? The Unique Role of the Data Analyst
• Role of a Data Analyst
• Skill set required to be an effective Data Analyst
• Exercise: "Channeling Your Inner Analyst" - Students are told to imagine receiving a memo from their supervisor explaining that the company is downsizing. They are expected to take on additional responsibilities including doing data analysis. They must rewrite their current job description to include the new data analyst duties.
FACTS or Feelings: Your Choice
As data becomes more widely available, businesses are finding more success in adopting a fact-based decision model rather than relying on traditional intuition alone. In this module, we examine more closely the two types of decision models businesses use as well as the benefits of the fact-based model. We cover the steps of the Rational Decision Model, a fact-based method for decision making.
Part 4: Fact-Based Decision-Making Process
• The two types of Decision Models Businesses use
• The Benefits of Fact-Based Decision Making
• Rational Decision Model: Six-Step Method
• Pal's Diner: An Example of how the Rational Model is used in practice
• Exercise: "Who's The Boss?" - Students are divided into groups and told to imagine that they are the CEO of their own company. They define a business-related decision that they need to make and then apply the steps of the Rational Decision Model to arrive at the final conclusion.
'BIG DATA' ANATOMY
In this module, we revisit the Big Data trend with a more detailed focus. We begin by defining the buzzword "BIG DATA", examining its core attributes, and outlining the factors that contribute to data being 'big'. We explore how businesses collect structured and unstructured data, and the challenges they face in storing and effectively using both types of data.
Part 5: Big Data Anatomy
• The Attributes of Big Data
• Definition of Big Data
• The 4 V's of Big Data
• Structured versus Unstructured Data
• The Challenges of Big Data
• Exercise: "Camp WoeBeData" - Students are asked to describe some of the big data challenges that their companies face and to outline what steps are being taken to address the problems.
GETTING TO KNOW YOUR DATA
In order to better understand how to analyze data, we first have to comprehend its depth. This requires drilling deep beneath the server it is located on and understanding its composition. Assume we are given a structured data set with labeled columns and completed rows. There are plenty of ways to summarize the story behind the data, but we cannot dive in without first getting to understand its fundamental structure. We begin by classifying the collected data as quantitative or qualitative. Then we further classify our column variables according to the way data is measured: nominal, ordinal, interval, or ratio. It is only after understanding this classification that we are able to proceed to the next step of choosing the appropriate analysis techniques which correspond to nominal, ordinal, interval or ratio variables.
Part 6: Getting to Know Your Data
• Data Types: Qualitative versus Quantitative
• Taking a Closer Look: Data Measurement
• Four Types of Data Variables
• Definition and examples of Nominal Variables: Name only
• Definition and examples of Ordinal Variables: Order Matters
• Definition and examples of Interval Variables
• Definition and examples of Ratio Variables
• Summary of Statistics/Operations that can be performed on each type
• Exercise: "Marketing to Low Renters" - Students are told to put on their data analyst thinking caps. They have been employed as a junior data analyst for a Marketing Company whose goal is to make a marketing campaign for a client who plans on targeting the 'needy' population. Students are given a public housing data set and told to classify each variable according to its measurement type.
DATA VISUALIZATION
A picture is worth a thousand words, and there definitely is no exception when it comes to summarizing data. This module is dedicated to highlighting the importance of visualizing data, and how the human eye depends on visual representation to get a quick sense of data relevance. Visual representation is the audience's first impression of the data and forms a crucial step in inviting and maintaining a genuine interest in a subject matter. We demonstrate how to create colorful, easy to understand tables, charts, and graphs that aid in helping us convey the story behind the data set being analyzed.
Part 7: The Fundamental Ways we use data Visualization techniques
• The five ways we use data visualization techniques
Part 8: Displaying Tabular Data in Excel
• How to create custom tables in Excel
• How to Sort/Filter tabular data
• How to create and manipulate pivot tables
Part 9: Using Charts and Graphs to Communicate Data
• How to create Pie, Column, and Line charts using Excel
• Communicating effectively using different chart types
• How to choose the correct chart to display the correct data type
• Exercise: "Table Mining" - Students develop tables to summarize trends in a data set related to Low Rent and Section 8 housing.
• Exercise: "Charting Poverty" - Students develop charts and graphs to summarize the poor housing epidemic in a public housing data set.
NUMERICAL DATA SUMMARIES
Another way that data analysts summarize data is by providing a single number, or summary statistic, that has meaning. This module explores how the mean, median, and mode can be used to summarize the center of discrete and continuous grouped data. The range, standard deviation, and inter-quartile range measure the dispersion in the data set and provide information about how data points are spread.
Part 10: Using Numerical Descriptives to Summarize Data
• Measures of Centrality: Mean, Median, Mode
• Format of Data Values: Grouped Discrete and Grouped Continuous
• Formulas for the Mean
• Examples: Applying 3M's to Grouped Discrete and Grouped Continuous Data
• Measures of Spread: Standard Deviation, Range, Inter-quartile Range
• Examples: Applying Measures of Spread to Grouped Discrete and Grouped Continuous Data
• Exercise: "Faulty Wear" - Students use mean, median, and mode to summarize information about returns from a department store.
• Exercise: "Faulty Wear: Take 2" - Students use measures of spread to summarize the distribution of faulty garments from a department store.
QUANTIFYING UNCERTAINTY
Probability is not only the most important data analysis foundation, but it is by far the strongest measure that businesses can use to quantify uncertainty. There are risky decisions that businesses encounter when investing in certain stocks and taking a chance on whether their value will rise. The insurance industry calculates the most probable life expectancy for the population and bases its rates on that uncertainty. There is an abundance of examples that illustrate why understanding probability benefits the business industry, so this module is designed to expose students to a solid understanding of the topic. We use simple, easy to understand examples to introduce students to traditional and conditional probability. Then we follow up with other business probability applications that involve relative frequency and expectation.
Part 11: Probability: Quantifying Uncertainty
• Origin of Probability
• Probability: Examples of Business Applications
• The traditional definition of Probability
• Simple Computation: The TopBottomFraction Method
• How to calculate probabilities from contingency tables
• How to calculate conditional probability from contingency tables
• Applying probability to calculate relative frequency
• Applying probability to calculate the expected value
• Using Expected Value in Decision Making
• Exercise: "Pocket Probability" - Students practice calculating basic probabilities using the change in their pocket.
• Exercise: "Magazine Money" - A magazine subscription service conducted a survey to study the relationship between the number of subscriptions per household and family income. Students use contingency tables to answer questions related to conditional probability.
• Exercise: "Home Sweet Home" - Students consider an international dataset involving the different types of home dwellings located in Bradfield, England. They use information from contingency tables to calculate regular and conditional probabilities.
• Exercise: "Peck Up Your Speed" - Students analyze the relative frequency of workers' typing speeds.
• Exercise: "Rent Me" - Students calculate the expected value in order to assist the Callow Corporation in making an important decision regarding leasing computer equipment.
NORMALITY
If you look at the graph of your data and notice that it looks like a bell-shaped curve, it probably follows a normal distribution. The manufacturing industry, for example, measures volumes and weights of products. Data collected from these instances are normally distributed. In fact, if we take any process and calculate its mean value enough times, then its mean value is also normally distributed. This module examines the famous 'Normal Distribution' and how it can be used to provide useful information about the probability of certain events.
Part 12: The Normal Distribution
• Examples of Normally Distributed Data Variables
• Characteristics of the Normal Distribution
• Interpreting the Empirical Rule
• Components of the Normal Distribution: Probabilities and X values
• Using the NORMDIST function in Excel to calculate probability from a normal distribution
• Using the NORM.INV function in Excel to calculate X values related to a normal distribution
• Exercise: "Fill 'R' Up" - Students help a manufacturer use the normal distribution of volumes from a cup dispenser to calculate probabilities of certain events.
ASSOCIATION AND PREDICTION
If we have data measurements for at least two variables, it is natural to ask if there is some relationship between the two. This module takes a look at the two strongest measures of association, correlation, and regression, and explores how to use them to quantify the relationship. It also examines how regression output can be used to predict future observations.
Part 13: Correlation and Regression
• Definition of Correlation and Regression
• The relationship between Correlation and Regression
• Correlation Coefficient: Values
• Examples of Correlation
• Interpretation of a Regression Equation
• Step-by-Step example of How to Do a Regression Analysis
• Exercise: "Paid Sickouts" - Students use correlation and regression to help a company determine the relationship between the number of sick days employees took and the wages they earned.
Geotechnical Engineering: Design And Plaxis Modelling Challenge
Get free samples written by our Top-Notch subject experts for taking online assignment help
Executive Summary
In this assignment, the design of the RC-L-shaped cantilever and the strip foundation is described briefly with proper justification. The analysis of the design is based on the different failure
mechanism techniques that are mentioned in the assignment. It has been observed that the cantilever system is performed in different conditions under the parameters of the failure mechanism. The
design of the model is done by using the PLAXIS simulation technique. The mathematical expressions for the bearing capacity are discussed in this assignment. The cantilever retaining wall is designed
according to assumptions that are considered while designing the overall system. An axis-symmetric model of Single pile problem definition has been assessed with the PLAXIS software solutions.
Shallow raft foundation problem definition has also been implied with the calculation of un-drained and drained cohesion.
Question 1
1.1 Problem Definition
The first question is regarding the design of the RC-L-shaped cantilever with different optimum conditions. The cantilever is a structural element that is designed horizontally and is supported in
one end. The cantilever retaining wall is shown in the assignment. As commented by YAVAN et al. (2020, p.552), the main task of this question is to design a suitable cantilever that satisfies the
failure mechanisms such as overturning, sliding, and bearing capacity. Hence, some numerical calculations are performed to develop the effect of the failure mechanism parameters. Besides, the concept
of wall material, interface, and foundation slab are described briefly in this section. Different type of design parameters is introduced to design the cantilever with the assumptions. An excel sheet
is developed in order to record the calculations for the design of the cantilever. The justification of the optimum design of RC-L-shaped cantilever is done in this question.
1.2 Problem analysis
The analysis of this design process is performed using different numerical solutions. The optimum design is based on the bearing capacity, overturning, and sliding parameters. The design is developed
in such a way that it should maximize the benefits with less use of materials (Yadav et al. 2021, p.230). The lateral pressure on the backfill side is what tends to generate overturning about the base, while the bearing capacity limits the pressure the foundation soil can carry. When the resultant vertical load acts eccentrically, the bearing pressure reaches its maximum under the compressed edge (toe) of the footing.
The formula for bearing capacity is,
Bearing capacity= Rv / (0.75 * L – 1.5 * e)
L is the length of the foundation base, Rv is the resultant vertical force on the base, and e is the eccentricity of that resultant.
The overturning is the moment that is associated with the axial load which is placed in the RC-L-shaped cantilever.
The formula of the overturning moment is,
Overturning moment = Σ (horizontal force × its lever arm about the base of the footing)
The formula of sliding is,
Sliding = (resisting force/driving lateral force)
The analysis of this design is based on the above formulas.
Figure 1.1: Diagram of Cantilever retaining wall
(Source: Self Created)
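To make the three failure-mechanism checks concrete, the short Python sketch below wraps the formulas above into one function. The input names, the Rankine active-pressure assumption, and the target safety factors (2.0 against overturning, 1.5 against sliding) are illustrative assumptions on my part, not values taken from the brief.

import math

def retaining_wall_checks(H, gamma_fill, phi_deg, W, x_bar, mu, B):
    """Rough stability checks for an L-shaped cantilever retaining wall.

    H          height of retained backfill (m)
    gamma_fill unit weight of the dry, cohesionless backfill (kN/m^3)
    phi_deg    backfill friction angle (degrees)
    W          total vertical load: wall + base + soil on heel (kN per m run)
    x_bar      distance of W's resultant from the toe (m)
    mu         base friction coefficient
    B          base width (m)
    """
    # Rankine active earth pressure coefficient and thrust (level backfill assumed).
    Ka = math.tan(math.radians(45 - phi_deg / 2)) ** 2
    Pa = 0.5 * Ka * gamma_fill * H ** 2          # kN per m run, acting at H/3

    overturning = Pa * H / 3                      # driving moment about the toe
    restoring = W * x_bar                         # resisting moment about the toe

    fos_overturning = restoring / overturning
    fos_sliding = mu * W / Pa
    eccentricity = B / 2 - (restoring - overturning) / W

    return {
        "FoS overturning (want >= 2.0)": fos_overturning,
        "FoS sliding (want >= 1.5)": fos_sliding,
        "eccentricity e (want <= B/6)": eccentricity,
    }

# Illustrative (assumed) input values only:
print(retaining_wall_checks(H=5.0, gamma_fill=18.0, phi_deg=30.0,
                            W=300.0, x_bar=1.6, mu=0.5, B=3.0))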
1.3 Results and discussion
The results are obtained by using different formulas of the parameters. The optimum geometrical design is developed with appropriate length in the foundation slab. The design of the cantilever system
is designed in such a way that the overall system should maintain the failure mechanisms such as overturning, sliding, and bearing capacity. As stated by Uray et al. (2019, p.112), the numerical
calculations are performed for each of the parameters which depend on the overall mechanism of the cantilever system. On the other hand, the design of the cantilever depends on the economical aspects
as the developed design should maximize the profit with less use of the components. Therefore, it has been observed that the RC-L-shaped cantilever is designed to manage the dry, cohesion less
backfill parameters which are important in the retaining wall problem. The readers would be able to collect practical information regarding the different parameters of the cantilever system.
1.4 Conclusion
In this section, the design of the RC-L-shaped cantilever is described with practical calculations. The concept of the different parameters is discussed in this section. The three different failure
mechanisms are discussed in this section. Overturning, sliding, and the bearing capacity of a cantilever are described with mathematical expression and practical considerations. The profit of the
cantilever system is considered as the main parameter while designing the RC-L-shaped cantilever. The different assumptions for the cantilever are mentioned in order to design the retaining wall of
the cantilever properly with better requirements. There is a part where the model is developed with different friction angles, wall materials, and foundation soil materials. The interface property is
analyzed to choose different foundation soil values through which the design is developed.
Question 2
2.1 Problem Definition
The model strip foundation is designed according to the requirements for the failure mechanisms. The numerical calculation of the bearing capacity is performed for different cases such as under
drained loading with under drained cohesion, detained loading with drained cohesion, and increase value of friction. As commented by Chanmee et al. (2016, p.1077), the comparison of the results is
done on the basis of PLAXIS calculation and theoretical calculations. This comparison is described using a single graph that represents the friction angle of the foundation soil. The results for the
bearing capacity are displayed using graphical representation which is obtained from the PLAXIS simulation. Therefore, the design of the model strip foundation is designed based on the above
conditions. The same analysis of the system is performed by increasing the foundation level of the strip. Again the comparison is done based on the results.
2.2 Problem analysis
In this section, the different numerical calculations are performed for different cases under the failure mechanisms. The foundation system is developed using these parameters.
Bearing capacity is the maximum bearing stress that can be carried beneath the strip foundation. When the founding level of the strip is increased, the surcharge term in the expressions below changes accordingly; throughout, D is the foundation depth and γ is the unit weight of the soil.
According to Alander et al. (2020, p.2019), the formula for bearing capacity is,
q_f = c·N_c + q_o·N_q + ½·γ·B·N_γ
For un-drained loading, calculations are in terms of total stresses; c is taken as the un-drained shear strength (s_u), N_q = 1.0 and N_γ = 0.
c = apparent cohesion intercept
q_o = γ·D (i.e. unit weight × founding depth)
D = founding depth
B = breadth of foundation
γ = unit weight of the soil removed.
For drained loading conditions, the bearing capacity is based on the stress on the strip foundation. The formula of bearing capacity is,
q_f = c′·N_c + q_o·N_q + ½·γ·B·N_γ
For drained loading, calculations are in terms of effective stresses; φ′ > 0 and N_c, N_q and N_γ are all > 0.
q_f = 1.8·(N_q − 1)·tanφ = 0.317 m^-2
q_f = 1.8·(N_q − 1)·tanφ = 0.655 m^-2
q_f = 1.8·(N_q − 1)·tanφ = 1.039 m^-2
(the three values correspond to the three friction angles considered, in increasing order)
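As a cross-check on these hand calculations, a short Python sketch of the strip-footing bearing capacity is given below. It uses the classical Reissner/Prandtl expressions for N_q and N_c together with the N_γ = 1.8(N_q − 1)tanφ approximation quoted above; the example input values are assumptions of mine rather than data from the brief.

import math

def bearing_capacity_strip(c, phi_deg, gamma, D, B):
    """Ultimate bearing capacity q_f = c*Nc + q_o*Nq + 0.5*gamma*B*Ngamma (strip footing)."""
    phi = math.radians(phi_deg)
    if phi_deg == 0:                        # un-drained case: Nq = 1, Ngamma = 0, Nc = 2 + pi
        Nq, Nc, Ng = 1.0, 2.0 + math.pi, 0.0
    else:
        Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.radians(45) + phi / 2) ** 2
        Nc = (Nq - 1.0) / math.tan(phi)
        Ng = 1.8 * (Nq - 1.0) * math.tan(phi)   # approximation used in the text
    q_o = gamma * D                          # surcharge at founding level
    return c * Nc + q_o * Nq + 0.5 * gamma * B * Ng

# Illustrative (assumed) inputs: B = 2 m strip, D = 1 m, gamma = 18 kN/m^3
print(bearing_capacity_strip(c=50.0, phi_deg=0.0, gamma=18.0, D=1.0, B=2.0))   # un-drained, c = s_u
for phi in (10.0, 20.0, 30.0):
    print(phi, bearing_capacity_strip(c=5.0, phi_deg=phi, gamma=18.0, D=1.0, B=2.0))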
Figure 2.1: Diagram of Shallow raft foundation
(Source: Self Created)
2.3 PLAXIS modelling
Figure 2.2: PLAXIS Design
(Source: Self-created)
The diagram has been developed using the PLAXIS software and this defines the effective system measures in the software design tools. This design has effectively derived the construction of the
Shallow raft foundation and these are effective in terms of analysing the structure with different materials (Lam, 2018, p.181). A design is developed according to the design specifications and these
are effective in terms of deriving the idea on the system performance. These can be relevant to the implications of the system assessments and the following results have been developed that provides
effectiveness for configurations. The configurations are useful to derive the relevant data from the system.
2.4 Results & Discussion
Figure 2.3: PLAXIS Output
(Source: Self-created)
The output is developed based on the configuration of the system and these are subjective and relevant with the implications of the system model. These assessments are subjective in terms of
determining a difficult approach for the system. An effective system is developed based on the materials of construction and these assessments are necessary for better construction of the system. The
output is also effective with the objectives of the system designs and the relevance of the system is also implied with the implications of the system structure. The results are hence based on
several implications and these results output on the results.
Figure 2.4: PLAXIS Output
(Source: Self-created)
The report of the construction has been depicted in terms of determining the ideas on the developed construction. These constructions are effective in determining the issues based on the system
development and the approaches can be rationalized with the design of the beam system, and the reports on the materials also show an effective result (Som, 2017, p.79). Thus, an effective system
construction is possible and these can be relevant in terms of developing a suitable system and its performance. These can also be relative to the objections and implications of the systems and the
rational development of the system is possible. The designs are hence effective with the provided material.
2.5 Conclusion
In this section, the design of the model strip foundation is developed under different conditions. The numerical analysis of the bearing capacity is performed according to the ultimate conditions
that are mentioned in the question. Some graphical representation is done based on the analysis of the loading conditions. Therefore, the readers would be able to achieve detailed information
regarding the ultimate bearing capacity and its applications. The calculation of bearing capacity, friction angle, and other parameters are done properly with mathematical expressions to simplify the
analysis of the design process. PLAXIS simulation is used to design the proposed design to demonstrate the operation of the system.
Question 3
3.1 Problem definition
The problem is based on an axisymmetric model of the provided structure: a vertical single pile. The pile is a reinforced concrete pile and is analysed for several cases. These cases include undrained loading, with undrained shear strength cu and φ = 0, and separate drained-loading conditions with drained cohesion c′ and friction angles φ′ = 10, 20, and 30 degrees. The main objective of the assessment is to derive the ultimate bearing capacities (Zhao and You, 2018, p.11), which are examined graphically as a function of the soil friction angle φ′. These results cover both the drained and the undrained cohesion cases and are also used to set up PLAXIS simulations based on the data derived from the provided tables.
3.2 Problem analysis
The Mohr–Coulomb shear strength relation s = c + σ tan φ is used, where s = shear stress, c = cohesion, σ = normal stress, and φ = angle of friction. From this relation the strength parameters for each loading case are obtained and carried into the PLAXIS simulation. The single pile problem is then defined as follows.
Figure 3.1: Single pile problem definition
(Source: Developed by the Researcher)
The problem is developed for a single pile with L = 16 m, D = 1 m, cu = 140 kPa, and c′ = 15 kPa. The results are then derived from the computed stresses and strains, covering both the drained and the un-drained cohesion cases.
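For a rough hand estimate to compare against the PLAXIS output, the undrained (total stress, α-method) pile capacity can be sketched as below; the adhesion factor α = 0.5 and the end-bearing factor N_c = 9 are common textbook assumptions, not values specified in the brief.

import math

def undrained_pile_capacity(L, D, cu, alpha=0.5, Nc=9.0):
    """Ultimate axial capacity of a single pile under undrained loading (total stress method).

    Q_ult = shaft resistance + base resistance
          = alpha * cu * (pi * D * L) + Nc * cu * (pi * D**2 / 4)
    """
    shaft = alpha * cu * math.pi * D * L
    base = Nc * cu * math.pi * D ** 2 / 4.0
    return shaft + base

# Pile from the problem definition: L = 16 m, D = 1 m, cu = 140 kPa
q_ult = undrained_pile_capacity(L=16.0, D=1.0, cu=140.0)
print(f"Estimated ultimate capacity: {q_ult:.0f} kN")   # roughly 4,500 kN with these assumptions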
3.3 PLAXIS modelling
Figure 3.2: PLAXIS Design
(Source: Self-created)
The design has been developed based on the system approaches and the Single pile system is effectively implied with the implications of the assessments. These can be relevant to the implications of
the system assessments and the following results have been developed that provides effectiveness for configurations (Jostad et al. 2020, p.699). The configurations are useful to derive the relevant
data from the system. These designs are hence realized with the development of the beam. Sand is used as the material for primary development. A design is developed according to the design
specifications and these are effective in terms of deriving the idea on the system performance.
3.4 Results & Discussion
Figure 3.3: PLAXIS Output
(Source: Self-created)
The design outputs have been implied and these are required to be analysed with the approaches to system performance. These assessments are subjective in terms of determining a difficult approach for
the system (Malekjafarian et al. 2021, p.1015). An effective system is developed based on the materials of construction and these assessments are necessary for better construction of the system.
Thus, an effective design needs to be developed for this particular system and the other parameters are also required to be assessed. Thus, an effective system construction is possible and these can
be relevant in terms of developing a suitable system and its performance. These can also be relative to the objections and implications of the systems and the rational development of the system is
Figure 3.4: PLAXIS Output
(Source: Self-created)
The relevance of this modified output can be subjective in terms of relevant development issues and these measures are objective in terms of mitigating the issues. The development is a new system
with the axis-symmetric model in PLAXIS 2D (Zhussupbekov et al. 2019, p.144). The designs are hence effective in providing ideas on the system performance. The capability of the system issues is
relative in terms of mitigating these issues and this is relevant in terms of assessments.
The assessment has been effective in terms of deriving the ideas on the system parameters like shear stress and normal stress. These are subjective to derive the undrained cohesion and drained
cohesion from the angles of the friction and also, the numerical and PLAXIS simulations have been developed. These assessments have been subjected to derive the stress and strain in the system and
these are effectively analysed with normal calculations and simulation assessments. The results have described that the system is effective with the development of undrained and drained cohesion.
These can be analysed with the implementation of the system performance. The nature and characteristics of the system are effectively assessed and implied. The differential calculations have also
been developed from the hand-calculation and PLAXIS simulation. These results have provided a better idea of the system Single pile and these can be subjected to analyse the effectiveness of the
Alander, P., Perea-Lowery, L., Vesterinen, K., Suominen, A., SÄilynoja, E. and Vallittu, P.K., 2020. Layer structure and load-bearing properties of fibre reinforced composite beam used in cantilever
fixed dental prostheses. Dental Materials Journal, pp.2019-428.
Chanmee, N., DT, B., Hino, T. and LG, L., 2016. Analysis and simulations of erosion protection designs using the PLAXIS 2D and Slide programs. Japanese Geotechnical Society Special Publication, 2
(29), pp.1075-1078.
Jostad, H.P., Dahl, B.M., Page, A., Sivasithamparam, N. and Sturm, H., 2020. Evaluation of soil models for improved design of offshore wind turbine foundations in dense sand. Géotechnique, 70(8),
Lam, A.K., 2018. An engineering solution for a hillside project in Hong Kong. Geotechnical Research, 5(3), pp.170-181.
Malekjafarian, A., Jalilvand, S., Doherty, P. and Igoe, D., 2021. Foundation damping for monopile supported offshore wind turbines: A review. Marine Structures, 77, p.102937.
Som, N., 2017. Geotechnical Challenges of Kolkata Metro Construction. GEOTECHNICAL ENGINEERING, 48(2), pp.72-79.
Uray, E., Çarbaş, S., Erkan, İ.H. and Tan, Ö., 2019. Parametric investigation for discrete optimal design of a cantilever retaining wall. Challenge Journal of Structural Mechanics, 5(3), pp.108-120.
Yadav, P., Singh, D.K., Dahale, P.P. and Padade, A.H., 2021. Analysis of retaining wall in static and seismic condition with inclusion of geofoam using Plaxis 2D. In Geohazards (pp. 223-240).
Springer, Singapore.
Yavan, O., Onur, M.İ. and Tuncan, A., 2020. Behavior of cantilever retaining walls under static and dynamic loads constructed in saturated clay soil.
Zhao, L. and You, G., 2018. Stability study on the northern batter of MBC Open Pit using Plaxis 3D. Arabian Journal of Geosciences, 11(6), pp.1-11.
Zhussupbekov, A., Omarov, A. and Tanyrbergenova, G., 2019. Design of anchored diaphragm wall for deep excavation. International Journal, 16(58), pp.139-144. | {"url":"https://www.newassignmenthelpaus.com/geotechnical-engineering-design-and-plaxis-modelling-challenge","timestamp":"2024-11-02T21:16:30Z","content_type":"text/html","content_length":"348564","record_id":"<urn:uuid:52c38ef9-b599-4acc-b069-4af43d7cc15f>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00579.warc.gz"} |
High vibration damping in in situ In–Zn composites
Indium-zinc in situ composites were fabricated and their viscoelastic properties studied over 8.5 decades of frequency. Material with 5% indium by weight was found to have a stiffness damping product
(the figure of merit for damping layers) of 1.9 GPa at 10 Hz; 3 times better than the peak of polymer damping layers and over a wider frequency range. Material with 15% indium had a stiffness damping
product of 1.8 GPa. The indium segregated in a platelet morphology, particularly favorable for attaining high damping from a small concentration, as predicted by viscoelastic composite theory.
... A common measure to reduce the vibrations of a structure is to place damping material in the connections between its components. As most metals provide only poor damping performance if the
deformations are small [1], polymers with high loss factors like polyurethane elastomers are often employed for this purpose. Their relatively small stiffness, however, can lead to even larger
vibrations in directly excited components [2]. ...
... Metals with good damping properties at small strains were studied in [1, 3–7]. For example, cadmium showed a shear modulus of 20.7 GPa and a frequency-dependent loss factor from 0.05 to
0.03 in the range of 1 Hz to 1000 Hz [3]. ...
... Nevertheless, apart from other constraints, it should be considered that heavy metals such as cadmium can have a negative impact on human health, which limits their application in building
products [8]. High stiffness and damping values were also measured from samples of indium-tin [5], indium-zinc [1], as well as composites combining indium-tin with tungsten [4] or silicon carbide [6]
to increase stiffness. ...
Metal lattice structures filled with a damping material such as polymer can exhibit high stiffness and good damping properties. Mechanical simulations of parts made from these composites can however
require a large modeling and computational effort because relevant features such as complex geometries need to be represented on multiple scales. The finite cell method (FCM) and numerical
homogenization are potential remedies for this problem. Moreover, if the microstructures are placed in between the components of assemblies for vibration reduction, a modified mortar technique can
further increase the efficiency of the complete simulation process. With this method, it is possible to discretize the components separately and to integrate the viscoelastic behavior of the
composite damping layer into their weak coupling. This paper provides a multiscale computational material design framework for such layers, based on FCM and the modified mortar technique. Its
efficiency even in the case of complex microstructures is demonstrated in numerical studies. Therein, computational homogenization is first performed on various microstructures before the resulting
effective material parameters are used in larger-scale simulation models to investigate their effect and to verify the employed methods.
... viscoelastic parameters and time dependence of creep stiffness modulus can be obtained from the simulation of the experimental data. The result shows that creep stiffness modulus decreases rapidly at the initial stage of loading, then the rate of change decreases, and finally creep stiffness modulus approaches a stable value at the end of loading. Balch and Lakes (2015): Indium-zinc in situ composites were fabricated and their viscoelastic properties studied over 8.5 decades of frequency. Material with 5% indium by weight was found to have a stiffness damping product (the figure of merit for damping layers) of 1.9 GPa at 10 Hz; 3 times better than the peak of polymer damping layers and over a wider frequency range ...
Viscoelastic dampers utilize the high damping of viscoelastic materials to dissipate energy through shear deformation. Viscoelastic materials are highly influenced by parameters like temperature, frequency, dynamic strain rate, time effects such as creep and relaxation, aging, and other irreversible effects. Hence, selecting a proper viscoelastic material is key. This paper presents an overview of the literature related to the viscoelastic materials used in viscoelastic dampers. The review includes different materials such as asphalt, rubber, polymers, and glassy substances. There have been few investigations on these materials; their advantages and disadvantages are discussed and a detailed review is carried out.
Through a combination of a molecular dynamics (MD) simulation and experimental method, in this work we have methodically expatiated the essential mechanism of the observably enhanced damping
performance of nitrile-butadiene rubber (NBR) ascribed to the introduction of hindered phenol AO-70. The computed results revealed that four types of hydrogen bonds (H-bonds), namely type A (AO-70) –OH⋯NC– (NBR), type B (AO-70) –OH⋯OC– (AO-70), type C (AO-70) –OH⋯OH– (AO-70), and type D (AO-70) –OH⋯O–C– (AO-70), were formed in the AO-70/NBR composites, where type A was the most stable.
Meanwhile, the AO-70/NBR composite with AO-70 content of 109 phr had the largest number of H-bonds, highest binding energy, and smallest fractional free volume (FFV), demonstrating a good
compatibility between NBR and AO-70 and the best damping property of the composites. The experimental results were highly consistent with the MD simulation results, which means the combining methods
can provide a new attempt for the design of optimum damping materials.
High viscoelastic damping is observed in InZn materials over ranges of composition, frequency, temperature, and annealing time. Microscopy reveals InZn when cast segregates into a heterogeneous
micro-structure resembling an in situ composite consisting of a zinc matrix with soft indium platelet inclusions. This morphology is predicted to be advantageous for maximizing the damping figure of
merit E tan δ by viscoelastic composite theory. InZn is found to be linearly viscoelastic, unlike other high damping metals. The damping of InZn varies little over a substantial range of temperature, in contrast with polymers. For the 5 % In material, the optimal composition, E tan δ is 2.8 GPa at 10 Hz, compared to a peak of 0.6 GPa for high damping rubbers. After annealing for 13 years, E tan δ was still high at 1.9 GPa. InZn demonstrates high damping under a wide range of conditions.
Material damping of laminated composites is experimentally determined by the half-power bandwidth method for cantilever beam specimens excited with an impulse excitation. Data acquisition and
manipulation are carried out using both an IBM PC-AT and a GenRad 2500 Series FFT Analyzer. Unidirectional continuous fiber 0° and 90° laminates were fabricated from glass/epoxy (Hercules S2-Glass/
3501-6), graphite/epoxy (Hercules AS4/3501-6) and graphite/poly (ether ether ketone) (ICI AS4/PEEK[APC-2]) to investigate the effect of fiber and matrix properties as a function of frequency, up to
1000 Hz, on the damping of composites. The S2-glass/3501-6 composite had a higher loss factor than the AS4/3501-6 in the 0° orientation with the loss factor for the AS4/3501-6 exhibiting a linear
increase with increasing frequency and the loss factor for the S2-glass varying nonlinearly with frequency. The 90° material exhibited a higher damping loss factor than the 0°, varying nonlinearly
with increasing frequency. In the 90° orientation, the glass fiber composite had loss factors that were approximately fourfold greater than the 0° orientation at frequencies greater than 200 Hz. The
0° AS4/PEEK had a loss factor that was approximately equal to that of the 0° AS4/3501-6. The 90° AS4/PEEK had a loss factor that was approximately 50% less than the AS4/3501-6 and 25% greater than
the S2-glass/3501-6 composite.
Metal matrix composites of silicon carbide particles in indium–tin alloy were fabricated with the aim of achieving a high value of the product of stiffness and viscoelastic damping tan δ, without
excess density. Stiffness and viscoelastic damping were measured over a wide range of frequency. For monodisperse 40% by volume SiC, and for hierarchical 60% by volume SiC the composite damping
increased compared with the matrix at frequencies above 100 Hz. Composite shear modulus was almost a factor two greater than matrix for 40% and a factor of four greater than that of matrix for 60%.
The product of stiffness and damping exceeds that of well-known materials including polymer damping layers. Hashin–Shtrikman analysis modelled the observed stiffness increase. The damping increase at
higher frequency cannot be accounted for by a purely mechanical composite model; it is attributed to thermoelastic coupling and an increase in matrix dislocations during fabrication.
Characterization of the mechanical damping properties of a series of die-cast zinc-aluminum alloys is described. Over the range of variables (temperature, frequency, and vibration strain amplitude)
normally encountered in service applications, it is shown that the damping consists of two components. Both components are due to linear relaxation mechanisms: the first is a thermoelastic relaxation
and the second is the low-temperature tail of a broadened boundary relaxation. Some of the alloys exhibit elevated damping levels over a useful frequency range, particularly at the temperatures
encountered in under-hood applications in automobiles.
A theoretical study of the viscoelastic properties of composites is presented with the aim of identifying structures which give rise to a combination of high stiffness and high loss tangent.
Laminates with Voigt and Reuss structures, as well as composite materials attaining the Hashin–Shtrikman bounds on stiffness, were evaluated by the correspondence principle. Similarly, viscoelastic
properties of composites containing spherical or platelet inclusions were explored. Reuss laminates and platelet-filled materials composed of a stiff, low-loss phase and a compliant high-loss phase
were found to exhibit high stiffness combined with a high loss tangent.
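To make the correspondence-principle evaluation described above concrete, the sketch below computes the complex (viscoelastic) stiffness of a two-phase Voigt and Reuss laminate and the resulting figure of merit E tan δ. The phase moduli, loss tangents, and volume fractions are illustrative assumptions, not values taken from the cited study.

```python
import numpy as np

def complex_modulus(E, tan_delta):
    """Complex viscoelastic modulus E* = E (1 + i tan(delta))."""
    return E * (1.0 + 1j * tan_delta)

def voigt(E1, E2, v1):
    """Voigt (iso-strain) laminate: E* = v1 E1* + v2 E2*."""
    return v1 * E1 + (1.0 - v1) * E2

def reuss(E1, E2, v1):
    """Reuss (iso-stress) laminate: 1/E* = v1/E1* + v2/E2*."""
    return 1.0 / (v1 / E1 + (1.0 - v1) / E2)

# Illustrative phases: a stiff, low-loss phase and a compliant, high-loss phase.
E_stiff = complex_modulus(100e9, 0.001)   # 100 GPa, tan(delta) = 0.001
E_soft  = complex_modulus(1e9,   1.0)     # 1 GPa,   tan(delta) = 1.0

for v_soft in (0.05, 0.15):
    for name, rule in (("Voigt", voigt), ("Reuss", reuss)):
        E_star = rule(E_soft, E_stiff, v_soft)   # v_soft = soft-phase fraction
        E = E_star.real
        tan_d = E_star.imag / E_star.real
        print(f"{name}, {v_soft:.0%} soft phase: "
              f"E = {E/1e9:.1f} GPa, tan(delta) = {tan_d:.3f}, "
              f"E*tan(delta) = {E*tan_d/1e9:.2f} GPa")
```

For a stiff, low-loss matrix, the Reuss (series) arrangement draws a much larger loss contribution from a small soft-phase fraction than the Voigt arrangement, which is consistent with the advantage of the platelet morphology discussed in the abstracts above.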
This article compares resonant ultrasound spectroscopy (RUS) and other resonant methods for the determination of viscoelastic properties such as damping. RUS scans from 50 to 500 kHz were conducted
on cubical specimens of several materials including brass, aluminum alloys, and polymethyl (methacrylate) (PMMA), a glassy polymer. Comparison of damping over the frequency ranges for broadband
viscoelastic spectroscopy (BVS) and RUS for indium tin alloy in shear modes of deformation discloses a continuation of the tan δ power-law trend for ultrasonic frequencies up to 300 kHz. For PMMA,
resonant peaks were sufficiently broad that higher modes in RUS began to overlap. Tan δ via RUS and BVS for PMMA agreed well in the frequency range where the methods overlap. RUS is capable of
measuring tan δ as high as several percent at the fundamental frequency. Since higher modes are closely spaced, it is impractical to determine tan δ above 0.01-0.02 at frequencies other than the fundamental.
Viscoelastic materials are widely used for acoustic attenuation, isolation of continuous vibration, and shock mountings. The properties of these materials are dependent upon temperature and frequency
of excitation, molecular structure of the base polymer, and chemical cross-linking systems and fillers. This paper describes a transfer function technique for the measurement of the
frequency-dependent Young's modulus and loss tangent. Algorithms for time-temperature superposition are also discussed. It is then shown how the results of such measurements can be used in the
selection of viscoelastic materials and fillers in the design of constrained-layer damping structures. Comparisons of mathematical modeling and experimentally determined damping are given for some of
the chlorobutyl formulations discussed.
Understanding viscoelasticity is pertinent to design applications as diverse as earplugs, gaskets, computer disks, satellite stability, medical diagnosis, injury prevention, vibration abatement, tire
performance, sports, spacecraft explosions, and music. This book fits a one-semester graduate course on the properties, analysis, and uses of viscoelastic materials. Those familiar with the author's
precursor book, Viscoelastic Solids, will see that this book contains many updates and expanded coverage of the materials science, causes of viscoelastic behavior, properties of materials of
biological origin, and applications of viscoelastic materials. The theoretical presentation includes both transient and dynamic aspects, with emphasis on linear viscoelasticity to develop physical
insight. Methods for the solution of stress analysis problems are developed and illustrated. Experimental methods for characterization of viscoelastic materials are explored in detail. Viscoelastic
phenomena are described for a wide variety of materials, including viscoelastic composite materials. Applications of viscoelasticity and viscoelastic materials are illustrated with case studies.
Composite micro-structures are studied, which give rise to high stiffness combined with high viscoelastic loss. We demonstrate that such properties are most easily achieved if the stiff phase is as
stiff as possible. Incorporation of a small amount of damping in the stiff phase has little effect on the composite damping. Experimental results are presented for laminates consisting of cadmium and
tungsten and of InSn alloy and tungsten. The combination of stiffness and loss (the product E tan δ) exceeds that of well-known materials.
The effective moduli of platelet reinforced media are derived for aligned and randomly oriented circular platelets at both dilute and finite concentrations. The platelets are modeled as very thin
oblate spheroids in which edge effects caused by the presence of the sharp corners can be significant, depending upon the relative magnitudes of the thickness-to-diameter ratio and the ratio of the
matrix stiffness to that of the reinforcer. The edge effects become negligible when the latter ratio greatly exceeds the former, in which case the platelets act effectively as infinite layers. In
general, the non-uniform stress fields in the vicinity of the sharp corners or edges cause a reduction in the effective moduli. When the aspect ratio greatly exceeds the stiffness ratio, the
inclusions become equivalent to rigid disks, and the pertinent concentration parameter is not the volume fraction, which is zero, but a number analogous to the crack density parameter for solids
containing cracks. Effective medium theories for finite concentrations of rigid disks predict that the effective Poisson's ratio tends to the value 0.1557… as the concentration increases, and the
self-consistent theory displays a critical disk density at which the composite becomes rigid.
The figure of merit for structural damping and damping layer applications is the product of stiffness E and damping tan δ. For most materials, even practical polymer damping layers, E tan δ is less
than 0.6 GPa. We consider several methods to achieve high values of this figure of merit: high damping metals, metal matrix composites and composites containing constituents of negative stiffness. As
for high damping metals, damping of polycrystalline zinc was determined and compared with InSn studied earlier. Damping of Zn is less dependent on frequency than that of InSn, so Zn is superior at
high frequency. High damping and large stiffness anomalies are possible in viscoelastic composites with inclusions of negative stiffness. Negative stiffness entails a reversal of the usual
directional relationship between force and displacement in deformed objects. An isolated object with negative stiffness is unstable, but an inclusion embedded in a composite matrix can be stabilized
under some circumstances. Ferroelastic domains in the vicinity of a phase transition can exhibit a region of negative stiffness. Metal matrix composites containing vanadium dioxide were prepared and
studied. The concentration of embedded particles was sensitive to the processing method.
β-type Ti alloys with high oxygen solid solution were developed as a new type of high-damping alloy, in which oxygen could cause both a strengthening effect for higher strength and a huge Snoek
damping peak. Snoek damping mechanism was applied to Ti-Nb-O alloys, and the high damping capacity and high strength induced by a certain amount of oxygen solid solution in the alloys were much
better than those obtained in already developed high damping alloys. A strengthening effect was observed in the Ti-25Nb-1.5 O alloy with the 1.7% oxygen solid solution, in which the yield strength
was increased to 665.3 MPa with a decrease of elongation to 17.1%. When the oxygen composition was increased to 3.0%, the as-cast alloy ingot ruptured during the elastic deformation stage, which was
caused by the microcracks formed in the ingot during the rapid solidification process in the cold crucible.
Internal friction and elastic moduli of the intermetallic compound TiNi were measured as a function of temperature from -170° to 800°C. There appear in the internal friction curve two well‐defined
peaks at -70° and 600°C, respectively, a small peak at 350°C, and a group of several sharp peaks in the temperature range from -50° to 40°C. The elastic modulus has a positive temperature coefficient
in the temperature range from 40° to 520°C. These results are discussed in terms of the crystal‐structure model of Wang and others.
Variational principles in the linear theory of elasticity, involving the elastic polarization tensor, have been applied to the derivation of upper and lower bounds for the effective elastic moduli of
quasi-isotropic and quasi-homogeneous multiphase materials of arbitrary phase geometry. When the ratios between the different phase moduli are not too large the bounds derived are close enough to
provide a good estimate for the effective moduli. Comparison of theoretical and experimental results for a two-phase alloy showed good agreement. | {"url":"https://www.researchgate.net/publication/277354027_High_vibration_damping_in_in_situ_In_-_Zn_composites","timestamp":"2024-11-15T04:55:06Z","content_type":"text/html","content_length":"374372","record_id":"<urn:uuid:6924797c-f5f7-4229-b7cf-52f29a9eae6d>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00024.warc.gz"} |
Stratified sampling: Definition, Allocation rules with advantages and disadvantages
Stratified sampling is a sampling plan in which we divide the population into several non-overlapping strata and select a random sample from each stratum in such a way that units within a stratum are homogeneous but units in different strata are heterogeneous.
A stratum is a group of elements in which all units are homogeneous, while units belonging to different strata are heterogeneous. Homogeneous means alike, i.e. sharing the same characteristics, and heterogeneous means different from each other, i.e. having different characteristics. [Note: 'stratum' is the singular form and 'strata' is the plural form.]
Stratified sampling is a probability sampling.
Allocation rules of stratified sampling
• Equal allocation
• Proportional allocation
• Neyman allocation
• Optimum allocation
Equal Allocation
In equal allocation we divide the total sample size n equally among the L strata, so each stratum receives n_i = n/L.
Proportional allocation
In proportional allocation we divide the total sample size n by the population size N and multiply by the stratum size N_i, i.e. n_i = n·N_i/N.
Neyman or optimal allocation
Neyman allocation is a special case of optimum allocation: the sample is allocated in proportion to N_i·S_i, where S_i is the stratum standard deviation, i.e. n_i = n·N_i·S_i / Σ_j N_j·S_j. Optimum allocation additionally accounts for the per-unit sampling cost in each stratum. A sketch of all three rules is given below.
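As a concrete illustration of the three rules above, the following Python sketch computes equal, proportional, and Neyman allocations for a hypothetical population; the stratum sizes and standard deviations are made-up numbers used only for demonstration.

```python
import numpy as np

def equal_allocation(n, L):
    """Equal allocation: n_i = n / L for each of the L strata."""
    return np.full(L, n / L)

def proportional_allocation(n, N_i):
    """Proportional allocation: n_i = n * N_i / N."""
    N_i = np.asarray(N_i, dtype=float)
    return n * N_i / N_i.sum()

def neyman_allocation(n, N_i, S_i):
    """Neyman allocation: n_i proportional to N_i * S_i
    (stratum size times stratum standard deviation)."""
    N_i = np.asarray(N_i, dtype=float)
    S_i = np.asarray(S_i, dtype=float)
    w = N_i * S_i
    return n * w / w.sum()

# Hypothetical population: three strata with sizes N_i and standard deviations S_i.
N_i = [5000, 3000, 2000]
S_i = [10.0, 25.0, 40.0]
n = 500  # total sample size

# In practice the resulting n_i would be rounded to integers.
print("equal:       ", equal_allocation(n, len(N_i)))
print("proportional:", proportional_allocation(n, N_i))
print("Neyman:      ", neyman_allocation(n, N_i, S_i))
```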
Advantages of stratified sampling
• Stratification tends to decrease the variances of the sample estimates. This results in a smaller bound on the error of estimation. This is particularly true if measurements within strata are homogeneous.
• By stratification, the cost per observation in the survey may be reduced by grouping the population elements into convenient strata.
• When separate estimates of population parameters are required for each subpopulation within the overall population, stratification is rewarding.
• Stratification makes it possible to use different sampling designs in different strata.
• Stratification is particularly effective when there are extreme values in the population, which can be segregated into separate strata, thereby reducing the variability within strata.
• It is most effective in handling heterogeneous population.
• In stratified sampling, confidence intervals may be constructed individually for the parameter of interest in each stratum.
Disadvantages of stratified sampling
The major disadvantage is that it may take more time to select the sample than would be the case for simple random sampling. More time is involved because a complete frame is necessary within each of the strata and each stratum must be sampled. Some other disadvantages of stratified sampling are:
• It requires more administrative work as compared with simple random sampling.
• It is sometimes hard to classify each kind of population into clearly distinguished classes.
• It can be a tedious and time-consuming job for those who are not keen on handling such data.
Solving one-step equations and inequalities worksheets
Related topics:
worksheets modeling algebraic equations using balancing scales
algebra sample question grade 8
ks2 maths mental workout book 6 answers
online factorer
sample math tests grade 10 ontario
free compound inequality solver
completing the square calculator
niihoc Posted: Sunday 10th of Dec 07:38
Hello dudes, I am desperately in need of help for clearing my maths test that is nearing. I really do not intend to resort to the services of private tutors and web tutoring since they prove to be quite pricey. Could you suggest a good teaching tool that can guide me with learning the principles of Algebra 2? In particular, I need assistance on side-side-side similarity and perfect square trinomials.
kfir Posted: Monday 11th of Dec 08:16
I understand your situation because I had the same issues when I went to high school. I was very weak in math, especially in solving one-step equations and inequalities worksheets and my
grades were really terrible. I started using Algebrator to help me solve questions as well as with my assignments and eventually I started getting A's in math. This is a remarkably good
product because it explains the problems in a step-by-step manner so we understand them well. I am absolutely certain that you will find it useful too.
TihBoasten Posted: Monday 11th of Dec 10:04
I got my first diploma studying online last week. I happened to be using Algebrator as well during my entire course duration.
Mov Posted: Wednesday 13th of Dec 09:51
Algebrator is an easy-to-use piece of software and is certainly worth a try. You will also find a lot of interesting stuff there. I use it as reference software for my math problems and can swear that it has made learning math much more fun.
Nonlinear eigenvalue problems with positively convex operators
We consider the equation u = λAu (λ > 0), where A is a forced isotone positively convex operator in a partially ordered normed space with a complete positive cone K. Let Λ be the set of positive λ for which the equation has a solution u ∈ K, and let Λ_0 be the set of positive λ for which a positive solution (necessarily the minimum one) can be obtained by an iteration u_n = λAu_{n-1}, u_0 = 0. We show that if K is normal, and if Λ is nonempty, then Λ_0 is nonempty, and each set Λ_0, Λ is an interval with inf(Λ_0) = inf(Λ) = 0 and sup(Λ_0) = sup(Λ) (= λ*, say); but we may have λ* ∉ Λ_0 and λ* ∈ Λ. Furthermore, if A is bounded on the intersection of K with a neighborhood of 0, then Λ_0 is nonempty. Let u^0(λ) = lim_{n→∞} (λA)^n(0) be the minimum positive fixed point corresponding to λ ∈ Λ_0. Then u^0(λ) is a continuous isotone convex function of λ on Λ_0.
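A minimal numerical sketch of the iteration u_n = λAu_{n-1}, u_0 = 0, using the toy scalar operator A(u) = 1 + u² on the cone K = [0, ∞) (an illustrative choice, not an operator from the paper): the equation u = λ(1 + u²) has a positive solution exactly for λ ≤ λ* = 1/2, and the iteration converges monotonically to the minimum positive fixed point whenever a solution exists.

```python
def A(u):
    """Toy isotone, positively convex operator on the cone [0, inf)."""
    return 1.0 + u * u

def minimum_fixed_point(lam, n_iter=10000):
    """Iterate u_n = lam * A(u_{n-1}) from u_0 = 0; return None on divergence."""
    u = 0.0
    for _ in range(n_iter):
        u_next = lam * A(u)
        if u_next > 1e12:          # crude divergence check
            return None
        u = u_next
    return u

for lam in (0.1, 0.3, 0.5, 0.6):
    u = minimum_fixed_point(lam)
    if u is None:
        print(f"lambda = {lam:.2f}: no positive solution (iteration diverges)")
    else:
        print(f"lambda = {lam:.2f}: minimum fixed point u ~= {u:.6f}")
```

In this particular example λ* itself belongs to Λ_0 (the iteration still converges at λ = 0.5, albeit slowly); the abstract notes that in general this need not be the case.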
ASJC Scopus subject areas
• Analysis
• Applied Mathematics
Addressing challenges in uncertainty quantification: the case of geohazard assessments
We analyse some of the challenges in quantifying uncertainty when using geohazard models. Despite the availability of recently developed, sophisticated ways to parameterise models, a major remaining
challenge is constraining the many model parameters involved. Additionally, there are challenges related to the credibility of predictions required in the assessments, the uncertainty of input
quantities, and the conditional nature of the quantification, making it dependent on the choices and assumptions analysts make. Addressing these challenges calls for more insightful approaches yet to
be developed. However, as discussed in this paper, clarifications and reinterpretations of some fundamental concepts and practical simplifications may be required first. The research thus aims to
strengthen the foundation and practice of geohazard risk assessments.
Received: 23 Aug 2022 – Discussion started: 17 Oct 2022 – Revised: 24 Jan 2023 – Accepted: 06 Mar 2023 – Published: 21 Mar 2023
1 Introduction
Uncertainty quantification (UQ) helps determine the uncertainty of a system's responses when some quantities and events in such a system are unknown. Using models, the system's responses can be
calculated analytically, numerically, or by random sampling (including the Monte Carlo method, rejection sampling, Monte Carlo sampling using Markov chains, importance sampling, and subset
simulation) (Metropolis and Ulam, 1949; Brown, 1956; Ulam, 1961; Hastings, 1970). Sampling methods are frequently used because of the high-dimensional nature of hazard events and associated
quantities. Sampling methods result in less expensive and more tractable uncertainty quantification than analytical and numerical methods. In the sampling procedure, specified distributions of the
input quantities and parameters are sampled, and respective outputs of the model are recorded. This process is repeated as many times as required to achieve the desired accuracy (Vanmarcke, 1984).
Eventually, the distribution of the outputs can be used to calculate probability-based metrics, such as expectations or probabilities of critical events. Model-based uncertainty quantification using
sampling is now more often used in geohazard assessments, e.g. Uzielli and Lacasse (2007), Wellmann and Regenauer-Lieb (2012), Rodríguez-Ochoa et al. (2015), Pakyuz-Charrier et al. (2018), Huang et
al. (2021), Luo et al. (2021), and Sun et al. (2021a).
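As a minimal illustration of the sampling procedure just described, the sketch below propagates uncertainty in two input quantities through a deliberately simple, made-up response function and reports probability-based metrics of the output; the input distributions, the model function, and the threshold are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def model(x1, x2):
    """Stand-in for a geohazard model (purely illustrative), e.g. a runout-type response."""
    return x1 * np.exp(0.5 * x2)

n_samples = 100_000

# Specified input distributions (illustrative choices).
x1 = rng.lognormal(mean=0.0, sigma=0.3, size=n_samples)
x2 = rng.normal(loc=1.0, scale=0.2, size=n_samples)

# Propagate the sampled inputs through the model and summarise the output.
y = model(x1, x2)

threshold = 3.0
print(f"mean output:     {y.mean():.3f}")
print(f"95th percentile: {np.percentile(y, 95):.3f}")
print(f"P(Y > {threshold}):      {(y > threshold).mean():.4f}")
```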
This paper considers recent advances in UQ and analyses some remaining challenges. For instance, we note that a major problem persists, namely constraining the many parameters involved. Only some
parameters can be constrained in practice based solely on historical data (e.g. Albert et al., 2022). Another challenge is that model outputs are conditional on the choice of model parameters and the
specified input quantities, including initial and boundary conditions. For example, a geological system model could be specified to include some geological boundary conditions (Juang et al., 2019).
Such systems are usually time-dependent and spatial in nature and may involve, e.g. changing conditions (e.g. Chow et al., 2019). Incorporating uncertainties related to such conditions complicates
the modelling and demands further data acquisition. Next, models could accurately reproduce data from past events but may be inadequate for unobserved outputs or predictions. This might be the case
when predicting, e.g. extreme velocities in marine turbidity currents, which are driven by emerging and little-understood soil and fluid interactions (Vanneste et al., 2019). Overlooking these
challenges implies that the quantification will only reflect some aspects of the uncertainty involved. These challenges are, unfortunately, neither exhaustively nor clearly discussed in the geohazard
literature. Options and clarifications addressing these challenges are underreported in the field. Analysing these challenges can be useful in treating uncertainties consistently and providing
meaningful results in an assessment. This paper's objective is to bridge the gap in the literature by providing an analysis and clarifications enabling a useful quantification of uncertainty.
It should be emphasised that, in this paper, we consider uncertainty quantification in terms of probabilities. Other approaches to measure or represent uncertainty have been studied by, for example,
Zadeh (1968), Shafer (1976), Ferson and Ginzburg (1996), Helton and Oberkampf (2004), Dubois (2006), Aven (2010), Flage et al. (2013), Shortridge et al. (2017), Flage et al. (2018), and Gray et al.
(2022a, b). These approaches will not be discussed here. The discussion about the complications in UQ related to computational issues generated by sampling procedures is also beyond the scope of the
current work.
The remainder of the paper is as follows. In Sect. 2, based on recent advances, we describe how uncertainty quantification using geohazard models can be conducted. Next, some remaining challenges in
UQ are identified and illustrated. Options to address the challenges in UQ are discussed in Sect. 3. A simplified example, further illustrating the discussion, is found in Sect. 4, while the final
section provides some conclusions.
2 Quantifying uncertainty using geohazard models
In this section, we make explicit critical steps in uncertainty quantification (UQ). We describe a general approach to UQ that considers uncertainty as the analysts' incomplete knowledge about
quantities or events. The UQ approach described is restricted to probabilistic analysis. Emphasis is made on the choices and assumptions usually made by analysts.
A geohazard model can be described as follows. We consider a system (e.g. debris flow) with a set of specified input quantities X (e.g. sediment concentration, entrainment rate) whose relationships
to the model output Y (e.g. runout volume, velocity, or height of flow) can be expressed by a set of models 𝓜. Analysts identify or specify X, Y, and 𝓜. A vector Θ_m (including, e.g. friction, viscosity, turbulence coefficients) parameterises a model m in 𝓜. The parameters Θ_m determine specific functions among a family of potential functions modelling the system. Accordingly, a model m
can be described as a multi-output function with, e.g. Y={runout volume, velocity, height of flow}. Based on Lu and Lermusiaux (2021), we can write
$m: \mathbf{X}_{s,t} \times \mathbf{\Theta}_m \to \mathbf{Y}_{s,t} \qquad (1)$

$m \equiv (\mathbf{E}_m, \mathbf{SG}_m, \mathbf{BC}_m, \mathbf{IC}_m) \qquad (2)$
Realisations of Y are the model responses y when elements in X take the values x at a spatial location s∈S and a specific time t∈T, and parameters θ_m∈Θ_m are used. In expression (1), X ⊂ ℝ^{d_X} is the set of specified input quantities, T ⊂ ℝ^{d_T} is the time domain, S ⊂ ℝ^{d_S} is the spatial domain, Θ_m ⊂ ℝ^{d_{Θ_m}} corresponds to a parameter vector, and Y ⊂ ℝ^{d_Y} is the set of model outputs. To consider different dimensions, d = {1, 2, or 3}. The system is fully described if m is specified in terms of a set of equations E_m (e.g. conservation equations), the spatial domain geometry SG_m (e.g. extension, soil structure), the boundary conditions BC_m (e.g. downstream flow), and the initial conditions IC_m (e.g. flow at t = t_0); see Eq. (2).
Probabilities reflecting analysts' uncertainty about input quantities are specified in uncertainty quantification. Such distributions are then sampled many times, and the distribution of the produced
outputs can be calculated. The output probability distribution for a model m can be denoted as f(y|x, θ_m, m), for realisations y, x, θ_m, m of Y, X, Θ_m, and 𝓜, respectively.
Betz (2017) has suggested that the parameter set is fully described by a parameter vector Θ; Eq. (3) is as follows:
$\mathbf{\Theta} = \{\mathbf{\Theta}_m, \mathbf{\Theta}_X, \mathbf{\Theta}_\epsilon, \mathbf{\Theta}_{\mathrm{o}}\} \qquad (3)$

in which Θ_m refers to parameters of the model m, Θ_X are parameters linked to the input X, Θ_ε is the vector of the output-prediction error ε, and Θ_o is the vector associated with observation/measurement errors. More explicitly, to compute an overall joint probability distribution, we may have the following distributions:
• f(y|x, θ_m, m) is the distribution of Y when X takes the values x, and parameters θ_m∈Θ_m and a model m∈𝓜 are used to compute y;
• f(x|θ_X, m) is the conditional distribution of X given the parameters θ_X∈Θ_X and the model m. Note that each m defines which elements in X are to be considered in the analysis;
• f(x|x̂, θ_o) is a distribution of X given the observed values X̂ = x̂ and the observation/measurement error parameters θ_o∈Θ_o;
• additionally, one can consider f(y^*|y, θ_ε, m), which is a distribution of Y^*, the future system's response, conditioned on the model output y and the output-prediction error vector θ_ε∈Θ_ε. The output-prediction error ε is the mismatch between the model predictions and non-observed system responses y^*. ε is used to correct the imperfect model output y (Betz, 2017; Juang et al., 2019).
If, for example, the parameters Θ_m are poorly known, a prior distribution π(θ_m|m) weighing each parameter value θ_m for a model m is usually specified. A prior is a subjective probability distribution quantified by expert judgement representing uncertainty about the quantities prior to considering data (Raices-Cruz et al., 2022). When some measurements 𝓓 = {Ŷ = ŷ, X̂ = x̂} are available, such parameter values θ_m, or their distributions π(θ_m|m), can be constrained by back-analysis methods. Note that the measurements belong to the data set 𝓓, i.e. {ŷ, x̂} ∈ 𝓓. Back-analysis methods include matching experimental measurements ŷ and calculated model outputs y using different assumed values θ′_m. Values for θ_m can be calculated as follows (based on Liu et al., 2022):

$\theta_m = \operatorname{argmin}\left[\hat{y} - y(\hat{x}, \theta'_m)\right] \qquad (4)$
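A schematic illustration of the matching step in Eq. (4): the forward model, the synthetic measurements playing the role of ŷ and x̂, and the least-squares objective in the sketch below are assumptions introduced only to show the mechanics of selecting θ_m by minimising the mismatch.

```python
import numpy as np
from scipy.optimize import minimize

def model(x, theta):
    """Hypothetical forward model y = theta0 * x + theta1 * x**2."""
    return theta[0] * x + theta[1] * x**2

# Synthetic measurements (x_hat, y_hat), playing the role of the data D.
x_hat = np.linspace(0.0, 1.0, 20)
true_theta = np.array([2.0, -0.5])
rng = np.random.default_rng(0)
y_hat = model(x_hat, true_theta) + rng.normal(scale=0.02, size=x_hat.size)

def mismatch(theta):
    """Sum of squared differences between measurements and model outputs."""
    return np.sum((y_hat - model(x_hat, theta)) ** 2)

# Minimise the mismatch to obtain a calibrated parameter vector theta_m.
result = minimize(mismatch, x0=np.array([1.0, 0.0]))
print("calibrated theta_m:", result.x)
```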
The revision or updating of the prior π(θ_m|m) with measurements 𝓓 to obtain π(θ_m|𝓓, m) is also an option in back analysis. The updating can be calculated as follows (based on Juang et al., 2019; Liu et al., 2022):

$\pi(\theta_m \mid \mathcal{D}, m) = \frac{\mathcal{L}(\theta_m \mid \mathcal{D})\,\pi(\theta_m \mid m)}{\int \mathcal{L}(\theta_m \mid \mathcal{D})\,\pi(\theta_m \mid m)\,\mathrm{d}\theta_m} \qquad (5)$

where ℒ(θ_m|𝓓) = f(𝓓|θ_m) is a likelihood function, i.e. a distribution that weighs θ_m.
Similarly, we can constrain any of the distributions above, e.g. f(y|x, θ_m, m) or f(x|θ_X, m), to obtain f(y|x, θ_m, 𝓓, m) and f(x|θ_X, 𝓓, m), respectively.
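When θ_m is low-dimensional, the update in Eq. (5) can be evaluated directly on a grid. The sketch below does this for a single hypothetical parameter with a Gaussian likelihood; the prior, the likelihood model, and the data are illustrative assumptions and are not taken from any specific geohazard case.

```python
import numpy as np

# Grid over a single model parameter theta_m (illustrative range).
theta = np.linspace(0.0, 4.0, 2001)
dtheta = theta[1] - theta[0]

# Prior pi(theta_m | m): a broad normal, restricted to the grid and normalised.
prior = np.exp(-0.5 * ((theta - 2.0) / 1.0) ** 2)
prior /= prior.sum() * dtheta

# Likelihood L(theta_m | D): synthetic measurements assumed normal around theta_m.
data = np.array([2.4, 2.6, 2.5])      # hypothetical observations
sigma = 0.3
likelihood = np.ones_like(theta)
for d in data:
    likelihood *= np.exp(-0.5 * ((d - theta) / sigma) ** 2)

# Posterior via Eq. (5): normalise the product of likelihood and prior.
posterior = likelihood * prior
posterior /= posterior.sum() * dtheta

print("prior mean:    ", np.sum(theta * prior) * dtheta)
print("posterior mean:", np.sum(theta * posterior) * dtheta)
```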
For a geohazard problem, it is often possible to specify several competing models, e.g. distinct geological models with diverse boundary conditions; see expression (2). If the available knowledge is
insufficient to determine the best model, different models m can be considered. The respective overall output probability distribution is computed as (Betz, 2017; Juang et al., 2019)
$f(y \mid x, \mathbf{\Theta}, \mathcal{D}, \mathcal{M}) = \sum_{m \in \mathcal{M}} f(y \mid x, \theta, \mathcal{D}, m)\,\omega(m \mid \mathcal{D}, \mathcal{M}) \qquad (6)$

$f(y \mid x, \theta, \mathcal{D}, m) = \int f(y \mid x, \theta, m)\,\pi(\theta \mid \mathcal{D}, m)\,\mathrm{d}\theta \qquad (7)$

In Eq. (6), ω(m|𝓓, 𝓜) is a distribution weighing each model m in 𝓜.
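A minimal sketch of the model weighting in Eq. (6): output samples from two competing hypothetical models are combined with weights ω(m|𝓓, 𝓜) by sampling from the resulting mixture. The two models, their output distributions, and the weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Output samples from two competing models, each already conditioned on x, theta, D.
y_model_a = rng.normal(loc=10.0, scale=2.0, size=n)      # hypothetical model a
y_model_b = rng.lognormal(mean=2.3, sigma=0.4, size=n)   # hypothetical model b

# Model weights omega(m | D, M), e.g. from expert judgement or Bayesian model evidence.
weights = {"a": 0.7, "b": 0.3}

# Sample from the mixture in Eq. (6): pick a model per draw, then use its output.
pick_a = rng.random(n) < weights["a"]
y_mixed = np.where(pick_a, y_model_a, y_model_b)

print(f"P(Y > 15) model a:  {(y_model_a > 15).mean():.4f}")
print(f"P(Y > 15) model b:  {(y_model_b > 15).mean():.4f}")
print(f"P(Y > 15) averaged: {(y_mixed > 15).mean():.4f}")
```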
The relationship among the various models 𝓜, their inputs X, parameters Θ, outputs Y, experimental data 𝓓, and the future system response Y^* is illustrated in Fig. 1.
The previous description of a general approach to UQ considers uncertainty as that reflected in the analysts' incomplete knowledge about quantities or events. In UQ, to measure or describe
uncertainty, subjective probabilities can be used and constrained using observations 𝓓 chosen by analysts. Analysts might also select several parameters Θ and initial and boundary conditions, BC_m and IC_m. Based on the above description, in the following, we analyse some of the challenges that arise when conducting UQ.
As mentioned, back-analysis methods help constrain some elements in Θ. However, given the considerable number of parameters (see expressions 1–3) and data scarcity, constraining Θ is often only
achieved in a limited fashion. Back-analysis is further challenged by the potential dependency among Θ or 𝓜 and between Θ and SG_m, BC_m, and IC_m. We also note that back analysis, or, more
specifically, inverse analysis, faces problems regarding non-identifiability, non-uniqueness, and instability. Non-identifiability occurs when some parameters do not drive changes in the inferred
quantities. Non-uniqueness arises because more than one set of fitted or updated parameters may adequately reproduce observations. Instability in the solution arises from errors in observations and
the non-linearity of models (Carrera and Neuman, 1986). Alternatively, in specifying a joint distribution f(x,θ) to be sampled, analysts may consider the use of e.g. Bayesian networks (Albert et al.,
2022). However, under the usual circumstance of a lack of information, establishing such a joint distribution is challenging and requires that analysts encode many additional assumptions (e.g. prior
distributions, likelihood functions, independence, linear relationships, normality, stationarity of the quantities and parameters considered); see e.g. Tang et al. (2020), Sun et al. (2021b), Albert
et al. (2022), Pheulpin et al. (2022). A more conventional choice is that x or θ are specified using the maximum entropy principle (MEP), to specify the least biased distributions possible on the
given information (Jaynes, 1957). Such distributions are subject to the system's physical constraints based on some available data. The information entropy of a probability distribution measures the
amount of information contained in the distribution. The larger the entropy, the less information is provided by the distribution. Thus, by maximising the entropy over a suitable set of probability
distributions, one finds the least informative distribution in the sense that it contains the least amount of information consistent with the system's constraints. Note that a distribution is sought
over all the candidate distributions subject to a set of constraints. The MEP has been questioned since its validity and usefulness lie in the proper choice of physical constraints (Jaynes, 1957;
Yano 2019). Doubts are also raised regarding the potential information loss when using the principle. Analysts usually strive to use all available knowledge and avoid unjustified information loss
(Christakos, 1990; Flage et al., 2018).
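To illustrate the principle, the sketch below derives the maximum-entropy density on a bounded interval subject to a prescribed mean; both the interval and the mean are made-up numbers. With no constraint beyond the bounds, the result is the uniform density; with a mean constraint it is a truncated exponential, f(x) ∝ exp(λx), whose Lagrange multiplier λ is solved numerically.

```python
import numpy as np
from scipy.optimize import brentq

a, b = 0.0, 10.0    # assumed physical bounds of the quantity
mu = 3.0            # assumed (constrained) mean, from sparse data or judgement

def mean_of_maxent(lam):
    """Mean of the density f(x) proportional to exp(lam * x) on [a, b]."""
    if abs(lam) < 1e-10:
        return 0.5 * (a + b)          # limit: uniform density
    num = np.exp(lam * b) * (lam * b - 1.0) - np.exp(lam * a) * (lam * a - 1.0)
    den = lam * (np.exp(lam * b) - np.exp(lam * a))
    return num / den

# Solve for the Lagrange multiplier so that the max-entropy density has mean mu.
if mu > 0.5 * (a + b):
    lam = brentq(lambda l: mean_of_maxent(l) - mu, 1e-8, 50.0)
else:
    lam = brentq(lambda l: mean_of_maxent(l) - mu, -50.0, -1e-8)

x = np.linspace(a, b, 1001)
f = np.exp(lam * x)
f /= np.sum(f) * (x[1] - x[0])        # normalise on the grid

print(f"Lagrange multiplier lambda = {lam:.4f}")
print(f"check: mean of density     = {np.sum(x * f) * (x[1] - x[0]):.4f}")
```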
Options to address the parametrisation challenge also include surrogate models, parameter reduction, and model learning (e.g. Lu and Lermusiaux, 2021; Sun et al., 2021b; Albert et al., 2022; Degen et
al., 2022; Liu et al., 2022). Surrogate models are learnt to replace a complicated model with an inexpensive and fast approximation. Parameter reduction is achieved based on either principal
component analysis or global sensitivity analysis to determine which parameters significantly impact model outputs and are essential to the analysis (Degen et al., 2022; Wagener et al., 2022).
Remarkably, versions of the model learning option do not need any prior information about model equations E_m but require local verification of conservation laws in the data. However, the credibility of unobserved surrogate model outputs can always be questioned, since, for instance, records may miss crucial events (Woo, 2019). Models may also fail to reproduce outputs caused by recorded abrupt
changes (e.g. extreme velocities of turbidity currents) (Alley, 2004). An additional point is the issue of incomplete model response, which refers to a model not having a solution for some
combinations of the specified input quantities (Cardenas, 2019; van den Eijnden et al., 2022).
In bypassing the described challenges when quantifying uncertainty, simplifications are usually enforced, sometimes unjustifiably, in the form of assumptions, denoted here by Ą. The set Ą can include
one or more of the assumptions listed in Table 1. Note that the set of assumptions can be increased with those assumptions imposed by using specific models 𝓜 (e.g. conservation of energy, momentum,
or mass, Mohr–Coulomb's failure criterion).
3 Addressing the challenges in uncertainty quantification
From the previous section, we saw that it is very difficult in geohazard assessments to meet data requirements for the ideal parameterisation of models. Further, we have noted that, although fully
parameterised models could potentially be accurate at reproducing data from past events, these may turn out to be inadequate for unobserved outputs. We also made explicit that predictions are not
only conditional on Θ but possibly also on SG_m, BC_m, and IC_m; see expressions (1)–(7). Ultimately, the assumptions made also condition model outputs. More importantly, note that when only some model input quantities or parameters can be updated using data, the remaining ones stay conditional on the choices and assumptions made by the analysts.
Among the clarifications, we consider a major conceptualisation suggested by the literature, which is the definition of uncertainty. Uncertainty refers to incomplete information or knowledge about a
quantity or the occurrence of an event (Society for Risk Analysis, 2018). In Table 3, we denote this clarification as C1. Embracing this definition has some implications for uncertainty
quantification using geohazard models. We use these implications to address the major complications and challenges. For instance, if uncertainty is measured in terms of probability, one such
implication is that analysts are discouraged from using so-called frequentist probabilities. We note that frequentist probabilities do not measure uncertainty or lack of knowledge. Rather, such
probabilities reflect frequency ratios representing fluctuation or variation in the outcomes of quantities. Frequentist probabilities are of limited use because these assume that quantities vary in
large populations of identical settings, a condition which can be justified only for rather few geohazard quantities. The often one-off nature of many geohazard features and the impossibility of
verifying or validating data by, e.g. a large number of repeated tests, make it difficult to develop such probabilities. Thus, a more meaningful and practical approach suggests to measure uncertainty
by the use of knowledge-based (also referred to as judgemental or subjective) probabilities (Aven, 2019). A knowledge-based probability is an expression of the degree of belief in the occurrence of
an event or quantity by a person assigning the probability conditional on the available knowledge 𝓚. Such knowledge 𝓚 includes not only data in the form of measurements 𝓓. The models 𝓜 chosen for the
prediction and the modelling assumptions Ą made by analysts are also part of 𝓚. Accordingly, to describe uncertainty about quantities, probabilities are assigned based on 𝓚, and, therefore, those
probabilities are conditional on 𝓚. In the previous section, we have made evident the conditional nature of the uncertainty quantification (i.e. the probabilities) on measured data 𝓓 and wrote the
expression f(y|x, Θ, 𝓓, 𝓜) for the overall output probability distribution (see Eq. 6). If assumptions Ą are also acknowledged as a conditional argument of the uncertainty quantification, we write
more explicitly f(y|x, Θ, 𝓓, 𝓜, Ą) or equivalently $f\left(y|x,\mathbf{\Theta },\mathsc{K}\right)$. We can therefore write
$f(y \mid x, \mathbf{\Theta}, \mathcal{K}) = f(y \mid x, \mathbf{\Theta}, \mathcal{D}, \mathcal{M}, \text{Ą}) \qquad (8)$
The meaning of this expression is explained next. If, in a specific case, we would write f(y|x, Θ, 𝓚) = f(y|x, θ, 𝓓), it means that 𝓓 summarises all the knowledge that analysts have to calculate y given (realised or known) x and θ. Accordingly, the full expression in Eq. (8) implies that to calculate y, and given the knowledge of x and θ, the background knowledge includes 𝓓, 𝓜, and Ą. Note that 𝓚 can also be formed by observations, justifications, rationales, and arguments; thus, Eq. (8) can be further detailed to include these
aspects of 𝓚. Structured methods exist to assign knowledge-based probabilities (see, e.g. Apeland et al., 2002; Aven, 2019). Here we should note, however, that since models form part of the available
background knowledge 𝓚, models can also inform these knowledge-based probability assignments. It follows that, based on knowledge-based input probabilities, an overall output probability distribution
calculated using models is also subjective or knowledge-based (Jaynes, 1957). Some of the implications of using knowledge-based probabilities are described throughout this section.
According to the left column in Table 2, the focus of the challenges relates to the model outputs, more specifically predictions (CH1 and CH2), input quantities (CH3–CH6), parameters (CH7–CH9), and
models (CH10). We recall that uncertainty quantification helps determine the system's response uncertainty based on specified input quantities. Accordingly, an assessment focuses on the potential
system's responses. The focus is often on uncertainty about future non-observed responses Y^*, which are approximated by the model output Y, considering some specified input quantities X. We recall
that Y^* and X^* are quantities that are unknown at the time of the analysis but will take some value in the future and possibly become known. Thus, during an assessment, Y^* and X^* are the
uncertain quantities of the system since we have incomplete knowledge about Y^* and X^*. Accordingly, the output-prediction error ε, the mismatch between the model prediction values y, and the
non-observed system's response values y^* can only be specified based on the scrutiny of 𝓚.
There is another consequence of considering the definition of uncertainty put forward in C1, which links uncertainty solely to quantities or events. The consequence is that models, as such, are not
to be linked to uncertainty. Models are merely mathematical artefacts. Models, per se, do not introduce uncertainty, but they are likely inaccurate. Accordingly, another major distinction is to be
set in place. We recall that models, by definition, are simplifications, approximations of the system being analysed. They express or are part of the knowledge of the system. Models should therefore
be solely used for understanding the performance of the system rather than for illusory perfect predictions. In Table 3, we denote the latter clarification as C2.
Regarding the challenges CH1 and CH2, we should note that geohazard analysts are often more interested in predictions rather than known system outputs. For instance, predictions are usually required
to be calculated for input values not contained in the validation data. We consider that predictions are those model outputs not observed or recorded in the data, i.e. extrapolations out of the range
of values covered by observations. Thus, the focus is on quantifying the uncertainty of the system's responses rather than on the accuracy of a model reproducing recorded data. This is the
clarification C3 in Table 3. Considering this, models are yet to provide accuracy in reproducing observed outputs but, more importantly, afford credibility in predictions. Such credibility is to be
assessed mainly in terms of judgements, since conventional validation cannot be conducted using non-observed outputs. Recall that model accuracy usually relates to comparing model outputs with
experimental measurements (Roy and Oberkampf, 2011; Aven and Zio, 2013) and is the basis for validating models. Regarding the credibility of predictions, Wagener et al. (2022) have reported that such
credibility can be mainly judged in terms of the physical consistency of the predictions. Such consistency is judged by checks rejecting physically impossible representations of the system. The
credibility of predictions may also include the verification of the ability of models to accurately reproduce disruptive changes recorded in the data (Alley, 2004). However, as we have made explicit
in the previous section, model predictions are conditional on a considerable number of critical assumptions and choices made by analysts (see Table 1 and clarification C4 in Table 3). Therefore,
predictions can only be as good as the quality of the assumptions made. The assumptions could be wrong, and the impact of such deviations on the predictions must be assessed. To
provide credibility of predictions, such assumptions and choices should be justified and scrutinised; see option O5 in Table 2. Option O5 addresses the challenge CH1; however, when conducting UQ, O5
has a major role when investigating input uncertainty, which is discussed next.
A critical task in UQ is the quantification of input uncertainty. Input uncertainty may originate when crucial historical events or disruptive changes are missing in the records (CH3). Some critical
input quantities may also remain unidentified to analysts during an assessment (CH4). Analysts can unintentionally fail to identify relevant elements in X^* due to insufficiencies in data or limitations
of existing models. For example, during many assessments, trigger factors that could bring a soil mass to failure could remain unknown to analysts (e.g. Hunt et al., 2013; Clare et al., 2016; Leynaud
et al., 2017; Casalbore et al., 2020). UQ requires simulating sampled values from X, and elements in X can be mutually dependent. However, the joint distribution of X, namely f(x), is often also
unknown. This is the challenge CH6. Considering the potential challenges CH3 to CH6, to specify f(x), we cannot solely rely on using the maximum entropy principle (MEP). The MEP may fail to advance
an exhaustive uncertainty quantification in the input, e.g. by missing relevant values not recorded in the measured data. This would undermine the quality of predictions and, therefore, uncertainty
quantification. Recall that the MEP suggests using the least informative distribution among candidate distributions constrained solely on measurements. Using counterfactual analysis, as described in
Table 2, is an option. However, the counterfactual analysis will also fail to provide quality predictions, since this analysis focuses on counterfactuals (alternative events to the observed facts in 𝓓) and little on the overall knowledge available 𝓚. Note that the knowledge 𝓚 about the system includes, e.g. the assumptions made in the UQ, such as those shown in Table 1. Further note
that such assumptions relate not only to data but also to input quantities, modelling, and predictions. Thus, it appears that the examination of these assumptions should be at the core of UQ in
geohazard assessments, as suggested in Table 2, option O5. The risk assessment of deviations from assumptions was originally suggested by Aven (2013) and exemplified by Khorsandi and Aven (2017). An
assumption deviation risk assessment evaluates different deviations, their associated probabilities of occurrence, and the effect of the deviations. A major distinctive feature of the assumption
deviation risk assessment approach is the evaluation of the credibility of the knowledge 𝓚 supporting the assumptions made. Another feature of this approach is questioning the justifications
supporting the potential for deviations. The examination of 𝓚 can be achieved by assessing the justifications for the assumptions made, the amount and relevance of data or information, the degree of
agreement among experts, and the extent to which the phenomena involved are understood and can be modelled accurately. Justifications might be in the form of direct evidence becoming available,
indirect evidence from other observable quantities, supported by modelling results, or possibly inferred by assessments of deviations of assumptions. This approach is succinctly demonstrated in the
following section. Accordingly, we suggest specifying f(x) in terms of knowledge-based probabilities in conjunction with investigating input uncertainty using the assumptions deviation approach. This
is identified as consideration C5 in Table 3.
Another point to consider is that when uncertainty is measured in terms of knowledge-based probabilities, analysts should be aware of what conditionality means. If, for example, a quantity X_2 is conditional on a quantity X_1, this implies that increased knowledge about X_1 will change the uncertainty about X_2. The expression that denotes this is conventionally written as X_2|X_1. Analysts may exploit this interpretation when specifying, e.g. the joint distribution f(x, θ). For example, when increased knowledge about a quantity X_1 will not result in increased knowledge about another quantity X_2, analysts may simplify the analysis according to the scrutiny of 𝓚, meaning that a joint distribution f(x_1, x_2) to be specified may reduce to f(x_1)f(x_2) according to probability theory. Apeland et al. (2002) have illustrated how conditionality in the setting of knowledge-based probabilities can inform the specification of a joint distribution.
The parameterisation problem, which involves the challenges CH7 to CH9 in Table 2, warrants exhaustive consideration. Addressing these challenges also requires some reinterpretation. To start, note
that parameters are coefficients determining specific functions among a family of potential functions modelling the system. Those parameters constrain a model's output. Recall that y, as realisations
of Y, are the model output when X takes the values x, and some parameters θ∈Θ, and models m∈𝓜 are used. Thus, as shown in the previous section, any output y is conditional on θ, and so is the
uncertainty attached to y^*. We may also distinguish two types of parameters. We may have parameters associated with a property of the system. Other parameters exist that are merely artefacts in the
models and are not properties of the system. As suggested, if uncertainty can solely be attached to events or quantities, we may say that parameters that are not properties of the system are not to
be linked to any uncertainty. This is identified as clarification C6 in Table 3. For example, analysts may consider that the parameters not being part of the system as such are those linked to the
output-prediction error ε, the vector associated with observation/measurement errors Θ[o], and the overall attached hyperparameters linked to probability distributions (including priors, likelihood
functions). Analysts may consider the latter parameters as modelling artefacts, so it is questionable to attach uncertainty to them. Thus, focused on the uncertainty of the system responses rather
than model inaccuracies, uncertainty is to be assigned to those parameters that represent physical quantities. Fixed single values can be assigned to those parameters that are not properties of the
system. To help identify those parameters to which some uncertainty can be linked, we can scrutinise, e.g. the physical nature of these. In fixing parameters to a single value, we can still make use
of back-analysis procedures, as mentioned previously. Analysts may have some additional basis to specify parameter values when the background knowledge available 𝓚 is scrutinised. 𝓚 can be examined
to verify that not only data measurements but other sources of data, models, and assumptions made strongly support a specific parameter value. Based on this interpretation, setting the values of the
parameters that are not properties of the system to a single value reduces the complications in quantifying uncertainty considerably. It also follows that analysts are encouraged to make explicit
that model outputs are conditional on these fixed parameters, and on the model or models chosen, as we have shown in the previous section. The latter also leads us to argue that the focus of UQ is on
the uncertainty of the system response rather than the inaccuracies of the models. This implies in a practical sense that in geohazard assessments, when parameters are clearly differentiated from
specified input quantities, and models providing the most credible predictions are chosen, uncertainty quantification can then proceed. This parsimonious modelling approach is identified as
consideration C7 in Table 3. This latter consideration addresses, to an extent, the challenge CH10.
In the following section, we further illustrate the above discussion by analysing a documented case in which UQ in a geohazard assessment was informed by modelling using sampling procedures.
To further describe the proposed considerations, we analyse a case reported in the specialised literature. The case deals with the quantification of uncertainty of geological structures, namely
uncertainty about the subsurface stratigraphic configuration. Conditions in the subsurface are highly variable, whereas site investigations only provide sparse measurements. Consequently, subsurface
models are usually inaccurate. At a given location, subsurface conditions are unknown until accurately measured. Soil investigation at all locations is usually impractical and uneconomical, and
point-to-point condition variation cannot be known (Vanmarcke, 1984). Such uncertainty means significant engineering and environmental risk to, e.g. infrastructure built on the surface. One way to
quantify this uncertainty is by calculating the probability of every possible configuration of the geological structures (Tacher et al., 2006; Thiele et al., 2016; Pakyuz-Charrier et al., 2018).
Sampling procedures for UQ are helpful in this undertaking. We use an analysis and information from Zhao et al. (2021), which refer to a site located in the Central Business District, Perth, Western
Australia, where six boreholes were executed. The case has been selected taking into account its simplicity to illustrate the points of this paper, but at the same time, it provides details to allow
some discussion. Figure 2 displays the system being analysed.
In the system under consideration, a particular material type to be found in a non-bored point, a portion of terrain not penetrated during soil investigation, is unknown and thus uncertain. The goal
is to compute the probability of encountering a given type of soil at these points. Zhao et al. (2021) focus on calculating the probabilities of encountering clay in the subsurface. The approach
advocated was a sampling procedure to generate many plausible configurations of the geological structures and evaluate their probabilities. In a non-penetrated point in the ground, to calculate the
probability of encountering a given type of soil c, p(y=c), Zhao et al. (2021) used a function that depends on two correlation parameters, namely the horizontal and vertical scale of fluctuation θ[h]
and θ[v]. Note that spatial processes and their properties are conventionally assumed as spatially correlated. Such spatial variation may presumably be characterised by correlation functions, which
depend on a scale of fluctuation parameter. The scale of fluctuation measures the distance within which points are significantly correlated (Vanmarcke, 1984). Equation (9) describes the basic
components of the model chosen by Zhao et al. (2021) (specific details are given in the Appendix to this paper) as follows:
$m: \boldsymbol{X}_s \times \boldsymbol{\Theta}_m \to \boldsymbol{Y}_s \to p(\boldsymbol{y}=c), \qquad \text{(9)}$
where X is the collection of all specified quantities at borehole points s[x] which can take values x from the set {sand, clay, gravel}, according to the setting in Fig. 2. Y is the collection of all
model outputs with values y at non-borehole points s[y]. Probabilities p(y=c) are computed based on the sampling of the values y and x, and a chosen model using the parameters θ[h]=11.1 m and θ[v]=4.1 m, θ[h], θ[v]∈Θ[m]. Using the maximum likelihood method, the parameters were determined based on the borehole data revealed at the site. In determining parameters, the sampling from uniform and
mutually independent distributions of θ[h] and θ[v] was the procedure advocated. The system is further described by a set of equations E[m] (a correlation function and a probability function), the
spatial domain geometry sg[m] (a terrain block of 30×80m), and the boundary conditions bc[m] (the conditions at the borders). More details are given in the Appendix to this paper. Since this system
is not considered time-dependent, the initial conditions IC[m] were not specified.
The summary results reported by Zhao et al. (2021) are shown in Fig. 3. In Fig. 3, the most probable stratigraphic configuration, along with the spatial distribution of the probability of the
existence of clay, is displayed. The authors focused on this sensitive material, which likely represents a risk to the infrastructure built on the surface.
Zhao et al. (2021) stated that “characterisation results of the stratigraphic configuration and its uncertainty are consistent with the intuition and the state of knowledge on site characterisation”.
Next, throughout Zhao et al.'s (2021) analysis, the following assumptions were enforced (Table 4), although these were not explicitly disclosed by the authors.
Unfortunately, the authors did not report enough details on how the majority of these assumptions are justified. We should note, however, that providing these justifications was not the objective of
their research. Yet, here we analyse how assumptions can be justified by scrutinising 𝓚 and using some elements of the assumption deviation approach described in the previous section. Table 5
summarises the analysis conducted and only reflects the most relevant observations and reservations we identified. Accordingly, the information in Table 5 may not be exhaustive but is still useful
for the desired illustration. Table 5 displays some of our observations related to the credibility of the knowledge 𝓚. The examination of 𝓚 is achieved by assessing the amount and relevance of data
or information, the extent to which the phenomena involved are understood and can be modelled accurately, the degree of agreement among experts, and the justifications for the assumptions made.
Observations regarding the justifications for potential deviations from assumptions also form part of the analysis.
Not surprisingly, the observations in our analysis concentrate on the predictions' credibility. Recall that UQ focuses on the system's response, approximated by model predictions (considerations C2
and C3 in Table 3). For example, although using correlations is an accepted practice and a practical simplification, correlation functions appear counterintuitive to model geological structures or
domains. Further, correlation functions do not help much in understanding the system (consideration C2 in Table 3). Recall that such structures are mainly disjoint domains linked to a finite set of
possible categorical quantities (masses of soil or rock) rather than continuous quantities. Next, the variation of such structures can occur by abrupt changes in materials; thus, the use of smoothed
correlation functions to represent them requires additional consideration. Moreover, the physical basis of the correlation functions is not clear, and physical models based on deposition processes
may be suggested (e.g. Catuneanu et al., 2009). We should note a potential justification for the deviation from the assumption regarding the credibility of predictions. This is because knowledge from
additional sources such as surface geology, sedimentology, local geomorphic setting, and structural geology was not explicitly taken into account in quantifying uncertainty. The revision of this
knowledge can contribute to reducing the probability of deviation in predictions. Based on the observations in Table 5, we can conclude that there is potential to improve the credibility of predictions.
The choices made by Zhao et al. (2021) regarding the use of parameters with fixed values together with the choice for a single best model can be highlighted. These choices illustrate the points
raised in considerations C6 and C7 (Table 3). The maximum likelihood method supported these choices; a back-analysis method focused on matching measurements and calculated model outputs using
different assumed values for θ[h] and θ[v]. We highlight that a model judged to be the best model was chosen. This includes the specification of a particular spatial domain geometry in SG[m].
Investigating the impact of the variation of SG[m] was considered unnecessary. There was no need to specify several competing models, which is in line with our consideration labelled as C7 in this paper.
Zhao et al. (2021) investigated the joint distribution f(x), which was sampled to calculate probabilities. However, someone can suggest that the joint distribution f(x,θ,sg[m],bc[m]) could have been
produced. Nevertheless, we can argue that establishing such a joint distribution is challenging and requires, in many instances, that analysts encode many additional assumptions (e.g. prior
distributions, likelihood functions, independence, linear relationships, normality, stationarity of the quantities and parameters considered).
A more crucial observation derived from the analysis of potential deviations of assumptions might considerably impact the credibility of predictions. This observation comes from revisiting the
knowledge sources of Zhao et al.'s (2021) analysis, available from https://australiangeomechanics.org/downloads/ (last access: 29 June 2022). Another type of sensitive material was revealed by other
soundings in the area, more specifically, silt. Depending on the revision of 𝓚, this fourth suspected material could be analysed in an extended uncertainty quantification of the system. Note that the
specified input quantities X were originally assumed to take values x from the set {sand, clay, gravel}. Such an assumption was based on the records of six boreholes which were believed to be
accurate. The latter illustrates the relevance of consideration C5 in Table 3.
Another choice by Zhao et al. (2021) is that they disregarded the possibility of incorporating measurement errors of the borehole data into the UQ, probably because these data were judged to be
accurate. We recall in this respect that these errors reflect the inaccuracy of the measurements rather than the uncertainty about the system. As stated for consideration C6 (Table 3), we can hardly
justify attaching uncertainty to measurement error parameters, since measurement errors are not a property of the system. The same can be said for the parameters θ[h] and θ[v], which are not
properties of the system. Note that their physical basis is questioned. We should note, however, that assuming global coefficients for the parameters θ[h] and θ[v] is an established practice
(Vanmarcke, 1984; Lloret-Cabot et al., 2014; Juang et al., 2019). It can be pointed out that uncertainty quantification in this kind of system is, to an extent, sensitive to the choice of scale of
fluctuation values (Vanmarcke, 1984). It can also be argued that using a global rather than local correlation between spatial quantities can misrepresent geological structure variation. Accordingly,
further examination of the existing knowledge 𝓚 justifies some assessment of the impact of assuming a local rather than global scale of fluctuation.
Overall, the Zhao et al. (2021) analysis is, to an extent, based on the previously suggested definition of uncertainty; see the consideration C1 in Table 3.
We should stress that Zhao et al.'s (2021) uncertainty quantification refers specifically to the ground model described at the beginning of this section. In other words, the probabilities displayed
in Fig. 3b are conditional on the parameters chosen (θ[h]=11.1 and θ[v]=4.1m), the model selected (described by Eqs. 9, A1 and A2 in the Appendix to this paper), the specified spatial domain
geometry sg[m] (a terrain block of 30×80m), and ultimately the assumptions made (listed in Table 4). This information is to be reported explicitly to the users of the results. This reflects the
clarification C4 in Table 3.
Regarding the consideration of subjective probabilities, there has been some agreement on their use in this kind of UQ since Vanmarcke (1984). However, the use of knowledge-based probabilities in the
extension described here is recommended, given the illustrated implications to advance UQ (as discussed in the previous section and stated in consideration C5). For example, increased examination of
𝓚 might have resulted in using a more informative distribution f(θ[h],θ[v]) than the uniform distribution. The increased examination of 𝓚 might have led to different values for θ[h] and θ[v], and a
different model. Recall that the selection of the model and determination of parameters were based on the maximum likelihood method, which only uses measured data.
In our analysis of Zhao et al.'s (2021) assessment, the examination of supporting knowledge 𝓚 resulted essentially in
1. judging the credibility of predictions;
2. providing justifications for assessing assumption deviations by considering the modelling of a fourth material;
3. considering additional data other than the borehole records, such as surface geology, sedimentology, local geomorphic setting, and structural geology;
4. analysing the possibility of distinct geological models with diverse spatial domain geometry and local correlations; and
5. ultimately, further examining the existing 𝓚.
In this paper, we have discussed challenges in uncertainty quantification (UQ) for geohazard assessments. Beyond the parameterisation problem, the challenges include assessing the quality of
predictions required in the assessments, quantifying uncertainty in the input quantities, and considering the impact of choices and assumptions made by analysts. Such challenges arise from the
commonplace situation of limited data and the one-off nature of geohazard features. If these challenges are kept unaddressed, UQ lacks credibility. Here, we have formulated seven considerations that
may contribute to providing increased credibility in the quantifications. For example, we proposed understanding uncertainty as lack of knowledge, a condition that can only be attributed to
quantities or events. Another consideration is that the focus of the quantification should be more on the uncertainty of the system response rather than the accuracy of the models used in the
quantification. We drew attention to the clarification that models, in geohazard assessments, are simplifications used for predictions approximating the system's responses. We have also considered
that since uncertainty is only to be linked to the properties of the system, models do not introduce uncertainty. Inaccurate models can, however, produce poor predictions and such models should be
rejected. Then, an increased examination of background knowledge will be required to quantify uncertainty credibly. We also put forward that there could not be uncertainty about those elements in the
parameter set that are not properties of the system. The latter also has pragmatic implications, including how the many parameters in a geohazard system could be constrained in a geohazard assessment.
We went into detail to show that predictions, and in turn UQ, are conditional on the model(s) chosen together with the assumptions made by analysts. We identified limitations of measured data to
support the assessment of the quality of predictions. Accordingly, we have proposed that the quality of UQ needs to be judged based also on some additional crucial tasks. Such tasks include the
exhaustive scrutiny of the knowledge coupled with the assessment of deviations of those assumptions made in the analysis.
Key to enacting the proposed clarifications and simplifications is the full consideration of knowledge-based probability. Considering this type of probability will help overcome the identified
limitations of the maximum entropy principle or counterfactual analysis to quantify uncertainty in input quantities. We have exposed that the latter approaches are prone to produce non-exhaustive
uncertainty quantification due to their reliance on measured data, which can miss crucial events or overlook relevant input quantities.
In this Appendix, the necessary details of the original analysis made by Zhao et al. (2021) are given. The following are the basic equations E[m] used by these authors:
$p(\boldsymbol{y}=c) \sim \dfrac{\sum_{x_s \times \boldsymbol{Y}} \rho_{x=c,\,y=c}}{\sum_{c=1}^{C}\sum_{x_s \times \boldsymbol{Y}} \rho_{x=c,\,y=c}} \qquad \text{(A1)}$

$\rho_{xy} = \exp\!\left(-\pi\,\frac{\overline{s_x s_y}}{\theta_{\mathrm{h}}} - \pi\,\frac{|s_x s_y|}{\theta_{\mathrm{v}}}\right), \qquad \text{(A2)}$
where X is the collection of all specified quantities at borehole points, which take values x. Y is the collection of all outputs at non-borehole points with values y. ρ[xy] is the value of
correlation between a quantity value x at a penetrated point s[x]∈S[x] and the value y at a non-penetrated point s[y]∈S[y]. $\overline{s_x s_y}$ is the horizontal distance between points s[x] and s[y], while $|s_x s_y|$ is the vertical one. θ[h] and θ[v] are the horizontal and vertical scales of fluctuation, respectively. Each material class considered is associated exclusively with an element in the set of integers $\{1, 2, \dots, C\}$. p(y=c) is the probability of encountering a type of material c in a point s[y]. Such
probability is initially approximated using Eq. (A1). More accurate probabilities are computed based on the repeated sampling of the joint distribution f(x,y), which was approximated using Eq. (A1).
Equation (A1), described in short, approximates probabilities as the ratio of the sum of correlation values, calculated for a penetrated point in the set S[x] and the set of non-penetrated points S[y
] for a given material c, to the sum of correlation values for all points and all materials.
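As an illustrative sketch (in Python), Eqs. (A1) and (A2) can be implemented as follows; the borehole coordinates and class labels below are invented for illustration only and are not the data of Zhao et al. (2021):

import numpy as np

theta_h, theta_v = 11.1, 4.1  # horizontal and vertical scales of fluctuation (m)

def correlation(dx, dz):
    # correlation function of Eq. (A2) for horizontal and vertical separations dx, dz
    return np.exp(-np.pi * dx / theta_h - np.pi * dz / theta_v)

def p_material(s_y, borehole_pts, borehole_cls, n_classes):
    # Eq. (A1): ratio of the summed correlations with boreholes of class c
    # to the summed correlations over all classes, at a single non-bored point s_y
    dx = np.abs(borehole_pts[:, 0] - s_y[0])
    dz = np.abs(borehole_pts[:, 1] - s_y[1])
    rho = correlation(dx, dz)
    per_class = np.array([rho[borehole_cls == c].sum() for c in range(n_classes)])
    return per_class / per_class.sum()

# illustrative borehole samples: (horizontal position, depth) and class 0=sand, 1=clay, 2=gravel
pts = np.array([[0.0, 1.0], [0.0, 5.0], [10.0, 2.0], [10.0, 6.0]])
cls = np.array([0, 1, 0, 2])
print(p_material(np.array([5.0, 3.0]), pts, cls, n_classes=3))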
Based on data collected at borehole locations, the selection of the type of correlation function and the scales of fluctuation took place using the maximum likelihood method. The authors considered
three types of correlation functions, namely squared exponential, single exponential, and second-order Markov. In this case, the likelihood function $\mathcal{L}(\theta_m \mid \boldsymbol{x}) = f(\boldsymbol{x} \mid \theta_m)$ represents the likelihood of θ[m] given the observed borehole data x. The squared exponential function yielded the maximum likelihood when the horizontal and vertical scales of
fluctuation were set to 11.1 and 4.1m, respectively. Hence, the squared exponential function correlation, whose expression is Eq. (A2) in this Appendix, was selected. Equations (A3) and (A4)
correspond to the single exponential and the second-order Markov functions, respectively.
$\rho_{xy} = \exp\!\left(-2\,\frac{\overline{s_x s_y}}{\theta_{\mathrm{h}}} - 2\,\frac{|s_x s_y|}{\theta_{\mathrm{v}}}\right) \qquad \text{(A3)}$

$\rho_{xy} = \left(1 + 4\,\frac{\overline{s_x s_y}}{\theta_{\mathrm{h}}}\right)\left(1 + 4\,\frac{|s_x s_y|}{\theta_{\mathrm{v}}}\right)\exp\!\left(-4\,\frac{\overline{s_x s_y}}{\theta_{\mathrm{h}}} - 4\,\frac{|s_x s_y|}{\theta_{\mathrm{v}}}\right) \qquad \text{(A4)}$
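Transcribed directly, the two alternative correlation functions read as follows (in Python; dx and dz denote the horizontal and vertical separations between two points):

import numpy as np

def single_exponential(dx, dz, theta_h, theta_v):
    # Eq. (A3)
    return np.exp(-2 * dx / theta_h - 2 * dz / theta_v)

def second_order_markov(dx, dz, theta_h, theta_v):
    # Eq. (A4)
    return (1 + 4 * dx / theta_h) * (1 + 4 * dz / theta_v) * np.exp(-4 * dx / theta_h - 4 * dz / theta_v)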
No data sets were used in this article.
ICC: conceptualisation, methodology, writing (original draft preparation), investigation, and validation. TA: investigation, supervision, and writing (reviewing and editing). RG: investigation,
supervision, and writing (reviewing and editing).
The contact author has declared that none of the authors has any competing interests.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The authors are very grateful to the reviewers, who provided valuable and useful suggestions.
This research is funded by ARCEx partners and the Research Council of Norway (grant no. 228107).
This paper was edited by Dan Lu and David Ham and reviewed by Anthony Gruber and one anonymous referee.
Albert, C. G., Callies, U., and von Toussaint, U.: A Bayesian approach to the estimation of parameters and their interdependencies in environmental modeling, Entropy, 24, 231, https://doi.org/10.3390
/e24020231, 2022.
Alley, R. B.: Abrupt climate change, Sci. Am., 291, 62–69, https://doi.org/10.1126/science.1081056, 2004.
Apeland, S., Aven, T., and Nilsen, T.: Quantifying uncertainty under a predictive, epistemic approach to risk analysis, Reliab. Eng. Syst. Saf., 75, 93–102, https://doi.org/10.1016/S0951-8320(01)
00122-3, 2002.
Aven, T.: On the need for restricting the probabilistic analysis in risk assessments to variability, Risk Anal., 30, 354–360, https://doi.org/10.1111/j.1539-6924.2009.01314.x, 2010.
Aven, T.: Practical implications of the new risk perspectives, Reliab. Eng. Syst. Saf., 115, 136–145, https://doi.org/10.1016/j.ress.2013.02.020, 2013.
Aven, T.: The science of risk analysis: Foundation and practice, Routledge, London, https://doi.org/10.4324/9780429029189, 2019.
Aven, T. and Kvaløy, J. T.: Implementing the Bayesian paradigm in risk analysis, Reliab. Eng. Syst. Saf., 78, 195–201, https://doi.org/10.1016/S0951-8320(02)00161-8, 2002.
Aven, T. and Pörn, K.: Expressing and interpreting the results of quantitative risk analyses, Review and discussion, Reliab. Eng. Syst. Saf., 61, 3–10, https://doi.org/10.1016/S0951-8320(97)00060-4, 1997.
Aven, T. and Zio, E.: Model output uncertainty in risk assessment, Int. J. Perform. Eng., 29, 475–486, https://doi.org/10.23940/ijpe.13.5.p475.mag, 2013.
Betz, W.: Bayesian inference of engineering models, Doctoral dissertation, Technische Universität München, 2017.
Brown, G. W.: Monte Carlo methods, Modern Mathematics for the Engineers, 279–303, McGraw-Hill, New York, 1956.
Cardenas, I.: On the use of Bayesian networks as a meta-modelling approach to analyse uncertainties in slope stability analysis, Georisk, 13, 53–65, https://doi.org/10.1080/17499518.2018.1498524, 2019.
Carrera, J. and Neuman, S.: Estimation of aquifer parameters under transient and steady state conditions: 2. Uniqueness, stability, and solution algorithms, Water Resour. Res., 22, 211–227, https://
doi.org/10.1029/WR022i002p00211, 1986.
Casalbore, D., Passeri, F., Tommasi, P., Verrucci, L., Bosman, A., Romagnoli, C., and Chiocci, F. L.: Small-scale slope instability on the submarine flanks of insular volcanoes: the case-study of the
Sciara del Fuoco slope (Stromboli), Int. J. Earth Sci., 109, 2643–2658, https://doi.org/10.1007/s00531-020-01853-5, 2020.
Catuneanu, O., Abreu, V., Bhattacharya, J. P., Blum, M. D., Dalrymple, R. W., Eriksson, P. G., Fielding, C. R., Fisher, W. L., Galloway, W. E., Gibling, M. R., Giles, K. A., Holbrook, J. M., Jordan,
R., Kendall, C. G. St. C., Macurda, B., Martinsen, O. J., Miall, A. D., Neal, J. E., Nummedal, D., Pomar, L., Posamentier, H. W., Pratt, B. R., Sarg, J. F., Shanley, K. W., Steel, R. J., Strasser,
A., Tucker, M. E., and Winker, C.: Towards the standardisation of sequence stratigraphy, Earth-Sci. Rev., 92, 1–33, https://doi.org/10.1016/j.earscirev.2008.10.003, 2009.
Chow, Y. K., Li, S., and Koh, C. G.: A particle method for simulation of submarine landslides and mudflows, Paper presented at the 29th International Ocean and Polar Engineering Conference, 16–21
June, Honolulu, Hawaii, USA, ISOPE-I-19-594, 2019.
Christakos, G.: A Bayesian/maximum-entropy view to the spatial estimation problem, Math. Geol., 22, 763–777, https://doi.org/10.1007/BF00890661, 1990.
Clare, M. A., Clarke, J. H., Talling, P. J., Cartigny, M. J., and Pratomo, D. G.: Preconditioning and triggering of offshore slope failures and turbidity currents revealed by most detailed monitoring
yet at a fjord-head delta, Earth Planet. Sc. Lett., 450, 208–220, https://doi.org/10.1016/j.epsl.2016.06.021, 2016.
Degen, D., Veroy, K., Scheck-Wenderoth, M., and Wellmann, F.: Crustal-scale thermal models: Revisiting the influence of deep boundary conditions, Environ. Earth Sci., 81, 1–16, https://doi.org/
10.1007/s12665-022-10202-5, 2022.
Dubois, D.: Possibility theory and statistical reasoning, Comput. Stat. Data Anal., 51, 47–69, https://doi.org/10.1016/j.csda.2006.04.015, 2006.
Ferson, S. and Ginzburg, L. R.: Different methods are needed to propagate ignorance and variability, Reliab. Eng. Syst. Saf., 54, 133–144, https://doi.org/10.1016/S0951-8320(96)00071-3, 1996.
Flage, R., Baraldi, P., Zio, E., and Aven, T.: Probability and possibility-based representations of uncertainty in fault tree analysis, Risk Anal., 33, 121–133, https://doi.org/10.1111/
j.1539-6924.2012.01873.x, 2013.
Flage, R., Aven, T., and Berner, C. L.: A comparison between a probability bounds analysis and a subjective probability approach to express epistemic uncertainties in a risk assessment context – A
simple illustrative example, Reliab. Eng. Syst. Saf., 169, 1–10, https://doi.org/10.1016/j.ress.2017.07.016, 2018.
Gray, A., Ferson, S., Kreinovich, V., and Patelli, E.: Distribution-free risk analysis, Int. J. Approx. Reason., 146, 133–156, https://doi.org/10.1016/j.ijar.2022.04.001, 2022a.
Gray, A., Wimbush, A., de Angelis, M., Hristov, P. O., Calleja, D., Miralles-Dolz, E., and Rocchetta, R.: From inference to design: A comprehensive framework for uncertainty quantification in
engineering with limited information, Mech. Syst. Signal Process., 165, 108210, https://doi.org/10.1016/j.ymssp.2021.108210, 2022b.
Hastings, W. K.: Monte Carlo sampling methods using Markov chains and their applications, Biometrika, 57, 97–109, https://doi.org/10.2307/2334940, 1970.
Helton, J. C. and Oberkampf, W. L.: Alternative representations of epistemic uncertainty, Reliab. Eng. Syst. Saf., 1, 1–10, https://doi.org/10.1016/j.ress.2011.02.013, 2004.
Huang, L., Cheng, Y. M., Li, L., and Yu, S.: Reliability and failure mechanism of a slope with non-stationarity and rotated transverse anisotropy in undrained soil strength, Comput. Geotech., 132,
103970, https://doi.org/10.1016/j.compgeo.2020.103970, 2021.
Hunt, J. E., Wynn, R. B., Talling, P. J., and Masson, D. G.: Frequency and timing of landslide-triggered turbidity currents within the Agadir Basin, offshore NW Africa: Are there associations with
climate change, sea level change and slope sedimentation rates?, Mar. Geol., 346, 274–291, https://doi.org/10.1016/j.margeo.2013.09.004, 2013.
Jaynes, E. T.: Information theory and statistical mechanics, Phys. Rev., 106, 620, https://doi.org/10.1103/PhysRev.106.620, 1957.
Juang, C. H., Zhang, J., Shen, M., and Hu, J.: Probabilistic methods for unified treatment of geotechnical and geological uncertainties in a geotechnical analysis, Eng. Geol, 249, 148–161, https://
doi.org/10.1016/j.enggeo.2018.12.010, 2019.
Khorsandi, J. and Aven, T.: Incorporating assumption deviation risk in quantitative risk assessments: A semi-quantitative approach, Reliab. Eng. Syst. Saf., 163, 22–32, https://doi.org/10.1016/
j.ress.2017.01.018, 2017.
Leynaud, D., Mulder, T., Hanquiez, V., Gonthier, E., and Régert, A.: Sediment failure types, preconditions and triggering factors in the Gulf of Cadiz, Landslides, 14, 233–248, https://doi.org/
10.1007/s10346-015-0674-2, 2017.
Liu, Y., Ren, W., Liu, C., Cai, S., and Xu, W.: Displacement-based back-analysis frameworks for soil parameters of a slope: Using frequentist inference and Bayesian inference, Int. J. Geomech., 22,
04022026, https://doi.org/10.1061/(ASCE)GM.1943-5622.0002318, 2022.
Lloret-Cabot. M., Fenton, G. A., and Hicks, M. A.: On the estimation of scale of fluctuation in geostatistics, Georisk, 8, 129–140, https://doi.org/10.1080/17499518.2013.871189, 2014.
Lu, P. and Lermusiaux, P. F.: Bayesian learning of stochastic dynamical models, Phys. D, 427, 133003, https://doi.org/10.1016/j.physd.2021.133003, 2021.
Luo, L., Liang, X., Ma, B., and Zhou, H.: A karst networks generation model based on the Anisotropic Fast Marching Algorithm, J. Hydrol., 126507, https://doi.org/10.1016/j.jhydrol.2021.126507, 2021.
Metropolis, N. and Ulam, S.: The Monte Carlo method, J. Am. Stat. A., 44, 335–341, https://doi.org/10.1080/01621459.1949.10483310, 1949.
Montanari, A. and Koutsoyiannis, D.: A blueprint for process-based modeling of uncertain hydrological systems, Water Resour. Res., 48, W09555, https://doi.org/10.1029/2011WR011412, 2012.
Nilsen, T. and Aven, T.: Models and model uncertainty in the context of risk analysis, Reliab. Eng. Syst. Saf., 79, 309–317, https://doi.org/10.1016/S0951-8320(02)00239-9, 2003.
Pakyuz-Charrier, E., Lindsay, M., Ogarko, V., Giraud, J., and Jessell, M.: Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for
disturbance distribution selection and parameterization, Solid Earth, 9, 385–402, https://doi.org/10.5194/se-9-385-2018, 2018.
Pearl, J.: Comment: graphical models, causality and intervention, Statist. Sci., 8, 266–269, 1993.
Pheulpin, L., Bertrand, N., and Bacchi, V.: Uncertainty quantification and global sensitivity analysis with dependent inputs parameters: Application to a basic 2D-hydraulic model, LHB, 108, 2015265,
https://doi.org/10.1080/27678490.2021.2015265, 2022.
Raíces-Cruz, I., Troffaes, M. C., and Sahlin, U.: A suggestion for the quantification of precise and bounded probability to quantify epistemic uncertainty in scientific assessments, Risk Anal., 42,
239–253, https://doi.org/10.1111/risa.13871, 2022.
Rodríguez-Ochoa, R., Nadim, F., Cepeda, J. M., Hicks, M. A., and Liu, Z.: Hazard analysis of seismic submarine slope instability, Georisk, 9, 128–147, https://doi.org/10.1080/17499518.2015.1051546, 2015.
Roy, C. J. and Oberkampf, W. L.: A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing, Comput. Methods Appl. Mech. Eng., 200, 2131–2144,
https://doi.org/10.1016/j.cma.2011.03.016, 2011.
Sankararaman, S. and Mahadevan, S.: Integration of model verification, validation, and calibration for uncertainty quantification in engineering systems, Reliab. Eng. Syst. Saf., 138, 194–209, https:
//doi.org/10.1016/j.ress.2015.01.023, 2015.
Shafer, G.: A mathematical theory of evidence, in: A mathematical theory of evidence, Princeton university press, 1976.
Shortridge, J., Aven, T., and Guikema, S.: Risk assessment under deep uncertainty: A methodological comparison, Reliab. Eng. Syst. Saf., 159, 12–23, https://doi.org/10.1016/j.ress.2016.10.017, 2017.
Society for Risk Analysis: Society for Risk Analysis glossary, https://www.sra.org/wp-content/uploads/2020/04/SRA-Glossary-FINAL.pdf (last access: 25 June 2021), 2018.
Sun, X., Zeng, P., Li, T., Wang, S., Jimenez, R., Feng, X., and Xu, Q.: From probabilistic back analyses to probabilistic run-out predictions of landslides: A case study of Heifangtai terrace, Gansu
Province, China, Eng. Geol, 280, 105950, https://doi.org/10.1016/j.enggeo.2020.105950, 2021a.
Sun, X., Zeng, X., Wu, J., and Wang, D.: A Two-stage Bayesian data-driven method to improve model prediction, Water Resour. Res., 57, e2021WR030436, https://doi.org/10.1029/2021WR030436, 2021b.
Tacher, L., Pomian-Srzednicki, I., and Parriaux, A.: Geological uncertainties associated with 3-D subsurface models, Comput. Geosci., 32, 212–221, https://doi.org/10.1016/j.cageo.2005.06.010, 2006.
Tang, X. S., Wang, M. X., and Li, D. Q.: Modeling multivariate cross-correlated geotechnical random fields using vine copulas for slope reliability analysis, Comput. Geotech., 127, 103784, https://
doi.org/10.1016/j.compgeo.2020.103784, 2020.
Thiele, S. T., Jessell, M. W., Lindsay, M., Wellmann, J. F., and Pakyuz-Charrier, E.: The topology of geology 2: Topological uncertainty, J. Struct. Geol., 91, 74–87, https://doi.org/10.1016/
j.jsg.2016.08.010, 2016.
Ulam, S. M.: Monte Carlo calculations in problems of mathematical physics, Modern Mathematics for the Engineers, 261–281, McGraw-Hill, New York, 1961.
Uzielli, M. and Lacasse, S.: Scenario-based probabilistic estimation of direct loss for geohazards, Georisk, 1, 142–154, https://doi.org/10.1080/17499510701636581, 2007.
van den Eijnden, A. P., Schweckendiek, T., and Hicks, M. A.: Metamodelling for geotechnical reliability analysis with noisy and incomplete models, Georisk, 16, 518–535, https://doi.org/10.1080/
17499518.2021.1952611, 2022.
Vanmarcke, E. H.: Random fields: Analysis and synthesis, The MIT Press, Cambridge, MA, 1984.
Vanneste, M., Løvholt, F., Issler, D., Liu, Z., Boylan, N., and Kim, J.: A novel quasi-3D landslide dynamics model: from theory to applications and risk assessment, Paper presented at the Offshore
Technology Conference, 6–9 May, Houston, Texas, OTC-29363-MS, https://doi.org/10.4043/29363-MS, 2019.
Wagener, T., Reinecke, R., and Pianosi, F.: On the evaluation of climate change impact models, Wiley Interdiscip. Rev. Clim. Change, e772, https://doi.org/10.1002/wcc.772, 2022.
Wellmann, J. F. and Regenauer-Lieb, K.: Uncertainties have a meaning: Information entropy as a quality measure for 3-D geological models, Tectonophysics, 526, 207–216, https://doi.org/10.1016/
j.tecto.2011.05.001, 2012.
Woo, G.: Downward counterfactual search for extreme events, Front. Earth Sci., 7, 340, https://doi.org/10.3389/feart.2019.00340, 2019.
Yano, J. I.: What is the Maximum Entropy Principle? Comments on “Statistical theory on the functional form of cloud particle size distributions”, J. Atmos. Sci., 76, 3955–3960, https://doi.org/
10.1175/JAS-D-18-0223.1, 2019.
Zadeh, L. A.: Probability measures of fuzzy events, J. Math. Anal. Appl., 23, 421–427, https://doi.org/10.1016/0022-247X(68)90078-4, 1968.
Zhao, C., Gong, W., Li, T., Juang, C. H., Tang, H., and Wang, H.: Probabilistic characterisation of subsurface stratigraphic configuration with modified random field approach, Eng. Geol, 288, 106138,
https://doi.org/10.1016/j.enggeo.2021.106138, 2021. | {"url":"https://gmd.copernicus.org/articles/16/1601/2023/","timestamp":"2024-11-09T13:08:20Z","content_type":"text/html","content_length":"306099","record_id":"<urn:uuid:09f9bc48-6ead-47b4-9e0c-117d71e62aea>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00419.warc.gz"} |
An R package for Bayesian meta-analysis that accounts for publication bias or p-hacking.
publipha is an R package for doing Bayesian meta-analysis that accounts for publication bias or p-hacking. Its main functions are:
• psma does random effects meta-analysis under publication bias with a one-sided p-value based selection probability. The model is roughly the same as that of Hedges (1992).
• phma does random effects meta-analysis under a certain model of p-hacking with a one-sided p-value based propensity to p-hack. This is based on the forthcoming paper by Moss and De Bin (2019).
• cma does classical random effects meta-analysis with the same priors as psma and phma.
Use the following command from inside R:
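Assuming installation from CRAN:

install.packages("publipha")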
Call the library function and use it like a barebones metafor::rma. The alpha argument tells psma or phma where to place the cutoffs for significance.
# Publication bias model
set.seed(313) # For reproducibility
model_psma = publipha::psma(yi = yi,
vi = vi,
alpha = c(0, 0.025, 0.05, 1),
data = metadat::dat.bangertdrowns2004)
# p-hacking model
model_phma = publipha::phma(yi = yi,
vi = vi,
alpha = c(0, 0.025, 0.05, 1),
data = metadat::dat.bangertdrowns2004)
# Classical model
model_cma = publipha::cma(yi = yi,
vi = vi,
alpha = c(0, 0.025, 0.05, 1),
data = metadat::dat.bangertdrowns2004)
You can calculate the posterior means of the meta-analytic mean with extract_theta0:
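For example (a minimal usage sketch; the call signature beyond the model object is an assumption):

publipha::extract_theta0(model_psma)  # posterior mean under the publication bias model
publipha::extract_theta0(model_phma)  # posterior mean under the p-hacking model
publipha::extract_theta0(model_cma)   # posterior mean under the classical model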
If you wish to plot a histogram of the posterior distribution of tau, the standard deviation of the effect size distribution, you can do it like this:
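A minimal sketch, assuming a hypothetical extract_tau accessor analogous to extract_theta0 that returns the posterior draws (the function name and its arguments are assumptions, not verified against the package):

tau_draws <- publipha::extract_tau(model_psma, fun = identity)  # extract_tau is assumed here
hist(tau_draws, breaks = 30, main = "Posterior distribution of tau", xlab = "tau")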
How to Contribute or Get Help
If you encounter a bug, have a feature request or need some help, open a Github issue. Create a pull requests to contribute. | {"url":"https://cran.mirror.garr.it/CRAN/web/packages/publipha/readme/README.html","timestamp":"2024-11-04T02:51:02Z","content_type":"application/xhtml+xml","content_length":"12354","record_id":"<urn:uuid:164cffd2-393d-45b8-b415-ef1e6b252023>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00317.warc.gz"} |
HashMap in 25 lines of C
This post shows how to implement a simple hash table of arbitrary length, able to store any value C knows about, while staying as minimal as possible. It does, however, not include collision handling. To add that, simply swap the Map.buckets array for an array of linked lists and insert into the linked lists instead of directly into the bucket array (a rough sketch of this appears after the listing below).
#include <assert.h>
#include <stdlib.h>

typedef struct Map { size_t size; size_t cap; void **buckets; } Map;

const size_t BASE = 0x811c9dc5;
const size_t PRIME = 0x01000193;

// FNV-1a hash of a string, masked to a bucket index (cap must be a power of two)
size_t hash(Map *m, char *str) {
    size_t initial = BASE;
    while (*str) {
        initial ^= *str++;
        initial *= PRIME;
    }
    return initial & (m->cap - 1);
}

// allocate the bucket array; note the buckets are not zeroed
Map init(size_t cap) {
    Map m = {0, cap};
    m.buckets = malloc(sizeof(void*) * m.cap);
    assert(m.buckets != NULL);
    return m;
}

// store the value pointer in the bucket its key hashes to
void put(Map *m, char *str, void *value) {
    m->size++;
    m->buckets[hash(m, str)] = value;
}

// look up the value pointer stored for a key
void* get(Map *m, char *str) {
    return m->buckets[hash(m, str)];
}
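A rough sketch of the chaining variant mentioned in the introduction could look like the following; the Node and ChainedMap types and the chained_put/chained_get helpers are illustrative and not part of the 25-line version above:

#include <stdlib.h>
#include <string.h>

// one entry in a bucket's chain
typedef struct Node { char *key; void *value; struct Node *next; } Node;
typedef struct ChainedMap { size_t cap; Node **buckets; } ChainedMap;

// idx would come from the same FNV-1a hash masked with (cap - 1)
void chained_put(ChainedMap *m, size_t idx, char *key, void *value) {
    Node *n = malloc(sizeof(Node));
    n->key = key;
    n->value = value;
    n->next = m->buckets[idx];
    m->buckets[idx] = n;
}

// walk the chain and compare keys to resolve collisions
void* chained_get(ChainedMap *m, size_t idx, char *key) {
    for (Node *n = m->buckets[idx]; n != NULL; n = n->next)
        if (strcmp(n->key, key) == 0) return n->value;
    return NULL;
}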
Hashing
For the hashing I decided to go with FNV-1a, simply because it's easy to implement and very fast. We are only using strings as keys, so we could also have used the Java way of hashing strings. The idea is to start with an initial value (BASE), xor it with the data (the current character), and then multiply the result by the PRIME, repeating this for every character.
The last line of the hash function includes an optimisation for quicker modulus computation: it is equivalent to initial % m->cap but a lot faster. It does, however, only work when cap is a power of two.
Storing and accessing everything
C allows for storage of any value via void*. I abuse this to store only pointers to values, which lets the values be of any kind; the downside is that the user of the map has to allocate, reference, dereference, and cast the values to the expected types.
#include <stdio.h>
#include <stdlib.h>

// ... Map implementation

int main(void) {
    Map m = init(1024);
    double d1 = 25.0;
    double d2 = 50.0;
    put(&m, "key1", (void*)&d1);
    put(&m, "key2", (void*)&d2);

    printf("key1=%f;key2=%f\n", *(double*)get(&m, "key1"), *(double*)get(&m, "key2"));

    free(m.buckets);
    return EXIT_SUCCESS;
}
This showcases the casting and the ref + deref necessary to interact with the hash table and of course the user has to free the memory used for the table buckets. | {"url":"https://xnacly.me/posts/2024/c-hash-map/","timestamp":"2024-11-01T23:49:53Z","content_type":"text/html","content_length":"18454","record_id":"<urn:uuid:a241fc95-2741-42aa-ad2d-038362cb4c64>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00019.warc.gz"} |
The Blockchain
October 04, 2017
So far we have covered:
• Transactions are bundled into blocks
• Blocks are signed by the validators
• There’s a limited block size and block time (rate at which blocks can be created).
The next thing to understand is that blocks are also chained.
In particular, every block points to a single block before it, and a single block after it. This is obviously necessary when you think of it because if the blocks weren’t chained, then you wouldn’t
know which order the transactions occurred in. For example, if one block contains a transaction from A → B, and another contains a transaction from B → C, you can't tell whether a particular transaction was valid unless you know the order in which they happened.
In that sense, a “chain” of blocks represents everything you need to figure out everyone’s balance. That’s why you’ll often hear people referring to the bitcoin blockchain — they’re just talking
about all the transactions that have happened thus far.
Every blockchain starts with a genesis block, which is the first block ever created. In the case of bitcoin, Satoshi created this block, and the only transaction in it was the reward he got for
creating the block (the small validator reward we talked about in the example earlier). After this block, the only person with a balance in the ledger was Satoshi. Other blocks followed, with rewards
being captured by other people and containing transactions from Satoshi to others, and through this mechanism the ledger grew to contain many more balances than just that of Satoshi.
Note: If you’d like to get a deeper understanding of how the blockchain works, check out The Ultimate Guide to the Blockchain. | {"url":"https://www.commonlounge.com/the-blockchain-9a110735913f4cffaed4f8ec261569e6/","timestamp":"2024-11-09T03:03:02Z","content_type":"text/html","content_length":"29949","record_id":"<urn:uuid:cad383de-fef9-443a-8cef-df83c9da499f>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00495.warc.gz"} |
Equity Derivatives Using Closed-Form Solutions
Financial Instruments Toolbox™ supports five types of closed-form solutions and analytical approximations to calculate price and sensitivities (greeks) of vanilla options:
• Black-Scholes model
• Black model
• Roll-Geske-Whaley model
• Bjerksund-Stensland 2002 model
• Barone-Adesi-Whaley model
Black-Scholes Model
The Black-Scholes model is one of the most commonly used models to price European calls and puts. It serves as a basis for many closed-form solutions used for pricing options. The standard
Black-Scholes model is based on the following assumptions:
• There are no dividends paid during the life of the option.
• The option can only be exercised at maturity.
• The markets operate under a Markov process in continuous time.
• No commissions are paid.
• The risk-free interest rate is known and constant.
• Returns on the underlying stocks are log-normally distributed.
The Black-Scholes model implemented in Financial Instruments Toolbox software allows dividends. The following three dividend methods are supported:
• Cash dividend
• Continuous dividend yield
• Constant dividend yield
However, not all Black-Scholes closed-form pricing functions support all three dividend methods. For more information on specifying the dividend methods, see stockspec.
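For illustration (the parameter values below are arbitrary, and the 'constant' call is sketched by analogy with the other two), the three dividend methods can be specified with stockspec along these lines:

Sigma = 0.30;
AssetPrice = 40;

% Cash dividend: dividend amounts and ex-dividend dates
StockSpecCash = stockspec(Sigma, AssetPrice, {'cash'}, 0.50, 'March-01-2008');

% Continuous dividend yield
StockSpecCont = stockspec(Sigma, AssetPrice, {'continuous'}, 0.04);

% Constant dividend yield paid on the ex-dividend dates
StockSpecConst = stockspec(Sigma, AssetPrice, {'constant'}, 0.04, 'March-01-2008');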
Closed-form solutions based on a Black-Scholes model support the following tasks.
• optstockbybls: Price European options with different dividends using the Black-Scholes option pricing model.
• optstocksensbybls: Calculate European option prices and sensitivities using the Black-Scholes option pricing model.
• impvbybls: Calculate implied volatility on European options using the Black-Scholes option pricing model.
• chooserbybls: Price European simple chooser options using the Black-Scholes model.
For an example using the Black-Scholes model, see Pricing Using the Black-Scholes Model.
Black Model
Use the Black model for pricing European options on physical commodities, forwards or futures. The Black model supported by Financial Instruments Toolbox software is a special case of the
Black-Scholes model. The Black model uses a forward price as an underlier in place of a spot price. The assumption is that the forward price at maturity of the option is log-normally distributed.
Closed-form solutions for a Black model support the following tasks.
• optstockbyblk: Price European options on futures using the Black option pricing model.
• optstocksensbyblk: Calculate European option prices and sensitivities on futures using the Black option pricing model.
• impvbyblk: Calculate implied volatility for European options using the Black option pricing model.
For an example using the Black model, see Pricing Using the Black Model.
Roll-Geske-Whaley Model
Use the Roll-Geske-Whaley approximation method to price American call options paying a single cash dividend. This model is based on the modification of the observed stock price for the present value
of the dividend and also supports a compound option to account for the possibility of early exercise. The Roll-Geske-Whaley model has drawbacks due to an escrowed dividend price approach which may
lead to arbitrage. For further explanation, see Options, Futures, and Other Derivatives by John Hull.
Closed-form solutions for a Roll-Geske-Whaley model support the following tasks.
• optstockbyrgw: Price American call options with a single cash dividend using the Roll-Geske-Whaley option pricing model.
• optstocksensbyrgw: Calculate American call prices and sensitivities using the Roll-Geske-Whaley option pricing model.
• impvbyrgw: Calculate implied volatility for American call options using the Roll-Geske-Whaley option pricing model.
For an example using the Roll-Geske-Whaley model, see Pricing Using the Roll-Geske-Whaley Model.
Bjerksund-Stensland 2002 Model
Use the Bjerksund-Stensland 2002 model for pricing American puts and calls with continuous dividend yield. This model works by dividing the time to maturity of the option in two separate parts, each
with its own flat exercise boundary (trigger price). The Bjerksund-Stensland 2002 method is a generalization of the Bjerksund and Stensland 1993 method and is considered to be computationally
efficient. For further explanation, see Closed Form Valuation of American Options by Bjerksund and Stensland.
Closed-form solutions for a Bjerksund-Stensland 2002 model support the following tasks.
• optstockbybjs: Price American options with continuous dividend yield using the Bjerksund-Stensland 2002 option pricing model.
• optstocksensbybjs: Calculate American option prices and sensitivities using the Bjerksund-Stensland 2002 option pricing model.
• impvbybjs: Calculate implied volatility for American options using the Bjerksund-Stensland 2002 option pricing model.
For an example using the Bjerksund-Stensland 2002 model, see Pricing Using the Bjerksund-Stensland Model.
Barone-Adesi-Whaley Model
The Barone-Adesi-Whaley model is used for pricing American vanilla options. Closed-form solutions for a Barone-Adesi-Whaley model support the following tasks.
• optstockbybaw: Calculate the prices of American call and put options using the Barone-Adesi-Whaley approximation model.
• optstocksensbybaw: Calculate the prices and sensitivities of American call and put options using the Barone-Adesi-Whaley approximation model.
• impvbybaw: Calculate the implied volatility for American options using the Barone-Adesi-Whaley model.
For an example using the Barone-Adesi-Whaley model, see Compute American Option Prices Using the Barone-Adesi and Whaley Option Pricing Model.
Pricing Using the Black-Scholes Model
Consider a European stock option with an exercise price of $40 on January 1, 2008 that expires on July 1, 2008. Assume that the underlying stock pays dividends of $0.50 on March 1 and June 1. The
stock is trading at $40 and has a volatility of 30% per annum. The risk-free rate is 4% per annum. Using this data, calculate the price of a call and a put option on the stock using the Black-Scholes
option pricing model:
Strike = 40;
AssetPrice = 40;
Sigma = .3;
Rates = 0.04;
Settle = 'Jan-01-08';
Maturity = 'Jul-01-08';
Div1 = 'March-01-2008';
Div2 = 'Jun-01-2008';
Create RateSpec and StockSpec:
RateSpec = intenvset('ValuationDate', Settle, 'StartDates', Settle, 'EndDates',...
Maturity, 'Rates', Rates, 'Compounding', -1);
StockSpec = stockspec(Sigma, AssetPrice, {'cash'}, 0.50,{Div1,Div2});
Define two options, one call and one put:
OptSpec = {'call'; 'put'};
Calculate the price of the European options:
Price = optstockbybls(RateSpec, StockSpec, Settle, Maturity, OptSpec, Strike)
The first element of the Price vector represents the price of the call ($3.21); the second is the price of the put ($3.40). Use the function optstocksensbybls to compute six sensitivities for the
Black-Scholes model: delta, gamma, vega, lambda, rho, and theta and the price of the option.
The selection of output parameters and their order is determined by the optional input parameter OutSpec. This parameter is a cell array of character vectors, each one specifying a desired output
parameter. The order in which these output parameters are returned by the function is the same as the order of the character vectors contained in OutSpec.
As an example, consider the same options as the previous example. To calculate their Delta, Rho, Price, and Gamma, build the cell array OutSpec as follows:
OutSpec = {'delta', 'rho', 'price', 'gamma'};
[Delta, Rho, Price, Gamma] = optstocksensbybls(RateSpec, StockSpec, Settle,...
Maturity, OptSpec, Strike, 'OutSpec', OutSpec)
Delta =
Rho =
Price =
Gamma =
Pricing Using the Black Model
Consider two European call options on a futures contract with exercise prices of $20 and $25 that expire on September 1, 2008. Assume that on May 1, 2008 the contract is trading at $20 and has a
volatility of 35% per annum. The risk-free rate is 4% per annum. Using this data, calculate the price of the call futures options using the Black model:
Strike = [20; 25];
AssetPrice = 20;
Sigma = .35;
Rates = 0.04;
Settle = 'May-01-08';
Maturity = 'Sep-01-08';
Create RateSpec and StockSpec:
RateSpec = intenvset('ValuationDate', Settle, 'StartDates', Settle,...
'EndDates', Maturity, 'Rates', Rates, 'Compounding', -1);
StockSpec = stockspec(Sigma, AssetPrice);
Define the call option:
Calculate price and all sensitivities of the European futures options:
OutSpec = {'All'}
[Delta, Gamma, Vega, Lambda, Rho, Theta, Price] = optstocksensbyblk(RateSpec,...
StockSpec, Settle, Maturity, OptSpec, Strike, 'OutSpec', OutSpec);
The first element of the Price vector represents the price of the call with an exercise price of $20 ($1.59); the second is the price of the call with an exercise price of $25 ($0.30).
The function impvbyblk is used to compute the implied volatility using the Black option pricing model. Assuming that the previous European call futures are trading at $1.5903 and $0.3037, you can
calculate their implied volatility:
Volatility = impvbyblk(RateSpec, StockSpec, Settle, Maturity,...
OptSpec, Strike, Price);
As expected, you get volatilities of 35%. If the call futures were trading at $1.50 and $0.50 in the market, the implied volatility would be 33% and 42%:
Volatility = impvbyblk(RateSpec, StockSpec, Settle, Maturity,...
OptSpec, Strike, [1.50;0.5])
Volatility =
Pricing Using the Roll-Geske-Whaley Model
Consider two American call options, with exercise prices of $110 and $100 on June 1, 2008, that expire on June 1, 2009. Assume that the underlying stock pays dividends of $0.001 on December 1, 2008.
The stock is trading at $80 and has a volatility of 20% per annum. The risk-free rate is 6% per annum. Using this data, calculate the price of the American calls using the Roll-Geske-Whaley option
pricing model:
AssetPrice = 80;
Settle = 'Jun-01-2008';
Maturity = 'Jun-01-2009';
Strike = [110; 100];
Rate = 0.06;
Sigma = 0.2;
DivAmount = 0.001;
DivDate = 'Dec-01-2008';
Create RateSpec and StockSpec:
StockSpec = stockspec(Sigma, AssetPrice, {'cash'}, DivAmount, DivDate);
RateSpec = intenvset('ValuationDate', Settle, 'StartDates', Settle,...
'EndDates', Maturity, 'Rates', Rate, 'Compounding', -1);
Calculate the call prices:
Price = optstockbyrgw(RateSpec, StockSpec, Settle, Maturity, Strike)
The first element of the Price vector represents the price of the call with an exercise price of $110 ($0.84); the second is the price of the call with an exercise price of $100 ($2.02).
Pricing Using the Bjerksund-Stensland Model
Consider four American stock options (two calls and two puts) with an exercise price of $100 that expire on July 1, 2008. Assume that the underlying stock pays a continuous dividend yield of 4% as of
January 1, 2008. The stock has a volatility of 20% per annum and the risk-free rate is 8% per annum. Using this data, calculate the price of the American calls and puts assuming the following current
prices of the stock: $80, $90 (for the calls) and $100 and $110 (for the puts):
Settle = 'Jan-1-2008';
Maturity = 'Jul-1-2008';
Strike = 100;
AssetPrice = [80; 90; 100; 110];
DivYield = 0.04;
Rate = 0.08;
Sigma = 0.20;
Create RateSpec and StockSpec:
StockSpec = stockspec(Sigma, AssetPrice, {'continuous'}, DivYield);
RateSpec = intenvset('ValuationDate', Settle, 'StartDates', Settle,...
'EndDates', Maturity, 'Rates', Rate, 'Compounding', -1);
Define the option type:
OptSpec = {'call'; 'call'; 'put'; 'put'};
Compute the option prices:
Price = optstockbybjs(RateSpec, StockSpec, Settle, Maturity, OptSpec, Strike)
Price =
The first two elements of the Price vector represent the price of the calls ($0.41 and $2.18), the last two elements represent the price of the put options ($4.72 and $1.72). Use the function
optstocksensbybjs to compute six sensitivities for the Bjerksund-Stensland model: delta, gamma, vega, lambda, rho, and theta and the price of the option. The selection of output parameters and their
order is determined by the optional input parameter OutSpec. This parameter is a cell array of character vectors, each one specifying a desired output parameter. The order in which these output
parameters are returned by the function is the same as the order of the character vectors contained in OutSpec. As an example, consider the same options as the previous example. To calculate their
delta, gamma, and price, build the cell array OutSpec as follows:
OutSpec = {'delta', 'gamma', 'price'};
The outputs of optstocksensbybjs are in the same order as in OutSpec.
[Delta, Gamma, Price] = optstocksensbybjs(RateSpec, StockSpec, Settle,...
Maturity, OptSpec, Strike, 'OutSpec', OutSpec)
Delta =
Gamma =
Price =
For more information on the Bjerksund-Stensland model, see Closed-Form Solutions Modeling.
Compute American Option Prices Using the Barone-Adesi and Whaley Option Pricing Model
Consider an American call option with an exercise price of $120. The option expires on Jan 1, 2018. The stock has a volatility of 14% per annum, and the annualized continuously compounded risk-free
rate is 4% per annum as of Jan 1, 2016. Using this data, calculate the price of the American call, assuming the stock trades at $125 and pays a continuous dividend yield of 2%.
StartDate = datetime(2016,1,1);
EndDate = datetime(2018,1,1);
Basis = 1;
Compounding = -1;
Rates = 0.04;
Define the RateSpec.
RateSpec = intenvset('ValuationDate',StartDate,'StartDate',StartDate,'EndDate',EndDate, ...
'Rates',Rates,'Compounding',Compounding,'Basis',Basis)
RateSpec = struct with fields:
FinObj: 'RateSpec'
Compounding: -1
Disc: 0.9231
Rates: 0.0400
EndTimes: 2
StartTimes: 0
EndDates: 737061
StartDates: 736330
ValuationDate: 736330
Basis: 1
EndMonthRule: 1
Define the StockSpec.
Dividend = 0.02;
AssetPrice = 125;
Volatility = 0.14;
StockSpec = stockspec(Volatility,AssetPrice,'Continuous',Dividend)
StockSpec = struct with fields:
FinObj: 'StockSpec'
Sigma: 0.1400
AssetPrice: 125
DividendType: {'continuous'}
DividendAmounts: 0.0200
ExDividendDates: []
Define the American option.
OptSpec = 'call';
Strike = 120;
Settle = datetime(2016,1,1);
Maturity = datetime(2018,1,1);
Compute the price for the American option.
Price = optstockbybaw(RateSpec,StockSpec,Settle,Maturity,OptSpec,Strike)
See Also
assetbybls | assetsensbybls | cashbybls | cashsensbybls | chooserbybls | gapbybls | gapsensbybls | impvbybls | optstockbybls | optstocksensbybls | supersharebybls | supersharesensbybls | impvbyblk |
optstockbyblk | optstocksensbyblk | impvbyrgw | optstockbyrgw | optstocksensbyrgw | impvbybjs | optstockbybjs | optstocksensbybjs | spreadbybjs | spreadsensbybjs | basketbyju | basketsensbyju |
basketstockspec | maxassetbystulz | maxassetsensbystulz | minassetbystulz | minassetsensbystulz | spreadbykirk | spreadsensbykirk | asianbykv | asiansensbykv | asianbylevy | asiansensbylevy |
lookbackbycvgsg | lookbacksensbycvgsg | basketbyls | basketsensbyls | basketstockspec | asianbyls | asiansensbyls | lookbackbyls | lookbacksensbyls | spreadbyls | spreadsensbyls | optstockbyls |
optstocksensbyls | optpricebysim | optstockbybaw | optstocksensbybaw
Are Mathematical Economists Just an Intimidated Bunch Who Have an Inferiority Complex?
There is no place for math in the science of economics, in the manner it is used in the physical sciences, since there are no numerical constants in the science of economics. Economics, thus, is a
different science from that of the physical sciences. We know, for example, that water freezes at 32 degrees. No such numerical constants exist in the field of human action. But that doesn't stop many economists from trying to force mathematical equations into the science in ways that are incorrect, or that at best add a level of complexity that is not necessary.
Friedrich Hayek, in his important book The Counter-Revolution of Science, argued that social scientists who employed empirical mathematical techniques suffered from an inferiority complex by attempting to mimic the physical sciences, when the social sciences are of a different nature.
In a new paper, Kimmo Eriksson reports on a fascinating test he conducted using nonsense mathematical equations that apparently impressed everyone except hardcore mathematicians and scientists who use hardcore math. Note in the bar chart in Eriksson's paper that social scientists fall into the group that rated the paper with the nonsense math positively. In the abstract to his paper he writes (my highlight):
In those disciplines where most researchers do not master mathematics, the use of mathematics may be held in too much awe. To demonstrate this I conducted an online experiment with 200
participants, all of which had experience of reading research reports and a postgraduate degree (in any subject). Participants were presented with the abstracts from two published papers (one in
evolutionary anthropology and one in sociology). Based on these abstracts, participants were asked to judge the quality of the research. Either one or the other of the two abstracts was
manipulated through the inclusion of an extra sentence taken from a completely unrelated paper and presenting an equation that made no sense in the context. The abstract that included the
meaningless mathematics tended to be judged of higher quality. However, this "nonsense math effect" was not found among participants with degrees in mathematics, science, technology[...]
In his conclusion, Eriksson writes:
Specifically, the experimental results suggest a bias for nonsense math in judgments of quality of research. Further, this bias was only found among people with degrees from areas outside
mathematics, science and technology[...]It may be that[...]mathematics [is] held in undeserved awe among nonexperts. It may also be that people always tend to become impressed by what they do not
understand, irrespective of what field it represents—much in line with the "Guru effect" discussed by Sperber (2010).
Bottom line: Don't get caught up in mathematical mumbo jumbo, always understand the logic of an argument. Just because an equation is thrown in doesn't mean an argument is any stronger. Most of the
math employed in journal articles, and elsewhere, by economists is of the worthless variety. If you see a mathematical equation as part of an economic argument, your guard should go up.
4 comments:
1. In 40 years, I've never met anyone not already an Austrian who understood Austrian economics. No one was ever impressed by what they did not and refused to understand, even when I carried around
a big fat book by Von Mises.
2. Totally agree, and Bob Murphy's experience, relayed in your interview, with some of the wizened econ professors' admissions is telling.
However I would nitpick on two things: differentiating between micro and macro; and that logic is still part of mathematics, like for example ordinal relationships of utility (instead of cardinal
calculations of utility used in non-Austrian econ). It may not look like the mathematics most people think of, and indeed it's almost never ever used in the physical sciences, but you'll
encounter it in discrete/foundational/formal meta mathematics, in computer science theory, and shared by a branch of philosophy.
Technically, the math in macro econ is correct, it's just that the model is wrong and will never be right given the realities of human action.
3. "Refused to understand" is the perfect choice of wording. You can explain AuEcon in 5 minutes, and show the axiomatic truth of it and how it comes from the simple "Man Acts" principle, but most
people are too tied up in their POV to accept it.
4. It's smoke and mirrors to hide their appalling ignorance of basic questions like asset bubbles, inflation, etc that most mainstream econ guys have zero grasp of but can always write out long
complex bullshit equations to explain situations.
Largest Element in a stream
In this article, we have explored the problem of finding the largest element in a stream of numbers. We have presented three approaches; in the efficient approach, each new element is handled in O(1) time and O(1) space.
Table of content:
1. Problem Statement
2. Approach 1: Sorting
3. Approach 2: Insertion Sort
4. Approach 3: Efficient method [O(1) time]
In summary:
Approach Time Complexity Space Complexity
Sorting O(N^2 logN) O(N)
Insertion Sort O(N^2) O(N)
Efficient Approach O(N) O(1)
Problem Statement
Given a stream of numbers, we have to find the largest number whenever a new element is added from the stream.
A stream is a sequence of numbers where at time t(i), a number A(i) is added. At time J, the elements added include:
A(0), A(1), ..., A(J-1), A(J)
At every time J, we have to find the largest element in the set of all numbers added.
There are three metrics:
• Time Complexity for all operations
• Time Complexity for a single operation
• Space Complexity
We will tackle this problem with three approaches:
• Approach 1: Sorting
• Approach 2: Insertion Sort
• Approach 3: Efficient method [O(1) time]
Approach 1: Sorting
The steps to find the largest element using this approach are:
1. Maintain a set of Numbers S
2. For each time stamp t,
2.1. add the new element E to the set S
2.2. Sort the set using an efficient sorting algorithm
2.3. Get the largest element
S = {} // Empty set
for(i=0; i<N; i++)
int new_element = get_number();
// Add new_element to set S
// Sort in increasing order
// This step takes O(I logI) time
// where I is the number of elements in S
int answer = S[i]; // after i+1 insertions, the largest element is at the end of the sorted set
print answer;
• Time Complexity: O(N logN) for each time stamp
• Time Complexity: O(N^2 logN) for all N elements
• Space Complexity: O(N)
Approach 2: Insertion Sort
The steps to find the largest element using this approach are:
1. Maintain a set of Numbers S
2. For each time stamp t,
2.1. add the new element E to the set S
2.2. Sort the set using Insertion Sort
2.3. Get the largest element
The difference in this approach is that we use Insertion Sort instead of a general sorting algorithm. This improves the time complexity because the array is almost sorted: only the newly added element is out of order, while the remaining elements are already sorted.
So, in this case, Insertion Sort can move the new element into its correct position in linear time, O(N). A small C++ sketch of this insertion step follows the complexity summary below.
S = {} // Empty set
for(i=0; i<N; i++)
int new_element = get_number();
// Insert new_element into the already sorted set S
// using a single pass of Insertion Sort
// This step takes O(I) time
// where I is the number of elements in S
int answer = S[i]; // after i+1 insertions, the largest element is at the end
print answer;
• Time Complexity: O(N) for each time stamp
• Time Complexity: O(N^2) for all N elements
• Space Complexity: O(N)
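To make the single-pass insertion concrete, here is a minimal C++ sketch of this approach. It assumes the stream arrives on standard input; the variable names and the use of std::vector are illustrative choices, not part of the pseudocode above.

#include <iostream>
#include <utility>
#include <vector>

int main() {
    std::vector<int> s;                 // kept sorted in increasing order
    int new_element;
    while (std::cin >> new_element) {   // one time stamp per value read
        // Single insertion-sort pass: append, then shift the new value
        // left until it reaches its correct slot. This is O(I) work.
        s.push_back(new_element);
        for (std::size_t j = s.size() - 1; j > 0 && s[j - 1] > s[j]; --j) {
            std::swap(s[j - 1], s[j]);
        }
        std::cout << s.back() << '\n';  // largest element seen so far
    }
    return 0;
}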
Approach 3: Efficient method [O(1) time]
The steps to find the largest element using this approach are:
1. Maintain a variable LARGEST = NEGATIVE_INFINITY
2. For each time stamp t,
2.1. Let the new element be E
2.2. If E > LARGEST, then LARGEST = E
2.3. Report the answer as LARGEST
int LARGEST = NEGATIVE_INFINITY
for(i=0; i<N; i++)
int new_element = get_number();
// No set is needed, so the extra space stays O(1)
if new_element > LARGEST
LARGEST = new_element
int answer = LARGEST;
print answer;
• Time Complexity: O(1) for each time stamp
• Time Complexity: O(N) for all N elements
• Space Complexity: O(1)
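For completeness, a minimal C++ version of the efficient approach looks like this (again assuming the stream arrives on standard input; using long long as the element type is an illustrative assumption):

#include <iostream>
#include <limits>

int main() {
    long long largest = std::numeric_limits<long long>::min(); // plays the role of NEGATIVE_INFINITY
    long long new_element;
    while (std::cin >> new_element) {    // one time stamp per value read
        if (new_element > largest) {
            largest = new_element;
        }
        std::cout << largest << '\n';    // O(1) time and O(1) extra space per element
    }
    return 0;
}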
In summary:
Approach Time Complexity Space Complexity
Sorting O(N^2 logN) O(N)
Insertion Sort O(N^2) O(N)
Efficient Approach O(N) O(1)
With this article at OpenGenus, you have the complete idea of finding the largest element in a stream efficiently. Enjoy.
One of those rather mindbending things I sometimes like to contemplate is Hyperspace. And beyond.
As in, what our universe really is. As opposed to the one we think we experience.
Suffice to say, the likelihood is, our perception is- flat.
The universe doesn't actually fit together the way we think it does.
We experience three spatial dimensions, whereas in fact there are- according to string theory- eleven.
And I'm sure you're confused by this. And no, we don't understand them. But it seems pretty clear that the difference between what we see and what is actually the case, probably explains most of what
we don't yet understand. In other words we are perceiving eleven dimensional reality through a flat three dimensional perception.
Why do we only perceive three dimensions? My guess is because there really is no advantage in us doing so. Perhaps if we had three eyes, we would see space in four dimensions, I don't know. But
clearly there is no advantage in us doing so. Clearly whatever four dimensional reality looks like, there are no practical advantages to be gained by lifeforms having a third eye. Whereas the
difference between seeing in three dimensions and seeing in two is quite marked.
Or perhaps it wouldn't make a difference. Either way, our senses tell us there are three spatial dimensions. In fact, our senses fail to spot eight.
And because the space we understand is understood in three dimensional terms we are grasping at the multitude of particles that apparently exist in our three dimensional universe, struggling to
understand what is the underlying difference between them. And ultimately, it can only boil down to one thing. The true structure of space that we don't yet understand. There is almost certainly only
one particle in 11 dimensional space, it is merely when translated to our flat perception of three dimensional space that we have photons and quarks and neutrinos, surely.
There are two obvious analogies to use here. The first is that of radiation. Not all animals see the same radiation. In fact, many mammals are colour blind, or can't see all colours. There are parts
of the light spectrum they can't see. Whereas snakes can see radiation that we can't see; infra red for instance. The colour spectrum we see is only a tiny part of the radiation spectrum. We see the
bit it is useful for us to see. Our eyes have evolved to pick up the bits that might be useful to us. Everything between red and blue. Seeing anything outside that would be useless to us in many
ways. So we don't waste energy picking it up.
I suppose the only logical conclusion one can draw from this is that to all intents and purposes, matter is three dimensional. That whilst space might well be multidimensional, and indeed matter is
in reality eleven dimensional, the eight dimensions we do not see make no difference at our scale. Some scientists suggest they all of them only have effect on the sub atomic level, but I'm not sure
why they believe this. It could be that our familiar dimensions are the ones bang in the middle. That there are four which only effect the universe at sub atomic levels and four which only effect it
above the galactic level.
When of course we say effect the universe at these levels, what we actually mean is that it is only at these levels that you can spot the difference between a universe of three dimensions and one of
eleven dimensions. In theory. Because of course, it is all just theory.
The second analogy I'm going to use, is to demonstrate exactly the mindbending wierdness this might entail and why geometry as we understand it completely breaks down at the macro level (with facts
like in the real universe any straight line eventually goes back to its starting point).
Imagine you live in a comic book. You're a comic book Egyptian. And you live by a comic book pyramid. You see the comic book pyramid every day. You see it in comic book form. Flat.
To you, it has three points. Joined by three edges. And it has one surface. You are, of course, unaware of the concept of solids. Surfaces is about as far as it goes for you. You live in a two
dimensional world.
If you lived in a one dimensional world, even surfaces would be beyond you. Your pyramid would now be down to just two points, joined by a line. And lines really would be the limit.
You are unaware that a three dimensional world exists where pyramids have a FOURTH point. And not three, but six edges. And that one surface isn't the finality; there are four surfaces, which
together make up one solid, the complete three dimensional pyramid.
You are unaware, because your comic book existence doesn't need you to comprehend three dimensional reality.
Well, let's say our existence can be seen in the same light. Let us hypothesise a fourth spatial dimension.
In this a fifth point is added to the pyramid; this four dimensional hypersolid has ten edges and ten surfaces. It also has five adjacent solids, adjacent to each other in the way the four surfaces are in the three dimensional pyramid.
The point is that where you, in your comic book existence, see one surface, there are three others you can't see. Where you see three edges, there are three more you can't see. So in fact, if space is three dimensional and you only see in two dimensions, your understanding of the two you can see is limited also.
Because if space is four dimensional, when we look at a hypersolid, we would see it as a regular pyramid, just as comic book you sees the pyramid as a triangle. We'd only see one solid and miss the four adjacent solids. We'd see four surfaces and miss the other six.
And we've only advanced here to thinking in four dimensions.
Think of how much we can't see, if space is eleven dimensional.
Now don't go looking at your desk now and wondering if there really are surfaces you can't see. In the sense it actually matters, clearly not.
What it means, quite simply is the space it's actually in is more complex than you think. It's more than just up or down, backwards or forwards, left or right. You can't conceive of what that means
and nor to be honest can I.
I don't think right now that many people can.
But at some point we will almost certainly learn how to do something that we are only able to do BECAUSE space is eleven dimensional and not three dimensional.
One day it won't seem mindbending.
Any more than the size of the sun does to us now.
4 comments:
as they say, the universe is not only strange, but perhaps far stranger than we can imagine, or ever hope to understand.
My universe has 12 dimensions, because I refuse to be a conformist.
"Why do we only perceive three dimensions? My guess is because there really is no advantage in us doing so"
Why do you figure that Crushed?
Isn't time a dimension?
Wouldn't it be useful to be able to see where a truck was going to be occupying space and not be there?
Or to be able to step out of/through a collapsing building?
Winter time? no apples in the tree to eat, well try last fall...
Hunting? Just step out with a big rock at the exact moment dinner was underneath and whack it...
Need I say more?
(ooh I love Prodigy)
I love to think about these things too. One of my favourite writers to read...difficult to find translations but Giordano Bruno believed we would find alternate universe...in the 15th century!
Check out Bruno when you get a chance...you will dig!
Safe Haskell None
Language Haskell2010
Boxed vectors
data Vector a #
Boxed vectors, supporting efficient slicing.
Monad Vector
Defined in Data.Vector
Functor Vector
Defined in Data.Vector
MonadFail Vector Since: vector-0.12.1.0
Defined in Data.Vector
Applicative Vector
Defined in Data.Vector
Foldable Vector
Defined in Data.Vector
Traversable Vector
Defined in Data.Vector
Eq1 Vector
Defined in Data.Vector
Ord1 Vector
Defined in Data.Vector
Read1 Vector
Defined in Data.Vector
Show1 Vector
Defined in Data.Vector
MonadZip Vector
Defined in Data.Vector
Alternative Vector
Defined in Data.Vector
MonadPlus Vector
Defined in Data.Vector
NFData1 Vector Since: vector-0.12.1.0
Defined in Data.Vector
Vector Vector a
Defined in Data.Vector
IsList (Vector a)
Defined in Data.Vector
type Item (Vector a) #
Eq a => Eq (Vector a)
Defined in Data.Vector
Data a => Data (Vector a)
Defined in Data.Vector
Ord a => Ord (Vector a)
Defined in Data.Vector
Read a => Read (Vector a)
Defined in Data.Vector
Show a => Show (Vector a)
Defined in Data.Vector
Semigroup (Vector a)
Defined in Data.Vector
Monoid (Vector a)
Defined in Data.Vector
NFData a => NFData (Vector a)
Defined in Data.Vector
rnf :: Vector a -> () #
type Mutable Vector
Defined in Data.Vector
type Item (Vector a)
Defined in Data.Vector
type Item (Vector a) = a
data MVector s a #
Mutable boxed vectors keyed on the monad they live in (IO or ST s).
MVector MVector a
Defined in Data.Vector.Mutable
Length information
Extracting subvectors
slice #
:: Int i starting index
-> Int n length
-> Vector a
-> Vector a
O(1) Yield a slice of the vector without copying it. The vector must contain at least i+n elements.
take :: Int -> Vector a -> Vector a #
O(1) Yield at the first n elements without copying. The vector may contain less than n elements in which case it is returned unchanged.
drop :: Int -> Vector a -> Vector a #
O(1) Yield all but the first n elements without copying. The vector may contain less than n elements in which case an empty vector is returned.
splitAt :: Int -> Vector a -> (Vector a, Vector a) #
O(1) Yield the first n elements paired with the remainder without copying.
Note that splitAt n v is equivalent to (take n v, drop n v) but slightly more efficient.
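As a small illustration (a sketch assuming the functions are imported qualified as V, e.g. import qualified Data.Vector as V; demoSlices is a made-up name), the slicing functions behave like their list counterparts but without copying the underlying elements:

import qualified Data.Vector as V

demoSlices :: (V.Vector Int, V.Vector Int, V.Vector Int, (V.Vector Int, V.Vector Int))
demoSlices = ( V.slice 1 3 v   -- <2,3,4>
             , V.take 2 v      -- <1,2>
             , V.drop 3 v      -- <4,5>
             , V.splitAt 2 v   -- (<1,2>, <3,4,5>)
             )
  where
    v = V.fromList [1,2,3,4,5]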
replicate :: Int -> a -> Vector a #
O(n) Vector of the given length with the same value in each position
generate :: Int -> (Int -> a) -> Vector a #
O(n) Construct a vector of the given length by applying the function to each index
iterateN :: Int -> (a -> a) -> a -> Vector a #
O(n) Apply function n times to value. Zeroth element is original value.
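For example (same qualified V import as in the sketch above; demoInit is an illustrative name):

import qualified Data.Vector as V

demoInit :: (V.Vector Char, V.Vector Int, V.Vector Int)
demoInit = ( V.replicate 3 'x'          -- <'x','x','x'>
           , V.generate 5 (\i -> i * i) -- <0,1,4,9,16>
           , V.iterateN 4 (*2) 1        -- <1,2,4,8>, zeroth element is the seed
           )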
Monadic initialisation
replicateM :: Monad m => Int -> m a -> m (Vector a) #
O(n) Execute the monadic action the given number of times and store the results in a vector.
generateM :: Monad m => Int -> (Int -> m a) -> m (Vector a) #
O(n) Construct a vector of the given length by applying the monadic action to each index
iterateNM :: Monad m => Int -> (a -> m a) -> a -> m (Vector a) #
O(n) Apply monadic function n times to value. Zeroth element is original value.
create :: (forall s. ST s (MVector s a)) -> Vector a #
Execute the monadic action and freeze the resulting vector.
create (do { v <- new 2; write v 0 'a'; write v 1 'b'; return v }) = <a,b>
unfoldr :: (b -> Maybe (a, b)) -> b -> Vector a #
O(n) Construct a vector by repeatedly applying the generator function to a seed. The generator function yields Just the next element and the new seed or Nothing if there are no more elements.
unfoldr (\n -> if n == 0 then Nothing else Just (n,n-1)) 10
= <10,9,8,7,6,5,4,3,2,1>
unfoldrN :: Int -> (b -> Maybe (a, b)) -> b -> Vector a #
O(n) Construct a vector with at most n elements by repeatedly applying the generator function to a seed. The generator function yields Just the next element and the new seed or Nothing if there are
no more elements.
unfoldrN 3 (\n -> Just (n,n-1)) 10 = <10,9,8>
unfoldrM :: Monad m => (b -> m (Maybe (a, b))) -> b -> m (Vector a) #
O(n) Construct a vector by repeatedly applying the monadic generator function to a seed. The generator function yields Just the next element and the new seed or Nothing if there are no more elements.
unfoldrNM :: Monad m => Int -> (b -> m (Maybe (a, b))) -> b -> m (Vector a) #
O(n) Construct a vector by repeatedly applying the monadic generator function to a seed. The generator function yields Just the next element and the new seed or Nothing if there are no more elements.
constructN :: Int -> (Vector a -> a) -> Vector a #
O(n) Construct a vector with n elements by repeatedly applying the generator function to the already constructed part of the vector.
constructN 3 f = let a = f <> ; b = f <a> ; c = f <a,b> in <a,b,c>
constructrN :: Int -> (Vector a -> a) -> Vector a #
O(n) Construct a vector with n elements from right to left by repeatedly applying the generator function to the already constructed part of the vector.
constructrN 3 f = let a = f <> ; b = f <a> ; c = f <b,a> in <c,b,a>
enumFromN :: Num a => a -> Int -> Vector a #
O(n) Yield a vector of the given length containing the values x, x+1 etc. This operation is usually more efficient than enumFromTo.
enumFromN 5 3 = <5,6,7>
enumFromStepN :: Num a => a -> a -> Int -> Vector a #
O(n) Yield a vector of the given length containing the values x, x+y, x+y+y etc. This operations is usually more efficient than enumFromThenTo.
enumFromStepN 1 0.1 5 = <1,1.1,1.2,1.3,1.4>
enumFromTo :: Enum a => a -> a -> Vector a #
O(n) Enumerate values from x to y.
WARNING: This operation can be very inefficient. If at all possible, use enumFromN instead.
enumFromThenTo :: Enum a => a -> a -> a -> Vector a #
O(n) Enumerate values from x to y with a specific step z.
WARNING: This operation can be very inefficient. If at all possible, use enumFromStepN instead.
Restricting memory usage
force :: Vector a -> Vector a #
O(n) Yield the argument but force it not to retain any extra memory, possibly by copying it.
This is especially useful when dealing with slices. For example:
force (slice 0 2 <huge vector>)
Here, the slice retains a reference to the huge vector. Forcing it creates a copy of just the elements that belong to the slice and allows the huge vector to be garbage collected.
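A hypothetical sketch of that situation (qualified V import as before; smallWindow is an illustrative name):

import qualified Data.Vector as V

-- Without force, the two retained elements would keep the whole
-- million-element vector alive; force copies out just the slice.
smallWindow :: V.Vector Int
smallWindow = V.force (V.slice 0 2 (V.generate 1000000 id))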
Modifying vectors
Safe destructive update
modify :: (forall s. MVector s a -> ST s ()) -> Vector a -> Vector a #
Apply a destructive operation to a vector. The operation will be performed in place if it is safe to do so and will modify a copy of the vector otherwise.
modify (\v -> write v 0 'x') (replicate 3 'a') = <'x','a','a'>
Elementwise operations
imap :: (Int -> a -> b) -> Vector a -> Vector b #
O(n) Apply a function to every element of a vector and its index
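For example (qualified V import as before; weighted is an illustrative name):

import qualified Data.Vector as V

-- Each element is combined with its own index:
weighted :: V.Vector Int
weighted = V.imap (\i x -> i * x) (V.fromList [10,20,30])  -- <0,20,60>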
Monadic mapping
mapM :: Monad m => (a -> m b) -> Vector a -> m (Vector b) #
O(n) Apply the monadic action to all elements of the vector, yielding a vector of results
imapM :: Monad m => (Int -> a -> m b) -> Vector a -> m (Vector b) #
O(n) Apply the monadic action to every element of a vector and its index, yielding a vector of results
mapM_ :: Monad m => (a -> m b) -> Vector a -> m () #
O(n) Apply the monadic action to all elements of a vector and ignore the results
imapM_ :: Monad m => (Int -> a -> m b) -> Vector a -> m () #
O(n) Apply the monadic action to every element of a vector and its index, ignoring the results
forM :: Monad m => Vector a -> (a -> m b) -> m (Vector b) #
O(n) Apply the monadic action to all elements of the vector, yielding a vector of results. Equivalent to flip mapM.
forM_ :: Monad m => Vector a -> (a -> m b) -> m () #
O(n) Apply the monadic action to all elements of a vector and ignore the results. Equivalent to flip mapM_.
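A small usage sketch for the result-discarding variants (qualified V import as before; printIndexed is an illustrative name):

import qualified Data.Vector as V

-- Print every element together with its index, ignoring the results:
printIndexed :: V.Vector String -> IO ()
printIndexed = V.imapM_ (\i x -> putStrLn (show i ++ ": " ++ x))

-- printIndexed (V.fromList ["foo","bar"]) prints
-- 0: foo
-- 1: bar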
izipWith :: (Int -> a -> b -> c) -> Vector a -> Vector b -> Vector c #
O(min(m,n)) Zip two vectors with a function that also takes the elements' indices.
izipWith3 :: (Int -> a -> b -> c -> d) -> Vector a -> Vector b -> Vector c -> Vector d #
Zip three vectors and their indices with the given function.
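For example (qualified V import as before; tagged is an illustrative name):

import qualified Data.Vector as V

-- Zip two vectors, also passing each position's index to the function:
tagged :: V.Vector (Int, Char, Bool)
tagged = V.izipWith (\i c b -> (i, c, b)) (V.fromList "ab") (V.fromList [True, False])
-- <(0,'a',True),(1,'b',False)>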
Monadic zipping
zipWithM :: Monad m => (a -> b -> m c) -> Vector a -> Vector b -> m (Vector c) #
O(min(m,n)) Zip the two vectors with the monadic action and yield a vector of results
izipWithM :: Monad m => (Int -> a -> b -> m c) -> Vector a -> Vector b -> m (Vector c) #
O(min(m,n)) Zip the two vectors with a monadic action that also takes the element index and yield a vector of results
zipWithM_ :: Monad m => (a -> b -> m c) -> Vector a -> Vector b -> m () #
O(min(m,n)) Zip the two vectors with the monadic action and ignore the results
izipWithM_ :: Monad m => (Int -> a -> b -> m c) -> Vector a -> Vector b -> m () #
O(min(m,n)) Zip the two vectors with a monadic action that also takes the element index and ignore the results
unzip6 :: Vector (a, b, c, d, e, f) -> (Vector a, Vector b, Vector c, Vector d, Vector e, Vector f) #
Working with predicates
ifilter :: (Int -> a -> Bool) -> Vector a -> Vector a #
O(n) Drop elements that do not satisfy the predicate which is applied to values and their indices
takeWhile :: (a -> Bool) -> Vector a -> Vector a #
O(n) Yield the longest prefix of elements satisfying the predicate without copying.
dropWhile :: (a -> Bool) -> Vector a -> Vector a #
O(n) Drop the longest prefix of elements that satisfy the predicate without copying.
partition :: (a -> Bool) -> Vector a -> (Vector a, Vector a) #
O(n) Split the vector in two parts, the first one containing those elements that satisfy the predicate and the second one those that don't. The relative order of the elements is preserved at the cost
of a sometimes reduced performance compared to unstablePartition.
unstablePartition :: (a -> Bool) -> Vector a -> (Vector a, Vector a) #
O(n) Split the vector in two parts, the first one containing those elements that satisfy the predicate and the second one those that don't. The order of the elements is not preserved but the
operation is often faster than partition.
span :: (a -> Bool) -> Vector a -> (Vector a, Vector a) #
O(n) Split the vector into the longest prefix of elements that satisfy the predicate and the rest without copying.
break :: (a -> Bool) -> Vector a -> (Vector a, Vector a) #
O(n) Split the vector into the longest prefix of elements that do not satisfy the predicate and the rest without copying.
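The predicate-based functions can be compared side by side (qualified V import as before; the names below are illustrative):

import qualified Data.Vector as V

sample :: V.Vector Int
sample = V.fromList [2,4,6,1,8]

prefixEven :: V.Vector Int
prefixEven = V.takeWhile even sample     -- <2,4,6>

splitOnFirstOdd :: (V.Vector Int, V.Vector Int)
splitOnFirstOdd = V.span even sample     -- (<2,4,6>, <1,8>)

evensThenOdds :: (V.Vector Int, V.Vector Int)
evensThenOdds = V.partition even sample  -- (<2,4,6,8>, <1>)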
elem :: Eq a => a -> Vector a -> Bool infix 4#
O(n) Check if the vector contains an element
elemIndex :: Eq a => a -> Vector a -> Maybe Int #
O(n) Yield Just the index of the first occurrence of the given element or Nothing if the vector does not contain the element. This is a specialised version of findIndex.
elemIndices :: Eq a => a -> Vector a -> Vector Int #
O(n) Yield the indices of all occurrences of the given element in ascending order. This is a specialised version of findIndices.
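For example (qualified V import as before; the names below are illustrative):

import qualified Data.Vector as V

letters :: V.Vector Char
letters = V.fromList "banana"

hasB :: Bool
hasB = V.elem 'b' letters           -- True

firstA :: Maybe Int
firstA = V.elemIndex 'a' letters    -- Just 1

allAs :: V.Vector Int
allAs = V.elemIndices 'a' letters   -- <1,3,5>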
foldl' :: (a -> b -> a) -> a -> Vector b -> a #
O(n) Left fold with strict accumulator
foldr' :: (a -> b -> b) -> b -> Vector a -> b #
O(n) Right fold with a strict accumulator
ifoldl :: (a -> Int -> b -> a) -> a -> Vector b -> a #
O(n) Left fold (function applied to each element and its index)
ifoldl' :: (a -> Int -> b -> a) -> a -> Vector b -> a #
O(n) Left fold with strict accumulator (function applied to each element and its index)
ifoldr :: (Int -> a -> b -> b) -> b -> Vector a -> b #
O(n) Right fold (function applied to each element and its index)
ifoldr' :: (Int -> a -> b -> b) -> b -> Vector a -> b #
O(n) Right fold with strict accumulator (function applied to each element and its index)
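For example (qualified V import as before; the names below are illustrative):

import qualified Data.Vector as V

total :: Int
total = V.foldl' (+) 0 (V.fromList [1,2,3,4])   -- 10

-- The indexed variant also receives each element's position:
indexWeightedSum :: Int
indexWeightedSum = V.ifoldl' (\acc i x -> acc + i * x) 0 (V.fromList [5,6,7])
-- 0*5 + 1*6 + 2*7 = 20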
Specialised folds
any :: (a -> Bool) -> Vector a -> Bool #
O(n) Check if any element satisfies the predicate.
sum :: Num a => Vector a -> a #
O(n) Compute the sum of the elements
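For example (qualified V import as before; the names below are illustrative):

import qualified Data.Vector as V

anyNegative :: Bool
anyNegative = V.any (< 0) (V.fromList [3, 1, 4 :: Int])  -- False

totalOfTen :: Int
totalOfTen = V.sum (V.fromList [1..10])                  -- 55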
Monadic folds
ifoldM :: Monad m => (a -> Int -> b -> m a) -> a -> Vector b -> m a #
O(n) Monadic fold (action applied to each element and its index)
foldM' :: Monad m => (a -> b -> m a) -> a -> Vector b -> m a #
O(n) Monadic fold with strict accumulator
ifoldM' :: Monad m => (a -> Int -> b -> m a) -> a -> Vector b -> m a #
O(n) Monadic fold with strict accumulator (action applied to each element and its index)
foldM_ :: Monad m => (a -> b -> m a) -> a -> Vector b -> m () #
O(n) Monadic fold that discards the result
ifoldM_ :: Monad m => (a -> Int -> b -> m a) -> a -> Vector b -> m () #
O(n) Monadic fold that discards the result (action applied to each element and its index)
foldM'_ :: Monad m => (a -> b -> m a) -> a -> Vector b -> m () #
O(n) Monadic fold with strict accumulator that discards the result
ifoldM'_ :: Monad m => (a -> Int -> b -> m a) -> a -> Vector b -> m () #
O(n) Monadic fold with strict accumulator that discards the result (action applied to each element and its index)
Monadic sequencing
Prefix sums (scans)
prescanl :: (a -> b -> a) -> a -> Vector b -> Vector a #
O(n) Prescan
prescanl f z = init . scanl f z
Example: prescanl (+) 0 <1,2,3,4> = <0,1,3,6>
postscanl :: (a -> b -> a) -> a -> Vector b -> Vector a #
O(n) Scan
postscanl f z = tail . scanl f z
Example: postscanl (+) 0 <1,2,3,4> = <1,3,6,10>
scanl :: (a -> b -> a) -> a -> Vector b -> Vector a #
O(n) Haskell-style scan
scanl f z <x1,...,xn> = <y1,...,y(n+1)>
where y1 = z
yi = f y(i-1) x(i-1)
Example: scanl (+) 0 <1,2,3,4> = <0,1,3,6,10>
scanl' :: (a -> b -> a) -> a -> Vector b -> Vector a #
O(n) Haskell-style scan with strict accumulator
scanr :: (a -> b -> b) -> b -> Vector a -> Vector b #
O(n) Right-to-left Haskell-style scan
scanr' :: (a -> b -> b) -> b -> Vector a -> Vector b #
O(n) Right-to-left Haskell-style scan with strict accumulator
iscanr :: (Int -> a -> b -> b) -> b -> Vector a -> Vector b #
O(n) Right-to-left scan over a vector with its index
iscanr' :: (Int -> a -> b -> b) -> b -> Vector a -> Vector b #
O(n) Right-to-left scan over a vector (strictly) with its index
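The right-to-left scans have no examples above, so here is a brief sketch (qualified V import as before; suffixSums is an illustrative name):

import qualified Data.Vector as V

-- Running sums computed from the right:
suffixSums :: V.Vector Int
suffixSums = V.scanr (+) 0 (V.fromList [1,2,3])   -- <6,5,3,0>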
Different vector types
Mutable vectors
copy :: PrimMonad m => MVector (PrimState m) a -> Vector a -> m () #
O(n) Copy an immutable vector into a mutable one. The two vectors must have the same length.
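A hypothetical usage sketch (assuming Control.Monad.ST and a qualified Data.Vector import; overwrittenWith is a made-up helper, not part of the library):

import Control.Monad.ST (runST)
import qualified Data.Vector as V

-- Overwrite a thawed copy of dst with the contents of src.
-- copy requires that both vectors have the same length.
overwrittenWith :: V.Vector a -> V.Vector a -> V.Vector a
overwrittenWith dst src = runST $ do
  m <- V.thaw dst      -- mutable copy of dst
  V.copy m src         -- in-place overwrite, lengths must match
  V.freeze m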